WorldWideScience

Sample records for modeling quantitative experiments

  1. Quantitative modeling and data analysis of SELEX experiments

    Science.gov (United States)

    Djordjevic, Marko; Sengupta, Anirvan M.

    2006-03-01

    SELEX (systematic evolution of ligands by exponential enrichment) is an experimental procedure that allows the extraction, from an initially random pool of DNA, of those oligomers with high affinity for a given DNA-binding protein. We address what is a suitable experimental and computational procedure to infer parameters of transcription factor-DNA interaction from SELEX experiments. To answer this, we use a biophysical model of transcription factor-DNA interactions to quantitatively model SELEX. We show that a standard procedure is unsuitable for obtaining accurate interaction parameters. However, we theoretically show that a modified experiment in which chemical potential is fixed through different rounds of the experiment allows robust generation of an appropriate dataset. Based on our quantitative model, we propose a novel bioinformatic method of data analysis for such a modified experiment and apply it to extract the interaction parameters for a mammalian transcription factor CTF/NFI. From a practical point of view, our method results in a significantly improved false positive/false negative trade-off, as compared to both the standard information theory based method and a widely used empirically formulated procedure.
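    For illustration only, a minimal sketch of the kind of biophysical selection step described above, assuming a two-state (Fermi-Dirac) binding probability at a fixed chemical potential μ; the energy distribution, μ, pool size and number of rounds are invented and not taken from the paper.

```python
# Illustrative sketch (not the authors' code): one SELEX round modeled with a
# Fermi-Dirac binding probability at fixed chemical potential mu. Energies and
# mu are in units of kT; all numbers below are made up for illustration.
import numpy as np

def selex_round(freqs, energies, mu):
    """Return sequence frequencies after one round of selection and amplification."""
    p_bound = 1.0 / (1.0 + np.exp(energies - mu))  # binding probability per sequence
    enriched = freqs * p_bound                     # selection step
    return enriched / enriched.sum()               # PCR amplification renormalizes

rng = np.random.default_rng(0)
energies = rng.normal(loc=4.0, scale=2.0, size=10_000)   # hypothetical binding energies (kT)
freqs = np.full(energies.size, 1.0 / energies.size)      # initially random pool

for _ in range(5):                 # five rounds at fixed chemical potential
    freqs = selex_round(freqs, energies, mu=0.0)
print("mean energy of the pool:", np.average(energies, weights=freqs))
```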

  2. Mechanics of neutrophil phagocytosis: experiments and quantitative models.

    Science.gov (United States)

    Herant, Marc; Heinrich, Volkmar; Dembo, Micah

    2006-05-01

    To quantitatively characterize the mechanical processes that drive phagocytosis, we observed the FcγR-driven engulfment of antibody-coated beads of diameters 3 μm to 11 μm by initially spherical neutrophils. In particular, the time courses of cell morphology, bead motion, and cortical tension were determined. Here, we introduce a number of mechanistic models for phagocytosis and test their validity by comparing the experimental data with finite element computations for multiple bead sizes. We find that the optimal models involve two key mechanical interactions: a repulsion or pressure between cytoskeleton and free membrane that drives protrusion, and an attraction between cytoskeleton and membrane newly adherent to the bead that flattens the cell into a thin lamella. Other models such as cytoskeletal expansion or swelling appear to be ruled out as main drivers of phagocytosis because of the characteristics of bead motion during engulfment. We finally show that the protrusive force necessary for the engulfment of large beads points towards storage of strain energy in the cytoskeleton over a large distance from the leading edge (approximately 0.5 μm), and that the flattening force can plausibly be generated by the known concentrations of unconventional myosins at the leading edge.

  3. Quantitative explanation of circuit experiments and real traffic using the optimal velocity model

    Science.gov (United States)

    Nakayama, Akihiro; Kikuchi, Macoto; Shibata, Akihiro; Sugiyama, Yuki; Tadaki, Shin-ichi; Yukawa, Satoshi

    2016-04-01

    We have experimentally confirmed that the occurrence of a traffic jam is a dynamical phase transition (Tadaki et al 2013 New J. Phys. 15 103034, Sugiyama et al 2008 New J. Phys. 10 033001). In this study, we investigate whether the optimal velocity (OV) model can quantitatively explain the results of experiments. The occurrence and non-occurrence of jammed flow in our experiments agree with the predictions of the OV model. We also propose a scaling rule for the parameters of the model. Using this rule, we obtain critical density as a function of a single parameter. The obtained critical density is consistent with the observed values for highway traffic.
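    For orientation, a minimal simulation of the optimal velocity (OV) model of Bando et al. on a circular road, which is the model class calibrated in the study above; the tanh optimal-velocity function and all parameter values here are illustrative, not the scaled parameters proposed in the paper.

```python
# Minimal OV-model sketch on a circuit: dv_i/dt = a [V(headway_i) - v_i],
# periodic boundary conditions, explicit Euler integration.
import numpy as np

def ov_function(h):
    """Optimal velocity as a function of headway h (a common tanh form)."""
    return np.tanh(h - 2.0) + np.tanh(2.0)

def simulate(n_cars=30, length=60.0, a=1.0, dt=0.05, steps=20_000, seed=1):
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, length, n_cars, endpoint=False)
    x += 0.01 * rng.standard_normal(n_cars)            # small initial perturbation
    v = np.full(n_cars, ov_function(length / n_cars))  # start near uniform flow
    for _ in range(steps):
        headway = (np.roll(x, -1) - x) % length        # periodic boundary (circuit)
        v += a * (ov_function(headway) - v) * dt       # dv/dt = a [V(h) - v]
        x = (x + v * dt) % length
    return v

v = simulate()
print("velocity spread after relaxation:", v.max() - v.min())  # large spread suggests a jam
```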

  4. Quantitative Modeling of Entangled Polymer Rheology: Experiments, Tube Models and Slip-Link Simulations

    Science.gov (United States)

    Desai, Priyanka Subhash

    Rheological properties are sensitive indicators of molecular structure and dynamics. The relationship between rheology and polymer dynamics is captured in the constitutive model, which, if accurate and robust, would greatly aid molecular design and polymer processing. This dissertation is thus focused on building accurate and quantitative constitutive models that can help predict linear and non-linear viscoelasticity. In this work, we have used a multi-pronged approach based on the tube theory, coarse-grained slip-link simulations, and advanced polymeric synthetic and characterization techniques, to confront some of the outstanding problems in entangled polymer rheology. First, we modified simple tube based constitutive equations in extensional rheology and developed functional forms to test the effect of Kuhn segment alignment on a) tube diameter enlargement and b) monomeric friction reduction between subchains. We, then, used these functional forms to model extensional viscosity data for polystyrene (PS) melts and solutions. We demonstrated that the idea of reduction in segmental friction due to Kuhn alignment is successful in explaining the qualitative difference between melts and solutions in extension as revealed by recent experiments on PS. Second, we compiled literature data and used it to develop a universal tube model parameter set and prescribed their values and uncertainties for 1,4-PBd by comparing linear viscoelastic G' and G" mastercurves for 1,4-PBds of various branching architectures. The high frequency transition region of the mastercurves superposed very well for all the 1,4-PBds irrespective of their molecular weight and architecture, indicating universality in high frequency behavior. Therefore, all three parameters of the tube model were extracted from this high frequency transition region alone. Third, we compared predictions of two versions of the tube model, Hierarchical model and BoB model against linear viscoelastic data of blends of 1,4-PBd

  5. Experiment selection for the discrimination of semi-quantitative models of dynamical systems

    NARCIS (Netherlands)

    Vatcheva; de Jong, H; Bernard, O; Mars, NJI

    2006-01-01

    Modeling an experimental system often results in a number of alternative models that are all justified by the available experimental data. To discriminate among these models, additional experiments are needed. Existing methods for the selection of discriminatory experiments in statistics and in arti

  6. Quantitative Simulation of Granular Collapse Experiments with Visco-Plastic Models

    Science.gov (United States)

    Mangeney, A.; Ionescu, I. R.; Bouchut, F.; Roche, O.

    2014-12-01

    One of the key issues in landslide modeling is to define the appropriate rheological behavior of these natural granular flows. In particular, the description of the static and of the flowing states of granular media is still an open issue. This plays a crucial role in erosion/deposition processes. A first step to address this issue is to derive models able to reproduce laboratory experiments of granular flows. We propose here a mechanical and numerical model of dry granular flows that quantitatively reproduces granular column collapse over inclined planes, with rheological parameters directly derived from the laboratory experiments. We reformulate the μ(I) rheology proposed by Jop et al. (2006), where I is the inertial number, in the framework of Drucker-Prager plasticity with yield stress and a viscosity η(||D||, p) depending on both the pressure p and the norm of the strain rate tensor ||D||. The resulting dynamic viscosity varies from very small values near the free surface and near the front to 1.5 Pa.s within the quasi-static zone. We show that taking into account a constant mean viscosity during the flow (η = 1 Pa.s here) provides results very similar to those obtained with the variable viscosity deduced from the μ(I) rheology, while significantly reducing the computational cost. This has important implications for applications to real landslides and rock avalanches. The numerical results show that the flow is essentially located in a surface layer behind the front, while the whole granular material is flowing near the front where basal sliding occurs. The static/flowing interface changes as a function of space and time, in good agreement with experimental observations. Heterogeneities are observed within the flow with low and high pressure zones, localized small upward velocity zones and vortices near the transition between the flowing and static grains. These instabilities create 'sucking zones' and have some characteristics similar
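    As a concrete illustration, a sketch of the μ(I) law recast as a pressure- and strain-rate-dependent viscosity, using one common convention (shear rate taken as 2||D||); the material constants below are typical glass-bead values chosen for illustration, not the calibrated parameters, and the exact tensorial form used in the paper may differ.

```python
# Sketch of the mu(I) viscoplastic law as an effective viscosity eta(||D||, p),
# following the general form of Jop et al. (2006). All constants are examples.
import math

MU_S, MU_2, I0 = 0.38, 0.64, 0.3   # friction coefficients and reference inertial number
D_GRAIN, RHO = 7e-4, 2500.0        # grain diameter (m) and grain density (kg/m^3)

def mu_of_I(I):
    return MU_S + (MU_2 - MU_S) / (1.0 + I0 / I)

def viscosity(norm_D, p):
    """Effective viscosity (Pa.s), assuming shear rate = 2*||D||, for pressure p (Pa)."""
    gamma_dot = 2.0 * norm_D
    I = gamma_dot * D_GRAIN / math.sqrt(p / RHO)   # inertial number
    return mu_of_I(I) * p / gamma_dot

print(f"eta = {viscosity(norm_D=5.0, p=300.0):.2f} Pa.s")
```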

  7. Reconciling cyanobacterial fixed-nitrogen distributions and transport experiments with quantitative modelling

    CERN Document Server

    Brown, Aidan I

    2011-01-01

    Filamentous cyanobacteria growing in media with insufficient fixed nitrogen differentiate some cells into heterocysts, which fix nitrogen for the remaining vegetative cells. Transport studies have shown both periplasmic and cytoplasmic connections between cells that could transport fixed nitrogen along the filament. Two experiments have imaged fixed-nitrogen distributions along filaments. In 1974, Wolk et al. found a peaked concentration of fixed nitrogen at heterocysts using autoradiographic techniques. In contrast, in 2007, Popa et al. used nanoSIMS to show large dips at the location of heterocysts, with a variable but approximately level distribution between them. With an integrated model of fixed-nitrogen transport and cell growth, we recover the results of both Wolk et al. and Popa et al. using the same model parameters. To do this, we account for immobile incorporated fixed nitrogen and for the differing durations of labeled nitrogen fixation that occurred in the two experiments. The variations seen by Po...

  8. Modeling sequence-specific polymers using anisotropic coarse-grained sites allows quantitative comparison with experiment

    CERN Document Server

    Haxton, Thomas K; Zuckermann, Ronald N; Whitelam, Stephen

    2014-01-01

    Certain sequences of peptoid polymers (synthetic analogs of peptides) assemble into bilayer nanosheets via a nonequilibrium assembly pathway of adsorption, compression, and collapse at an air-water interface. As with other large-scale dynamic processes in biology and materials science, understanding the details of this supramolecular assembly process requires a modeling approach that captures behavior on a wide range of length and time scales, from those on which individual sidechains fluctuate to those on which assemblies of polymers evolve. Here we demonstrate that a new coarse-grained modeling approach is accurate and computationally efficient enough to do so. Our approach uses only a minimal number of coarse-grained sites, but retains independently fluctuating orientational degrees of freedom for each site. These orientational degrees of freedom allow us to accurately parameterize both bonded and nonbonded interactions, and to generate all-atom configurations with sufficient accuracy to perform atomic sca...

  9. Quantitative strain analysis in analogue modelling experiments: insights from X-ray computed tomography and tomographic image correlation

    Science.gov (United States)

    Adam, J.; Klinkmueller, M.; Schreurs, G.; Wieneke, B.

    2009-04-01

    The combination of scaled analogue modelling experiments, advanced research in analogue material mechanics (Lohrmann et al. 2003, Panien et al. 2006), X-ray computed tomography and new high-resolution deformation monitoring techniques (2D/3D Digital Image Correlation) is a new powerful tool not only to examine the evolution and interaction of faulting in analogue models, but also to evaluate relevant controlling factors such as mechanics, sedimentation, erosion and climate. This is of particular interest for applied problems in the energy sector (e.g., structurally complex reservoirs, LG & CO2 underground storage) because the results are essential for geological and seismic interpretation as well as for more realistically constrained fault/fracture simulations and reservoir characterisation. X-ray computed tomography (CT) analysis has been successfully applied to analogue models since the late 1980s. This technique permits visualisation of the interior of an analogue model without destroying it. Technological improvements have resulted in more powerful X-ray CT scanners that allow periodic acquisition of volumetric data sets thus making it possible to follow the 3-D evolution of the model structures with time (e.g. Schreurs et al., 2002, 2003). Optical strain monitoring (Digital Image Correlation, DIC) in analogue experiments (Adam et al., 2005) represents an important advance in quantitative physical modelling and in helping to understand non-linear rock deformation processes. Optical non-intrusive 2D/3D strain and surface flow analysis by DIC is a new methodology in physical modelling that enables the complete quantification of localised and distributed model deformation. The increase in spatial/temporal strain data resolution of several orders of magnitude makes physical modelling - used for decades to visualize the kinematic processes of geological deformation processes - a unique research tool to determine what fundamental physical processes control tectonic

  10. Qualitative and quantitative analyses of the echolocation strategies of bats on the basis of mathematical modelling and laboratory experiments.

    Directory of Open Access Journals (Sweden)

    Ikkyu Aihara

    Full Text Available Prey pursuit by an echolocating bat was studied theoretically and experimentally. First, a mathematical model was proposed to describe the flight dynamics of a bat and a single prey. In this model, the flight angle of the bat was affected by two angles related to the flight path of the single moving prey, that is, the angle from the bat to the prey and the flight angle of the prey. Numerical simulation showed that the success rate of prey capture was high when the bat mainly used the angle to the prey to minimize the distance to the prey, and also used the flight angle of the prey to minimize the difference in flight directions of itself and the prey. Second, parameters in the model were estimated according to experimental data obtained from video recordings taken while a Japanese horseshoe bat (Rhinolophus ferrumequinum nippon) pursued a moving moth (Goniocraspidum pryeri) in a flight chamber. One of the estimated parameter values, which represents the ratio in the use of the two angles, was consistent with the optimal value of the numerical simulation. This agreement between the numerical simulation and parameter estimation suggests that a bat chooses an effective flight path for successful prey capture by using the two angles. Finally, the mathematical model was extended to include a bat and two prey. Parameter estimation of the extended model based on laboratory experiments revealed the existence of the bat's dynamical attention towards the two prey, that is, simultaneous pursuit of both prey and selective pursuit of the respective prey. Thus, our mathematical model contributes not only to quantitative analysis of effective foraging, but also to qualitative evaluation of a bat's dynamical flight strategy during multiple prey pursuit.
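    A toy 2-D sketch of a pursuit rule of the kind described above, in which the bat's heading is updated using a weighted mix of the bearing to the prey and the prey's flight angle; the weight w, gain, speeds and time step are invented, and the paper's actual equations and estimated values differ.

```python
# Illustrative pursuit-rule sketch (not the authors' model): the bat turns toward a
# weighted combination of (i) the angle from bat to prey and (ii) the prey's heading.
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def step(bat_pos, bat_ang, prey_pos, prey_ang, w=0.5, gain=0.3, v_bat=5.0, v_prey=3.0, dt=0.01):
    bearing = np.arctan2(*(prey_pos - bat_pos)[::-1])          # angle from bat to prey
    turn = (1 - w) * wrap(bearing - bat_ang) + w * wrap(prey_ang - bat_ang)
    bat_ang += gain * turn
    bat_pos = bat_pos + v_bat * dt * np.array([np.cos(bat_ang), np.sin(bat_ang)])
    prey_pos = prey_pos + v_prey * dt * np.array([np.cos(prey_ang), np.sin(prey_ang)])
    return bat_pos, bat_ang, prey_pos

bat_pos, bat_ang = np.array([0.0, 0.0]), 0.0
prey_pos, prey_ang = np.array([2.0, 1.0]), 2.0
for _ in range(500):
    bat_pos, bat_ang, prey_pos = step(bat_pos, bat_ang, prey_pos, prey_ang)
print("final distance to prey:", np.linalg.norm(prey_pos - bat_pos))
```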

  11. Quantitative Analogue Experimental Sequence Stratigraphy : Modelling landscape evolution and sequence stratigraphy of river-shelf sedimentary systems by quantitative analogue experiments

    NARCIS (Netherlands)

    Heijst, Maximiliaan Wilhelmus Ignatius Maria van

    2003-01-01

    This thesis reports a series of flume tank experiments that were conducted to model the stratigraphic evolution of river-delta systems. Chapter 1 introduces the river-delta sedimentary system that is subject of modelling. The chapter also includes an overview of previous research and the summary and

  12. Influence factors and prediction of stormwater runoff of urban green space in Tianjin, China: laboratory experiment and quantitative theory model.

    Science.gov (United States)

    Yang, Xu; You, Xue-Yi; Ji, Min; Nima, Ciren

    2013-01-01

    The effects of limiting factors such as rainfall intensity, rainfall duration, grass type and vegetation coverage on the stormwater runoff of urban green space were investigated in Tianjin. The prediction equation of stormwater runoff was established by quantitative theory using laboratory experimental data from soil columns. It was validated by three field experiments, and the relative errors between predicted and measured stormwater runoff were 1.41, 1.52 and 7.35%, respectively. The results implied that the prediction equation could be used to forecast the stormwater runoff of urban green space. The results of range and variance analysis indicated that the order of importance of the limiting factors is rainfall intensity > grass type > rainfall duration > vegetation coverage. The combination producing the least runoff in the present study was rainfall intensity 60.0 mm/h, duration 60.0 min, grass Festuca arundinacea and vegetation coverage 90.0%. When the intensity and duration of rainfall are 60.0 mm/h and 90.0 min, the predicted volumetric runoff coefficient is 0.23 with Festuca arundinacea at 90.0% vegetation coverage. The present approach indicated that green space is an effective means of reducing stormwater runoff, and the conclusions are mainly applicable to Tianjin and semi-arid areas with mainly summer precipitation and long intervals between rainfall events.

  13. Compositional and Quantitative Model Checking

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2010-01-01

    This paper gives a survey of a compositional model checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based...

  14. Teaching Integrative Physiology Using the Quantitative Circulatory Physiology Model and Case Discussion Method: Evaluation of the Learning Experience

    Science.gov (United States)

    Rodriguez-Barbero, A.; Lopez-Novoa, J. M.

    2008-01-01

    One of the problems that we have found when teaching human physiology in a Spanish medical school is that the degree of understanding by the students of the integration between organs and systems is rather poor. We attempted to remedy this problem by using a case discussion method together with the Quantitative Circulatory Physiology (QCP)…

  15. Universal properties of high-temperature superconductors from real-space pairing: t -J -U model and its quantitative comparison with experiment

    Science.gov (United States)

    Spałek, Józef; Zegrodnik, Michał; Kaczmarczyk, Jan

    2017-01-01

    Selected universal experimental properties of high-temperature superconducting (HTS) cuprates have been singled out in the last decade. One of the pivotal challenges in this field is the designation of a consistent interpretation framework within which we can describe quantitatively the universal features of those systems. Here we analyze in a detailed manner the principal experimental data and compare them quantitatively with the approach based on a single-band model of strongly correlated electrons supplemented with strong antiferromagnetic (super)exchange interaction (the so-called t -J -U model). The model rationale is provided by estimating its microscopic parameters on the basis of the three-band approach for the Cu-O plane. We use our original full Gutzwiller wave-function solution by going beyond the renormalized mean-field theory (RMFT) in a systematic manner. Our approach reproduces very well the observed hole doping (δ ) dependence of the kinetic-energy gain in the superconducting phase, one of the principal non-Bardeen-Cooper-Schrieffer features of the cuprates. The calculated Fermi velocity in the nodal direction is practically δ -independent and its universal value agrees very well with that determined experimentally. Also, a weak doping dependence of the Fermi wave vector leads to an almost constant value of the effective mass in a pure superconducting phase which is both observed in experiment and reproduced within our approach. An assessment of the currently used models (t -J , Hubbard) is carried out and the results of the canonical RMFT as a zeroth-order solution are provided for comparison to illustrate the necessity of the introduced higher-order contributions.
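    For orientation, a schematic form of the t-J-U Hamiltonian referred to above, written in standard notation; summation restrictions, the Gutzwiller projection and the microscopic parameter estimates follow the paper rather than this sketch.

```latex
\hat{H}_{t\text{-}J\text{-}U} \;=\; -t \sum_{\langle i,j\rangle,\sigma}
  \left( \hat{c}^{\dagger}_{i\sigma} \hat{c}_{j\sigma} + \mathrm{h.c.} \right)
  \;+\; U \sum_{i} \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}
  \;+\; J \sum_{\langle i,j\rangle} \hat{\mathbf{S}}_{i} \cdot \hat{\mathbf{S}}_{j}
```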

  16. Quantitative Modeling of Earth Surface Processes

    Science.gov (United States)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.

  17. Quantitative two-dimensional measurement of oil-film thickness by laser-induced fluorescence in a piston-ring model experiment.

    Science.gov (United States)

    Wigger, Stefan; Füßer, Hans-Jürgen; Fuhrmann, Daniel; Schulz, Christof; Kaiser, Sebastian A

    2016-01-10

    This paper describes advances in using laser-induced fluorescence of dyes for imaging the thickness of oil films in a rotating ring tribometer with optical access, an experiment representing a sliding piston ring in an internal combustion engine. A method for quantitative imaging of the oil-film thickness is developed that overcomes the main challenge, the accurate calibration of the detected fluorescence signal for film thicknesses in the micrometer range. The influence of the background material and its surface roughness is examined, and a method for flat-field correction is introduced. Experiments in the tribometer show that the method yields quantitative, physically plausible results, visualizing features with submicrometer thickness.

  18. Quantitative model validation techniques: new insights

    CERN Document Server

    Ling, You

    2012-01-01

    This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing...
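    As an illustration of one of the four method types, an assumed formulation of the area validation metric mentioned above: the L1 distance between the model's predictive CDF and the empirical CDF of the experimental observations; the data and distributions below are synthetic.

```python
# Sketch of an area-metric computation: integrate |F_model(x) - S_n(x)| over a grid.
import numpy as np
from scipy import stats

def area_metric(observations, model_cdf, grid):
    """Area between the model CDF and the empirical CDF (trapezoidal rule)."""
    obs = np.sort(observations)
    empirical = np.searchsorted(obs, grid, side="right") / obs.size
    return np.trapz(np.abs(model_cdf(grid) - empirical), grid)

rng = np.random.default_rng(3)
data = rng.normal(loc=10.3, scale=1.1, size=25)          # "experimental" outcomes
model = stats.norm(loc=10.0, scale=1.0)                  # model prediction of the same quantity
grid = np.linspace(5.0, 15.0, 2001)
print("area metric:", area_metric(data, model.cdf, grid))
```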

  19. Building a Database for a Quantitative Model

    Science.gov (United States)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
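    A toy illustration of the linking idea: a unique metadata key ties each Basic Event to its data source record and to the manipulation applied to the raw failure rate; all identifiers, sources and numbers are invented.

```python
# Toy sketch (invented data): a small "database" keyed by a metadata id links each
# Basic Event to its source record and to the adjustment applied before use.
RAW_DATA = {                       # data source records, keyed by metadata id
    "VLV-001": {"source": "handbook table 3-2", "rate_per_hr": 1.2e-6},
    "PMP-014": {"source": "test report TR-88",  "rate_per_hr": 4.0e-5},
}

ADJUSTMENTS = {                    # model-side manipulations (e.g. duty-cycle stressing)
    "VLV-001": lambda r: r * 0.1,  # dormant 90% of the mission
    "PMP-014": lambda r: r * 2.0,  # operated beyond its qualification duty cycle
}

def basic_event_rate(meta_id):
    """Failure rate used by the Basic Event, traceable back to its source record."""
    record = RAW_DATA[meta_id]
    return ADJUSTMENTS.get(meta_id, lambda r: r)(record["rate_per_hr"]), record["source"]

rate, source = basic_event_rate("VLV-001")
print(f"VLV-001: {rate:.2e} per hour (from {source})")
```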

  20. The mathematics of cancer: integrating quantitative models.

    Science.gov (United States)

    Altrock, Philipp M; Liu, Lin L; Michor, Franziska

    2015-12-01

    Mathematical modelling approaches have become increasingly abundant in cancer research. The complexity of cancer is well suited to quantitative approaches as it provides challenges and opportunities for new developments. In turn, mathematical modelling contributes to cancer research by helping to elucidate mechanisms and by providing quantitative predictions that can be validated. The recent expansion of quantitative models addresses many questions regarding tumour initiation, progression and metastases as well as intra-tumour heterogeneity, treatment responses and resistance. Mathematical models can complement experimental and clinical studies, but also challenge current paradigms, redefine our understanding of mechanisms driving tumorigenesis and shape future research in cancer biology.

  1. Quantitative structure - mesothelioma potency model ...

    Science.gov (United States)

    Cancer potencies of mineral and synthetic elongated particle (EP) mixtures, including asbestos fibers, are influenced by changes in fiber dose composition, bioavailability, and biodurability in combination with relevant cytotoxic dose-response relationships. A unique and comprehensive rat intra-pleural (IP) dose characterization data set with a wide variety of EP size, shape, crystallographic, chemical, and bio-durability properties facilitated extensive statistical analyses of 50 rat IP exposure test results for evaluation of alternative dose pleural mesothelioma response models. Utilizing logistic regression, maximum likelihood evaluations of thousands of alternative dose metrics based on hundreds of individual EP dimensional variations within each test sample, four major findings emerged: (1) data for simulations of short-term EP dose changes in vivo (mild acid leaching) provide superior predictions of tumor incidence compared to non-acid leached data; (2) sum of the EP surface areas (ΣSA) from these mildly acid-leached samples provides the optimum holistic dose response model; (3) progressive removal of dose associated with very short and/or thin EPs significantly degrades resultant ΣEP or ΣSA dose-based predictive model fits, as judged by Akaike’s Information Criterion (AIC); and (4) alternative, biologically plausible model adjustments provide evidence for reduced potency of EPs with length/width (aspect) ratios 80 µm. Regar
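    A sketch of the type of analysis described above: logistic regression of tumor incidence on a candidate dose metric, with AIC used to compare alternative metrics. The data below are synthetic and the metric (log of summed EP surface area) is only a stand-in for the study's dose variables.

```python
# Synthetic-data sketch of dose-response model comparison via logistic regression + AIC.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
log_sa = rng.uniform(-2, 3, size=50)                        # log10 of summed EP surface area
p_true = 1 / (1 + np.exp(-(-1.0 + 1.5 * log_sa)))           # hypothetical dose-response
tumor = rng.binomial(1, p_true)                              # observed tumor incidence (0/1)

X = sm.add_constant(log_sa)
fit = sm.Logit(tumor, X).fit(disp=False)
print(fit.params)        # intercept and slope of the dose-response
print("AIC:", fit.aic)   # compare across alternative dose metrics; lower is better
```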

  2. Compositional and Quantitative Model Checking

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2010-01-01

    on the existence of a quotient construction, allowing a property φ of a parallel system A | B to be transformed into a sufficient and necessary quotient property φ/A to be satisfied by the component B. Given a model checking problem involving a network P1 | … | Pn and a property φ, the method gradually moves (by...

  3. Design and optimization of reverse-transcription quantitative PCR experiments.

    Science.gov (United States)

    Tichopad, Ales; Kitchen, Rob; Riedmaier, Irmgard; Becker, Christiane; Ståhlberg, Anders; Kubista, Mikael

    2009-10-01

    Quantitative PCR (qPCR) is a valuable technique for accurately and reliably profiling and quantifying gene expression. Typically, samples obtained from the organism of study have to be processed via several preparative steps before qPCR. We estimated the errors of sample withdrawal and extraction, reverse transcription (RT), and qPCR that are introduced into measurements of mRNA concentrations. We performed hierarchically arranged experiments with 3 animals, 3 samples, 3 RT reactions, and 3 qPCRs and quantified the expression of several genes in solid tissue, blood, cell culture, and single cells. A nested ANOVA design was used to model the experiments, and relative and absolute errors were calculated with this model for each processing level in the hierarchical design. We found that intersubject differences became easily confounded by sample heterogeneity for single cells and solid tissue. In cell cultures and blood, the noise from the RT and qPCR steps contributed substantially to the overall error because the sampling noise was less pronounced. We recommend the use of sample replicates preferentially to any other replicates when working with solid tissue, cell cultures, and single cells, and we recommend the use of RT replicates when working with blood. We show how an optimal sampling plan can be calculated for a limited budget.
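    A sketch of the budget-constrained sampling-plan calculation mentioned at the end: given assumed variance components and per-replicate costs for each processing level, it searches for the replicate allocation that minimizes the variance of a per-subject mean. All variances, costs and the budget are invented for illustration.

```python
# Hypothetical replicate-allocation search for a nested (sample / RT / qPCR) design.
from itertools import product

VAR = {"sample": 0.30, "rt": 0.10, "qpcr": 0.05}   # variance components (Cq^2 units)
COST = {"sample": 10.0, "rt": 2.0, "qpcr": 1.0}    # cost per replicate at each level
BUDGET = 60.0                                      # total cost allowed per subject

def var_of_mean(s, r, q):
    """Variance of the per-subject mean for s samples, r RTs per sample, q qPCRs per RT."""
    return VAR["sample"] / s + VAR["rt"] / (s * r) + VAR["qpcr"] / (s * r * q)

def cost(s, r, q):
    return s * (COST["sample"] + r * (COST["rt"] + q * COST["qpcr"]))

plans = [(s, r, q) for s, r, q in product(range(1, 7), repeat=3) if cost(s, r, q) <= BUDGET]
best = min(plans, key=lambda p: var_of_mean(*p))
print("samples, RTs/sample, qPCRs/RT =", best, " variance of mean =", round(var_of_mean(*best), 4))
```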

  4. Model Experiments and Model Descriptions

    Science.gov (United States)

    Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian

    1999-01-01

    The Second Workshop on Stratospheric Models and Measurements (M&M II) is the continuation of the effort previously started in the first Workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present day atmosphere was selected. The intent was that successful simulations of the set of measurements should become the prerequisite for the acceptance of these models as having a reliable prediction for future ozone behavior. This section is divided into two parts: model experiments and model descriptions. In the model experiments, participants were given the charge to design a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the requests for input. The first part of this section discusses the different modeling groups, along with the experiments performed. Part two of this section gives brief descriptions of each model as provided by the individual modeling groups.

  5. Statistical design of quantitative mass spectrometry-based proteomic experiments.

    Science.gov (United States)

    Oberg, Ann L; Vitek, Olga

    2009-05-01

    We review the fundamental principles of statistical experimental design, and their application to quantitative mass spectrometry-based proteomics. We focus on class comparison using Analysis of Variance (ANOVA), and discuss how randomization, replication and blocking help avoid systematic biases due to the experimental procedure, and help optimize our ability to detect true quantitative changes between groups. We also discuss the issues of pooling multiple biological specimens for a single mass analysis, and calculation of the number of replicates in a future study. When applicable, we emphasize the parallels between designing quantitative proteomic experiments and experiments with gene expression microarrays, and give examples from that area of research. We illustrate the discussion using theoretical considerations, and using real-data examples of profiling of disease.
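    A back-of-the-envelope sketch of one planning step discussed above, the number of biological replicates per group needed to detect a given difference, using a normal-approximation power formula; the per-protein standard deviation, target difference and error rates are example values only.

```python
# Rough two-group sample-size sketch for a quantitative proteomics comparison.
from scipy.stats import norm

def replicates_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Replicates per group to detect mean difference delta, given per-protein SD sigma."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

# Example: detect a 1.5-fold change (log2 ~ 0.58) with SD 0.5 on the log2 scale.
print(round(replicates_per_group(sigma=0.5, delta=0.58), 1), "replicates per group")
```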

  6. Quantitative system validation in model driven design

    DEFF Research Database (Denmark)

    Hermanns, Hilger; Larsen, Kim Guldstrand; Raskin, Jean-Francois;

    2010-01-01

    The European STREP project Quasimodo1 develops theory, techniques and tool components for handling quantitative constraints in model-driven development of real-time embedded systems, covering in particular real-time, hybrid and stochastic aspects. This tutorial highlights the advances made, focus...

  7. Recent trends in social systems quantitative theories and quantitative models

    CERN Document Server

    Hošková-Mayerová, Šárka; Soitu, Daniela-Tatiana; Kacprzyk, Janusz

    2017-01-01

    The papers collected in this volume focus on new perspectives on individuals, society, and science, specifically in the field of socio-economic systems. The book is the result of a scientific collaboration among experts from “Alexandru Ioan Cuza” University of Iaşi (Romania), “G. d’Annunzio” University of Chieti-Pescara (Italy), "University of Defence" of Brno (Czech Republic), and "Pablo de Olavide" University of Sevilla (Spain). The heterogeneity of the contributions presented in this volume reflects the variety and complexity of social phenomena. The book is divided into four Sections, as follows. The first Section deals with recent trends in social decisions. Specifically, it aims to understand which are the driving forces of social decisions. The second Section focuses on the social and public sphere. Indeed, it is oriented on recent developments in social systems and control. Trends in quantitative theories and models are described in Section 3, where many new formal, mathematical-statistical to...

  8. Quantitative Models and Analysis for Reactive Systems

    DEFF Research Database (Denmark)

    Thrane, Claus

    phones and websites. Acknowledging that now more than ever, systems come in contact with the physical world, we need to revise the way we construct models and verification algorithms, to take into account the behavior of systems in the presence of approximate, or quantitative information, provided...... by the environment in which they are embedded. This thesis studies the semantics and properties of a model-based framework for re- active systems, in which models and specifications are assumed to contain quantifiable information, such as references to time or energy. Our goal is to develop a theory of approximation......, by studying how small changes to our models affect the verification results. A key source of motivation for this work can be found in The Embedded Systems Design Challenge [HS06] posed by Thomas A. Henzinger and Joseph Sifakis. It contains a call for advances in the state-of-the-art of systems verification...

  9. Mini-Column Ion-Exchange Separation and Atomic Absorption Quantitation of Nickel, Cobalt, and Iron: An Undergraduate Quantitative Analysis Experiment.

    Science.gov (United States)

    Anderson, James L.; And Others

    1980-01-01

    Presents an undergraduate quantitative analysis experiment, describing an atomic absorption quantitation scheme that is fast, sensitive, and comparatively simple relative to titration experiments. (CS)

  10. Quantitative Models and Analysis for Reactive Systems

    DEFF Research Database (Denmark)

    Thrane, Claus

    phones and websites. Acknowledging that now more than ever, systems come in contact with the physical world, we need to revise the way we construct models and verification algorithms, to take into account the behavior of systems in the presence of approximate, or quantitative information, provided......, allowing verification procedures to quantify judgements, on how suitable a model is for a given specification — hence mitigating the usual harsh distinction between satisfactory and non-satisfactory system designs. This information, among other things, allows us to evaluate the robustness of our framework......, by studying how small changes to our models affect the verification results. A key source of motivation for this work can be found in The Embedded Systems Design Challenge [HS06] posed by Thomas A. Henzinger and Joseph Sifakis. It contains a call for advances in the state-of-the-art of systems verification...

  11. Quantitative bioluminescence imaging of mouse tumor models.

    Science.gov (United States)

    Tseng, Jen-Chieh; Kung, Andrew L

    2015-01-05

    Bioluminescence imaging (BLI) has become an essential technique for preclinical evaluation of anticancer therapeutics and provides sensitive and quantitative measurements of tumor burden in experimental cancer models. For light generation, a vector encoding firefly luciferase is introduced into human cancer cells that are grown as tumor xenografts in immunocompromised hosts, and the enzyme substrate luciferin is injected into the host. Alternatively, the reporter gene can be expressed in genetically engineered mouse models to determine the onset and progression of disease. In addition to expression of an ectopic luciferase enzyme, bioluminescence requires oxygen and ATP, thus only viable luciferase-expressing cells or tissues are capable of producing bioluminescence signals. Here, we summarize a BLI protocol that takes advantage of advances in hardware, especially the cooled charge-coupled device camera, to enable detection of bioluminescence in living animals with high sensitivity and a large dynamic range.

  12. Quantitative assessment model for gastric cancer screening

    Institute of Scientific and Technical Information of China (English)

    Kun Chen; Wei-Ping Yu; Liang Song; Yi-Min Zhu

    2005-01-01

    AIM: To set up a mathematical model for gastric cancer screening and to evaluate its function in mass screening for gastric cancer. METHODS: A case-control study was carried out in 66 patients and 198 normal people, and the risk and protective factors of gastric cancer were determined, including heavy manual work, foods such as small yellow-fin tuna, dried small shrimps, squills, crabs, mothers suffering from gastric diseases, spouse alive, use of refrigerators and hot food, etc. According to principles and methods of probability and fuzzy mathematics, a quantitative assessment model was established as follows: first, we selected factors significant in statistics and calculated a weight coefficient for each one by two different methods; second, the population space was divided into a gastric cancer fuzzy subset and a non-gastric-cancer fuzzy subset, a mathematical model for each subset was established, and we obtained a mathematical expression of attribute degree (AD). RESULTS: Based on the data of 63 patients and 693 normal people, the AD of each subject was calculated. Considering sensitivity and specificity, the thresholds of the calculated AD values were set at 0.20 and 0.17, respectively. According to these thresholds, the sensitivity and specificity of the quantitative model were about 69% and 63%. Moreover, statistical testing showed that the identification outcomes of the two different calculation methods were identical (P>0.05). CONCLUSION: The validity of this method is satisfactory. It is convenient, feasible and economical, and can be used to determine individual and population risks of gastric cancer.

  13. Global Quantitative Modeling of Chromatin Factor Interactions

    Science.gov (United States)

    Zhou, Jian; Troyanskaya, Olga G.

    2014-01-01

    Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the “chromatin codes”) remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles — we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896

  14. Quantitative Modeling of Landscape Evolution, Treatise on Geomorphology

    NARCIS (Netherlands)

    Temme, A.J.A.M.; Schoorl, J.M.; Claessens, L.F.G.; Veldkamp, A.; Shroder, F.S.

    2013-01-01

    This chapter reviews quantitative modeling of landscape evolution – which means that not just model studies but also modeling concepts are discussed. Quantitative modeling is contrasted with conceptual or physical modeling, and four categories of model studies are presented. Procedural studies focus

  15. Application of Quantitative Models in Financial Engineering Experiment Courses

    Institute of Scientific and Technical Information of China (English)

    李胜歌

    2012-01-01

    Financial engineering is an emerging discipline that uses mathematical tools to build models of financial markets and to solve financial problems; it is strongly applied and practice-oriented. Teaching must therefore improve the experimental teaching system, strengthen the application of quantitative models in financial engineering experiment courses, and cultivate students' hands-on skills, sense of innovation and creativity, so as to enhance students' ability to solve financial problems creatively.

  16. The quantitative modelling of human spatial habitability

    Science.gov (United States)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the uses of space as a medium of social communication).

  17. Computationally driven, quantitative experiments discover genes required for mitochondrial biogenesis.

    Directory of Open Access Journals (Sweden)

    David C Hess

    2009-03-01

    Full Text Available Mitochondria are central to many cellular processes including respiration, ion homeostasis, and apoptosis. Using computational predictions combined with traditional quantitative experiments, we have identified 100 proteins whose deficiency alters mitochondrial biogenesis and inheritance in Saccharomyces cerevisiae. In addition, we used computational predictions to perform targeted double-mutant analysis detecting another nine genes with synthetic defects in mitochondrial biogenesis. This represents an increase of about 25% over previously known participants. Nearly half of these newly characterized proteins are conserved in mammals, including several orthologs known to be involved in human disease. Mutations in many of these genes demonstrate statistically significant mitochondrial transmission phenotypes more subtle than could be detected by traditional genetic screens or high-throughput techniques, and 47 have not been previously localized to mitochondria. We further characterized a subset of these genes using growth profiling and dual immunofluorescence, which identified genes specifically required for aerobic respiration and an uncharacterized cytoplasmic protein required for normal mitochondrial motility. Our results demonstrate that by leveraging computational analysis to direct quantitative experimental assays, we have characterized mutants with subtle mitochondrial defects whose phenotypes were undetected by high-throughput methods.

  18. Modeling the Effect of Polychromatic Light in Quantitative Absorbance Spectroscopy

    Science.gov (United States)

    Smith, Rachel; Cantrell, Kevin

    2007-01-01

    A laboratory experiment is described that gives students practical experience with the principles of electronic absorbance spectroscopy. This straightforward approach creates a powerful tool for exploring many aspects of quantitative absorbance spectroscopy.
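    A sketch of the effect presumably being modeled: with a two-wavelength ("polychromatic") beam, the measured absorbance computed from the summed transmitted intensities bends away from the Beer-Lambert line at high concentration. The molar absorptivities and concentrations are arbitrary example values, and the article's own model may differ in detail.

```python
# Two-wavelength illustration of polychromatic deviation from Beer-Lambert behavior.
import numpy as np

eps1, eps2 = 1.0e4, 0.5e4          # molar absorptivities (L mol^-1 cm^-1) at the two wavelengths
b = 1.0                            # path length (cm)
c = np.linspace(0, 2e-4, 9)        # concentrations (mol/L)

T = 0.5 * (10.0 ** (-eps1 * b * c) + 10.0 ** (-eps2 * b * c))   # total transmittance
A_measured = -np.log10(T)
A_ideal = 0.5 * (eps1 + eps2) * b * c    # what a monochromatic beam at the mean absorptivity gives

for ci, am, ai in zip(c, A_measured, A_ideal):
    print(f"c={ci:.1e}  A_measured={am:.3f}  A_ideal={ai:.3f}")
```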

  19. Physics of Hard Spheres Experiment: Significant and Quantitative Findings Made

    Science.gov (United States)

    Doherty, Michael P.

    2000-01-01

    Direct examination of atomic interactions is difficult. One powerful approach to visualizing atomic interactions is to study near-index-matched colloidal dispersions of microscopic plastic spheres, which can be probed by visible light. Such spheres interact through hydrodynamic and Brownian forces, but they feel no direct force before an infinite repulsion at contact. Through the microgravity flight of the Physics of Hard Spheres Experiment (PHaSE), researchers have sought a more complete understanding of the entropically driven disorder-order transition in hard-sphere colloidal dispersions. The experiment was conceived by Professors Paul M. Chaikin and William B. Russel of Princeton University. Microgravity was required because, on Earth, index-matched colloidal dispersions often cannot be density matched, resulting in significant settling over the crystallization period. This settling makes them a poor model of the equilibrium atomic system, where the effect of gravity is truly negligible. For this purpose, a customized light-scattering instrument was designed, built, and flown by the NASA Glenn Research Center at Lewis Field on the space shuttle (shuttle missions STS 83 and STS 94). This instrument performed both static and dynamic light scattering, with sample oscillation for determining rheological properties. Scattered light from a 532-nm laser was recorded either by a 10-bit charge-coupled device (CCD) camera from a concentric screen covering angles of 0° to 60° or by sensitive avalanche photodiode detectors, which convert the photons into binary data from which two correlators compute autocorrelation functions. The sample cell was driven by a direct-current servomotor to allow sinusoidal oscillation for the measurement of rheological properties. Significant microgravity research findings include the observation of beautiful dendritic crystals, the crystallization of a "glassy phase" sample in microgravity that did not crystallize for over 1 year in 1g

  20. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, the environmental magnetic field, and measurement noise make MMM data dispersive and complicate quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along longitudinal and horizontal lines with a TSC-2M-8 instrument during tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. K_vs is therefore a suitable MMM parameter for establishing a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented, for the first time, based on the improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with decreasing residual life ratio T, and the maximal error between the predicted reliability degree R1 and the verified reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
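    A minimal sketch of the stress-strength interference idea underlying such a reliability model: with a Gaussian "stress" (here standing in for a damage indicator such as K_vs) and a Gaussian "strength", the reliability degree is R = P(strength > stress). The parameter values are placeholders, not the calibrated welded-joint values.

```python
# Gaussian stress-strength interference: R = Phi((mu_strength - mu_stress) / sqrt(sd^2 + sd^2)).
from math import sqrt
from scipy.stats import norm

def reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    beta = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return norm.cdf(beta)

# Reliability falls as the stress-like indicator grows toward the strength limit.
for mu_stress in (40.0, 60.0, 80.0):
    print(mu_stress, round(reliability(100.0, 10.0, mu_stress, 15.0), 4))
```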

  1. Power analysis of artificial selection experiments using efficient whole genome simulation of quantitative traits.

    Science.gov (United States)

    Kessner, Darren; Novembre, John

    2015-04-01

    Evolve and resequence studies combine artificial selection experiments with massively parallel sequencing technology to study the genetic basis for complex traits. In these experiments, individuals are selected for extreme values of a trait, causing alleles at quantitative trait loci (QTL) to increase or decrease in frequency in the experimental population. We present a new analysis of the power of artificial selection experiments to detect and localize quantitative trait loci. This analysis uses a simulation framework that explicitly models whole genomes of individuals, quantitative traits, and selection based on individual trait values. We find that explicitly modeling QTL provides qualitatively different insights than considering independent loci with constant selection coefficients. Specifically, we observe how interference between QTL under selection affects the trajectories and lengthens the fixation times of selected alleles. We also show that a substantial portion of the genetic variance of the trait (50-100%) can be explained by detected QTL in as little as 20 generations of selection, depending on the trait architecture and experimental design. Furthermore, we show that power depends crucially on the opportunity for recombination during the experiment. Finally, we show that an increase in power is obtained by leveraging founder haplotype information to obtain allele frequency estimates.
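    A compact forward simulation in the spirit of the framework described above (additive QTL, truncation selection, allele-frequency tracking). Unlike the study's whole-genome simulation it ignores linkage and haplotype structure, and the population size, QTL number and noise level are illustrative only.

```python
# Toy evolve-and-resequence forward simulation with truncation selection of the top 20%.
import numpy as np

rng = np.random.default_rng(42)
N, N_QTL, GENERATIONS, SELECTED = 500, 10, 20, 0.2
effects = rng.exponential(1.0, N_QTL)                      # additive effect sizes
geno = rng.binomial(2, 0.5, size=(N, N_QTL))               # diploid genotypes (0/1/2)

for gen in range(GENERATIONS):
    trait = geno @ effects + rng.normal(0, effects.sum() * 0.5, N)   # genetic value + noise
    keep = np.argsort(trait)[-int(SELECTED * N):]                    # truncation selection
    parents = geno[keep]
    # random mating with free recombination: one allele per locus from each of two parents
    moms = parents[rng.integers(parents.shape[0], size=N)]
    dads = parents[rng.integers(parents.shape[0], size=N)]
    geno = rng.binomial(1, moms / 2.0) + rng.binomial(1, dads / 2.0)

print("final QTL allele frequencies:", np.round(geno.mean(axis=0) / 2.0, 2))
```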

  2. Toward quantitative modeling of silicon phononic thermocrystals

    Energy Technology Data Exchange (ETDEWEB)

    Lacatena, V. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France); IEMN UMR CNRS 8520, Institut d'Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d'Ascq (France); Haras, M.; Robillard, J.-F., E-mail: jean-francois.robillard@isen.iemn.univ-lille1.fr; Dubois, E. [IEMN UMR CNRS 8520, Institut d'Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d'Ascq (France); Monfray, S.; Skotnicki, T. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France)

    2015-03-16

    The wealth of technological patterning technologies of deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of 'thermocrystals' or 'nanophononic crystals' that introduce regular nano-scale inclusions using a pitch scale in between the thermal phonons mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced down to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known “electron crystal-phonon glass” dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution from bulk, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After having discussed the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.
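    For reference, the Green-Kubo expression used to extract the lattice thermal conductivity from equilibrium molecular dynamics (isotropic form), with J the microscopic heat flux, V the system volume and the angle brackets an equilibrium ensemble average:

```latex
\kappa \;=\; \frac{1}{3\,V\,k_{\mathrm{B}}\,T^{2}} \int_{0}^{\infty}
  \left\langle \mathbf{J}(0)\cdot\mathbf{J}(t) \right\rangle \, dt
```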

  3. The life review experience: Qualitative and quantitative characteristics.

    Science.gov (United States)

    Katz, Judith; Saadon-Grosman, Noam; Arzy, Shahar

    2017-02-01

    The life-review experience (LRE) is a most intriguing mental phenomenon that has fascinated humans from time immemorial. In LRE one vividly sees a succession of one's own life events. While reports of LRE are abundant in the medical, psychological and popular literature, not much is known about LRE's cognitive and psychological basis. Moreover, while LRE is known as part of the phenomenology of near-death experience, its manifestation in the general population and in other circumstances is still to be investigated. In a first step we studied the phenomenology of LRE by means of in-depth qualitative interviews of 7 people who underwent a full LRE. In a second step we extracted the main characteristics of LRE to develop a questionnaire and an LRE score that best reflects LRE phenomenology. This questionnaire was then administered to 264 participants of diverse ages and backgrounds, and the resulting score was further subjected to statistical analyses. Qualitative analysis showed the LRE to manifest several subtypes of characteristics in terms of order, continuity, the covered period, extension to the future, valence, emotions, and perspective taking. Quantitative results in the normal population showed a normal distribution of the LRE score over participants. Re-experiencing one's own life events, so-called LRE, is a phenomenon with well-defined characteristics, and its subcomponents may also be evident in healthy people. This suggests that a representation of life events as a continuum exists in the cognitive system, and may be further expressed in extreme conditions of psychological and physiological stress. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Adaptive microfluidic gradient generator for quantitative chemotaxis experiments

    Science.gov (United States)

    Anielski, Alexander; Pfannes, Eva K. B.; Beta, Carsten

    2017-03-01

    Chemotactic motion in a chemical gradient is an essential cellular function that controls many processes in the living world. For a better understanding and more detailed modelling of the underlying mechanisms of chemotaxis, quantitative investigations in controlled environments are needed. We developed a setup that allows us to separately address the dependencies of the chemotactic motion on the average background concentration and on the gradient steepness of the chemoattractant. In particular, both the background concentration and the gradient steepness can be kept constant at the position of the cell while it moves along in the gradient direction. This is achieved by generating a well-defined chemoattractant gradient using flow photolysis. In this approach, the chemoattractant is released by a light-induced reaction from a caged precursor in a microfluidic flow chamber upstream of the cell. The flow photolysis approach is combined with an automated real-time cell tracker that determines changes in the cell position and triggers movement of the microscope stage such that the cell motion is compensated and the cell remains at the same position in the gradient profile. The gradient profile can be either determined experimentally using a caged fluorescent dye or may be alternatively determined by numerical solutions of the corresponding physical model. To demonstrate the function of this adaptive microfluidic gradient generator, we compare the chemotactic motion of Dictyostelium discoideum cells in a static gradient and in a gradient that adapts to the position of the moving cell.

  5. Modeling conflict: research methods, quantitative modeling, and lessons learned.

    Energy Technology Data Exchange (ETDEWEB)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be for inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  6. Qualitative vs. quantitative software process simulation modelling: conversion and comparison

    OpenAIRE

    Zhang, He; Kitchenham, Barbara; Jeffery, Ross

    2009-01-01

    peer-reviewed Software Process Simulation Modeling (SPSM) research has increased in the past two decades. However, most of these models are quantitative, requiring detailed understanding and accurate measurement. Continuing our previous studies on qualitative modeling of software processes, this paper aims to investigate the structural equivalence and model conversion between quantitative and qualitative process modeling, and to compare the characteristics and performance o...

  7. Quantitative modelling of the biomechanics of the avian syrinx

    NARCIS (Netherlands)

    Elemans, C.P.H.; Larsen, O.N.; Hoffmann, M.R.; Leeuwen, van J.L.

    2003-01-01

    We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e. the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts

  8. Quantitative modelling of the biomechanics of the avian syrinx

    DEFF Research Database (Denmark)

    Elemans, Coen P. H.; Larsen, Ole Næsbye; Hoffmann, Marc R.

    2003-01-01

    We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e. the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts...

  9. Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization

    Directory of Open Access Journals (Sweden)

    Xuefeng Yan

    2013-01-01

    Full Text Available The simulation and optimization of an actual physical system are usually constructed from stochastic models, which inherently have both qualitative and quantitative characteristics. Most modeling specifications and frameworks find it difficult to describe the qualitative model directly. In order to deal with expert knowledge, uncertain reasoning, and other qualitative information, a combined qualitative and quantitative modeling specification was proposed based on a hierarchical model structure framework. The new modeling approach is based on a hierarchical model structure which includes the meta-meta model, the meta-model and the high-level model. A description logic system is defined for the formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The results show that the proposed method can describe the complex system more comprehensively, and that the survival probability of the target is higher when qualitative models are introduced into the quantitative simulation.

  10. Quantitative numerical analysis of transient IR-experiments on buildings

    Science.gov (United States)

    Maierhofer, Ch.; Wiggenhauser, H.; Brink, A.; Röllig, M.

    2004-12-01

    Impulse-thermography has been established as a fast and reliable tool in many areas of non-destructive testing. In recent years several investigations have been done to apply active thermography to civil engineering. For quantitative investigations in this area of application, finite difference calculations have been performed for systematic studies on the influence of environmental conditions, heating power and time, defect depth and size and thermal properties of the bulk material (concrete). The comparison of simulated and experimental data enables the quantitative analysis of defects.
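
    The kind of finite difference calculation referred to above can be sketched in one dimension: transient heating of a concrete slab containing a thin air-filled defect, with the surface temperature history recorded for later comparison with thermograms. All material values, geometry and heating parameters below are illustrative assumptions, not the settings used in the study.

```python
import numpy as np

L, nx = 0.10, 101                    # slab thickness (m), grid points
dx = L / (nx - 1)
x = np.arange(nx) * dx
k = np.full(nx, 2.1)                 # concrete conductivity (W/m/K), assumed
rho_c = np.full(nx, 2400.0 * 880.0)  # volumetric heat capacity (J/m^3/K), assumed
defect = (x > 0.030) & (x < 0.035)   # thin air-filled void
k[defect], rho_c[defect] = 0.026, 1.2 * 1005.0

dt = 0.4 * dx**2 / (k / rho_c).max() # explicit stability limit
T = np.full(nx, 20.0)                # initial temperature (deg C)
q_heat, t_heat = 2000.0, 600.0       # surface heat flux (W/m^2), heating time (s)

t, surface = 0.0, []
while t < 1800.0:                    # heating phase followed by cool-down
    Tn = T.copy()
    # simple (not interface-corrected) conduction update
    Tn[1:-1] = T[1:-1] + dt * k[1:-1] / (rho_c[1:-1] * dx**2) * (
        T[2:] - 2.0 * T[1:-1] + T[:-2])
    flux = q_heat if t < t_heat else 0.0
    Tn[0] = Tn[1] + flux * dx / k[0] # heated front face (flux boundary)
    Tn[-1] = 20.0                    # rear face held at ambient
    T, t = Tn, t + dt
    surface.append((t, T[0]))        # surface temperature history above the defect
```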

  11. Quantitative Experiments to Explain the Change of Seasons

    Science.gov (United States)

    Testa, Italo; Busarello, Gianni; Puddu, Emanuella; Leccia, Silvio; Merluzzi, Paola; Colantonio, Arturo; Moretti, Maria Ida; Galano, Silvia; Zappia, Alessandro

    2015-01-01

    The science education literature shows that students have difficulty understanding what causes the seasons. Incorrect explanations are often due to a lack of knowledge about the physical mechanisms underlying this phenomenon. To address this, we present a module in which the students engage in quantitative measurements with a photovoltaic panel to…

  12. Modelling Urban Experiences

    DEFF Research Database (Denmark)

    Jantzen, Christian; Vetner, Mikael

    2008-01-01

    How can urban designers develop an emotionally satisfying environment not only for today's users but also for coming generations? Which devices can they use to elicit interesting and relevant urban experiences? This paper attempts to answer these questions by analyzing the design of Zuidas, a new...

  13. Extension of nano-confined DNA: quantitative comparison between experiment and theory

    CERN Document Server

    Iarko, V; Nyberg, L K; Müller, V; Fritzsche, J; Ambjörnsson, T; Beech, J P; Tegenfeldt, J O; Mehlig, K; Westerlund, F; Mehlig, B

    2015-01-01

    The extension of DNA confined to nanochannels has been studied intensively and in detail. Yet quantitative comparisons between experiments and model calculations are difficult because most theoretical predictions involve undetermined prefactors, and because the model parameters (contour length, Kuhn length, and effective width) are difficult to compute reliably, leading to substantial uncertainties. Here we use a recent asymptotically exact theory for the DNA extension in the "extended de Gennes regime" that allows the model parameters to be determined by comparing experimental results with theory. We obtained new experimental results for this purpose, for the mean DNA extension and its standard deviation, varying the channel geometry, dye intercalation ratio, and ionic buffer strength. The experimental results agree very well with theory at high ionic strengths, indicating that the model parameters are reliable. At low ionic strengths the agreement is less good, and we discuss possible reasons. Our approach allows...

  14. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  15. Undergraduate surgical nursing preparation and guided operating room experience: A quantitative analysis.

    Science.gov (United States)

    Foran, Paula

    2016-01-01

    The aim of this research was to determine whether guided operating theatre experience in the undergraduate nursing curriculum enhanced surgical knowledge and understanding of nursing care provided outside this specialist area in the pre- and post-operative surgical wards. Using quantitative analyses, undergraduate nurses were tested on their knowledge of pre- and post-operative surgical nursing in their final semester of study. As much learning occurs in nurses' first year of practice, participants were re-tested after their Graduate Nurse Program/Preceptorship year. Participants' results were compared to the model of operating room education they had participated in to determine if there was a relationship between the type of theatre education they experienced (if any) and their knowledge of surgical ward nursing. Findings revealed undergraduate nurses receiving guided operating theatre experience had a 76% pass rate compared to 56% with non-guided or no experience (p < 0.001). Graduates with guided operating theatre experience as undergraduates or graduate nurses achieved a 100% pass rate compared to 53% with non-guided or no experience (p < 0.001). The research informs us that undergraduate nurses achieve greater learning about surgical ward nursing via guided operating room experience as opposed to surgical ward nursing experience alone.

  16. The Design of a Quantitative Western Blot Experiment

    Directory of Open Access Journals (Sweden)

    Sean C. Taylor

    2014-01-01

    Full Text Available Western blotting is a technique that has been in practice for more than three decades that began as a means of detecting a protein target in a complex sample. Although there have been significant advances in both the imaging and reagent technologies to improve sensitivity, dynamic range of detection, and the applicability of multiplexed target detection, the basic technique has remained essentially unchanged. In the past, western blotting was used simply to detect a specific target protein in a complex mixture, but now journal editors and reviewers are requesting the quantitative interpretation of western blot data in terms of fold changes in protein expression between samples. The calculations are based on the differential densitometry of the associated chemiluminescent and/or fluorescent signals from the blots and this now requires a fundamental shift in the experimental methodology, acquisition, and interpretation of the data. We have recently published an updated approach to produce quantitative densitometric data from western blots (Taylor et al., 2013 and here we summarize the complete western blot workflow with a focus on sample preparation and data analysis for quantitative western blotting.

  17. The design of a quantitative western blot experiment.

    Science.gov (United States)

    Taylor, Sean C; Posch, Anton

    2014-01-01

    Western blotting is a technique that has been in practice for more than three decades that began as a means of detecting a protein target in a complex sample. Although there have been significant advances in both the imaging and reagent technologies to improve sensitivity, dynamic range of detection, and the applicability of multiplexed target detection, the basic technique has remained essentially unchanged. In the past, western blotting was used simply to detect a specific target protein in a complex mixture, but now journal editors and reviewers are requesting the quantitative interpretation of western blot data in terms of fold changes in protein expression between samples. The calculations are based on the differential densitometry of the associated chemiluminescent and/or fluorescent signals from the blots and this now requires a fundamental shift in the experimental methodology, acquisition, and interpretation of the data. We have recently published an updated approach to produce quantitative densitometric data from western blots (Taylor et al., 2013) and here we summarize the complete western blot workflow with a focus on sample preparation and data analysis for quantitative western blotting.
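
    The densitometric arithmetic behind such a quantitative analysis can be summarised in a few lines: each target-band signal is normalised to a loading reference (for example a total-protein stain) and expressed as a fold change relative to the mean of the control lanes. The numbers below are invented for illustration and do not come from the cited work.

```python
import numpy as np

# Hypothetical band densitometry (arbitrary units) from an imaged blot
target  = np.array([15200., 14800., 31500., 30100.])  # two control, two treated lanes
loading = np.array([52000., 49500., 50800., 51900.])  # total-protein (loading) signal
groups  = np.array(["ctrl", "ctrl", "treat", "treat"])

normalized = target / loading                          # correct for unequal loading
fold_change = normalized / normalized[groups == "ctrl"].mean()

for g, fc in zip(groups, fold_change):
    print(f"{g}: {fc:.2f}-fold relative to control")
```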

  18. Quantitative models for sustainable supply chain management

    DEFF Research Database (Denmark)

    Brandenburg, M.; Govindan, Kannan; Sarkis, J.

    2014-01-01

    Sustainability, the consideration of environmental factors and social aspects, in supply chain management (SCM) has become a highly relevant topic for researchers and practitioners. The application of operations research methods and related models, i.e. formal modeling, for closed-loop SCM...... and reverse logistics has been effectively reviewed in previously published research. This situation is in contrast to the understanding and review of mathematical models that focus on environmental or social factors in forward supply chains (SC), which has seen less investigation. To evaluate developments...

  19. Quantitative magnetospheric models: results and perspectives.

    Science.gov (United States)

    Kuznetsova, M.; Hesse, M.; Gombosi, T.; Csem Team

    Global magnetospheric models are an indispensable tool that allows multi-point measurements to be put into a global context. Significant progress has been achieved in global MHD modeling of magnetosphere structure and dynamics. Medium-resolution simulations confirm the general topological picture suggested by Dungey. State-of-the-art global models with adaptive grids allow simulations with a highly resolved magnetopause and magnetotail current sheet. Advanced high-resolution models are capable of reproducing transient phenomena such as FTEs associated with the formation of flux ropes or plasma bubbles embedded in the magnetopause, and demonstrate the generation of vortices at the magnetospheric flanks. On the other hand, there is still controversy about the global state of the magnetosphere predicted by MHD models, to the point of questioning the length of the magnetotail and the location of the reconnection sites within it. For example, for steady southward IMF driving conditions, resistive MHD simulations produce a steady configuration with an almost stationary near-earth neutral line, while there is plenty of observational evidence of a periodic loading-unloading cycle during long periods of southward IMF. Successes and challenges in global modeling of magnetospheric dynamics will be addressed. One of the major challenges is to quantify the interaction between large-scale global magnetospheric dynamics and microphysical processes in diffusion regions near reconnection sites. Possible solutions to the controversies will be discussed.

  20. The probability of evolutionary rescue: towards a quantitative comparison between theory and evolution experiments.

    Science.gov (United States)

    Martin, Guillaume; Aguilée, Robin; Ramsayer, Johan; Kaltz, Oliver; Ronce, Ophélie

    2013-01-19

    Evolutionary rescue occurs when a population genetically adapts to a new stressful environment that would otherwise cause its extinction. Forecasting the probability of persistence under stress, including emergence of drug resistance as a special case of interest, requires experimentally validated quantitative predictions. Here, we propose general analytical predictions, based on diffusion approximations, for the probability of evolutionary rescue. We assume a narrow genetic basis for adaptation to stress, as is often the case for drug resistance. First, we extend the rescue model of Orr & Unckless (Am. Nat. 2008 172, 160-169) to a broader demographic and genetic context, allowing the model to apply to empirical systems with variation among mutation effects on demography, overlapping generations and bottlenecks, all common features of microbial populations. Second, we confront our predictions of rescue probability with two datasets from experiments with Saccharomyces cerevisiae (yeast) and Pseudomonas fluorescens (bacterium). The tests show the qualitative agreement between the model and observed patterns, and illustrate how biologically relevant quantities, such as the per capita rate of rescue, can be estimated from fits of empirical data. Finally, we use the results of the model to suggest further, more quantitative, tests of evolutionary rescue theory.
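
    A toy Monte Carlo version of the rescue scenario (not the diffusion approximation derived in the paper) can help fix ideas: a wild-type population declines geometrically under stress while rescue mutants with growth rate above one arise by mutation, and the fraction of replicate runs that escape extinction estimates the rescue probability. All rates below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rescued(n0=1000, decline=0.9, mut_rate=1e-4, mut_growth=1.1, generations=100):
    """Toy branching-process rescue run: returns True if mutants persist
    (or become clearly established) before the whole population dies out."""
    wild, mut = n0, 0
    for _ in range(generations):
        wild = rng.poisson(wild * decline)            # declining wild type
        mut = rng.poisson(mut * mut_growth) + rng.poisson(wild * mut_rate)
        if wild + mut == 0:
            return False                              # extinction, no rescue
        if mut > 10 * n0:
            return True                               # mutants clearly established
    return mut > 0

p_rescue = np.mean([rescued() for _ in range(2000)])
print(f"estimated rescue probability ~ {p_rescue:.3f}")
```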

  1. Hazard Response Modeling Uncertainty (A Quantitative Method)

    Science.gov (United States)

    1988-10-01

    C_p is the concentration predicted by some component or model. The variance of C_o/C_p is calculated and defined as var(Model I), where Model I could be ...

  2. The quantitative modelling of human spatial habitability

    Science.gov (United States)

    Wise, James A.

    1988-01-01

    A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to any size or shape interior, at any scale of consideration, from the Space Station as a whole to an individual enclosure or work station. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the U. of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life support concerns.

  3. A Qualitative and Quantitative Evaluation of 8 Clear Sky Models.

    Science.gov (United States)

    Bruneton, Eric

    2016-10-27

    We provide a qualitative and quantitative evaluation of 8 clear sky models used in Computer Graphics. We compare the models with each other as well as with measurements and with a reference model from the physics community. After a short summary of the physics of the problem, we present the measurements and the reference model, and how we "invert" it to get the model parameters. We then give an overview of each CG model, and detail its scope, its algorithmic complexity, and its results using the same parameters as in the reference model. We also compare the models with a perceptual study. Our quantitative results confirm that the fewer simplifications and approximations used to solve the physical equations, the more accurate the results. We conclude with a discussion of the advantages and drawbacks of each model, and how to further improve their accuracy.

  4. An Inside View: The Utility of Quantitative Observation in Understanding College Educational Experiences

    Science.gov (United States)

    Campbell, Corbin M.

    2017-01-01

    This article describes quantitative observation as a method for understanding college educational experiences. Quantitative observation has been used widely in several fields and in K-12 education, but has had limited application to research in higher education and student affairs to date. The article describes the central tenets of quantitative…

  5. Quantitative comparisons of analogue models of brittle wedge dynamics

    Science.gov (United States)

    Schreurs, Guido

    2010-05-01

    Analogue model experiments are widely used to gain insights into the evolution of geological structures. In this study, we present a direct comparison of experimental results of 14 analogue modelling laboratories using prescribed set-ups. A quantitative analysis of the results will document the variability among models and will allow an appraisal of reproducibility and limits of interpretation. This has direct implications for comparisons between structures in analogue models and natural field examples. All laboratories used the same frictional analogue materials (quartz and corundum sand) and prescribed model-building techniques (sieving and levelling). Although each laboratory used its own experimental apparatus, the same type of self-adhesive foil was used to cover the base and all the walls of the experimental apparatus in order to guarantee identical boundary conditions (i.e. identical shear stresses at the base and walls). Three experimental set-ups using only brittle frictional materials were examined. In each of the three set-ups the model was shortened by a vertical wall, which moved with respect to the fixed base and the three remaining sidewalls. The minimum width of the model (dimension parallel to mobile wall) was also prescribed. In the first experimental set-up, a quartz sand wedge with a surface slope of ˜20° was pushed by a mobile wall. All models conformed to the critical taper theory, maintained a stable surface slope and did not show internal deformation. In the next two experimental set-ups, a horizontal sand pack consisting of alternating quartz sand and corundum sand layers was shortened from one side by the mobile wall. In one of the set-ups a thin rigid sheet covered part of the model base and was attached to the mobile wall (i.e. a basal velocity discontinuity distant from the mobile wall). In the other set-up a basal rigid sheet was absent and the basal velocity discontinuity was located at the mobile wall. In both types of experiments

  6. A Solved Model to Show Insufficiency of Quantitative Adiabatic Condition

    Institute of Scientific and Technical Information of China (English)

    LIU Long-Jiang; LIU Yu-Zhen; TONG Dian-Min

    2009-01-01

    The adiabatic theorem is a useful tool for treating slowly evolving quantum systems, but its practical application depends on the quantitative condition expressed in terms of the Hamiltonian's eigenvalues and eigenstates, which is usually taken as a sufficient condition. Recently, the sufficiency of the condition was questioned, and several counterexamples have been reported. Here we present a new solved model to show the insufficiency of the traditional quantitative adiabatic condition.

  7. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

    Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  8. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...

  9. Modeling quantitative phase image formation under tilted illuminations.

    Science.gov (United States)

    Bon, Pierre; Wattellier, Benoit; Monneret, Serge

    2012-05-15

    A generalized product-of-convolution model for simulation of quantitative phase microscopy of thick heterogeneous specimen under tilted plane-wave illumination is presented. Actual simulations are checked against a much more time-consuming commercial finite-difference time-domain method. Then modeled data are compared with experimental measurements that were made with a quadriwave lateral shearing interferometer.

  10. How to teach friction: Experiments and models

    Science.gov (United States)

    Besson, Ugo; Borghi, Lidia; De Ambrosis, Anna; Mascheretti, Paolo

    2007-12-01

    Students generally have difficulty understanding friction and its associated phenomena. High school and introductory college-level physics courses usually do not give the topic the attention it deserves. We have designed a sequence for teaching about friction between solids based on a didactic reconstruction of the relevant physics, as well as research findings about student conceptions. The sequence begins with demonstrations that illustrate different types of friction. Experiments are subsequently performed to motivate students to obtain quantitative relations in the form of phenomenological laws. To help students understand the mechanisms producing friction, models illustrating the processes taking place on the surface of bodies in contact are proposed.

  11. Generalized PSF modeling for optimized quantitation in PET imaging

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-ud-Din, Hassan; Karakatsanis, Nicolas A.; Jha, Abhinav K.; Casey, Michael E.; Kadrmas, Dan J.; Rahmim, Arman

    2017-06-01

    Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF
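
    The resolution-degradation side of the problem is easy to illustrate: blurring a small hot lesion with Gaussian kernels of increasing width lowers its contrast recovery coefficient, which is what PSF modelling in the reconstruction attempts to compensate. The phantom, kernel widths and the Gaussian PSF shape are simplifying assumptions, not the analytical PSF or the OS-EM reconstruction used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative 2D phantom: uniform background with one small hot "tumour"
img = np.ones((128, 128))
yy, xx = np.mgrid[:128, :128]
tumour = (yy - 64) ** 2 + (xx - 64) ** 2 <= 4 ** 2
img[tumour] = 4.0                               # true tumour-to-background ratio = 4

def contrast_recovery(image, true_ratio=4.0):
    """CRC = (measured contrast - 1) / (true contrast - 1) for the hot lesion."""
    measured = image[tumour].mean() / image[~tumour].mean()
    return (measured - 1.0) / (true_ratio - 1.0)

for sigma in (1.0, 2.0, 3.0):                   # PSF widths in pixels (assumed)
    crc = contrast_recovery(gaussian_filter(img, sigma))
    print(f"PSF sigma = {sigma:.0f} px -> CRC = {crc:.2f}")
```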

  12. Refining the quantitative pathway of the Pathways to Mathematics model.

    Science.gov (United States)

    Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda

    2015-03-01

    In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task. Copyright © 2014 Elsevier Inc. All rights reserved.
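
    The analysis strategy described above (collapsing several quantitative measures into one component, then regressing an outcome on the three pathways plus control variables) can be sketched as follows. The variable names are taken from the abstract, but the data here are random placeholders, so the output is only a template for the real analysis.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 141                                        # sample size reported in the abstract
df = pd.DataFrame({
    "subitizing": rng.normal(size=n),          # placeholder z-scored measures
    "counting": rng.normal(size=n),
    "magnitude_comparison": rng.normal(size=n),
    "linguistic": rng.normal(size=n),
    "working_memory": rng.normal(size=n),
    "processing_speed": rng.normal(size=n),
    "arithmetic_fluency": rng.normal(size=n),  # one of the numerical outcomes
})

# Combine the three quantitative measures into a single pathway score
quant = PCA(n_components=1).fit_transform(
    df[["subitizing", "counting", "magnitude_comparison"]])
df["quantitative"] = quant[:, 0]

# Regress the outcome on the three pathways while controlling for processing speed
X = sm.add_constant(df[["quantitative", "linguistic",
                        "working_memory", "processing_speed"]])
print(sm.OLS(df["arithmetic_fluency"], X).fit().summary())
```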

  13. Quantitative observation of light flash sensations experiment MA-106

    Science.gov (United States)

    Budinger, T. F.; Tobias, C. A.; Schopper, E.; Schott, J. U.; Huesman, R. H.; Upham, F. T.; Wieskamp, T. F.; Kucala, J. M.; Goulding, F. S.; Landis, D. A.

    1976-01-01

    Light flashes caused by the interaction of cosmic particles with the visual apparatus have been observed by astronauts on all space missions since Apollo 11. This Apollo Soyuz Test Project experiment compared measurements of the observer's visual sensitivity with measurements of the ambient radiation environment and with the frequency and character of the flashes observed. The data obtained reveal a latitude dependence of the frequency of observed flashes. This distribution of flashes is correlated with the distribution of cosmic particles with stopping power greater than 15 keV/ micrometers in the eye. The interaction of dark adaptation, specific ionization, and range of particles in the retina as factors in the visualization of particle passage is discussed.

  14. Quantitative analysis of experiments on bacterial chemotaxis to naphthalene.

    Science.gov (United States)

    Pedit, Joseph A; Marx, Randall B; Miller, Cass T; Aitken, Michael D

    2002-06-20

    A mathematical model was developed to quantify chemotaxis to naphthalene by Pseudomonas putida G7 (PpG7) and its influence on naphthalene degradation. The model was first used to estimate the three transport parameters (coefficients for naphthalene diffusion, random motility, and chemotactic sensitivity) by fitting it to experimental data on naphthalene removal from a discrete source in an aqueous system. The best-fit value of naphthalene diffusivity was close to the value estimated from molecular properties with the Wilke-Chang equation. Simulations applied to a non-chemotactic mutant strain only fit the experimental data well if random motility was negligible, suggesting that motility may be lost rapidly in the absence of substrate or that gravity may influence net random motion in a vertically oriented experimental system. For the chemotactic wild-type strain, random motility and gravity were predicted to have a negligible impact on naphthalene removal relative to the impact of chemotaxis. Based on simulations using the best-fit value of the chemotactic sensitivity coefficient, initial cell concentrations for a non-chemotactic strain would have to be several orders of magnitude higher than for a chemotactic strain to achieve similar rates of naphthalene removal under the experimental conditions we evaluated. The model was also applied to an experimental system representing an adaptation of the conventional capillary assay to evaluate chemotaxis in porous media. Our analysis suggests that it may be possible to quantify chemotaxis in porous media systems by simply adjusting the model's transport parameters to account for tortuosity, as has been suggested by others.
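
    A minimal one-dimensional sketch of the transport terms fitted in such a model is shown below: bacteria move with a flux composed of random motility plus chemotactic drift up a fixed attractant gradient, J = -mu*db/dx + chi*b*da/dx. The coefficients, attractant profile and discretisation are illustrative assumptions, and the sketch omits the receptor-saturation form of the sensitivity and the attractant consumption treated in the actual study.

```python
import numpy as np

nx, L = 200, 0.02                       # grid points, domain length (m)
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
mu, chi = 2.0e-10, 1.0e-8               # motility, chemotactic sensitivity (assumed)
a = np.exp(-x / 0.005)                  # fixed, normalised attractant profile
da = np.gradient(a, dx)

b = np.ones(nx)                         # initially uniform bacterial density
dt = 0.2 * dx**2 / mu                   # stable step for the explicit diffusion term
for _ in range(5000):
    J = -mu * np.gradient(b, dx) + chi * b * da   # random motility + chemotaxis
    J[0] = J[-1] = 0.0                            # no-flux boundaries
    b -= dt * np.gradient(J, dx)

print("density near the source vs far end:", b[:3].round(2), b[-3:].round(2))
```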

  15. PETN ignition experiments and models.

    Science.gov (United States)

    Hobbs, Michael L; Wente, William B; Kaneshige, Michael J

    2010-04-29

    Ignition experiments from various sources, including our own laboratory, have been used to develop a simple ignition model for pentaerythritol tetranitrate (PETN). The experiments consist of differential thermal analysis, thermogravimetric analysis, differential scanning calorimetry, beaker tests, one-dimensional time to explosion tests, Sandia's instrumented thermal ignition tests (SITI), and thermal ignition of nonelectrical detonators. The model developed using this data consists of a one-step, first-order, pressure-independent mechanism used to predict pressure, temperature, and time to ignition for various configurations. The model was used to assess the state of the degraded PETN at the onset of ignition. We propose that cookoff violence for PETN can be correlated with the extent of reaction at the onset of ignition. This hypothesis was tested by evaluating metal deformation produced from detonators encased in copper as well as comparing postignition photos of the SITI experiments.
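
    The kind of one-step, first-order ignition model described here can be written down in a few lines as a lumped energy balance with Arrhenius kinetics. The rate constants, heat of reaction, heat-exchange coefficient and oven temperature below are illustrative values only, not the calibrated PETN parameters of the paper; the point of the sketch is that it yields both a time to ignition and the extent of reaction at ignition.

```python
import numpy as np
from scipy.integrate import solve_ivp

A, Ea, R = 6.3e19, 1.9e5, 8.314   # Arrhenius prefactor (1/s), activation energy (J/mol)
Q, cp = 1.26e6, 1.0e3             # heat of reaction (J/kg), specific heat (J/kg/K)
h, T_env = 0.05, 430.0            # lumped exchange coefficient (1/s), oven temp (K)

def rhs(t, y):
    T, alpha = y
    rate = A * np.exp(-Ea / (R * T)) * (1.0 - alpha)   # first-order decomposition
    dT = Q * rate / cp + h * (T_env - T)               # self-heating + heat exchange
    return [dT, rate]

def runaway(t, y):                                     # ignition = thermal runaway
    return y[0] - (T_env + 200.0)
runaway.terminal = True

sol = solve_ivp(rhs, [0.0, 36000.0], [300.0, 0.0],
                events=runaway, method="LSODA", rtol=1e-8, atol=1e-10)
if sol.t_events[0].size:
    print(f"time to ignition ~ {sol.t_events[0][0] / 60:.1f} min, "
          f"extent of reaction at ignition ~ {sol.y[1, -1]:.3f}")
```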

  16. Modeling Users' Experiences with Interactive Systems

    CERN Document Server

    Karapanos, Evangelos

    2013-01-01

    Over the past decade the field of Human-Computer Interaction has evolved from the study of the usability of interactive products towards a more holistic understanding of how they may mediate desired human experiences. This book identifies the notion of diversity in users' experiences with interactive products and proposes methods and tools for modeling this along two levels: (a) interpersonal diversity in users' responses to early conceptual designs, and (b) the dynamics of users' experiences over time. The Repertory Grid Technique is proposed as an alternative to standardized psychometric scales for modeling interpersonal diversity in users' responses to early concepts in the design process, and new Multi-Dimensional Scaling procedures are introduced for modeling such complex quantitative data. iScale, a tool for the retrospective assessment of users' experiences over time, is proposed as an alternative to longitudinal field studies, and a semi-automated technique for the analysis of the elicited exper...

  17. Advanced quantitative magnetic nondestructive evaluation methods - Theory and experiment

    Science.gov (United States)

    Barton, J. R.; Kusenberger, F. N.; Beissner, R. E.; Matzkanin, G. A.

    1979-01-01

    The paper reviews the scale of fatigue crack phenomena in relation to the size detection capabilities of nondestructive evaluation methods. An assessment of several features of fatigue in relation to the inspection of ball and roller bearings suggested the use of magnetic methods; magnetic domain phenomena, including the interaction of domains and inclusions and the influence of stress and magnetic field on domains, are discussed. Experimental results indicate that simplified calculations can be used to predict many features of these results; the data predicted by analytic models that use finite element computer analysis do not agree with respect to certain features. Experimental analyses obtained on rod-type fatigue specimens, which relate magnetic measurements to crack opening displacement, crack volume and crack depth, should provide methods for improved crack characterization in relation to fracture mechanics and life prediction.

  18. Modeling Logistic Performance in Quantitative Microbial Risk Assessment

    NARCIS (Netherlands)

    Rijgersberg, H.; Tromp, S.O.; Jacxsens, L.; Uyttendaele, M.

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage ti

  19. Quantitative modelling in design and operation of food supply systems

    NARCIS (Netherlands)

    Beek, van P.

    2004-01-01

    During the last two decades food supply systems not only got interest of food technologists but also from the field of Operations Research and Management Science. Operations Research (OR) is concerned with quantitative modelling and can be used to get insight into the optimal configuration and opera

  20. A GPGPU accelerated modeling environment for quantitatively characterizing karst systems

    Science.gov (United States)

    Myre, J. M.; Covington, M. D.; Luhmann, A. J.; Saar, M. O.

    2011-12-01

    The ability to derive quantitative information on the geometry of karst aquifer systems is highly desirable. Knowing the geometric makeup of a karst aquifer system enables quantitative characterization of the system's response to hydraulic events. However, the relationship between flow path geometry and karst aquifer response is not well understood. One method to improve this understanding is the use of high-speed modeling environments. High-speed modeling environments offer great potential in this regard as they allow researchers to improve their understanding of the modeled karst aquifer through fast quantitative characterization. To that end, we have implemented a finite difference model using General Purpose Graphics Processing Units (GPGPUs). GPGPUs are special purpose accelerators which are capable of high speed and highly parallel computation. The GPGPU architecture is a grid-like structure, making it a natural fit for structured systems like finite difference models. To characterize the highly complex nature of karst aquifer systems, our modeling environment is designed to use an inverse method to conduct the parameter tuning. Using an inverse method reduces the total amount of parameter space that must be searched to produce a set of parameters describing a system of good fit. Systems of good fit are determined by comparison to reference storm responses. To obtain reference storm responses we have collected data from a series of data-loggers measuring water depth, temperature, and conductivity at locations along a cave stream with a known geometry in southeastern Minnesota. By comparing the modeled responses to the reference responses, the model parameters can be tuned to quantitatively characterize geometry, and thus the response, of the karst system.
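
    The inverse-method idea can be reduced to a toy example: a forward model of the storm response (here a delayed linear-reservoir recession standing in for the full finite-difference conduit model) is tuned by least squares until it matches a reference hydrograph. The reference record below is synthetic; in the study it would come from the cave-stream data loggers.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 48.0, 97)                   # hours after a storm pulse

def forward(params, t):
    """Toy storm response: delayed exponential recession (stand-in for the
    finite-difference conduit model)."""
    amplitude, delay, k = params
    response = amplitude * np.exp(-np.maximum(t - delay, 0.0) / k)
    response[t < delay] = 0.0
    return response

# Hypothetical reference response (would be a logger record in the real study)
reference = forward((0.8, 6.0, 10.0), t)
reference = reference + 0.02 * np.random.default_rng(3).normal(size=t.size)

fit = least_squares(lambda p: forward(p, t) - reference,
                    x0=[0.5, 3.0, 5.0], bounds=([0.0, 0.0, 0.1], [5.0, 24.0, 48.0]))
print("recovered (amplitude, delay, k):", fit.x.round(2))
```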

  1. Reservoir Stochastic Modeling Constrained by Quantitative Geological Conceptual Patterns

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper discusses the principles of geologic constraints on reservoir stochastic modeling. By using the system science theory, two kinds of uncertainties, including random uncertainty and fuzzy uncertainty, are recognized. In order to improve the precision of stochastic modeling and reduce the uncertainty in realization, the fuzzy uncertainty should be stressed, and the "geological genesis-controlled modeling" is conducted under the guidance of a quantitative geological pattern. An example of the Pingqiao horizontal-well division of the Ansai Oilfield in the Ordos Basin is taken to expound the method of stochastic modeling.

  2. Lessons Learned from Quantitative Dynamical Modeling in Systems Biology

    Science.gov (United States)

    Bachmann, Julie; Matteson, Andrew; Schelke, Max; Kaschek, Daniel; Hug, Sabine; Kreutz, Clemens; Harms, Brian D.; Theis, Fabian J.; Klingmüller, Ursula; Timmer, Jens

    2013-01-01

    Due to the high complexity of biological data it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A Systems Biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and increasing amount of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples, for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows to determine the quality of experimental data in an efficient, objective and automated manner. Using this approach data generated by different measurement techniques and even in single replicates can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations in combination with a multi-start strategy based on latin hypercube sampling outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open source software package that implements the algorithms and examples compared here. PMID:24098642
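
    The optimization strategy singled out in this comparison (derivative-based local searches started from a Latin hypercube sample of the parameter space) can be sketched compactly. The exponential-decay objective below is a placeholder for a real ODE model, and the gradients are obtained by finite differences rather than by the sensitivity equations the authors recommend.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Placeholder data: y = a * exp(-b * t) observed with noise
t_obs = np.linspace(0.0, 10.0, 20)
y_obs = 2.0 * np.exp(-0.3 * t_obs) + 0.05 * rng.normal(size=t_obs.size)

def objective(log_p):
    a, b = np.exp(log_p)                     # log-parameters keep values positive
    return np.sum((a * np.exp(-b * t_obs) - y_obs) ** 2)

# Latin hypercube sample of starting points over log-parameter bounds
lo, hi = np.log([0.1, 0.01]), np.log([10.0, 3.0])
starts = lo + qmc.LatinHypercube(d=2, seed=1).random(n=25) * (hi - lo)

fits = [minimize(objective, x0, method="L-BFGS-B") for x0 in starts]
best = min(fits, key=lambda r: r.fun)
print("best-fit (a, b):", np.exp(best.x).round(3), " SSR:", round(best.fun, 4))
```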

  3. Lessons learned from quantitative dynamical modeling in systems biology.

    Directory of Open Access Journals (Sweden)

    Andreas Raue

    Full Text Available Due to the high complexity of biological data it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A Systems Biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and increasing amount of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples, for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows to determine the quality of experimental data in an efficient, objective and automated manner. Using this approach data generated by different measurement techniques and even in single replicates can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations in combination with a multi-start strategy based on latin hypercube sampling outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open source software package that implements the algorithms and examples compared here.

  4. Human judgment vs. quantitative models for the management of ecological resources.

    Science.gov (United States)

    Holden, Matthew H; Ellner, Stephen P

    2016-07-01

    Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed

  5. Quantitative Analysis of Polarimetric Model-Based Decomposition Methods

    Directory of Open Access Journals (Sweden)

    Qinghua Xie

    2016-11-01

    Full Text Available In this paper, we analyze the robustness of the parameter inversion provided by general polarimetric model-based decomposition methods from the perspective of a quantitative application. The general model and algorithm we have studied is the method proposed recently by Chen et al., which makes use of the complete polarimetric information and outperforms traditional decomposition methods in terms of feature extraction from land covers. Nevertheless, a quantitative analysis of the retrieved parameters from that approach suggests that further investigations are required in order to fully confirm the links between a physically-based model (i.e., approaches derived from the Freeman–Durden concept) and its outputs as intermediate products before any biophysical parameter retrieval is addressed. To this aim, we propose some modifications to the optimization algorithm employed for model inversion, including redefined boundary conditions, transformation of variables, and a different strategy for value initialization. A number of Monte Carlo simulation tests for typical scenarios are carried out and show that the parameter estimation accuracy of the proposed method is significantly increased with respect to the original implementation. Fully polarimetric airborne datasets at L-band acquired by the German Aerospace Center’s (DLR’s) experimental synthetic aperture radar (E-SAR) system were also used for testing purposes. The results show different qualitative descriptions of the same cover from six different model-based methods. According to the Bragg coefficient ratio (i.e., β), they are prone to provide wrong numerical inversion results, which could prevent any subsequent quantitative characterization of specific areas in the scene. Besides the particular improvements proposed over an existing polarimetric inversion method, this paper is aimed at pointing out the necessity of checking quantitatively the accuracy of model-based PolSAR techniques for a

  6. Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.

    Science.gov (United States)

    Moray, Neville; Groeger, John; Stanton, Neville

    2017-02-01

    This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement and provide quantitative guidance for the design of safer railway systems' speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.

  7. A quantitative comparison of Calvin-Benson cycle models.

    Science.gov (United States)

    Arnold, Anne; Nikoloski, Zoran

    2011-12-01

    The Calvin-Benson cycle (CBC) provides the precursors for biomass synthesis necessary for plant growth. The dynamic behavior and yield of the CBC depend on the environmental conditions and regulation of the cellular state. Accurate quantitative models hold the promise of identifying the key determinants of the tightly regulated CBC function and their effects on the responses in future climates. We provide an integrative analysis of the largest compendium of existing models for photosynthetic processes. Based on the proposed ranking, our framework facilitates the discovery of best-performing models with regard to metabolomics data and of candidates for metabolic engineering.

  8. Quantitative modelling in cognitive ergonomics: predicting signals passed at danger

    OpenAIRE

    Moray, Neville; Groeger, John; Stanton, Neville

    2016-01-01

    This paper shows how to combine field observations, experimental data, and mathematical modeling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example we consider a major railway accident. In 1999 a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, "black box" data, and accident and engineering reports, to construct a case history of the accident. We show how t...

  9. Quantitative versus qualitative modeling: a complementary approach in ecosystem study.

    Science.gov (United States)

    Bondavalli, C; Favilla, S; Bodini, A

    2009-02-01

    Natural disturbances or human perturbations act upon ecosystems by changing some dynamical parameters of one or more species. Foreseeing these modifications is necessary before embarking on an intervention: predictions may help to assess management options and define hypotheses for interventions. Models become valuable tools for studying and making predictions only when they capture the types of interactions and their magnitude. Quantitative models are more precise and specific about a system, but require a large effort in model construction. Because of this, ecological systems very often remain only partially specified, and one possible approach to their description and analysis comes from qualitative modelling. Qualitative models yield predictions as directions of change in species abundance, but in complex systems these predictions are often ambiguous, being the result of opposite actions exerted on the same species by way of multiple pathways of interactions. Again, to avoid such ambiguities one needs to know the intensity of all links in the system. One way to make link magnitude explicit in a way that can be used in qualitative analysis is described in this paper and takes advantage of another type of ecosystem representation: ecological flow networks. These flow diagrams contain the structure, the relative position and the connections between the components of a system, and the quantity of matter flowing along every connection. In this paper it is shown how these ecological flow networks can be used to produce a quantitative model similar to the qualitative counterpart. Analyzed through the apparatus of loop analysis, this quantitative model yields predictions that are by no means ambiguous, solving in an elegant way the basic problem of qualitative analysis. The approach adopted in this work is still preliminary and we must be careful in its application.

  10. Estimating background-subtracted fluorescence transients in calcium imaging experiments: a quantitative approach.

    Science.gov (United States)

    Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe

    2013-08-01

    Calcium imaging has become a routine technique in neuroscience for subcellular- to network-level investigations. The fast progress in the development of new indicators and imaging techniques calls for dedicated, reliable analysis methods. In particular, efficient and quantitative background fluorescence subtraction routines would be beneficial to most of the calcium imaging research field. A background-subtracted fluorescence transients estimation method that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performance of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail.
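
    A stripped-down version of the approach is shown below: a background-plus-transient model is fitted to a single-trial trace by nonlinear regression, and approximate confidence intervals are read off the parameter covariance. The mono-exponential transient and Gaussian noise are simplifications; the paper uses a dedicated probabilistic model of the acquisition system's noise.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
t = np.linspace(0.0, 5.0, 200)                      # time (s)

def model(t, background, amplitude, tau):
    """Background fluorescence plus a decaying calcium transient."""
    return background + amplitude * np.exp(-t / tau)

# Hypothetical single-trial trace (counts) with additive noise
trace = model(t, 120.0, 40.0, 0.8) + rng.normal(scale=3.0, size=t.size)

popt, pcov = curve_fit(model, t, trace, p0=(100.0, 20.0, 1.0))
stderr = np.sqrt(np.diag(pcov))
for name, value, se in zip(("background", "amplitude", "tau"), popt, stderr):
    print(f"{name}: {value:.2f} +/- {1.96 * se:.2f} (approx. 95% CI)")
```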

  11. QuantUM: Quantitative Safety Analysis of UML Models

    Directory of Open Access Journals (Sweden)

    Florian Leitner-Fischer

    2011-07-01

    Full Text Available When developing a safety-critical system it is essential to obtain an assessment of different design alternatives. In particular, an early safety assessment of the architectural design of a system is desirable. In spite of the plethora of available formal quantitative analysis methods, it is still difficult for software and system architects to integrate these techniques into their everyday work. This is mainly due to the lack of methods that can be directly applied to architecture level models, for instance given as UML diagrams. Also, it is necessary that the description methods used do not require a profound knowledge of formal methods. Our approach bridges this gap and improves the integration of quantitative safety analysis methods into the development process. All inputs of the analysis are specified at the level of a UML model. This model is then automatically translated into the analysis model, and the results of the analysis are consequently represented on the level of the UML model. Thus the analysis model and the formal methods used during the analysis are hidden from the user. We illustrate the usefulness of our approach using an industrial strength case study.

  12. EXPERIENCES OF MEANING IN LIFE - A COMBINED QUALITATIVE AND QUANTITATIVE APPROACH

    NARCIS (Netherlands)

    DEBATS, DL; DROST, J; HANSEN, P

    1995-01-01

    The present study investigates the relation of aspects of meaning in life with indices of psychological well-being by means of a combined qualitative and quantitative design. Content analysis of subjects' answers to open questions about personal experiences with meaning in life showed findings that

  13. Experiences of meaning in life: A combined qualitative and quantitative approach

    NARCIS (Netherlands)

    Debats, D.L.H.M.; Drost, J.; Hansen, P.

    1995-01-01

    The present study investigates the relation of aspects of meaning in life with indices of psychological well-being by means of a combined qualitative and quantitative design. Content analysis of subjects' answers to open questions about personal experiences with meaning in life showed findings that

  14. The Vinyl Acetate Content of Packaging Film: A Quantitative Infrared Experiment.

    Science.gov (United States)

    Allpress, K. N.; And Others

    1981-01-01

    Presents an experiment used in laboratory technician training courses to illustrate the quantitative use of infrared spectroscopy which is based on industrial and laboratory procedures for the determination of vinyl acetate levels in ethylene vinyl acetate packaging films. Includes three approaches to allow for varying path lengths (film…

  15. Quantitative magnetospheric models derived from spacecraft magnetometer data

    Science.gov (United States)

    Mead, G. D.; Fairfield, D. H.

    1973-01-01

    Quantitative models of the external magnetospheric field were derived by making least-squares fits to magnetic field measurements from four IMP satellites. The data were fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the models contain the effects of seasonal north-south asymmetries. The expansions are divergence-free, but unlike the usual scalar potential expansions, the models contain a nonzero curl representing currents distributed within the magnetosphere. Characteristics of four models are presented, representing different degrees of magnetic disturbance as determined by the range of Kp values. The latitude at the earth separating open polar cap field lines from field lines closing on the dayside is about 5 deg lower than that determined by previous theoretically-derived models. At times of high Kp, additional high latitude field lines are drawn back into the tail.

  16. Bridging experiments, models and simulations

    DEFF Research Database (Denmark)

    Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca

    2012-01-01

    Computational models in physiology often integrate functional and structural information from a large range of spatiotemporal scales, from the ionic to the whole-organ level. Their sophistication raises both expectations and skepticism concerning how computational methods can improve our understanding of living organisms and also how they can reduce, replace, and refine animal experiments. A fundamental requirement to fulfill these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present ... 1) ... of biovariability; 2) testing and developing robust techniques and tools as a prerequisite to conducting physiological investigations; 3) defining and adopting standards to facilitate the interoperability of experiments, models, and simulations; and 4) understanding physiological validation as an iterative process that contributes to defining the specific aspects of cardiac electrophysiology the MSE system targets, rather than being only an external test, and that this is driven by advances in experimental and computational methods and the combination of both.

  17. From classical genetics to quantitative genetics to systems biology: modeling epistasis.

    Directory of Open Access Journals (Sweden)

    David L Aylor

    2008-03-01

    Full Text Available Gene expression data has been used in lieu of phenotype in both classical and quantitative genetic settings. These two disciplines have separate approaches to measuring and interpreting epistasis, which is the interaction between alleles at different loci. We propose a framework for estimating and interpreting epistasis from a classical experiment that combines the strengths of each approach. A regression analysis step accommodates the quantitative nature of expression measurements by estimating the effect of gene deletions plus any interaction. Effects are selected by significance such that a reduced model describes each expression trait. We show how the resulting models correspond to specific hierarchical relationships between two regulator genes and a target gene. These relationships are the basic units of genetic pathways and genomic system diagrams. Our approach can be extended to analyze data from a variety of experiments, multiple loci, and multiple environments.
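
    A minimal sketch of the regression step described above, on invented toy data: expression of a target gene is measured with regulators A and B intact or deleted, and the fitted interaction coefficient estimates epistasis; a reduced model would keep only the significant terms. The data and variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Toy expression measurements for one target gene; 1 = regulator deleted.
delA = np.array([0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1])
delB = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
expr = np.array([1.0, 1.9, 0.4, 0.5, 1.1, 2.1, 0.5, 0.6, 0.9, 2.0, 0.45, 0.55])

# Design matrix: intercept, each deletion effect, and their interaction (epistasis).
X = sm.add_constant(np.column_stack([delA, delB, delA * delB]))
fit = sm.OLS(expr, X).fit()

print(fit.params)    # [intercept, effect of deleting A, effect of deleting B, interaction]
print(fit.pvalues)   # terms below a significance threshold define the reduced model
```

    The sign and significance of the interaction term then map onto candidate hierarchical relationships between the two regulators and the target gene.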

  18. Asynchronous adaptive time step in quantitative cellular automata modeling

    Directory of Open Access Journals (Sweden)

    Sun Yan

    2004-06-01

    Full Text Available Abstract Background The behaviors of cells in metazoans are context dependent, so large-scale multi-cellular modeling is often necessary, and cellular automata are natural candidates for it. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to reduce the heavy time consumption of simulation. Results Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time steps in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup of 4–5 is achieved in the given example. Conclusions Strategies for reducing simulation time are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. Distributed and adaptive time stepping is a practical solution in a cellular automata environment.
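
    A minimal sketch of the asynchronous adaptive time-step idea (not the authors' language-based system): each "cell" integrates its own toy ODE, keeps a private step size adapted from a step-doubling error estimate, and a priority queue advances whichever cell is due next. The ODE, tolerances, and cell count are invented for illustration.

```python
import heapq
import math

def f(y):
    """Toy per-cell ODE right-hand side, dy/dt = f(y)."""
    return -2.0 * y + math.sin(y)

def euler(y, dt):
    return y + dt * f(y)

def step_with_error(y, dt):
    """One full Euler step and a halved double-step to estimate the local error."""
    full = euler(y, dt)
    half = euler(euler(y, dt / 2.0), dt / 2.0)
    return half, abs(half - full)

TOL, DT_MIN, DT_MAX, T_END = 1e-4, 1e-4, 0.5, 5.0

# A handful of "cells", each with its own state, local time, and step size.
cells = {i: {"y": 1.0 + 0.3 * i, "t": 0.0, "dt": 0.05} for i in range(5)}

# Event queue ordered by each cell's next update time (asynchronous updates).
queue = [(c["t"] + c["dt"], i) for i, c in cells.items()]
heapq.heapify(queue)
accepted = 0

while queue:
    t_next, i = heapq.heappop(queue)
    if t_next > T_END:
        continue                                   # this cell is finished
    c = cells[i]
    y_new, err = step_with_error(c["y"], c["dt"])
    if err > TOL and c["dt"] > DT_MIN:
        c["dt"] = max(DT_MIN, c["dt"] / 2.0)       # reject: retry with a smaller step
    else:
        c["y"], c["t"] = y_new, t_next             # accept the step
        if err < TOL / 10.0:
            c["dt"] = min(DT_MAX, c["dt"] * 2.0)   # quiet cell: enlarge its step
        accepted += 1
    heapq.heappush(queue, (c["t"] + c["dt"], i))

print({i: round(c["y"], 5) for i, c in cells.items()}, "accepted steps:", accepted)
```

    Cells whose dynamics have settled take large steps while active cells take small ones, which is where the speedup over a globally synchronized step comes from.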

  19. Polymorphic ethyl alcohol as a model system for the quantitative study of glassy behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, H.E.; Schober, H.; Gonzalez, M.A. [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France); Bermejo, F.J.; Fayos, R.; Dawidowski, J. [Consejo Superior de Investigaciones Cientificas, Madrid (Spain); Ramos, M.A.; Vieira, S. [Universidad Autonoma de Madrid (Spain)

    1997-04-01

    The nearly universal transport and dynamical properties of amorphous materials or glasses are investigated. Reasonably successful phenomenological models have been developed to account for these properties as well as the behaviour near the glass-transition, but quantitative microscopic models have had limited success. One hindrance to these investigations has been the lack of a material which exhibits glass-like properties in more than one phase at a given temperature. This report presents results of neutron-scattering experiments for one such material, ordinary ethyl alcohol, which promises to be a model system for future investigations of glassy behaviour. (author). 8 refs.

  20. Model for Quantitative Evaluation of Enzyme Replacement Treatment

    Directory of Open Access Journals (Sweden)

    Radeva B.

    2009-12-01

    Full Text Available Gaucher disease is the most frequent lysosomal disorder. Its enzyme replacement treatment is a recent achievement of modern biotechnology that has been used successfully in recent years. Evaluating the optimal dose for each patient is important for both health and economic reasons: enzyme replacement is the most expensive treatment, and it must be given continuously and without interruption. Since 2001, enzyme replacement therapy with Cerezyme (Genzyme) has been formally available in Bulgaria, but after some time it was interrupted for 1-2 months, and patient doses were not optimal. The aim of our work is to find a mathematical model for the quantitative evaluation of ERT in Gaucher disease. The model was implemented in the software package "Statistika 6" using the individual data of 5-year-old children with Gaucher disease treated with Cerezyme. The output of the model allows quantitative evaluation of the individual trends in the development of the disease of each child and their correlations. On the basis of these results, we can recommend suitable changes in ERT.

  1. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  2. A Team Mental Model Perspective of Pre-Quantitative Risk

    Science.gov (United States)

    Cooper, Lynne P.

    2011-01-01

    This study was conducted to better understand how teams conceptualize risk before it can be quantified, and the processes by which a team forms a shared mental model of this pre-quantitative risk. Using an extreme case, this study analyzes seven months of team meeting transcripts, covering the entire lifetime of the team. Through an analysis of team discussions, a rich and varied structural model of risk emerges that goes significantly beyond classical representations of risk as the product of a negative consequence and a probability. In addition to those two fundamental components, the team conceptualization includes the ability to influence outcomes and probabilities, networks of goals, interaction effects, and qualitative judgments about the acceptability of risk, all affected by associated uncertainties. In moving from individual to team mental models, team members employ a number of strategies to gain group recognition of risks and to resolve or accept differences.

  3. Quantitative risk assessment modeling for nonhomogeneous urban road tunnels.

    Science.gov (United States)

    Meng, Qiang; Qu, Xiaobo; Wang, Xinchang; Yuanita, Vivi; Wong, Siew Chee

    2011-03-01

    Urban road tunnels provide an increasingly cost-effective engineering solution, especially in compact cities like Singapore. For some urban road tunnels, tunnel characteristics such as tunnel configurations, geometries, provisions of tunnel electrical and mechanical systems, traffic volumes, etc. may vary from one section to another. Urban road tunnels characterized by such nonuniform parameters are referred to as nonhomogeneous urban road tunnels. In this study, a novel quantitative risk assessment (QRA) model is proposed for nonhomogeneous urban road tunnels because the existing QRA models for road tunnels cannot be applied to assess the risks in them. This model uses a tunnel segmentation principle whereby a nonhomogeneous urban road tunnel is divided into homogeneous sections. Individual risk for road tunnel sections as well as integrated risk indices for the entire road tunnel are defined. The article then proceeds to develop a new QRA model for each of the homogeneous sections. Compared to the existing QRA models for road tunnels, this section-based model incorporates one additional top event (toxic gases due to traffic congestion) and employs the Poisson regression method to estimate the vehicle accident frequencies of tunnel sections. The article further illustrates an aggregated QRA model for nonhomogeneous urban tunnels by integrating the section-based QRA models. Finally, a case study in Singapore is carried out.
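
    A minimal sketch of the Poisson-regression step on made-up section data: annual accident counts per homogeneous section are regressed on section covariates with section length as an exposure offset, so the fitted model supplies an accident frequency for each section. All numbers and covariates are invented, not taken from the Singapore case study.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical homogeneous tunnel sections: length (km), traffic volume
# (1000 veh/day), gradient (%), and observed accident counts over one year.
length = np.array([0.8, 1.2, 0.5, 2.0, 1.5, 0.9, 1.1, 0.7])
volume = np.array([45,  60,  30,  80,  55,  40,  65,  35 ])
grade  = np.array([1.0, 2.5, 0.5, 3.0, 1.5, 1.0, 2.0, 0.5])
counts = np.array([3,   7,   1,  14,   6,   2,   8,   1 ])

X = sm.add_constant(np.column_stack([volume, grade]))

# Poisson GLM with log(length) as exposure offset, so the linear predictor
# models accidents per kilometre per year.
model = sm.GLM(counts, X, family=sm.families.Poisson(), offset=np.log(length))
result = model.fit()

print(result.params)         # intercept and covariate effects on the log rate
print(result.fittedvalues)   # expected accident frequency for each section
```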

  4. Quantitative modeling of a gene's expression from its intergenic sequence.

    Directory of Open Access Journals (Sweden)

    Md Abul Hassan Samee

    2014-03-01

    Full Text Available Modeling a gene's expression from its intergenic locus and trans-regulatory context is a fundamental goal in computational biology. Owing to the distributed nature of cis-regulatory information and the poorly understood mechanisms that integrate such information, gene locus modeling is a more challenging task than modeling individual enhancers. Here we report the first quantitative model of a gene's expression pattern as a function of its locus. We model the expression readout of a locus in two tiers: (1) combinatorial regulation by transcription factors bound to each enhancer is predicted by a thermodynamics-based model, and (2) independent contributions from multiple enhancers are linearly combined to fit the gene expression pattern. The model does not require any prior knowledge about enhancers contributing toward a gene's expression. We demonstrate that the model captures the complex multi-domain expression patterns of anterior-posterior patterning genes in the early Drosophila embryo. Altogether, we model the expression patterns of 27 genes; these include several gap genes, pair-rule genes, and anterior, posterior, trunk, and terminal genes. We find that the model-selected enhancers for each gene overlap strongly with its experimentally characterized enhancers. Our findings also suggest the presence of sequence-segments in the locus that would contribute ectopic expression patterns and hence were "shut down" by the model. We applied our model to identify the transcription factors responsible for forming the stripe boundaries of the studied genes. The resulting network of regulatory interactions exhibits a high level of agreement with known regulatory influences on the target genes. Finally, we analyzed whether and why our assumption of enhancer independence was necessary for the genes we studied. We found a deterioration of expression when binding sites in one enhancer were allowed to influence the readout of another enhancer. Thus, interference

  5. Modeling Error in Quantitative Macro-Comparative Research

    Directory of Open Access Journals (Sweden)

    Salvatore J. Babones

    2015-08-01

    Full Text Available Much quantitative macro-comparative research (QMCR) relies on a common set of published data sources to answer similar research questions using a limited number of statistical tools. Since all researchers have access to much the same data, one might expect quick convergence of opinion on most topics. In reality, of course, differences of opinion abound and persist. Many of these differences can be traced, implicitly or explicitly, to the different ways researchers choose to model error in their analyses. Much careful attention has been paid in the political science literature to the error structures characteristic of time-series cross-sectional (TSCS) data, but much less attention has been paid to the modeling of error in broadly cross-national research involving large panels of countries observed at limited numbers of time points. Here, and especially in the sociology literature, multilevel modeling has become a hegemonic – but often poorly understood – research tool. I argue that widely-used types of multilevel models, commonly known as fixed effects models (FEMs) and random effects models (REMs), can produce wildly spurious results when applied to trended data due to mis-specification of error. I suggest that in most commonly-encountered scenarios, difference models are more appropriate for use in QMCR.

  6. A quantitative model for integrating landscape evolution and soil formation

    Science.gov (United States)

    Vanwalleghem, T.; Stockmann, U.; Minasny, B.; McBratney, Alex B.

    2013-06-01

    Landscape evolution is closely related to soil formation. Quantitative modeling of the dynamics of soils and landscapes should therefore be integrated. This paper presents a model, named Model for Integrated Landscape Evolution and Soil Development (MILESD), which describes the interaction between pedogenetic and geomorphic processes. This mechanistic model includes the most significant soil formation processes, ranging from weathering to clay translocation, and combines these with the lateral redistribution of soil particles through erosion and deposition. The model is spatially explicit and simulates the vertical variation in soil horizon depth as well as basic soil properties such as texture and organic matter content. In addition, sediment export and its properties are recorded. This model is applied to a 6.25 km2 area in the Werrikimbe National Park, Australia, simulating soil development over a period of 60,000 years. Comparison with field observations shows how the model accurately predicts trends in total soil thickness along a catena. Soil texture and bulk density are predicted reasonably well, with errors of the order of 10%; however, field observations show a much higher organic carbon content than predicted. At the landscape scale, different scenarios with varying erosion intensity result only in small changes of landscape-averaged soil thickness, while the response of the total organic carbon stored in the system is higher. Rates of sediment export show a highly nonlinear response to soil development stage and the presence of a threshold, corresponding to the depletion of the soil reservoir, beyond which sediment export drops significantly.

  7. Determinations of Carbon Dioxide by Titration: New Experiments for General, Physical, and Quantitative Analysis Courses

    Science.gov (United States)

    Crossno, S. K.; Kalbus, L. H.; Kalbus, G. E.

    1996-02-01

    The determination of mixtures containing NaOH and Na2CO3 or Na2CO3 and NaHCO3 by titration is a common experiment in a Quantitative Analysis course. This determination can be adapted for the analysis of CO2 within a sample. The CO2 is released and absorbed in a solution containing excess NaOH. Titration with standard HCl leads to the determination of CO2 present in the sample. A number of interesting experiments in Quantitative Analysis, General and/or Physical Chemistry have been developed. Among these are the following determinations: CO2 content in carbonated beverages, carbonate and bicarbonate in various real life samples, and the molecular weight of CO2.
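
    A small worked calculation illustrating the kind of arithmetic behind such an experiment, assuming the common two-indicator procedure (phenolphthalein endpoint, then methyl orange endpoint) for titrating the carbonate formed when CO2 is absorbed in excess NaOH. Volumes, concentrations, and the sample mass are invented for illustration.

```python
# Back-titration arithmetic for CO2 absorbed in excess NaOH (illustrative numbers).
# CO2 + 2 NaOH -> Na2CO3 + H2O; the resulting solution is titrated with standard HCl
# to a phenolphthalein endpoint (V1) and then on to a methyl orange endpoint (V2).
# The HCl used between the two endpoints converts NaHCO3 to H2CO3 and equals,
# mole for mole, the carbonate (and hence the CO2) in the sample.

M_HCl = 0.1000          # mol/L, standardized HCl
V1_mL = 28.40           # mL to the phenolphthalein endpoint (NaOH + half of carbonate)
V2_mL = 6.25            # additional mL to the methyl orange endpoint
sample_mass_g = 5.132   # mass of carbonated beverage taken

n_CO2 = M_HCl * V2_mL / 1000.0          # mol of CO2 in the sample
mass_CO2 = n_CO2 * 44.01                # g, using the molar mass of CO2

print(f"CO2 in sample: {n_CO2 * 1000:.3f} mmol = {mass_CO2 * 1000:.1f} mg")
print(f"CO2 content:   {100.0 * mass_CO2 / sample_mass_g:.2f} % by mass")
```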

  8. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

    Directory of Open Access Journals (Sweden)

    Cobbs Gary

    2012-08-01

    Full Text Available Abstract Background Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Results Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the
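
    A minimal sketch of the simultaneous-fit idea, using a generic saturating per-cycle efficiency rather than the paper's equilibrium expressions: all simulated curves share the same kinetic parameters, while each curve gets its own initial target concentration. The amplification model, parameter values, and noise level are all invented.

```python
import numpy as np
from scipy.optimize import least_squares

CYCLES = np.arange(40)

def qpcr_curve(x0, k_sat, f_max):
    """Toy stepwise amplification: per-cycle efficiency falls as product accumulates."""
    x, out = x0, []
    for _ in CYCLES:
        efficiency = 1.0 / (1.0 + x / k_sat)   # saturates as x approaches k_sat
        x = x * (1.0 + efficiency)
        out.append(min(x, f_max))              # detector saturation
    return np.array(out)

rng = np.random.default_rng(1)
true_x0 = [1e-4, 1e-3, 1e-2]                   # a known dilution series
data = [qpcr_curve(x0, 50.0, 120.0) * rng.normal(1.0, 0.02, CYCLES.size)
        for x0 in true_x0]

def residuals(params):
    # Shared kinetics (k_sat, f_max) plus one log10(x0) per curve.
    k_sat, f_max = params[0], params[1]
    res = [qpcr_curve(10.0 ** log_x0, k_sat, f_max) - y
           for log_x0, y in zip(params[2:], data)]
    return np.concatenate(res)

start = np.array([30.0, 100.0, -3.5, -2.5, -1.5])
bounds = ([1.0, 10.0, -6.0, -6.0, -6.0], [1e3, 1e3, 0.0, 0.0, 0.0])
fit = least_squares(residuals, start, bounds=bounds)

print("shared (k_sat, f_max):  ", np.round(fit.x[:2], 2))
print("per-curve x0 estimates: ", np.round(10.0 ** fit.x[2:], 6))
```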

  9. Modeling logistic performance in quantitative microbial risk assessment.

    Science.gov (United States)

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain with its queues (storages, shelves) and mechanisms for ordering products is usually not taken into account. As a consequence, storage times, which are mutually dependent in successive steps of the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
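
    A minimal sketch of why the discrete-event view matters for storage times, on an invented retail step: products arrive in delivery batches, demand withdraws them first-in-first-out, and the realized shelf time of each unit falls out of the event sequence instead of being drawn independently. The ordering policy and parameters are hypothetical, not those of the lettuce case study.

```python
import random
from collections import deque

random.seed(42)

SIM_DAYS = 200
BATCH_SIZE = 60            # units per delivery
REORDER_LEVEL = 30         # order a new batch when stock falls below this
LEAD_TIME = 2              # days between ordering and delivery
MEAN_DEMAND = 18           # mean units sold per day

shelf = deque()            # FIFO queue holding the arrival day of each unit in stock
pending = []               # outstanding orders, stored as delivery days
storage_times = []         # realized days spent on the shelf before sale

for day in range(SIM_DAYS):
    # Receive any deliveries due today.
    for due in [d for d in pending if d == day]:
        shelf.extend([day] * BATCH_SIZE)
        pending.remove(due)

    # Serve today's demand first-in-first-out (crude Poisson-like draw).
    demand = sum(random.random() < MEAN_DEMAND / 24.0 for _ in range(24))
    for _ in range(min(demand, len(shelf))):
        arrival_day = shelf.popleft()
        storage_times.append(day - arrival_day)

    # Ordering policy: replenish when stock is low and nothing is on its way.
    if len(shelf) < REORDER_LEVEL and not pending:
        pending.append(day + LEAD_TIME)

mean_st = sum(storage_times) / len(storage_times)
tail = sorted(storage_times)[int(0.95 * len(storage_times))]
print(f"sold units: {len(storage_times)}, mean storage time: {mean_st:.2f} days, "
      f"95th percentile: {tail} days")
```

    Because successive storage times emerge from the same queue, they are correlated across chain steps, which is exactly what independent sampling of storage-time distributions misses and why the tails of the risk distribution can differ.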

  10. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  11. Quantitative model studies for interfaces in organic electronic devices

    Science.gov (United States)

    Gottfried, J. Michael

    2016-11-01

    In organic light-emitting diodes and similar devices, organic semiconductors are typically contacted by metal electrodes. Because the resulting metal/organic interfaces have a large impact on the performance of these devices, their quantitative understanding is indispensable for the further rational development of organic electronics. A study by Kröger et al (2016 New J. Phys. 18 113022) of an important single-crystal based model interface provides detailed insight into its geometric and electronic structure and delivers valuable benchmark data for computational studies. In view of the differences between typical surface-science model systems and real devices, a ‘materials gap’ is identified that needs to be addressed by future research to make the knowledge obtained from fundamental studies even more beneficial for real-world applications.

  12. Quantitative identification of technological discontinuities using simulation modeling

    CERN Document Server

    Park, Hyunseok

    2016-01-01

    The aim of this paper is to develop and test metrics to quantitatively identify technological discontinuities in a knowledge network. We developed five metrics based on innovation theories and tested the metrics on a simulation model-based knowledge network with a hypothetically designed discontinuity. The designed discontinuity is modeled as a node which combines two different knowledge streams and whose knowledge is dominantly persistent in the knowledge network. The performances of the proposed metrics were evaluated by how well the metrics can distinguish the designed discontinuity from other nodes in the knowledge network. The simulation results show that the persistence multiplied by the number of converging main paths provides the best performance in identifying the designed discontinuity: the designed discontinuity was identified as one of the top 3 patents with 96~99% probability by Metric 5 and, depending on the size of the domain, this is 12~34% better than the performance of the second-best metric. Beyond the simulation ...

  13. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  14. A Quantitative Theory Model of a Photobleaching Mechanism

    Institute of Scientific and Technical Information of China (English)

    陈同生; 曾绍群; 周炜; 骆清铭

    2003-01-01

    A photobleaching model, combining D-P (dye-photon interaction) and D-O (dye-oxygen oxidative reaction) photobleaching theory, is proposed. The quantitative power dependences of photobleaching rates with both one- and two-photon excitation (1PE and TPE) are obtained. This photobleaching model can be used to elucidate our own and other experimental results. Experimental studies of the photobleaching rates for rhodamine B with TPE under unsaturated conditions reveal that the power dependence of the photobleaching rate increases with increasing dye concentration, and that the photobleaching rate of a single molecule increases as the second power of the excitation intensity, which differs from the high-order (>3) nonlinear dependence of ensemble molecules.

  15. An infinitesimal model for quantitative trait genomic value prediction.

    Directory of Open Access Journals (Sweden)

    Zhiqiu Hu

    Full Text Available We developed a marker based infinitesimal model for quantitative trait analysis. In contrast to the classical infinitesimal model, we now have new information about the segregation of every individual locus of the entire genome. Under this new model, we propose that the genetic effect of an individual locus is a function of the genome location (a continuous quantity). The overall genetic value of an individual is the weighted integral of the genetic effect function along the genome. Numerical integration is performed to find the integral, which requires partitioning the entire genome into a finite number of bins. Each bin may contain many markers. The integral is approximated by the weighted sum of all the bin effects. We now turn the problem of marker analysis into bin analysis so that the model dimension has decreased from a virtual infinity to a finite number of bins. This new approach can efficiently handle a virtually unlimited number of markers without marker selection. The marker based infinitesimal model requires high linkage disequilibrium of all markers within a bin. For populations with low or no linkage disequilibrium, we develop an adaptive infinitesimal model. Both the original and the adaptive models are tested using simulated data as well as beef cattle data. The simulated data analysis shows that there is always an optimal number of bins at which the predictability of the bin model is much greater than the original marker analysis. Results of the beef cattle data analysis indicate that the bin model can increase the predictability from 10% (multiple marker analysis) to 33% (multiple bin analysis). The marker based infinitesimal model paves a way towards the solution of genetic mapping and genomic selection using the whole genome sequence data.
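
    A minimal sketch of the bin idea on simulated data (not the authors' estimation machinery): markers are grouped into consecutive genome bins, each bin is summarized by the mean genotype of its markers, and bin effects are estimated by ridge regression; predictability can then be compared across bin counts. Marker counts, effect sizes, and the shrinkage parameter are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ind, n_markers, n_bins = 200, 2000, 50

# Simulated marker genotypes (0/1/2) and a polygenic trait.
geno = rng.integers(0, 3, size=(n_ind, n_markers)).astype(float)
true_effects = rng.normal(0.0, 0.05, n_markers)
pheno = geno @ true_effects + rng.normal(0.0, 1.0, n_ind)

# Summarize each consecutive bin of markers by its mean genotype.
bins = np.array_split(np.arange(n_markers), n_bins)
bin_geno = np.column_stack([geno[:, idx].mean(axis=1) for idx in bins])

# Ridge regression of phenotype on bin genotypes (first half trains, second half tests).
train, test = np.arange(0, 100), np.arange(100, 200)
X, y = bin_geno[train], pheno[train]
lam = 1.0
beta = np.linalg.solve(X.T @ X + lam * np.eye(n_bins), X.T @ y)

pred = bin_geno[test] @ beta
predictability = np.corrcoef(pred, pheno[test])[0, 1] ** 2
print(f"{n_bins} bins, squared correlation on held-out individuals: {predictability:.3f}")
```

    Repeating the fit over a range of bin counts reproduces, qualitatively, the existence of an optimal number of bins described in the abstract.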

  16. Quantitative modeling of the ionospheric response to geomagnetic activity

    Directory of Open Access Journals (Sweden)

    T. J. Fuller-Rowell

    Full Text Available A physical model of the coupled thermosphere and ionosphere has been used to determine the accuracy of model predictions of the ionospheric response to geomagnetic activity, and assess our understanding of the physical processes. The physical model is driven by empirical descriptions of the high-latitude electric field and auroral precipitation, as measures of the strength of the magnetospheric sources of energy and momentum to the upper atmosphere. Both sources are keyed to the time-dependent TIROS/NOAA auroral power index. The output of the model is the departure of the ionospheric F region from the normal climatological mean. A 50-day interval towards the end of 1997 has been simulated with the model for two cases. The first simulation uses only the electric fields and auroral forcing from the empirical models, and the second has an additional source of random electric field variability. In both cases, output from the physical model is compared with F-region data from ionosonde stations. Quantitative model/data comparisons have been performed to move beyond the conventional "visual" scientific assessment, in order to determine the value of the predictions for operational use. For this study, the ionosphere at two ionosonde stations has been studied in depth, one each from the northern and southern mid-latitudes. The model clearly captures the seasonal dependence in the ionospheric response to geomagnetic activity at mid-latitude, reproducing the tendency for decreased ion density in the summer hemisphere and increased densities in winter. In contrast to the "visual" success of the model, the detailed quantitative comparisons, which are necessary for space weather applications, are less impressive. The accuracy, or value, of the model has been quantified by evaluating the daily standard deviation, the root-mean-square error, and the correlation coefficient between the data and model predictions. The modeled quiet-time variability, or standard
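
    A small sketch of the skill metrics named above, applied to hypothetical daily series of observed and modeled F-region departures from the climatological mean; the numbers are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical daily departures from the climatological mean (50 days):
# ionosonde observations and the corresponding model predictions.
observed = rng.normal(0.0, 1.0, 50)
modeled = 0.6 * observed + rng.normal(0.0, 0.7, 50)

daily_std_obs = observed.std(ddof=1)
daily_std_mod = modeled.std(ddof=1)
rmse = np.sqrt(np.mean((modeled - observed) ** 2))
corr = np.corrcoef(modeled, observed)[0, 1]

print(f"std(obs) = {daily_std_obs:.2f}, std(model) = {daily_std_mod:.2f}, "
      f"RMSE = {rmse:.2f}, r = {corr:.2f}")
```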

  17. Automated quantitative gait analysis in animal models of movement disorders

    Directory of Open Access Journals (Sweden)

    Vandeputte Caroline

    2010-08-01

    Full Text Available Abstract Background Accurate and reproducible behavioral tests in animal models are of major importance in the development and evaluation of new therapies for central nervous system disease. In this study we investigated for the first time gait parameters of rat models for Parkinson's disease (PD, Huntington's disease (HD and stroke using the Catwalk method, a novel automated gait analysis test. Static and dynamic gait parameters were measured in all animal models, and these data were compared to readouts of established behavioral tests, such as the cylinder test in the PD and stroke rats and the rotarod tests for the HD group. Results Hemiparkinsonian rats were generated by unilateral injection of the neurotoxin 6-hydroxydopamine in the striatum or in the medial forebrain bundle. For Huntington's disease, a transgenic rat model expressing a truncated huntingtin fragment with multiple CAG repeats was used. Thirdly, a stroke model was generated by a photothrombotic induced infarct in the right sensorimotor cortex. We found that multiple gait parameters were significantly altered in all three disease models compared to their respective controls. Behavioural deficits could be efficiently measured using the cylinder test in the PD and stroke animals, and in the case of the PD model, the deficits in gait essentially confirmed results obtained by the cylinder test. However, in the HD model and the stroke model the Catwalk analysis proved more sensitive than the rotarod test and also added new and more detailed information on specific gait parameters. Conclusion The automated quantitative gait analysis test may be a useful tool to study both motor impairment and recovery associated with various neurological motor disorders.

  18. The optimal hyperspectral quantitative models for chlorophyll-a of chlorella vulgaris

    Science.gov (United States)

    Cheng, Qian; Wu, Xiuju

    2009-09-01

    Chlorophyll-a of Chlorella vulgaris had been related with spectrum. Based on hyperspectral measurement for Chlorella vulgaris, the hyperspectral characteristics of Chlorella vulgaris and their optimal hyperspectral quantitative models of chlorophyll-a (Chla) estimation were researched in situ experiment. The results showed that the optimal hyperspectral quantitative model of Chlorella vulgaris was Chla=180.5+1125787(R700)'+2.4 *109[(R700)']2 (P0Chlorella vulgaris, two reflectance crests were around 540 nm and 700 nm and their locations moved right while Chl-a concentration increased. The reflectance of Chlorella vulgaris decreases with Cha concentration increase in 540 nm, but on the contrary in 700nm.

  19. Quantitative model of the growth of floodplains by vertical accretion

    Science.gov (United States)

    Moody, J.A.; Troutman, B.M.

    2000-01-01

    A simple one-dimensional model is developed to quantitatively predict the change in elevation, over a period of decades, for vertically accreting floodplains. This unsteady model approximates the monotonic growth of a floodplain as an incremental but constant increase of net sediment deposition per flood for those floods of a partial duration series that exceed a threshold discharge corresponding to the elevation of the floodplain. Sediment deposition from each flood increases the elevation of the floodplain and consequently the magnitude of the threshold discharge, resulting in a decrease in the number of floods and in the growth rate of the floodplain. Floodplain growth curves predicted by this model are compared to empirical growth curves based on dendrochronology and to direct field measurements at five floodplain sites. The model was used to predict the value of net sediment deposition per flood which best fits (in a least squares sense) the empirical and field measurements; these values fall within the range of independent estimates of the net sediment deposition per flood based on empirical equations. These empirical equations permit the application of the model to the estimation of floodplain growth for other floodplains throughout the world which do not have detailed data of sediment deposition during individual floods. Copyright (C) 2000 John Wiley and Sons, Ltd.
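
    A minimal sketch of the incremental-accretion idea with invented numbers: annual flood peaks are drawn from a heavy-tailed distribution, and each flood that exceeds the discharge corresponding to the current floodplain elevation adds a fixed increment of elevation, which in turn raises the threshold and thins out future overtopping floods. The toy rating curve and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

YEARS = 200
DEPOSIT_PER_FLOOD = 0.02      # m of net sediment deposition per overtopping flood
STAGE_PER_DISCHARGE = 0.01    # m of stage rise per (m^3/s) above baseflow (toy rating curve)

elevation = 0.5               # initial floodplain elevation above the channel (m)
history = []

for year in range(YEARS):
    # Partial-duration series: a few independent flood peaks per year (m^3/s above baseflow).
    n_floods = rng.poisson(3)
    peaks = rng.lognormal(mean=3.5, sigma=0.8, size=n_floods)
    for q in peaks:
        stage = q * STAGE_PER_DISCHARGE
        if stage > elevation:                 # flood overtops the floodplain
            elevation += DEPOSIT_PER_FLOOD    # constant net deposition per flood
    history.append(elevation)

decades = np.array(history)[9::10]
print("elevation at the end of each decade (m):", np.round(decades, 2))
```

    The printed series flattens over time because a higher floodplain is overtopped by fewer floods, reproducing the decreasing growth rate described in the abstract.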

  20. Quantitative Model for Estimating Soil Erosion Rates Using 137Cs

    Institute of Scientific and Technical Information of China (English)

    YANGHAO; GHANGQING; et al.

    1998-01-01

    A quantitative model was developed to relate the amount of 137Cs loss from the soil profile to the rate of soil erosion. Following the mass balance model, the depth distribution pattern of 137Cs in the soil profile, the radioactive decay of 137Cs, the sampling year, and the difference in 137Cs fallout amount among years were taken into consideration. By introducing typical depth distribution functions of 137Cs into the model, detailed equations of the model were obtained for different soils. The model shows that the rate of soil erosion is mainly controlled by the depth distribution pattern of 137Cs, the year of sampling, and the percentage reduction in total 137Cs. The relationship between the rate of soil loss and 137Cs depletion is neither linear nor logarithmic. The depth distribution pattern of 137Cs is a major factor for estimating the rate of soil loss, and the soil erosion rate is directly related to the fraction of 137Cs content near the soil surface. The influences of the radioactive decay of 137Cs, the sampling year and the 137Cs input fraction are small compared with the others.

  1. Goal relevance as a quantitative model of human task relevance.

    Science.gov (United States)

    Tanner, James; Itti, Laurent

    2017-03-01

    The concept of relevance is used ubiquitously in everyday life. However, a general quantitative definition of relevance has been lacking, especially as pertains to quantifying the relevance of sensory observations to one's goals. We propose a theoretical definition for the information value of data observations with respect to a goal, which we call "goal relevance." We consider the probability distribution of an agent's subjective beliefs over how a goal can be achieved. When new data are observed, its goal relevance is measured as the Kullback-Leibler divergence between belief distributions before and after the observation. Theoretical predictions about the relevance of different obstacles in simulated environments agreed with the majority response of 38 human participants in 83.5% of trials, beating multiple machine-learning models. Our new definition of goal relevance is general, quantitative, explicit, and allows one to put a number onto the previously elusive notion of relevance of observations to a goal. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
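
    A minimal sketch of the definition with made-up numbers: an agent's beliefs over a handful of routes to a goal are updated by an observation, and the goal relevance of that observation is taken here as the Kullback-Leibler divergence from the prior to the posterior belief distribution (the direction of the divergence is an assumption of this sketch).

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) in bits, assuming p and q are strictly positive and sum to 1."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log2(p / q)))

# Prior beliefs over four possible ways of reaching the goal (hypothetical).
prior = np.array([0.40, 0.30, 0.20, 0.10])

# Observation A: an obstacle blocks route 1, so belief shifts away from it.
posterior_a = np.array([0.05, 0.50, 0.30, 0.15])

# Observation B: something unrelated to any route; beliefs barely move.
posterior_b = np.array([0.39, 0.31, 0.20, 0.10])

print(f"goal relevance of observation A: {kl_divergence(posterior_a, prior):.3f} bits")
print(f"goal relevance of observation B: {kl_divergence(posterior_b, prior):.3f} bits")
```

    The obstacle that forces a change of plan carries high goal relevance, while the irrelevant observation scores near zero, matching the intuition the abstract formalizes.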

  2. Experiments beyond the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Perl, M.L.

    1984-09-01

    This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to these subjects. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model. I will use some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology; I call these Experimental Needs. 92 references.

  3. A Quantitative Model to Estimate Drug Resistance in Pathogens

    Directory of Open Access Journals (Sweden)

    Frazier N. Baker

    2016-12-01

    Full Text Available Pneumocystis pneumonia (PCP) is an opportunistic infection that occurs in humans and other mammals with debilitated immune systems. These infections are caused by fungi in the genus Pneumocystis, which are not susceptible to standard antifungal agents. Despite decades of research and drug development, the primary treatment and prophylaxis for PCP remains a combination of trimethoprim (TMP) and sulfamethoxazole (SMX) that targets two enzymes in folic acid biosynthesis, dihydrofolate reductase (DHFR) and dihydropteroate synthase (DHPS), respectively. There is growing evidence of emerging resistance by Pneumocystis jirovecii (the species that infects humans) to TMP-SMX associated with mutations in the targeted enzymes. In the present study, we report the development of an accurate quantitative model to predict changes in the binding affinity of inhibitors (Ki, IC50) to the mutated proteins. The model is based on evolutionary information and amino acid covariance analysis. Predicted changes in binding affinity upon mutations highly correlate with the experimentally measured data. While trained on Pneumocystis jirovecii DHFR/TMP data, the model shows similar or better performance when evaluated on the resistance data for a different inhibitor of PjDHFR, another drug/target pair (PjDHPS/SMX) and another organism (Staphylococcus aureus DHFR/TMP). Therefore, we anticipate that the developed prediction model will be useful in the evaluation of possible resistance of the newly sequenced variants of the pathogen and can be extended to other drug targets and organisms.

  4. Quantitative Modeling of Human-Environment Interactions in Preindustrial Time

    Science.gov (United States)

    Sommer, Philipp S.; Kaplan, Jed O.

    2017-04-01

    Quantifying human-environment interactions and anthropogenic influences on the environment prior to the Industrial revolution is essential for understanding the current state of the earth system. This is particularly true for the terrestrial biosphere, but marine ecosystems and even climate were likely modified by human activities centuries to millennia ago. Direct observations are, however, very sparse in space and time, especially as one considers prehistory. Numerical models are therefore essential to produce a continuous picture of human-environment interactions in the past. Agent-based approaches, while widely applied to quantifying human influence on the environment in localized studies, are unsuitable for global spatial domains and Holocene timescales because of computational demands and large parameter uncertainty. Here we outline a new paradigm for the quantitative modeling of human-environment interactions in preindustrial time that is adapted to the global Holocene. Rather than attempting to simulate agency directly, the model is informed by a suite of characteristics describing those things about society that cannot be predicted on the basis of environment, e.g., diet, presence of agriculture, or range of animals exploited. These categorical data are combined with the properties of the physical environment in a coupled human-environment model. The model is, at its core, a dynamic global vegetation model with a module for simulating crop growth that is adapted for preindustrial agriculture. This allows us to simulate yield and calories for feeding both humans and their domesticated animals. We couple this basic caloric availability with a simple demographic model to calculate potential population, and, constrained by labor requirements and land limitations, we create scenarios of land use and land cover on a moderate-resolution grid. We further implement a feedback loop where anthropogenic activities lead to changes in the properties of the physical

  5. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.
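
    A minimal sketch of the core statistical model checking loop, not of PVeStA itself: independent simulations of a toy discrete-time Markov chain are run in parallel worker processes, and the probability that a reachability property holds is estimated together with a normal-approximation confidence interval. The chain, the property, and the sample size are invented for illustration.

```python
import random
from concurrent.futures import ProcessPoolExecutor
from math import sqrt

def sample_property(seed):
    """One simulation of a toy discrete-time Markov chain; returns True if the
    chain reaches state 2 within 20 steps (the property being checked)."""
    rng = random.Random(seed)
    transition = {0: [(0.6, 0), (0.4, 1)],
                  1: [(0.5, 0), (0.3, 1), (0.2, 2)],
                  2: [(1.0, 2)]}
    state = 0
    for _ in range(20):
        r, acc = rng.random(), 0.0
        for prob, nxt in transition[state]:
            acc += prob
            if r <= acc:
                state = nxt
                break
        if state == 2:
            return True
    return False

def main():
    n_samples = 20000
    # Independent samples are embarrassingly parallel, which is where the
    # parallelization gains described in the abstract come from.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(sample_property, range(n_samples), chunksize=500))
    p_hat = sum(results) / n_samples
    half_width = 1.96 * sqrt(p_hat * (1.0 - p_hat) / n_samples)
    print(f"P(reach state 2 within 20 steps) ~ {p_hat:.4f} +/- {half_width:.4f}")

if __name__ == "__main__":
    main()
```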

  6. Towards a quantitative study of the VUV photolysis of methane: preliminary experiment on trichloromethane

    Science.gov (United States)

    Gans, B.; Boyé-Péronne, S.; Douin, S.; Gauyacq, D.

    2010-01-01

    Photolysis of methane in Titan's stratosphere is the starting point of gas phase carbon chemistry. Quantitative studies of methane photolytic products are of utmost importance for Titan atmosphere models. With this aim, two experimental strategies are presented in this article. Preliminary results demonstrate the possibility of using CRDS absorption coupled with pulsed photolysis on the example of a halogenated derivative of methane: trichloromethane (CHCl3).

  7. First principles pharmacokinetic modeling: A quantitative study on Cyclosporin

    DEFF Research Database (Denmark)

    Mošat', Andrej; Lueshen, Eric; Heitzig, Martina

    2013-01-01

    ... renal and hepatic clearances, elimination half-life, and mass transfer coefficients, to establish drug biodistribution dynamics in all organs and tissues. This multi-scale model satisfies first principles and conservation of mass, species and momentum. Prediction of organ drug bioaccumulation as a function of cardiac output, physiology, pathology or administration route may be possible with the proposed PBPK framework. Successful application of our model-based drug development method may lead to more efficient preclinical trials, accelerated knowledge gain from animal experiments, and shortened time-to-market.
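
    A minimal sketch of the flow-limited PBPK idea on a drastically reduced system (blood plus two tissue compartments), not the whole-body cyclosporine model of the paper; all flows, volumes, partition coefficients, and the dose are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Invented, flow-limited two-tissue PBPK parameters (toy values).
Q_LIVER, Q_MUSCLE = 90.0, 45.0                 # blood flows (L/h)
V_BLOOD, V_LIVER, V_MUSCLE = 5.0, 1.8, 29.0    # compartment volumes (L)
KP_LIVER, KP_MUSCLE = 4.0, 1.5                 # tissue:blood partition coefficients
CL_HEPATIC = 20.0                              # hepatic clearance (L/h)
DOSE = 100.0                                   # IV bolus into blood (mg)

def pbpk(t, y):
    c_blood, c_liver, c_muscle = y
    # Flow-limited exchange: tissue venous concentration = C_tissue / Kp.
    j_liver = Q_LIVER * (c_blood - c_liver / KP_LIVER)
    j_muscle = Q_MUSCLE * (c_blood - c_muscle / KP_MUSCLE)
    elimination = CL_HEPATIC * c_liver / KP_LIVER
    return [(-j_liver - j_muscle) / V_BLOOD,
            (j_liver - elimination) / V_LIVER,
            j_muscle / V_MUSCLE]

sol = solve_ivp(pbpk, (0.0, 24.0), [DOSE / V_BLOOD, 0.0, 0.0],
                t_eval=np.linspace(0.0, 24.0, 25), rtol=1e-8)

for t, cb, cl, cm in zip(sol.t[::6], *[c[::6] for c in sol.y]):
    print(f"t = {t:4.1f} h  blood {cb:7.3f}  liver {cl:7.3f}  muscle {cm:7.3f} (mg/L)")
```

    A full framework of the kind the abstract describes would extend this pattern to all organs and tie the parameters to measured physiology rather than invented constants.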

  8. Quantitative Modeling of the Alternative Pathway of the Complement System.

    Science.gov (United States)

    Zewde, Nehemiah; Gorham, Ronald D; Dorado, Angel; Morikis, Dimitrios

    2016-01-01

    The complement system is an integral part of innate immunity that detects and eliminates invading pathogens through a cascade of reactions. The destructive effects of the complement activation on host cells are inhibited through versatile regulators that are present in plasma and bound to membranes. Impairment in the capacity of these regulators to function in the proper manner results in autoimmune diseases. To better understand the delicate balance between complement activation and regulation, we have developed a comprehensive quantitative model of the alternative pathway. Our model incorporates a system of ordinary differential equations that describes the dynamics of the four steps of the alternative pathway under physiological conditions: (i) initiation (fluid phase), (ii) amplification (surfaces), (iii) termination (pathogen), and (iv) regulation (host cell and fluid phase). We have examined complement activation and regulation on different surfaces, using the cellular dimensions of a characteristic bacterium (E. coli) and host cell (human erythrocyte). In addition, we have incorporated neutrophil-secreted properdin into the model highlighting the cross talk of neutrophils with the alternative pathway in coordinating innate immunity. Our study yields a series of time-dependent response data for all alternative pathway proteins, fragments, and complexes. We demonstrate the robustness of alternative pathway on the surface of pathogens in which complement components were able to saturate the entire region in about 54 minutes, while occupying less than one percent on host cells at the same time period. Our model reveals that tight regulation of complement starts in fluid phase in which propagation of the alternative pathway was inhibited through the dismantlement of fluid phase convertases. Our model also depicts the intricate role that properdin released from neutrophils plays in initiating and propagating the alternative pathway during bacterial infection.

  9. Modeling the Experience of Emotion

    OpenAIRE

    Broekens, Joost

    2009-01-01

    Affective computing has proven to be a viable field of research comprised of a large number of multidisciplinary researchers resulting in work that is widely published. The majority of this work consists of computational models of emotion recognition, computational modeling of causal factors of emotion and emotion expression through rendered and robotic faces. A smaller part is concerned with modeling the effects of emotion, formal modeling of cognitive appraisal theory and models of emergent...

  10. Dynamic modelling and analysis of biochemical networks: mechanism-based models and model-based experiments.

    Science.gov (United States)

    van Riel, Natal A W

    2006-12-01

    Systems biology applies quantitative, mechanistic modelling to study genetic networks, signal transduction pathways and metabolic networks. Mathematical models of biochemical networks can look very different. An important reason is that the purpose and application of a model are essential for the selection of the best mathematical framework. Fundamental aspects of selecting an appropriate modelling framework and a strategy for model building are discussed. Concepts and methods from system and control theory provide a sound basis for the further development of improved and dedicated computational tools for systems biology. Identification of the network components and rate constants that are most critical to the output behaviour of the system is one of the major problems raised in systems biology. Current approaches and methods of parameter sensitivity analysis and parameter estimation are reviewed. It is shown how these methods can be applied in the design of model-based experiments which iteratively yield models that are decreasingly wrong and increasingly gain predictive power.

  11. Incorporation of caffeine into a quantitative model of fatigue and sleep.

    Science.gov (United States)

    Puckeridge, M; Fulcher, B D; Phillips, A J K; Robinson, P A

    2011-03-21

    A recent physiologically based model of human sleep is extended to incorporate the effects of caffeine on sleep-wake timing and fatigue. The model includes the sleep-active neurons of the hypothalamic ventrolateral preoptic area (VLPO), the wake-active monoaminergic brainstem populations (MA), their interactions with cholinergic/orexinergic (ACh/Orx) input to MA, and circadian and homeostatic drives. We model two effects of caffeine on the brain due to competitive antagonism of adenosine (Ad): (i) a reduction in the homeostatic drive and (ii) an increase in cholinergic activity. By comparing the model output to experimental data, constraints are determined on the parameters that describe the action of caffeine on the brain. In accord with experiment, the ranges of these parameters imply significant variability in caffeine sensitivity between individuals, with caffeine's effectiveness in reducing fatigue being highly dependent on an individual's tolerance, and past caffeine and sleep history. Although there are wide individual differences in caffeine sensitivity and thus in parameter values, once the model is calibrated for an individual it can be used to make quantitative predictions for that individual. A number of applications of the model are examined, using exemplar parameter values, including: (i) quantitative estimation of the sleep loss and the delay to sleep onset after taking caffeine for various doses and times; (ii) an analysis of the system's stable states showing that the wake state during sleep deprivation is stabilized after taking caffeine; and (iii) comparing model output successfully to experimental values of subjective fatigue reported in a total sleep deprivation study examining the reduction of fatigue with caffeine. This model provides a framework for quantitatively assessing optimal strategies for using caffeine, on an individual basis, to maintain performance during sleep deprivation.

  12. Quantitative analysis of cardiac tissue including fibroblasts using three-dimensional confocal microscopy and image reconstruction: towards a basis for electrophysiological modeling

    NARCIS (Netherlands)

    Schwab, Bettina C.; Seemann, Gunnar; Lasher, Richard A.; Torres, Natalia S.; Wülfers, Eike M.; Arp, Maren; Carruth, Eric D.; Bridge, John H.B.; Sachse, Frank B.

    2013-01-01

    Electrophysiological modeling of cardiac tissue is commonly based on functional and structural properties measured in experiments. Our knowledge of these properties is incomplete, in particular their remodeling in disease. Here, we introduce a methodology for quantitative tissue characterization bas

  13. A quantitative model of technological catch-up

    Directory of Open Access Journals (Sweden)

    Hossein Gholizadeh

    2015-02-01

    Full Text Available This article presents a quantitative model for the analysis of the technological gap. The rates of development of technological leaders and followers in nanotechnology are expressed in terms of coupled equations. On the basis of this model (first step), the comparative technological gap and its rate of change are studied, and the dynamics of the gap between leader and follower can be calculated. In the second step, we estimate the technology gap using the metafrontier approach and then test the relationship between the technology gap and the quality of the catch-up dimensions identified in the previous step. The usefulness of this approach is then demonstrated in the analysis of the technological gap in nanotechnology between Iran, the leader in the Middle East, and the world. We present the behaviors of the technological leader and followers. Finally, Iran's position is analyzed, the effective dimensions of catch-up are identified, and suggestions are offered that could form a basis for Iran's long-term policies.

  14. A quantitative model for assessing community dynamics of pleistocene mammals.

    Science.gov (United States)

    Lyons, S Kathleen

    2005-06-01

    Previous studies have suggested that species responded individualistically to the climate change of the last glaciation, expanding and contracting their ranges independently. Consequently, many researchers have concluded that community composition is plastic over time. Here I quantitatively assess changes in community composition over broad timescales and assess the effect of range shifts on community composition. Data on Pleistocene mammal assemblages from the FAUNMAP database were divided into four time periods (preglacial, full glacial, postglacial, and modern). Simulation analyses were designed to determine whether the degree of change in community composition is consistent with independent range shifts, given the distribution of range shifts observed. Results indicate that many of the communities examined in the United States were more similar through time than expected if individual range shifts were completely independent. However, in each time transition examined, there were areas of nonanalogue communities. I conducted sensitivity analyses to explore how the results were affected by the assumptions of the null model. Conclusions about changes in mammalian distributions and community composition are robust with respect to the assumptions of the model. Thus, whether because of biotic interactions or because of common environmental requirements, community structure through time is more complex than previously thought.
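    The null-model logic described above, shifting each species' range independently and asking how much community composition should change, can be illustrated with a deliberately simplified one-dimensional sketch. The gradient geometry, range widths, shift magnitudes and the use of a per-site Jaccard index are assumptions for illustration; the study itself used FAUNMAP assemblages and observed range shifts.

        # Simplified 1-D sketch of an independent range-shift null model (illustrative only).
        import numpy as np
        rng = np.random.default_rng(0)

        n_species, n_sites = 40, 60
        centers = rng.uniform(0, 100, n_species)      # range centres along a 1-D gradient
        widths = rng.uniform(5, 20, n_species)        # range half-widths
        sites = np.linspace(0, 100, n_sites)
        shifts = rng.normal(0, 10, n_species)         # hypothetical observed shift magnitudes

        def occupancy(c):
            # presence/absence matrix: a species occupies every site within its range
            return np.abs(sites[None, :] - c[:, None]) <= widths[:, None]

        def mean_jaccard(a, b):
            inter = (a & b).sum(axis=0)
            union = (a | b).sum(axis=0)
            return np.mean(np.where(union > 0, inter / np.maximum(union, 1), 1.0))

        before = occupancy(centers)
        null_sims = [mean_jaccard(before,
                                  occupancy(centers + rng.permutation(shifts) * rng.choice([-1, 1], n_species)))
                     for _ in range(500)]
        print(f"null similarity: mean {np.mean(null_sims):.2f}, 5th pct {np.percentile(null_sims, 5):.2f}")

    An observed before/after similarity higher than this null distribution would point, as in the study, to communities that are more coherent through time than independent range shifts alone would produce.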

  15. A Mathematical Calculation Model Using Biomarkers to Quantitatively Determine the Relative Source Proportion of Mixed Oils

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    It is difficult to identify the source(s) of mixed oils derived from multiple source rocks, and in particular the relative contribution of each source rock. Artificial mixing experiments using typical crude oils and ratios of different biomarkers show that the relative contributions change non-linearly when two oils with different biomarker concentrations are mixed. This may lead to incorrect conclusions if biomarker ratios and a simple binary linear equation are used to calculate the contribution of each end-member to the mixed oil. The changes of biomarker ratios with the mixing proportion of end-member oils are more complex in the ternary mixing model than in the binary mixing model, and when four or more oils mix, the contribution of each end-member to the mixed oil cannot be calculated from biomarker ratios and a simple formula at all. Artificial mixing experiments on typical oils reveal that the absolute concentrations of biomarkers in the mixed oil change linearly with the mixing proportion of each end-member, and mathematical inference verifies these linear changes. Calculation methods that use the absolute concentrations or the ratios of biomarkers to quantitatively determine the proportion of each end-member in a mixed oil are deduced from the artificial experiments and by theoretical inference. The ratio of two biomarker compounds changes as a hyperbola with the mixing proportion in the binary mixing model, as a hyperboloid in the ternary mixing model, and as a hypersurface when more than three end-members are mixed. The mixing proportion of each end-member can therefore be quantitatively determined with these mathematical models, using the absolute concentrations and the ratios of biomarkers. The mathematical calculation model is more economical, convenient, accurate and reliable than conventional artificial mixing methods.
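    The two central observations of the abstract, that absolute biomarker concentrations mix linearly with the mixing proportion while biomarker ratios follow a hyperbola, can be shown with a short numerical example. The end-member concentrations below are invented for illustration; only the mixing arithmetic is taken from the abstract.

        # Binary oil-mixing sketch: linear mixing of concentrations vs. hyperbolic mixing of ratios.
        # End-member biomarker concentrations (e.g. micrograms per gram of oil) are illustrative.
        cA = {"hopane": 120.0, "sterane": 40.0}   # end-member oil A
        cB = {"hopane": 30.0,  "sterane": 90.0}   # end-member oil B

        def mix_concentration(x, key):
            """Concentration in the mixture for fraction x of oil A (linear in x)."""
            return x * cA[key] + (1 - x) * cB[key]

        def mix_ratio(x):
            """Hopane/sterane ratio in the mixture (hyperbolic in x)."""
            return mix_concentration(x, "hopane") / mix_concentration(x, "sterane")

        def fraction_from_concentration(c_mix, key):
            """Invert the linear mixing law to recover the fraction of oil A."""
            return (c_mix - cB[key]) / (cA[key] - cB[key])

        for x in (0.0, 0.25, 0.5, 0.75, 1.0):
            print(f"x={x:.2f}  hopane={mix_concentration(x, 'hopane'):6.1f}  ratio={mix_ratio(x):.2f}")
        print("recovered fraction:", fraction_from_concentration(mix_concentration(0.3, "hopane"), "hopane"))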

  16. Quantitative property-structural relation modeling on polymeric dielectric materials

    Science.gov (United States)

    Wu, Ke

    Nowadays, polymeric materials have attracted more and more attention in dielectric applications. But searching for a material with desired properties is still largely based on trial and error. To facilitate the development of new polymeric materials, heuristic models built using the Quantitative Structure Property Relationships (QSPR) techniques can provide reliable "working solutions". In this thesis, the application of QSPR on polymeric materials is studied from two angles: descriptors and algorithms. A novel set of descriptors, called infinite chain descriptors (ICD), are developed to encode the chemical features of pure polymers. ICD is designed to eliminate the uncertainty of polymer conformations and inconsistency of molecular representation of polymers. Models for the dielectric constant, band gap, dielectric loss tangent and glass transition temperatures of organic polymers are built with high prediction accuracy. Two new algorithms, the physics-enlightened learning method (PELM) and multi-mechanism detection, are designed to deal with two typical challenges in material QSPR. PELM is a meta-algorithm that utilizes the classic physical theory as guidance to construct the candidate learning function. It shows better out-of-domain prediction accuracy compared to the classic machine learning algorithm (support vector machine). Multi-mechanism detection is built based on a cluster-weighted mixing model similar to a Gaussian mixture model. The idea is to separate the data into subsets where each subset can be modeled by a much simpler model. The case study on glass transition temperature shows that this method can provide better overall prediction accuracy even though less data is available for each subset model. In addition, the techniques developed in this work are also applied to polymer nanocomposites (PNC). PNC are new materials with outstanding dielectric properties. As a key factor in determining the dispersion state of nanoparticles in the polymer matrix

  17. Determination of Calcium in Cereal with Flame Atomic Absorption Spectroscopy: An Experiment for a Quantitative Methods of Analysis Course

    Science.gov (United States)

    Bazzi, Ali; Kreuz, Bette; Fischer, Jeffrey

    2004-01-01

    An experiment for determination of calcium in cereal using two-increment standard addition method in conjunction with flame atomic absorption spectroscopy (FAAS) is demonstrated. The experiment is intended to introduce students to the principles of atomic absorption spectroscopy giving them hands on experience using quantitative methods of…
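    The calculation behind a standard-addition determination of this kind can be sketched as below. The absorbance readings and added concentrations are invented placeholders, and the exact two-increment protocol of the published experiment may differ; only the generic standard-addition arithmetic (x-intercept of absorbance versus added concentration) is shown.

        # Illustrative standard-addition calculation (absorbance values are invented).
        # A line is fitted to absorbance vs. added Ca concentration; the magnitude of the
        # x-intercept estimates the analyte concentration in the prepared sample solution.
        import numpy as np

        added_ca_ppm = np.array([0.0, 2.0, 4.0])       # unspiked sample plus two standard increments
        absorbance   = np.array([0.120, 0.205, 0.291])

        slope, intercept = np.polyfit(added_ca_ppm, absorbance, 1)
        c_sample = intercept / slope                    # ppm Ca in the sample solution
        print(f"slope={slope:.4f} A/ppm, intercept={intercept:.4f}, Ca in solution ~= {c_sample:.2f} ppm")
        # Multiply by any dilution factor and divide by the cereal mass to report mg Ca per gram of cereal.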

  18. Being a quantitative interviewer: qualitatively exploring interviewers' experiences in a longitudinal cohort study

    Directory of Open Access Journals (Sweden)

    Derrett Sarah

    2011-12-01

    Full Text Available Abstract Background Many studies of health outcomes rely on data collected by interviewers administering highly-structured (quantitative) questionnaires to participants. Little appears to be known about the experiences of such interviewers. This paper explores interviewer experiences of working on a longitudinal study in New Zealand (the Prospective Outcomes of Injury Study - POIS). Interviewers administer highly-structured questionnaires to participants, usually by telephone, and enter data into a secure computer program. The research team had expectations of interviewers including: consistent questionnaire administration, timeliness, proportions of potential participants recruited and an empathetic communication style. This paper presents results of a focus group to qualitatively explore with the team of interviewers their experiences, problems encountered, strategies, support systems used and training. Methods A focus group with interviewers involved in the POIS interviews was held; it was audio-recorded and transcribed. The analytical method was thematic, with output intended to be descriptive and interpretive. Results Nine interviewers participated in the focus group (average time in interviewer role was 31 months). Key themes were: (1) the positive aspects of the quantitative interviewer role (i.e. relationships and resilience, insights gained, and participants' feedback), (2) difficulties interviewers encountered and solutions identified (i.e. stories lost or incomplete, forgotten appointments, telling the stories, acknowledging distress, stories reflected, and debriefing and support), and (3) meeting POIS researcher expectations (i.e. performance standards, time-keeping, dealing exclusively with the participant and maintaining privacy). Conclusions Interviewers demonstrated great skill in the way they negotiated research team expectations whilst managing the relationships with participants. Interviewers found it helpful to have a research protocol in

  19. Towards Generic Models of Player Experience

    DEFF Research Database (Denmark)

    Shaker, Noor; Shaker, Mohammad; Abou-Zleikha, Mohamed

    2015-01-01

    Most of the models used are context-dependent and their applicability is usually limited to the system and the data used for model construction. Establishing models of user experience that are highly scalable while maintaining the performance constitutes an important research direction. In this paper, we propose generic models of user experience in the computer games domain. We employ two datasets collected from players' interactions with two games from different genres where accurate models of player experience were previously built. We take the approach one step further by investigating the modelling mechanism's ability to generalise over the two datasets. We further examine whether generic features of player behaviour can be defined and used to boost the modelling performance. The accuracies obtained in both experiments indicate a promise for the proposed approach and suggest that game-independent player experience models can be built.

  20. Murine model of disseminated fusariosis: evaluation of the fungal burden by traditional CFU and quantitative PCR.

    Science.gov (United States)

    González, Gloria M; Márquez, Jazmín; Treviño-Rangel, Rogelio de J; Palma-Nicolás, José P; Garza-González, Elvira; Ceceñas, Luis A; Gerardo González, J

    2013-10-01

    Systemic disease is the most severe clinical form of fusariosis, and its treatment poses a challenge due to the refractory response to antifungals. Treatment for murine Fusarium solani infection has been described in models that employ CFU quantitation in organs as a parameter of therapeutic efficacy. However, CFU counts do not precisely reflect the number of cells for filamentous fungi such as F. solani. In this study, we developed a murine model of disseminated fusariosis and compared the fungal burden with two methods: CFU and quantitative PCR. ICR and BALB/c mice received an intravenous injection of 1 × 10^7 conidia of F. solani per mouse. On days 2, 5, 7, and 9, mice from each strain were killed. The spleen and kidneys of each animal were removed and evaluated by qPCR and CFU determinations. Results from the CFU assay indicated that the spleen and kidneys had almost the same fungal burden in both BALB/c and ICR mice throughout the days of evaluation. In the qPCR assay, the spleen and kidneys of each mouse strain showed increasing fungal burden at each determination throughout the entire experiment. The fungal load determined by the qPCR assay was significantly greater than that determined from CFU measurements of tissue. qPCR could be considered as a tool for quantitative evaluation of fungal burden in experimental disseminated F. solani infection.

  1. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee

    2016-01-01

    Full Text Available Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives to achieve operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods to increase supply chain visibility are still ambiguous. Based on the extant research on supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using the Z score from Six Sigma methodology to evaluate and improve the level of supply chain visibility.
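    The paper's exact quantification is not reproduced in the abstract. The sketch below only shows the generic Six Sigma Z-score arithmetic that such a process-capability view of visibility could build on; the visibility metric, its specification limit, the sample values and the conventional 1.5-sigma shift are all assumptions.

        # Generic Six Sigma Z-score sketch for a "visibility" process metric (illustrative only).
        import numpy as np
        from scipy import stats

        # Hypothetical daily measurements of a visibility metric, e.g. share of orders traceable end-to-end.
        visibility = np.array([0.91, 0.94, 0.88, 0.95, 0.92, 0.90, 0.93, 0.89, 0.96, 0.92])
        lower_spec = 0.85          # assumed lower specification limit

        mu, sigma = visibility.mean(), visibility.std(ddof=1)
        z_score = (mu - lower_spec) / sigma                        # process capability against the spec
        defect_rate = stats.norm.cdf(lower_spec, loc=mu, scale=sigma)
        sigma_level = stats.norm.ppf(1 - defect_rate) + 1.5        # conventional 1.5-sigma long-term shift
        print(f"Z = {z_score:.2f}, expected below-spec rate = {defect_rate:.4f}, sigma level ~= {sigma_level:.2f}")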

  2. Epistasis analysis for quantitative traits by functional regression model.

    Science.gov (United States)

    Zhang, Futao; Boerwinkle, Eric; Xiong, Momiao

    2014-06-01

    The critical barrier in interaction analysis for rare variants is that most traditional statistical methods for testing interactions were originally designed for testing the interaction between common variants and are difficult to apply to rare variants because of their prohibitive computational time and poor ability. The great challenges for successful detection of interactions with next-generation sequencing (NGS) data are (1) lack of methods for interaction analysis with rare variants, (2) severe multiple testing, and (3) time-consuming computations. To meet these challenges, we shift the paradigm of interaction analysis between two loci to interaction analysis between two sets of loci or genomic regions and collectively test interactions between all possible pairs of SNPs within two genomic regions. In other words, we take a genome region as a basic unit of interaction analysis and use high-dimensional data reduction and functional data analysis techniques to develop a novel functional regression model to collectively test interactions between all possible pairs of single nucleotide polymorphisms (SNPs) within two genome regions. By intensive simulations, we demonstrate that the functional regression models for interaction analysis of the quantitative trait have the correct type 1 error rates and a much better ability to detect interactions than the current pairwise interaction analysis. The proposed method was applied to exome sequence data from the NHLBI's Exome Sequencing Project (ESP) and CHARGE-S study. We discovered 27 pairs of genes showing significant interactions after applying the Bonferroni correction (P-values < 4.58 × 10(-10)) in the ESP, and 11 were replicated in the CHARGE-S study.

  3. Modelling bacterial growth in quantitative microbiological risk assessment: is it possible?

    Science.gov (United States)

    Nauta, Maarten J

    2002-03-01

    Quantitative microbiological risk assessment (QMRA), predictive modelling and HACCP may be used as tools to increase food safety and can be integrated fruitfully for many purposes. However, when QMRA is applied for public health issues like the evaluation of the status of public health, existing predictive models may not be suited to model bacterial growth. In this context, precise quantification of risks is more important than in the context of food manufacturing alone. In this paper, the modular process risk model (MPRM) is briefly introduced as a QMRA modelling framework. This framework can be used to model the transmission of pathogens through any food pathway, by assigning one of six basic processes (modules) to each of the processing steps. Bacterial growth is one of these basic processes. For QMRA, models of bacterial growth need to be expressed in terms of probability, for example to predict the probability that a critical concentration is reached within a certain amount of time. In contrast, available predictive models are developed and validated to produce point estimates of population sizes and therefore do not fit with this requirement. Recent experience from a European risk assessment project is discussed to illustrate some of the problems that may arise when predictive growth models are used in QMRA. It is suggested that a new type of predictive models needs to be developed that incorporates modelling of variability and uncertainty in growth.

  4. Numerical experiments modelling turbulent flows

    Directory of Open Access Journals (Sweden)

    Trefilík Jiří

    2014-03-01

    Full Text Available The work aims at investigating the possibilities of modelling transonic flows, mainly in external aerodynamics. New results are presented and compared with reference data and previously achieved results. For the turbulent flow simulations, two modifications of the basic k – ω model are employed: SST and TNT. The numerical solution was achieved using the MacCormack scheme on structured non-orthogonal grids. Artificial dissipation was added to improve the numerical stability.

  5. Towards Generic Models of Player Experience

    DEFF Research Database (Denmark)

    Shaker, Noor; Shaker, Mohammad; Abou-Zleikha, Mohamed

    2015-01-01

    Context personalisation is a flourishing area of research with many applications. Context personalisation systems usually employ a user model to predict the appeal of the context to a particular user given a history of interactions. Most of the models used are context-dependent and their applicability is usually limited to the system and the data used for model construction. Establishing models of user experience that are highly scalable while maintaining the performance constitutes an important research direction. In this paper, we propose generic models of user experience in the computer games domain. We further examine whether generic features of player behaviour can be defined and used to boost the modelling performance. The accuracies obtained in both experiments indicate a promise for the proposed approach and suggest that game-independent player experience models can be built.

  6. Firn Model Inter-Comparison Experiment (FirnMICE) (Invited)

    Science.gov (United States)

    Lundin, J.; Arthern, R. J.; Buizert, C.; Cummings, E.; Essery, R.; Ligtenberg, S.; Orsi, A. J.; Simonsen, S. B.; Brook, E.; Leahy, W.; Stevens, C.; Harris, P.; Waddington, E. D.

    2013-12-01

    Firn evolution plays important roles in glaciology; however, the physical formulation of the compaction law, including sensitivities to temperature and accumulation rate, is an active research topic. We forced 10 firn-densification models in 6 different experiments by altering temperature and accumulation-rate boundary conditions and compared the steady-state and transient behavior of the models. We find that the models produce different results in both steady-state and transient modes for a suite of metrics, including depth-density and depth-age profiles. We use this study to quantitatively characterize the differences between firn models; to provide a benchmark of results for future models; to provide a basis to quantify model uncertainties; and to guide future directions of firn-densification modeling.

  7. Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    DEFF Research Database (Denmark)

    ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLAN) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLAN semantics based on discrete-time Markov chains. The Maude implementation of PFLAN is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average...

  8. Firn Model Intercomparison Experiment (FirnMICE)

    DEFF Research Database (Denmark)

    Lundin, Jessica M.D.; Stevens, C. Max; Arthern, Robert

    2017-01-01

    Evolution of cold dry snow and firn plays important roles in glaciology; however, the physical formulation of a densification law is still an active research topic. We forced eight firn-densification models and one seasonal-snow model in six different experiments by imposing step changes in temperature and accumulation rate. The Firn Model Intercomparison Experiment can provide a benchmark of results for future models, provide a basis to quantify model uncertainties and guide future directions of firn-densification modeling.

  9. Herd immunity and pneumococcal conjugate vaccine: a quantitative model.

    Science.gov (United States)

    Haber, Michael; Barskey, Albert; Baughman, Wendy; Barker, Lawrence; Whitney, Cynthia G; Shaw, Kate M; Orenstein, Walter; Stephens, David S

    2007-07-20

    Invasive pneumococcal disease in older children and adults declined markedly after introduction in 2000 of the pneumococcal conjugate vaccine for young children. An empirical quantitative model was developed to estimate the herd (indirect) effects on the incidence of invasive disease among persons ≥5 years of age induced by vaccination of young children with 1, 2, or ≥3 doses of the pneumococcal conjugate vaccine, Prevnar (PCV7), containing serotypes 4, 6B, 9V, 14, 18C, 19F and 23F. From 1994 to 2003, cases of invasive pneumococcal disease were prospectively identified in Georgia Health District-3 (eight metropolitan Atlanta counties) by Active Bacterial Core surveillance (ABCs). From 2000 to 2003, vaccine coverage levels of PCV7 for children aged 19-35 months in Fulton and DeKalb counties (of Atlanta) were estimated from the National Immunization Survey (NIS). Based on incidence data and the estimated average number of doses received by 15 months of age, a Poisson regression model was fit, describing the trend in invasive pneumococcal disease in groups not targeted for vaccination (i.e., adults and older children) before and after the introduction of PCV7. Highly significant declines in all the serotypes contained in PCV7 in all unvaccinated populations (5-19, 20-39, 40-64, and >64 years) from 2000 to 2003 were found under the model. No significant change in incidence was seen from 1994 to 1999, indicating rates were stable prior to vaccine introduction. Among unvaccinated persons 5+ years of age, the modeled incidence of disease caused by PCV7 serotypes as a group dropped 38.4%, 62.0%, and 76.6% for 1, 2, and 3 doses, respectively, received on average by the population of children by the time they are 15 months of age. Incidence of serotypes 14 and 23F had consistent significant declines in all unvaccinated age groups. In contrast, the herd immunity effects on vaccine-related serotype 6A incidence were inconsistent. Increasing trends of non
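    The general form of the regression described above, case counts modelled as Poisson with a population offset and average doses as the covariate, can be sketched as follows. The case counts, dose averages and population figures below are synthetic placeholders, not the ABCs/NIS data, and statsmodels is simply one convenient way to fit such a model.

        # Minimal Poisson-regression sketch with a population offset (synthetic data, not ABCs/NIS data).
        import numpy as np
        import statsmodels.api as sm

        years      = np.arange(1994, 2004)
        avg_doses  = np.array([0, 0, 0, 0, 0, 0, 0.4, 1.1, 1.8, 2.3])   # assumed mean PCV7 doses by 15 months
        population = np.full(years.shape, 2_000_000)                    # person-years at risk (assumed)
        cases      = np.array([310, 298, 305, 312, 300, 307, 280, 230, 180, 140])  # invented case counts

        X = sm.add_constant(avg_doses)
        model = sm.GLM(cases, X, family=sm.families.Poisson(), offset=np.log(population))
        fit = model.fit()
        pct_decline_per_dose = 100 * (1 - np.exp(fit.params[1]))
        print(fit.summary().tables[1])
        print(f"estimated decline in incidence per average dose ~= {pct_decline_per_dose:.1f}%")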

  10. Edesign: Primer and Enhanced Internal Probe Design Tool for Quantitative PCR Experiments and Genotyping Assays.

    Directory of Open Access Journals (Sweden)

    Yasumasa Kimura

    Full Text Available Analytical PCR experiments preferably use internal probes for monitoring the amplification reaction and specific detection of the amplicon. Such internal probes have to be designed in close context with the amplification primers, and may require additional considerations for the detection of genetic variations. Here we describe Edesign, a new online and stand-alone tool for designing sets of PCR primers together with an internal probe for conducting quantitative real-time PCR (qPCR) and genotypic experiments. Edesign can be used for selecting standard DNA oligonucleotides like for instance TaqMan probes, but has been further extended with new functions and enhanced design features for Eprobes. Eprobes, with their single thiazole orange-labelled nucleotide, allow for highly sensitive genotypic assays because of their higher DNA binding affinity as compared to standard DNA oligonucleotides. Using new thermodynamic parameters, Edesign considers unique features of Eprobes during primer and probe design for establishing qPCR experiments and genotyping by melting curve analysis. Additional functions in Edesign allow probe design for effective discrimination between wild-type sequences and genetic variations either using standard DNA oligonucleotides or Eprobes. Edesign can be freely accessed online at http://www.dnaform.com/edesign2/, and the source code is available for download.

  11. Edesign: Primer and Enhanced Internal Probe Design Tool for Quantitative PCR Experiments and Genotyping Assays.

    Science.gov (United States)

    Kimura, Yasumasa; Soma, Takahiro; Kasahara, Naoko; Delobel, Diane; Hanami, Takeshi; Tanaka, Yuki; de Hoon, Michiel J L; Hayashizaki, Yoshihide; Usui, Kengo; Harbers, Matthias

    2016-01-01

    Analytical PCR experiments preferably use internal probes for monitoring the amplification reaction and specific detection of the amplicon. Such internal probes have to be designed in close context with the amplification primers, and may require additional considerations for the detection of genetic variations. Here we describe Edesign, a new online and stand-alone tool for designing sets of PCR primers together with an internal probe for conducting quantitative real-time PCR (qPCR) and genotypic experiments. Edesign can be used for selecting standard DNA oligonucleotides like for instance TaqMan probes, but has been further extended with new functions and enhanced design features for Eprobes. Eprobes, with their single thiazole orange-labelled nucleotide, allow for highly sensitive genotypic assays because of their higher DNA binding affinity as compared to standard DNA oligonucleotides. Using new thermodynamic parameters, Edesign considers unique features of Eprobes during primer and probe design for establishing qPCR experiments and genotyping by melting curve analysis. Additional functions in Edesign allow probe design for effective discrimination between wild-type sequences and genetic variations either using standard DNA oligonucleotides or Eprobes. Edesign can be freely accessed online at http://www.dnaform.com/edesign2/, and the source code is available for download.

  12. Quantitative image reconstruction for dual-isotope parathyroid SPECT/CT: phantom experiments and sample patient studies

    Science.gov (United States)

    Shcherbinin, S.; Chamoiseau, S.; Celler, A.

    2012-08-01

    We investigated the quantitative accuracy of the model-based dual-isotope single-photon emission computed tomography (DI-SPECT) reconstructions that use Klein-Nishina expressions to estimate the scattered photon contributions to the projection data. Our objective was to examine the ability of the method to recover the absolute activities pertaining to both radiotracers: Tc-99m and I-123. We validated our method through a series of phantom experiments performed using a clinical hybrid SPECT/CT camera (Infinia Hawkeye, GE Healthcare). Different activity ratios and different attenuating media were used in these experiments to create cross-talk effects of varying severity, which can occur in clinical studies. Accurate model-based corrections for scatter and cross-talk with CT attenuation maps allowed for the recovery of the absolute activities from DI-SPECT/CT scans with errors that ranged 0-10% for both radiotracers. The unfavorable activity ratios increased the computational burden but practically did not affect the resulting accuracy. The visual analysis of parathyroid patient data demonstrated that our model-based processing improved adenoma/background contrast and enhanced localization of small or faint adenomas.

  13. Quantitative analysis results of CE-1 X-ray fluorescence spectrometer ground base experiment

    Institute of Scientific and Technical Information of China (English)

    CUI Xing-Zhu; GAO Min; YANG Jia-Wei; WANG Huan-Yu; ZHANG Cheng-Mo; CHEN Yong; ZHANG Jia-Yu; PENG Wen-Xi; CAO Xue-Lei; LIANG Xiao-Hua; WANG Jin-Zhou

    2008-01-01

    As the nearest celestial body to the Earth, the Moon has recently again become a hot spot in astronomy. Elemental analysis is an important subject in many lunar projects, and remote X-ray spectrometry plays an important role in the geochemical exploration of solar system bodies. Because of the quasi-vacuum atmosphere of the Moon, which does not absorb X-rays, X-ray fluorescence analysis is an effective way to determine the elemental abundances of the lunar surface. The CE-1 X-ray fluorescence spectrometer (CE-1/XFS) aims to map the major elemental compositions of the lunar surface. This paper describes a method for quantitative analysis of elemental compositions. A series of ground-based experiments were done to examine the capability of the XFS. The obtained results, which show reasonable agreement with the certified values at a 30% uncertainty level for major elements, are presented.

  14. Qualitative and Quantitative Features of Music Reported to Support Peak Mystical Experiences during Psychedelic Therapy Sessions.

    Science.gov (United States)

    Barrett, Frederick S; Robbins, Hollis; Smooke, David; Brown, Jenine L; Griffiths, Roland R

    2017-01-01

    Psilocybin is a classic (serotonergic) hallucinogen ("psychedelic" drug) that may occasion mystical experiences (characterized by a profound feeling of oneness or unity) during acute effects. Such experiences may have therapeutic value. Research and clinical applications of psychedelics usually include music listening during acute drug effects, based on the expectation that music will provide psychological support during the acute effects of psychedelic drugs, and may even facilitate the occurrence of mystical experiences. However, the features of music chosen to support the different phases of drug effects are not well-specified. As a result, there is currently neither real guidance for the selection of music nor standardization of the music used to support clinical trials with psychedelic drugs across various research groups or therapists. A description of the features of music found to be supportive of mystical experience will allow for the standardization and optimization of the delivery of psychedelic drugs in both research trials and therapeutic contexts. To this end, we conducted an anonymous survey of individuals with extensive experience administering psilocybin or psilocybin-containing mushrooms under research or therapeutic conditions, in order to identify the features of commonly used musical selections that have been found by therapists and research staff to be supportive of mystical experiences within a psilocybin session. Ten respondents yielded 24 unique recommendations of musical stimuli supportive of peak effects with psilocybin, and 24 unique recommendations of musical stimuli supportive of the period leading up to a peak experience. Qualitative analysis (expert rating of musical and music-theoretic features of the recommended stimuli) and quantitative analysis (using signal processing and music-information retrieval methods) of 22 of these stimuli yielded a description of peak period music that was characterized by regular, predictable

  15. Qualitative and Quantitative Features of Music Reported to Support Peak Mystical Experiences during Psychedelic Therapy Sessions

    Directory of Open Access Journals (Sweden)

    Frederick S. Barrett

    2017-07-01

    Full Text Available Psilocybin is a classic (serotonergic) hallucinogen (“psychedelic” drug) that may occasion mystical experiences (characterized by a profound feeling of oneness or unity) during acute effects. Such experiences may have therapeutic value. Research and clinical applications of psychedelics usually include music listening during acute drug effects, based on the expectation that music will provide psychological support during the acute effects of psychedelic drugs, and may even facilitate the occurrence of mystical experiences. However, the features of music chosen to support the different phases of drug effects are not well-specified. As a result, there is currently neither real guidance for the selection of music nor standardization of the music used to support clinical trials with psychedelic drugs across various research groups or therapists. A description of the features of music found to be supportive of mystical experience will allow for the standardization and optimization of the delivery of psychedelic drugs in both research trials and therapeutic contexts. To this end, we conducted an anonymous survey of individuals with extensive experience administering psilocybin or psilocybin-containing mushrooms under research or therapeutic conditions, in order to identify the features of commonly used musical selections that have been found by therapists and research staff to be supportive of mystical experiences within a psilocybin session. Ten respondents yielded 24 unique recommendations of musical stimuli supportive of peak effects with psilocybin, and 24 unique recommendations of musical stimuli supportive of the period leading up to a peak experience. Qualitative analysis (expert rating of musical and music-theoretic features of the recommended stimuli) and quantitative analysis (using signal processing and music-information retrieval methods) of 22 of these stimuli yielded a description of peak period music that was characterized by regular

  16. Qualitative and Quantitative Features of Music Reported to Support Peak Mystical Experiences during Psychedelic Therapy Sessions

    Science.gov (United States)

    Barrett, Frederick S.; Robbins, Hollis; Smooke, David; Brown, Jenine L.; Griffiths, Roland R.

    2017-01-01

    Psilocybin is a classic (serotonergic) hallucinogen (“psychedelic” drug) that may occasion mystical experiences (characterized by a profound feeling of oneness or unity) during acute effects. Such experiences may have therapeutic value. Research and clinical applications of psychedelics usually include music listening during acute drug effects, based on the expectation that music will provide psychological support during the acute effects of psychedelic drugs, and may even facilitate the occurrence of mystical experiences. However, the features of music chosen to support the different phases of drug effects are not well-specified. As a result, there is currently neither real guidance for the selection of music nor standardization of the music used to support clinical trials with psychedelic drugs across various research groups or therapists. A description of the features of music found to be supportive of mystical experience will allow for the standardization and optimization of the delivery of psychedelic drugs in both research trials and therapeutic contexts. To this end, we conducted an anonymous survey of individuals with extensive experience administering psilocybin or psilocybin-containing mushrooms under research or therapeutic conditions, in order to identify the features of commonly used musical selections that have been found by therapists and research staff to be supportive of mystical experiences within a psilocybin session. Ten respondents yielded 24 unique recommendations of musical stimuli supportive of peak effects with psilocybin, and 24 unique recommendations of musical stimuli supportive of the period leading up to a peak experience. Qualitative analysis (expert rating of musical and music-theoretic features of the recommended stimuli) and quantitative analysis (using signal processing and music-information retrieval methods) of 22 of these stimuli yielded a description of peak period music that was characterized by regular, predictable

  17. A Quantitative Model-Driven Comparison of Command Approaches in an Adversarial Process Model

    Science.gov (United States)

    2007-06-01

    Presented at the 12th ICCRTS, “Adapting C2 to the 21st Century.” Lenahan [2] identified metrics and techniques for adversarial C2 process modeling. We intend to further that work by developing a set of adversarial process...

  18. Spectral Quantitative Analysis Model with Combining Wavelength Selection and Topology Structure Optimization

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2016-01-01

    Full Text Available Spectroscopy is an efficient and widely used quantitative analysis method. In this paper, a spectral quantitative analysis model combining wavelength selection and topology structure optimization is proposed. In the proposed method, a backpropagation neural network is adopted for building the component prediction model, and the simultaneous optimization of the wavelength selection and the topology structure of the neural network is realized by nonlinear adaptive evolutionary programming (NAEP). The hybrid binary chromosome of NAEP has three parts: the first part represents the topology structure of the neural network, the second part represents the selection of wavelengths in the spectral data, and the third part represents the mutation parameters of NAEP. Two real flue gas datasets are used in the experiments. To demonstrate the effectiveness of the method, the partial least squares with full spectrum, the partial least squares combined with a genetic algorithm, the uninformative variable elimination method, the backpropagation neural network with full spectrum, the backpropagation neural network combined with a genetic algorithm, and the proposed method are all used to build the component prediction model. Experimental results verify that the proposed method predicts more accurately and robustly and can serve as a practical spectral analysis tool.
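    The joint encoding described above, one chromosome carrying both the network topology and the wavelength selection, can be illustrated with a much simpler stand-in: a random search over a binary wavelength mask plus a hidden-layer size, scored by cross-validated error with scikit-learn's MLPRegressor. This is not the paper's NAEP algorithm, and the data are synthetic; it is only a sketch of the idea of optimizing both parts together.

        # Sketch of jointly searching a wavelength mask and network topology (random search as a
        # stand-in for the evolutionary programming described above; data are synthetic).
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_samples, n_wavelengths = 120, 50
        spectra = rng.normal(size=(n_samples, n_wavelengths))
        concentration = spectra[:, 5] * 2.0 + spectra[:, 20] * -1.5 + rng.normal(0, 0.1, n_samples)

        def fitness(mask, hidden_units):
            if mask.sum() == 0:
                return -np.inf
            model = MLPRegressor(hidden_layer_sizes=(hidden_units,), max_iter=2000, random_state=0)
            return cross_val_score(model, spectra[:, mask], concentration, cv=3, scoring="r2").mean()

        best = (-np.inf, None, None)
        for _ in range(30):                                   # random search over "chromosomes"
            mask = rng.random(n_wavelengths) < 0.3            # one part: selected wavelengths
            hidden = int(rng.integers(2, 16))                 # another part: hidden-layer size
            score = fitness(mask, hidden)
            if score > best[0]:
                best = (score, mask, hidden)
        print(f"best R2={best[0]:.3f} with {best[1].sum()} wavelengths and {best[2]} hidden units")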

  19. The genetic architecture of heterochrony as a quantitative trait: lessons from a computational model.

    Science.gov (United States)

    Sun, Lidan; Sang, Mengmeng; Zheng, Chenfei; Wang, Dongyang; Shi, Hexin; Liu, Kaiyue; Guo, Yanfang; Cheng, Tangren; Zhang, Qixiang; Wu, Rongling

    2017-05-30

    Heterochrony is known as a developmental change in the timing or rate of ontogenetic events across phylogenetic lineages. It is a key concept synthesizing development into ecology and evolution to explore the mechanisms of how developmental processes impact on phenotypic novelties. A number of molecular experiments using contrasting organisms in developmental timing have identified specific genes involved in heterochronic variation. Beyond these classic approaches that can only identify single genes or pathways, quantitative models derived from current next-generation sequencing data serve as a more powerful tool to precisely capture heterochronic variation and systematically map a complete set of genes that contribute to heterochronic processes. In this opinion note, we discuss a computational framework of genetic mapping that can characterize heterochronic quantitative trait loci that determine the pattern and process of development. We propose a unifying model that charts the genetic architecture of heterochrony that perceives and responds to environmental perturbations and evolves over geologic time. The new model may potentially enhance our understanding of the adaptive value of heterochrony and its evolutionary origins, providing a useful context for designing new organisms that can best use future resources. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Formal modeling and quantitative evaluation for information system survivability based on PEPA

    Institute of Scientific and Technical Information of China (English)

    WANG Jian; WANG Hui-qiang; ZHAO Guo-sheng

    2008-01-01

    Survivability should be considered beyond security for information systems. To assess system survivability accurately and guide its improvement, a formal modeling and analysis method based on stochastic process algebra is proposed in this article. By abstracting the interactive behaviors between intruders and the information system, a survivability-oriented graph of system state transitions is constructed. On that basis, parameters are defined and system behaviors are characterized precisely with performance evaluation process algebra (PEPA), simultaneously considering the influence of different attack modes. Ultimately, the formal model for survivability is established and quantitative analysis results are obtained with the PEPA Workbench tool. Simulation experiments show the effectiveness and feasibility of the developed method, which can help to direct the design of survivable systems.

  1. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments.

    Science.gov (United States)

    Van Valen, David A; Kudo, Takamasa; Lane, Keara M; Macklin, Derek N; Quach, Nicolas T; DeFelice, Mialy M; Maayan, Inbal; Tanouchi, Yu; Ashley, Euan A; Covert, Markus W

    2016-11-01

    Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.
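    The architecture details of the networks described above are not given in the abstract. The following is a minimal fully convolutional per-pixel classifier in Keras, included only to illustrate how image segmentation can be cast as pixel-wise classification; the layer sizes, channel counts and training call are assumptions, not the authors' design.

        # Minimal fully convolutional per-pixel classifier (illustrative; not the authors' architecture).
        import tensorflow as tf

        def build_segmenter(n_classes=2):
            # Input: single-channel microscope image of arbitrary size; output: per-pixel class probabilities.
            return tf.keras.Sequential([
                tf.keras.Input(shape=(None, None, 1)),
                tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
                tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
                tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
                tf.keras.layers.Conv2D(n_classes, 1, activation="softmax"),
            ])

        model = build_segmenter()
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        model.summary()
        # Training would use pairs of images and per-pixel label masks, e.g.:
        # model.fit(images[..., None], masks, epochs=10, batch_size=4)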

  2. Firn Model Intercomparison Experiment (FirnMICE)

    DEFF Research Database (Denmark)

    Lundin, Jessica M.D.; Stevens, C. Max; Arthern, Robert

    2017-01-01

    Evolution of cold dry snow and firn plays important roles in glaciology; however, the physical formulation of a densification law is still an active research topic. We forced eight firn-densification models and one seasonal-snow model in six different experiments by imposing step changes in temperature and accumulation rate.

  3. ASSETS MANAGEMENT - A CONCEPTUAL MODEL DECOMPOSING VALUE FOR THE CUSTOMER AND A QUANTITATIVE MODEL

    Directory of Open Access Journals (Sweden)

    Susana Nicola

    2015-03-01

    Full Text Available In this paper we describe the application of a modeling framework, the so-called Conceptual Model Decomposing Value for the Customer (CMDVC), in a footwear industry case study, to ascertain the usefulness of this approach. The value networks were used to identify the participants, both tangible and intangible deliverables/endogenous and exogenous assets, and the analysis of their interactions as the indication for an adequate value proposition. The quantitative model of benefits and sacrifices, using the Fuzzy AHP method, enables the discussion of how the CMDVC can be applied and used in the enterprise environment and provided new relevant relations between perceived benefits (PBs).

  4. Quantitative Validation of a Human Body Finite Element Model Using Rigid Body Impacts.

    Science.gov (United States)

    Vavalle, Nicholas A; Davis, Matthew L; Stitzel, Joel D; Gayzik, F Scott

    2015-09-01

    Validation is a critical step in finite element model (FEM) development. This study focuses on the validation of the Global Human Body Models Consortium full body average male occupant FEM in five localized loading regimes: a chest impact, a shoulder impact, a thoracoabdominal impact, an abdominal impact, and a pelvic impact. Force and deflection outputs from the model were compared to experimental traces and corridors scaled to the 50th percentile male. Predicted fractures and injury severity measures were compared to evaluate the model's injury prediction capabilities. The methods of ISO/TS 18571 were used to quantitatively assess the fit of model outputs to experimental force and deflection traces. The model produced peak chest, shoulder, thoracoabdominal, abdominal, and pelvis forces of 4.8, 3.3, 4.5, 5.1, and 13.0 kN compared to 4.3, 3.2, 4.0, 4.0, and 10.3 kN in the experiments, respectively. The model predicted rib and pelvic fractures related to Abbreviated Injury Scale scores within the ranges found experimentally in all cases except the abdominal impact. ISO/TS 18571 scores for the impacts studied had a mean score of 0.73 with a range of 0.57-0.83. Well-validated FEMs are important tools used by engineers in advancing occupant safety.
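    The peak-force agreement quoted above can be expressed as simple percent differences; the numbers below are taken directly from the abstract, and the calculation is shown only as a quick consistency check, not as part of the ISO/TS 18571 scoring.

        # Percent difference between model-predicted and experimental peak forces quoted above (kN).
        peaks = {  # region: (model, experiment)
            "chest":            (4.8, 4.3),
            "shoulder":         (3.3, 3.2),
            "thoracoabdominal": (4.5, 4.0),
            "abdomen":          (5.1, 4.0),
            "pelvis":           (13.0, 10.3),
        }
        for region, (model_kn, exp_kn) in peaks.items():
            diff = 100 * (model_kn - exp_kn) / exp_kn
            print(f"{region:16s} model {model_kn:5.1f} kN vs experiment {exp_kn:5.1f} kN  ({diff:+.1f}%)")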

  5. Data assimilation experiments with MPIESM climate model

    Directory of Open Access Journals (Sweden)

    Belyaev Konstantin

    2016-01-01

    Full Text Available Further development of a data assimilation technique and its application in numerical experiments with the state-of-the-art Max Planck Institute Earth System Model have been carried out. In particular, the stability problem of the assimilation is posed and discussed. In the experiments, sea surface height data from the Archiving, Validating and Interpolating Satellite Ocean archive have been used. All computations have been realized on the cluster system of the German Climate Computing Center. The results of numerical experiments with and without assimilation were recorded and analyzed, with special attention focused on the Arctic zone. It is shown that there is good agreement between model tendencies and independent data.

  6. Understanding responder neurobiology in schizophrenia using a quantitative systems pharmacology model: application to iloperidone.

    Science.gov (United States)

    Geerts, Hugo; Roberts, Patrick; Spiros, Athan; Potkin, Steven

    2015-04-01

    The concept of targeted therapies remains a holy grail for the pharmaceutical drug industry for identifying responder populations or new drug targets. Here we provide quantitative systems pharmacology as an alternative to the more traditional approach of retrospective responder pharmacogenomics analysis and applied this to the case of iloperidone in schizophrenia. This approach implements the actual neurophysiological effect of genotypes in a computer-based biophysically realistic model of human neuronal circuits, is parameterized with human imaging and pathology, and is calibrated by clinical data. We keep the drug pharmacology constant, but allow the biological model coupling values to fluctuate in a restricted range around their calibrated values, thereby simulating random genetic mutations and representing variability in patient response. Using hypothesis-free Design of Experiments methods, the dopamine D4 receptor-AMPA (alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) receptor coupling in cortical neurons was found to drive the beneficial effect of iloperidone, likely corresponding to the SNP rs2513265 upstream of the GRIA4 gene identified in a traditional pharmacogenomics analysis. The serotonin 5-HT3 receptor-mediated effect on interneuron gamma-aminobutyric acid conductance was identified as the process that moderately drove the differentiation of iloperidone versus ziprasidone. This paper suggests that reverse-engineered quantitative systems pharmacology is a powerful alternative tool to characterize the underlying neurobiology of a responder population and possibly identify new targets. © The Author(s) 2015.

  7. Argonne Bubble Experiment Thermal Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-12-03

    This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium. The experiment was repeated at several electron beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came the closest to reaching a steady-state condition. The aim of the study is to compare results of the calculation with experimental measurements to determine the validity of the CFD model.

  8. Quantitative Modeling of Acid Wormholing in Carbonates- What Are the Gaps to Bridge

    KAUST Repository

    Qiu, Xiangdong

    2013-01-01

    Carbonate matrix acidization extends a well's effective drainage radius by dissolving rock and forming conductive channels (wormholes) from the wellbore. Wormholing is a dynamic process that involves a balance between the acid injection rate and the reaction rate. Generally, the injection rate is well defined where injection profiles can be controlled, whereas the reaction rate can be difficult to obtain due to its complex dependency on interstitial velocity, fluid composition, rock surface properties, etc. Conventional wormhole propagation models largely ignore the impact of reaction products; when implemented in a job design, this can result in significant errors in treatment fluid schedule, rate, and volume. A more accurate method to simulate carbonate matrix acid treatments would accommodate the effect of reaction products on reaction kinetics. It is the purpose of this work to properly account for these effects, an important step in achieving quantitative predictability of wormhole penetration during an acidizing treatment. This paper describes the laboratory procedures taken to obtain the reaction-product-impacted kinetics at downhole conditions using a rotating disk apparatus, and how this new set of kinetics data was implemented in a 3D wormholing model to predict wormhole morphology and penetration velocity. The model explains some of the differences in wormhole morphology observed in limestone core flow experiments where injection pressure impacts the mass transfer of hydrogen ions to the rock surface. The model uses a CT-scan-rendered porosity field to capture the finer details of the rock fabric and then simulates the fluid flow through the rock coupled with reactions. Such a validated model can serve as a base to scale up to the near-wellbore reservoir and 3D radial flow geometry, allowing a more quantitative acid treatment design.

  9. Quantitative Modeling of Membrane Transport and Anisogamy by Small Groups Within a Large-Enrollment Organismal Biology Course

    Directory of Open Access Journals (Sweden)

    Eric S. Haag

    2016-12-01

    Full Text Available Quantitative modeling is not a standard part of undergraduate biology education, yet is routine in the physical sciences. Because of the obvious biophysical aspects, classes in anatomy and physiology offer an opportunity to introduce modeling approaches to the introductory curriculum. Here, we describe two in-class exercises for small groups working within a large-enrollment introductory course in organismal biology. Both build and derive biological insights from quantitative models, implemented using spreadsheets. One exercise models the evolution of anisogamy (i.e., small sperm and large eggs) from an initial state of isogamy. Groups of four students work on Excel spreadsheets (from one to four laptops per group). The other exercise uses an online simulator to generate data related to membrane transport of a solute, and a cloud-based spreadsheet to analyze them. We provide tips for implementing these exercises gleaned from two years of experience.
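    The classroom spreadsheets themselves are not reproduced in the abstract. A spreadsheet-style sketch of the membrane-transport exercise, simple passive permeation driven by the concentration difference and stepped forward in time the way a spreadsheet row-by-row calculation would be, could look like the following; the permeability, geometry and time step are assumed values, and the restriction to passive transport is an assumption.

        # Spreadsheet-style sketch of passive solute transport across a membrane (assumed parameters).
        # Each printed row corresponds to one block of time steps, as in the in-class spreadsheet exercise.
        permeability = 1e-6      # cm/s, assumed membrane permeability to the solute
        area = 3e-6              # cm^2, assumed cell surface area
        volume = 1e-9            # cm^3, assumed cell volume
        c_out = 10.0             # mM, extracellular concentration (held constant)
        c_in = 0.0               # mM, initial intracellular concentration
        dt = 1.0                 # s

        print(" time (s)   C_in (mM)")
        for step in range(0, 601):
            if step % 100 == 0:
                print(f"{step:8d}   {c_in:8.3f}")
            flux = permeability * area * (c_out - c_in)      # Fick-type flux into the cell (mM*cm^3/s)
            c_in += flux * dt / volume                        # concentration change inside the cell

        # The intracellular concentration relaxes exponentially toward c_out with time constant
        # volume / (permeability * area), roughly 333 s for these assumed values.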

  10. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments.

    Directory of Open Access Journals (Sweden)

    David A Van Valen

    2016-11-01

    Full Text Available Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.

  11. Evaluation of Key Aroma Compounds in Processed Prawns (Whiteleg Shrimp) by Quantitation and Aroma Recombination Experiments.

    Science.gov (United States)

    Mall, Veronika; Schieberle, Peter

    2017-03-24

    In our previous study on the aroma compounds of heated prawn meat, the main odorants in blanched (BPM) and fried prawn meat (FPM), respectively, were characterized by means of gas chromatography-olfactometry and aroma extract dilution analysis. In this follow-up study, these aroma compounds were quantified by means of stable isotope dilution assays, and odor activity values (OAV; ratio of concentration to odor detection threshold) were calculated. Results revealed 2-acetyl-1-pyrroline and (Z)-1,5-octadien-3-one as the most potent odor-active compounds in both prawn samples. In FPM, as compared to BPM, higher OAVs were determined for 2-acetyl-1-pyrroline, 2-acetyl-2-thiazoline, 3-methylbutanal, 3-(methylthio)propanal, phenylacetaldehyde, 3-hydroxy-4,5-dimethyl-2(5H)-furanone, 4-hydroxy-2,5-dimethyl-3(2H)-furanone, 2,3-diethyl-5-methylpyrazine, and trimethylpyrazine. Aroma recombination experiments corroborated that the overall aroma of the blanched as well as the fried prawn meat, respectively, could well be mimicked by the set of key odorants quantitated in this study.
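    The odor activity value defined in the abstract (ratio of a compound's concentration to its odor detection threshold) reduces to a one-line calculation. The concentrations and thresholds below are invented placeholders, not the study's quantitation results; only the OAV formula is taken from the text.

        # Odor activity value (OAV) = concentration / odor detection threshold.
        # Concentrations and thresholds below are invented placeholders (µg/kg), not the study's data.
        quantitation = {
            "2-acetyl-1-pyrroline":   {"conc": 12.0,  "threshold": 0.05},
            "(Z)-1,5-octadien-3-one": {"conc": 0.9,   "threshold": 0.003},
            "3-(methylthio)propanal": {"conc": 45.0,  "threshold": 0.4},
            "phenylacetaldehyde":     {"conc": 150.0, "threshold": 5.0},
        }
        for compound, d in sorted(quantitation.items(),
                                  key=lambda kv: kv[1]["conc"] / kv[1]["threshold"], reverse=True):
            oav = d["conc"] / d["threshold"]
            print(f"{compound:26s} OAV = {oav:8.0f}")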

  12. CFD and FEM modeling of PPOOLEX experiments

    Energy Technology Data Exchange (ETDEWEB)

    Paettikangas, T.; Niemi, J.; Timperi, A. (VTT Technical Research Centre of Finland (Finland))

    2011-01-15

    A large-break LOCA experiment performed with the PPOOLEX experimental facility is analysed with CFD calculations. Simulation of the first 100 seconds of the experiment is performed using the Euler-Euler two-phase model of FLUENT 6.3. In wall condensation, the condensing water forms a film layer on the wall surface, which is modelled by mass transfer from the gas phase to the liquid water phase in the near-wall grid cell. The direct-contact condensation in the wetwell is modelled with simple correlations. The wall condensation and direct-contact condensation models are implemented with user-defined functions in FLUENT. Fluid-Structure Interaction (FSI) calculations of the PPOOLEX experiments and of a realistic BWR containment are also presented. Two-way coupled FSI calculations of the experiments have been numerically unstable with explicit coupling. A linear perturbation method is therefore used to prevent the numerical instability. The method is first validated against numerical data and against the PPOOLEX experiments. Preliminary FSI calculations are then performed for a realistic BWR containment by modeling a sector of the containment and one blowdown pipe. For the BWR containment, one- and two-way coupled calculations as well as calculations with LPM are carried out. (Author)

  13. Improving quantitative precipitation nowcasting with a local ensemble transform Kalman filter radar data assimilation system: observing system simulation experiments

    Directory of Open Access Journals (Sweden)

    Chih-Chien Tsai

    2014-03-01

    Full Text Available This study develops a Doppler radar data assimilation system, which couples the local ensemble transform Kalman filter with the Weather Research and Forecasting model. The benefits of this system to quantitative precipitation nowcasting (QPN) are evaluated with observing system simulation experiments on Typhoon Morakot (2009), which brought record-breaking rainfall and extensive damage to central and southern Taiwan. The results indicate that the assimilation of radial velocity and reflectivity observations improves the three-dimensional winds and rain-mixing ratio most significantly because of the direct relations in the observation operator. The patterns of spiral rainbands become more consistent between different ensemble members after radar data assimilation. The rainfall intensity and distribution during the 6-hour deterministic nowcast are also improved, especially for the first 3 hours. The nowcasts with and without radar data assimilation have similar evolution trends driven by synoptic-scale conditions. Furthermore, we carry out a series of sensitivity experiments to develop proper assimilation strategies, in which a mixed localisation method is proposed for the first time and found to give further QPN improvement in this typhoon case.

  14. Quantitative, comprehensive, analytical model for magnetic reconnection in Hall magnetohydrodynamics.

    Science.gov (United States)

    Simakov, Andrei N; Chacón, L

    2008-09-05

    Dissipation-independent, or "fast", magnetic reconnection has been observed computationally in Hall magnetohydrodynamics (MHD) and predicted analytically in electron MHD. However, a quantitative analytical theory of reconnection valid for arbitrary ion inertial lengths, d_i, has been lacking and is proposed here for the first time. The theory describes a two-dimensional reconnection diffusion region, provides expressions for reconnection rates, and derives a formal criterion for fast reconnection in terms of dissipation parameters and d_i. It also confirms the electron MHD prediction that both open and elongated diffusion regions allow fast reconnection, and reveals strong dependence of the reconnection rates on d_i.

  15. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Science.gov (United States)

    2011-05-18

    ... COMMISSION NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection... issued for public comment a document entitled: NUREG/CR-XXXX, ``Development of Quantitative Software... development of regulatory guidance for using risk information related to digital systems in the...

  16. Quantitatively accurate activity measurements with a dedicated cardiac SPECT camera: Physical phantom experiments

    Energy Technology Data Exchange (ETDEWEB)

    Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn [Physics Department, Carleton University, Ottawa, Ontario K1S 5B6, Canada and Cardiology, The University of Ottawa Heart Institute, Ottawa, Ontario K1Y4W7 (Canada)

    2016-01-15

    Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail, an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to 99mTc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 99mTc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical 99mTc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic “lesion” was placed inside of the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE

  17. Solar models, neutrino experiments, and helioseismology

    Science.gov (United States)

    Bahcall, John N.; Ulrich, Roger K.

    1988-01-01

    The event rates and their recognized uncertainties are calculated for 11 solar neutrino experiments using accurate solar models. These models are also used to evaluate the frequency spectrum of the p- and g-oscillation modes of the sun. It is shown that the discrepancy between the predicted and observed event rates in the Cl-37 and Kamiokande II experiments cannot be explained by a 'likely' fluctuation in input parameters with the best estimates and uncertainties given in the present study. It is suggested that, whatever the correct solution to the solar neutrino problem, it is unlikely to be a 'trivial' error.

  18. Statistical Model to Analyze Quantitative Proteomics Data Obtained by 18O/16O Labeling and Linear Ion Trap Mass Spectrometry

    Science.gov (United States)

    Jorge, Inmaculada; Navarro, Pedro; Martínez-Acedo, Pablo; Núñez, Estefanía; Serrano, Horacio; Alfranca, Arántzazu; Redondo, Juan Miguel; Vázquez, Jesús

    2009-01-01

    Statistical models for the analysis of protein expression changes by stable isotope labeling are still poorly developed, particularly for data obtained by 16O/18O labeling. Besides, large-scale test experiments to validate the null hypothesis are lacking. Although the study of mechanisms underlying biological actions promoted by vascular endothelial growth factor (VEGF) on endothelial cells is of considerable interest, quantitative proteomics studies on this subject are scarce and have been performed after exposing cells to the factor for long periods of time. In this work we present the largest quantitative proteomics study to date on the short term effects of VEGF on human umbilical vein endothelial cells by 18O/16O labeling. Current statistical models based on normality and variance homogeneity were found unsuitable to describe the null hypothesis in a large scale test experiment performed on these cells, producing false expression changes. A random effects model was developed including four different sources of variance at the spectrum-fitting, scan, peptide, and protein levels. With the new model the number of outliers at scan and peptide levels was negligible in three large scale experiments, and only one false protein expression change was observed in the test experiment among more than 1000 proteins. The new model allowed the detection of significant protein expression changes upon VEGF stimulation for 4 and 8 h. The consistency of the changes observed at 4 h was confirmed by a replica at a smaller scale and further validated by Western blot analysis of some proteins. Most of the observed changes have not been described previously and are consistent with a pattern of protein expression that dynamically changes over time following the evolution of the angiogenic response. With this statistical model the 18O labeling approach emerges as a very promising and robust alternative to perform quantitative proteomics studies at a depth of several thousand proteins.

  19. Quantitative skills as a graduate learning outcome of university science degree programmes: student performance explored through the planned-enacted-experienced curriculum model

    Science.gov (United States)

    Matthews, Kelly E.; Adams, Peter; Goos, Merrilyn

    2016-07-01

    Application of mathematical and statistical thinking and reasoning, typically referred to as quantitative skills, is essential for university bioscience students. First, this study developed an assessment task intended to gauge graduating students' quantitative skills. The Quantitative Skills Assessment of Science Students (QSASS) was the result, which examined 10 mathematical and statistical sub-topics. Second, the study established an evidential baseline of students' quantitative skills performance and confidence levels by piloting the QSASS with 187 final-year biosciences students at a research-intensive university. The study is framed within the planned-enacted-experienced curriculum model and contributes to science reform efforts focused on enhancing the quantitative skills of university graduates, particularly in the biosciences. The results found, on average, weak performance and low confidence on the QSASS, suggesting divergence between academics' intentions and students' experiences of learning quantitative skills. Implications for curriculum design and future studies are discussed.

  20. Hydrolysis Studies and Quantitative Determination of Aluminum Ions Using ²⁷Al NMR: An Undergraduate Analytical Chemistry Experiment

    Science.gov (United States)

    Curtin, Maria A.; Ingalls, Laura R.; Campbell, Andrew; James-Pederson, Magdalena

    2008-01-01

    This article describes a novel experiment focused on metal ion hydrolysis and the equilibria related to metal ions in aqueous systems. Using ²⁷Al NMR, the students become familiar with NMR spectroscopy as a quantitative analytical tool for the determination of aluminum by preparing a standard calibration curve using standard aluminum…

  1. Photon-tissue interaction model for quantitative assessment of biological tissues

    Science.gov (United States)

    Lee, Seung Yup; Lloyd, William R.; Wilson, Robert H.; Chandra, Malavika; McKenna, Barbara; Simeone, Diane; Scheiman, James; Mycek, Mary-Ann

    2014-02-01

    In this study, we describe a direct fit photon-tissue interaction model to quantitatively analyze reflectance spectra of biological tissue samples. The model rapidly extracts biologically-relevant parameters associated with tissue optical scattering and absorption. This model was employed to analyze reflectance spectra acquired from freshly excised human pancreatic pre-cancerous tissues (intraductal papillary mucinous neoplasm (IPMN), a common precursor lesion to pancreatic cancer). Compared to previously reported models, the direct fit model improved fit accuracy and speed. Thus, these results suggest that such models could serve as real-time, quantitative tools to characterize biological tissues assessed with reflectance spectroscopy.

  2. A Didactic Experiment and Model of a Flat-Plate Solar Collector

    Science.gov (United States)

    Gallitto, Aurelio Agliolo; Fiordilino, Emilio

    2011-01-01

    We report on an experiment performed with a home-made flat-plate solar collector, carried out together with high-school students. To explain the experimental results, we propose a model that describes the heating process of the solar collector. The model accounts quantitatively for the experimental data. We suggest that solar-energy topics should…

  3. Modular System Modeling for Quantitative Reliability Evaluation of Technical Systems

    Directory of Open Access Journals (Sweden)

    Stephan Neumann

    2016-01-01

    Full Text Available In modern times, it is necessary to offer reliable products to match the statutory directives concerning product liability and the high expectations of customers for durable devices. Furthermore, to maintain a high competitiveness, engineers need to know as accurately as possible how long their product will last and how to influence the life expectancy without expensive and time-consuming testing. As the components of a system are responsible for the system reliability, this paper introduces and evaluates calculation methods for life expectancy of common machine elements in technical systems. Subsequently, a method for the quantitative evaluation of the reliability of technical systems is proposed and applied to a heavy-duty power shift transmission.
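
    As a generic illustration of how component life expectancies feed a system-level reliability figure, the sketch below combines Weibull component survival functions under a series-system assumption. This is textbook reliability arithmetic with placeholder parameters, not the modular method or the power-shift-transmission data of the record itself.

        import math

        def weibull_reliability(t, eta, beta):
            """Weibull survival function R(t) = exp(-(t/eta)**beta)."""
            return math.exp(-((t / eta) ** beta))

        def series_system_reliability(t, components):
            """In a series system the whole fails when any component fails,
            so component reliabilities multiply."""
            r = 1.0
            for eta, beta in components:
                r *= weibull_reliability(t, eta, beta)
            return r

        # Illustrative (gear, bearing, shaft) parameters in hours, not from the paper:
        parts = [(20000.0, 1.5), (35000.0, 1.2), (60000.0, 2.0)]
        print(series_system_reliability(10000.0, parts))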

  4. Improvement of the ID model for quantitative network data

    DEFF Research Database (Denmark)

    Sørensen, Peter Borgen; Damgaard, Christian Frølund; Dupont, Yoko Luise

    2015-01-01

    Many interactions are often poorly registered or even unobserved in empirical quantitative networks. Hence, the output of the statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks... This presentation will illustrate the application of the ID method(1) based on a data set which consists of counts of visits by 152 pollinator species to 16 plant species. The method is based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1), pi... reproduce the high number of zero valued cells in the data set and mimic the sampling distribution. (1) Sørensen et al., Journal of Pollination Ecology, 6(18), 2011, pp. 129-139.

  5. Deficiencies in quantitative precipitation forecasts. Sensitivity studies using the COSMO model

    Energy Technology Data Exchange (ETDEWEB)

    Dierer, Silke [Federal Office of Meteorology and Climatology, MeteoSwiss, Zurich (Switzerland); Meteotest, Bern (Switzerland); Arpagaus, Marco [Federal Office of Meteorology and Climatology, MeteoSwiss, Zurich (Switzerland); Seifert, Axel [Deutscher Wetterdienst, Offenbach (Germany); Avgoustoglou, Euripides [Hellenic National Meteorological Service, Hellinikon (Greece); Dumitrache, Rodica [National Meteorological Administration, Bucharest (Romania); Grazzini, Federico [Agenzia Regionale per la Protezione Ambientale Emilia Romagna, Bologna (Italy); Mercogliano, Paola [Italian Aerospace Research Center, Capua (Italy); Milelli, Massimo [Agenzia Regionale per la Protezione Ambientale Piemonte, Torino (Italy); Starosta, Katarzyna [Inst. of Meteorology and Water Management, Warsaw (Poland)

    2009-12-15

    The quantitative precipitation forecast (QPF) of the COSMO model, like that of other models, reveals some deficiencies. The aim of this study is to investigate which physical and numerical schemes have the strongest impact on QPF and, thus, have the highest potential for improving QPF. Test cases are selected that are meant to reflect typical forecast errors in different countries. The 13 test cases fall into two main groups: overestimation of stratiform precipitation (6 cases) and underestimation of convective precipitation (5 cases). 22 sensitivity experiments predominantly regarding numerical and physical schemes are performed. The area averaged 24 h precipitation sums are evaluated. The results show that the strongest impact on QPF is caused by changes of the initial atmospheric humidity and by using the Kain-Fritsch/Bechtold convection scheme instead of the Tiedtke scheme. Both sensitivity experiments change the area averaged precipitation in the range of 30-35%. This clearly shows that improved simulation of atmospheric water vapour is of utmost importance to achieve better precipitation forecasts. Significant changes are also caused by using the Runge-Kutta time integration scheme instead of the Leapfrog scheme, by applying a modified warm rain and snow physics scheme or a modified Tiedtke convection scheme. The aforementioned changes result in differences of area averaged precipitation of roughly 20%. Only for the Greek test cases, which all have a strong influence from the sea, the heat and moisture exchange between surface and atmosphere is of great importance and can cause changes of up to 20%. (orig.)

  6. Quantitative assessment of meteorological and tropospheric Zenith Hydrostatic Delay models

    Science.gov (United States)

    Zhang, Di; Guo, Jiming; Chen, Ming; Shi, Junbo; Zhou, Lv

    2016-09-01

    Tropospheric delay has always been an important issue in GNSS/DORIS/VLBI/InSAR processing. Most commonly used empirical models for the determination of tropospheric Zenith Hydrostatic Delay (ZHD), including three meteorological models and two empirical ZHD models, are carefully analyzed in this paper. Meteorological models refer to UNB3m, GPT2 and GPT2w, while ZHD models include Hopfield and Saastamoinen. By reference to in-situ meteorological measurements and ray-traced ZHD values of 91 globally distributed radiosonde sites, over a four-years period from 2010 to 2013, it is found that there is strong correlation between errors of model-derived values and latitudes. Specifically, the Saastamoinen model shows a systematic error of about -3 mm. Therefore a modified Saastamoinen model is developed based on the "best average" refractivity constant, and is validated by radiosonde data. Among different models, the GPT2w and the modified Saastamoinen model perform the best. ZHD values derived from their combination have a mean bias of -0.1 mm and a mean RMS of 13.9 mm. Limitations of the present models are discussed and suggestions for further improvements are given.
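
    For orientation, the sketch below evaluates the classical Saastamoinen ZHD formula, one of the two ZHD models compared in this record, using the commonly published constants; the pressure, latitude, and height are placeholder inputs, and the authors' modified Saastamoinen model is not reproduced here.

        import math

        def zhd_saastamoinen(pressure_hpa, lat_rad, height_m):
            """Zenith Hydrostatic Delay in metres from the Saastamoinen model.
            pressure_hpa: surface pressure (hPa); lat_rad: latitude (radians);
            height_m: station height (m)."""
            f = 1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.28e-6 * height_m
            return 0.0022768 * pressure_hpa / f

        # Placeholder site values, not from the paper's radiosonde data set:
        print(zhd_saastamoinen(1013.25, math.radians(45.0), 100.0))  # about 2.3 m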

  7. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...

  8. SSC 40 mm short model construction experience

    Energy Technology Data Exchange (ETDEWEB)

    Bossert, R.C.; Brandt, J.S.; Carson, J.A.; Dickey, C.E.; Gonczy, I.; Koska, W.A.; Strait, J.B.

    1990-04-01

    Several short model SSC magnets have been built and tested at Fermilab. They establish a preliminary step toward the construction of SSC long models. Many aspects of magnet design and construction are involved. Experience includes coil winding, curing and measuring, coil end part design and fabrication, ground insulation, instrumentation, collaring and yoke assembly. Fabrication techniques are explained. Design of tooling and magnet components not previously incorporated into SSC magnets are described. 14 refs., 18 figs., 2 tabs.

  9. Complementary social science? Quali-quantitative experiments in a Big Data world

    Directory of Open Access Journals (Sweden)

    Anders Blok

    2014-08-01

    Full Text Available The rise of Big Data in the social realm poses significant questions at the intersection of science, technology, and society, including in terms of how new large-scale social databases are currently changing the methods, epistemologies, and politics of social science. In this commentary, we address such epochal (“large-scale”) questions by way of a (situated) experiment: at the Danish Technical University in Copenhagen, an interdisciplinary group of computer scientists, physicists, economists, sociologists, and anthropologists (including the authors) is setting up a large-scale data infrastructure, meant to continually record the digital traces of social relations among an entire freshman class of students (N > 1000). At the same time, fieldwork is carried out on friendship (and other) relations amongst the same group of students. On this basis, the question we pose is the following: what kind of knowledge is obtained on this social micro-cosmos via the Big (computational, quantitative) and Small (embodied, qualitative) Data, respectively? How do the two relate? Invoking Bohr’s principle of complementarity as analogy, we hypothesize that social relations, as objects of knowledge, depend crucially on the type of measurement device deployed. At the same time, however, we also expect new interferences and polyphonies to arise at the intersection of Big and Small Data, provided that these are, so to speak, mixed with care. These questions, we stress, are important not only for the future of social science methods but also for the type of societal (self-)knowledge that may be expected from new large-scale social databases.

  11. Quantitative statistical assessment of conditional models for synthetic aperture radar.

    Science.gov (United States)

    DeVore, Michael D; O'Sullivan, Joseph A

    2004-02-01

    Many applications of object recognition in the presence of pose uncertainty rely on statistical models, conditioned on pose, for observations. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level.

  12. A global quantitative survey of hemostatic assessment in postpartum hemorrhage and experience with associated bleeding disorders.

    Science.gov (United States)

    James, Andra H; Cooper, David L; Paidas, Michael J

    2017-01-01

    Coagulopathy may be a serious complicating or contributing factor to postpartum hemorrhage (PPH), and should be promptly recognized to ensure proper bleeding management. This study aims to evaluate the approaches of obstetrician-gynecologists worldwide towards assessing massive PPH caused by underlying bleeding disorders. A quantitative survey was completed by 302 obstetrician-gynecologists from 6 countries (the UK, France, Germany, Italy, Spain, and Japan). The survey included questions on the use of hematologic laboratory studies, interpretation of results, laboratory's role in coagulation assessments, and experience with bleeding disorders. Overall, the most common definitions of "massive" PPH were >2,000 mL (39%) and >1,500 mL (34%) blood loss. The most common criteria for rechecking a "stat" complete blood count and for performing coagulation studies were a drop in blood pressure (73%) and ongoing visible bleeding (78%), respectively. Laboratory coagulation (prothrombin time/activated partial thromboplastin time [PT/aPTT]) and factor VIII/IX assays were performed on-site more often than were mixing studies (laboratory coagulation studies, 93%; factor VIII/IX assays, 63%; mixing studies, 22%). Most commonly consulted sources of additional information were colleagues within one's own specialty (68%) and other specialists (67%). Most respondents had consulted with a hematologist (78%; least, Germany [56%]; greatest, UK [98%]). The most common reason for not consulting was hematologist unavailability (44%). The most commonly reported thresholds for concern with PT and aPTT were 13 to 20 seconds (36%) and 30 to 45 seconds (50%), respectively. Most respondents reported having discovered an underlying bleeding disorder (58%; least, Japan [35%]; greatest, Spain [74%]). Global survey results highlight similarities and differences between countries in how PPH is assessed and varying levels of obstetrician-gynecologist experience with identification of underlying

  13. Refining Grasp Affordance Models by Experience

    DEFF Research Database (Denmark)

    Detry, Renaud; Kraft, Dirk; Buch, Anders Glent;

    2010-01-01

    We present a method for learning object grasp affordance models in 3D from experience, and demonstrate its applicability through extensive testing and evaluation on a realistic and largely autonomous platform. Grasp affordance refers here to relative object-gripper configurations that yield stabl...

  14. Bicycle Rider Control: Observations, Modeling & Experiments

    NARCIS (Netherlands)

    Kooijman, J.D.G.

    2012-01-01

    Bicycle designers traditionally develop bicycles based on experience and trial and error. Adopting modern engineering tools to model bicycle and rider dynamics and control is another method for developing bicycles. This method has the potential to evaluate the complete design space, and thereby dev

  15. Finds in Testing Experiments for Model Evaluation

    Institute of Scientific and Technical Information of China (English)

    WU Ji; JIA Xiaoxia; LIU Chang; YANG Haiyan; LIU Chao

    2005-01-01

    To evaluate the fault location and the failure prediction models, simulation-based and code-based experiments were conducted to collect the required failure data. The PIE model was applied to simulate failures in the simulation-based experiment. Based on syntax and semantic level fault injections, a hybrid fault injection model is presented. To analyze the injected faults, the difficulty to inject (DTI) and difficulty to detect (DTD) are introduced and are measured from the programs used in the code-based experiment. Three interesting results were obtained from the experiments: 1) Failures simulated by the PIE model without consideration of the program and testing features are unreliably predicted; 2) There is no obvious correlation between the DTI and DTD parameters; 3) The DTD for syntax level faults changes in a different pattern to that for semantic level faults when the DTI increases. The results show that the parameters have a strong effect on the failures simulated, and the measurement of DTD is not strict.

  16. A Quantitative Causal Model Theory of Conditional Reasoning

    Science.gov (United States)

    Fernbach, Philip M.; Erb, Christopher D.

    2013-01-01

    The authors propose and test a causal model theory of reasoning about conditional arguments with causal content. According to the theory, the acceptability of modus ponens (MP) and affirming the consequent (AC) reflect the conditional likelihood of causes and effects based on a probabilistic causal model of the scenario being judged. Acceptability…

  17. Towards the quantitative evaluation of visual attention models.

    Science.gov (United States)

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations.

  18. Assessment of Quantitative Precipitation Forecasts from Operational NWP Models (Invited)

    Science.gov (United States)

    Sapiano, M. R.

    2010-12-01

    Previous work has shown that satellite and numerical model estimates of precipitation have complementary strengths, with satellites having greater skill at detecting convective precipitation events and model estimates having greater skill at detecting stratiform precipitation. This is due in part to the challenges associated with retrieving stratiform precipitation from satellites and the difficulty in resolving sub-grid scale processes in models. These complementary strengths can be exploited to obtain new merged satellite/model datasets, and several such datasets have been constructed using reanalysis data. Whilst reanalysis data are stable in a climate sense, they also have relatively coarse resolution compared to the satellite estimates (many of which are now commonly available at quarter degree resolution) and they necessarily use fixed forecast systems that are not state-of-the-art. An alternative to reanalysis data is to use Operational Numerical Weather Prediction (NWP) model estimates, which routinely produce precipitation with higher resolution and using the most modern techniques. Such estimates have not been combined with satellite precipitation and their relative skill has not been sufficiently assessed beyond model validation. The aim of this work is to assess the information content of the models relative to satellite estimates with the goal of improving techniques for merging these data types. To that end, several operational NWP precipitation forecasts have been compared to satellite and in situ data and their relative skill in forecasting precipitation has been assessed. In particular, the relationship between precipitation forecast skill and other model variables will be explored to see if these other model variables can be used to estimate the skill of the model at a particular time. Such relationships would provide a basis for determining weights and errors of any merged products.

  19. Quantitative Methods for Comparing Different Polyline Stream Network Models

    Energy Technology Data Exchange (ETDEWEB)

    Danny L. Anderson; Daniel P. Ames; Ping Yang

    2014-04-01

    Two techniques for exploring relative horizontal accuracy of complex linear spatial features are described and sample source code (pseudo code) is presented for this purpose. The first technique, relative sinuosity, is presented as a measure of the complexity or detail of a polyline network in comparison to a reference network. We term the second technique longitudinal root mean squared error (LRMSE) and present it as a means for quantitatively assessing the horizontal variance between two polyline data sets representing digitized (reference) and derived stream and river networks. Both relative sinuosity and LRMSE are shown to be suitable measures of horizontal stream network accuracy for assessing quality and variation in linear features. Both techniques have been used in two recent investigations involving the extraction of hydrographic features from LiDAR elevation data. One confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes yielded better stream network delineations, based on sinuosity and LRMSE, when using LiDAR-derived DEMs. The other demonstrated a new method of delineating stream channels directly from LiDAR point clouds, without the intermediate step of deriving a DEM, showing that the direct delineation from LiDAR point clouds yielded an excellent and much better match, as indicated by the LRMSE.
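
    The sinuosity half of this comparison is easy to sketch. Below, sinuosity is computed as along-path length divided by straight-line end-to-end distance, and relative sinuosity is taken, purely as an assumption for illustration, to be the ratio of a derived polyline's sinuosity to that of the reference polyline; the paper's exact formulation and its LRMSE measure are not reproduced, and the coordinates are placeholders.

        import math

        def path_length(points):
            """Total along-polyline length for a list of (x, y) vertices."""
            return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

        def sinuosity(points):
            """Along-path length divided by straight-line end-to-end distance."""
            return path_length(points) / math.dist(points[0], points[-1])

        def relative_sinuosity(derived, reference):
            """Sinuosity of a derived polyline relative to a reference polyline."""
            return sinuosity(derived) / sinuosity(reference)

        reference = [(0, 0), (1, 0.4), (2, -0.3), (3, 0.2), (4, 0)]
        derived = [(0, 0), (2, 0.1), (4, 0)]
        print(relative_sinuosity(derived, reference))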

  20. Digital clocks: simple Boolean models can quantitatively describe circadian systems.

    Science.gov (United States)

    Akman, Ozgur E; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J; Ghazal, Peter

    2012-09-07

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day-night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we anticipate
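
    To make the idea of a Boolean clock model concrete, the sketch below synchronously updates a toy two-gene negative-feedback loop with a light input. It is a minimal illustration of logic-based modelling only, not one of the circadian circuits actually fitted in the paper.

        def step(state, light):
            """One synchronous Boolean update of a toy negative-feedback loop."""
            return {
                "A": (not state["B"]) or light,  # A repressed by B, induced by light
                "B": state["A"],                 # B activated by A with a one-step delay
            }

        state = {"A": True, "B": False}
        for t in range(12):
            light = (t % 6) < 3                  # toy 3-on/3-off light cycle
            print(t, int(state["A"]), int(state["B"]))
            state = step(state, light)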

  1. Clinical experience of rehabilitation therapists with chronic diseases: a quantitative approach.

    NARCIS (Netherlands)

    Rijken, P.M.; Dekker, J.

    1998-01-01

    Objectives: To provide an overview of the numbers of patients with selected chronic diseases treated by rehabilitation therapists (physical therapists, occupational therapists, exercise therapists and podiatrists). The study was performed to get quantitative information on the degree to which rehabi

  3. Probabilistic Quantitative Precipitation Forecasting Using Ensemble Model Output Statistics

    CERN Document Server

    Scheuerer, Michael

    2013-01-01

    Statistical post-processing of dynamical forecast ensembles is an essential component of weather forecasting. In this article, we present a post-processing method that generates full predictive probability distributions for precipitation accumulations based on ensemble model output statistics (EMOS). We model precipitation amounts by a generalized extreme value distribution that is left-censored at zero. This distribution permits modelling precipitation on the original scale without prior transformation of the data. A closed form expression for its continuous rank probability score can be derived and permits computationally efficient model fitting. We discuss an extension of our approach that incorporates further statistics characterizing the spatial variability of precipitation amounts in the vicinity of the location of interest. The proposed EMOS method is applied to daily 18-h forecasts of 6-h accumulated precipitation over Germany in 2011 using the COSMO-DE ensemble prediction system operated by the Germa...
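
    A minimal sketch of the predictive distribution family described here, assuming SciPy and placeholder parameters: a generalized extreme value distribution left-censored at zero, so that the mass at or below zero is read as the probability of no precipitation. In EMOS the parameters would be linked to ensemble statistics and estimated by minimizing the CRPS; that fitting step is not shown.

        from scipy.stats import genextreme

        # Placeholder parameters (SciPy's shape-parameter convention), not fitted values.
        shape, loc, scale = -0.2, 0.5, 1.2
        dist = genextreme(shape, loc=loc, scale=scale)

        p_dry = dist.cdf(0.0)               # probability of zero precipitation
        p_over_5mm = 1.0 - dist.cdf(5.0)    # probability of exceeding 5 mm in 6 h
        print(p_dry, p_over_5mm)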

  4. Quantitative modeling of degree-degree correlation in complex networks

    CERN Document Server

    Niño, Alfonso

    2013-01-01

    This paper presents an approach to the modeling of degree-degree correlation in complex networks. Thus, a simple function, Δ(k', k), describing specific degree-to-degree correlations is considered. The function is well suited to graphically depict assortative and disassortative variations within networks. To quantify degree correlation variations, the joint probability distribution between nodes with arbitrary degrees, P(k', k), is used. Introduction of the end-degree probability function as a basic variable allows using group theory to derive mathematical models for P(k', k). In this form, an expression, representing a family of seven models, is constructed with the needed normalization conditions. Applied to Δ(k', k), this expression predicts a nonuniform distribution of degree correlation in networks, organized in two assortative and two disassortative zones. This structure is actually observed in a set of four modeled, technological, social, and biological networks. A regression study performed...
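
    The basic object modelled here, the joint degree distribution P(k', k), can be estimated empirically from any graph. The sketch below does so in Python, assuming networkx and a synthetic network; it illustrates the quantity being modelled, not the group-theoretic family of models derived in the paper.

        from collections import Counter
        import networkx as nx

        def joint_degree_distribution(graph):
            """Empirical P(k', k): probability that a randomly chosen edge
            joins nodes of degrees k' and k (both orientations counted)."""
            deg = dict(graph.degree())
            counts = Counter()
            for u, v in graph.edges():
                counts[(deg[u], deg[v])] += 1
                counts[(deg[v], deg[u])] += 1
            total = sum(counts.values())
            return {pair: c / total for pair, c in counts.items()}

        g = nx.barabasi_albert_graph(200, 2, seed=1)
        p = joint_degree_distribution(g)
        print(sorted(p.items())[:5])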

  5. Quantitative modeling of selective lysosomal targeting for drug design

    DEFF Research Database (Denmark)

    Trapp, Stefan; Rosania, G.; Horobin, R.W.;

    2008-01-01

    Lysosomes are acidic organelles and are involved in various diseases, the most prominent being malaria. Accumulation of molecules in the cell by diffusion from the external solution into cytosol, lysosome and mitochondrion was calculated with the Fick–Nernst–Planck equation. The cell model considers..... This demonstrates that the cell model can be a useful tool for the design of effective lysosome-targeting drugs with minimal off-target interactions....

  6. Monte Carlo modeling of photon transport in buried bone tissue layer for quantitative Raman spectroscopy

    Science.gov (United States)

    Wilson, Robert H.; Dooley, Kathryn A.; Morris, Michael D.; Mycek, Mary-Ann

    2009-02-01

    Light-scattering spectroscopy has the potential to provide information about bone composition via a fiber-optic probe placed on the skin. In order to design efficient probes, one must understand the effect of all tissue layers on photon transport. To quantitatively understand the effect of overlying tissue layers on the detected bone Raman signal, a layered Monte Carlo model was modified for Raman scattering. The model incorporated the absorption and scattering properties of three overlying tissue layers (dermis, subdermis, muscle), as well as the underlying bone tissue. The attenuation of the collected bone Raman signal, predominantly due to elastic light scattering in the overlying tissue layers, affected the carbonate/phosphate (C/P) ratio by increasing the standard deviation of the computational result. Furthermore, the mean C/P ratio varied when the relative thicknesses of the layers were varied and the elastic scattering coefficient at the Raman scattering wavelength of carbonate was modeled to be different from that at the Raman scattering wavelength of phosphate. These results represent the first portion of a computational study designed to predict optimal probe geometry and help to analyze detected signal for Raman scattering experiments involving bone.

  7. Quantitative model analysis with diverse biological data: applications in developmental pattern formation.

    Science.gov (United States)

    Pargett, Michael; Umulis, David M

    2013-07-15

    Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data; however, much of the data available or even acquirable are not quantitative. Data that are not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended to both inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method to use for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.
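
    As a small illustration of the optimal-scaling idea mentioned above, the sketch below computes the closed-form least-squares scale factor that best matches a model prediction to measurements reported only in relative (arbitrary) units. It is a generic example with placeholder numbers, not a method taken verbatim from the review.

        import numpy as np

        def optimal_scale(model, data):
            """Least-squares scale factor s minimizing sum((s*model - data)**2)."""
            model, data = np.asarray(model, float), np.asarray(data, float)
            return float(np.dot(model, data) / np.dot(model, model))

        def scaled_misfit(model, data):
            """Residual sum of squares after applying the optimal scale factor."""
            model, data = np.asarray(model, float), np.asarray(data, float)
            return float(np.sum((optimal_scale(model, data) * model - data) ** 2))

        prediction = [1.0, 2.0, 3.0]      # model output in model units
        measurement = [2.1, 3.9, 6.2]     # data in arbitrary (relative) units
        print(optimal_scale(prediction, measurement), scaled_misfit(prediction, measurement))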

  8. Evaluating quantitative and qualitative models: an application for nationwide water erosion assessment in Ethiopia

    NARCIS (Netherlands)

    Sonneveld, B.G.J.S.; Keyzer, M.A.; Stroosnijder, L.

    2011-01-01

    This paper tests the candidacy of one qualitative response model and two quantitative models for a nationwide water erosion hazard assessment in Ethiopia. After a descriptive comparison of model characteristics the study conducts a statistical comparison to evaluate the explanatory power of the mode

  10. Statistical analysis of probabilistic models of software product lines with quantitative constraints

    DEFF Research Database (Denmark)

    Beek, M.H. ter; Legay, A.; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra...

  12. A suite of models to support the quantitative assessment of spread in pest risk analysis

    NARCIS (Netherlands)

    Robinet, C.; Kehlenbeck, H.; Werf, van der W.

    2012-01-01

    In the frame of the EU project PRATIQUE (KBBE-2007-212459 Enhancements of pest risk analysis techniques) a suite of models was developed to support the quantitative assessment of spread in pest risk analysis. This dataset contains the model codes (R language) for the four models in the suite. Three

  13. The power of a good idea: quantitative modeling of the spread of ideas from epidemiological models

    CERN Document Server

    Bettencourt, L M A; Kaiser, D I; Castillo-Chavez, C; Bettencourt, Luís M.A.; Cintrón-Arias, Ariel; Kaiser, David I.; Castillo-Chávez, Carlos

    2005-01-01

    The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to estimate quantitatively population average parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. We estimate the effectiveness of adoption of the idea in the thr...
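
    For reference, the sketch below integrates the standard SIR equations, the simplest of the paradigmatic epidemic models alluded to here, with a contact rate beta and a recovery (loss-of-interest) rate gamma; it assumes NumPy and SciPy, and the parameter values are placeholders rather than the estimates obtained in the paper for the spread of Feynman diagrams.

        import numpy as np
        from scipy.integrate import odeint

        def sir(y, t, beta, gamma):
            """S' = -beta*S*I, I' = beta*S*I - gamma*I, R' = gamma*I (population fractions)."""
            s, i, r = y
            return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

        t = np.linspace(0.0, 50.0, 200)
        solution = odeint(sir, [0.99, 0.01, 0.0], t, args=(0.5, 0.1))
        print(solution[-1])  # final susceptible, "infected", recovered fractions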

  14. Using ISOS consensus test protocols for development of quantitative life test models in ageing of organic solar cells

    DEFF Research Database (Denmark)

    Kettle, J.; Stoichkov, V.; Kumar, D.

    2017-01-01

    As Organic Photovoltaic (OPV) development matures, the demand grows for rapid characterisation of degradation and application of Quantitative Accelerated Life Test (QALT) models to predict and improve reliability. To date, most accelerated testing on OPVs has been conducted using ISOS consensus standards. This paper identifies some of the problems in using and interpreting the results for predicting ageing based upon ISOS consensus standard test data. Design of Experiments (DOE) in conjunction with data from ISOS consensus standards is used as the basis for developing life test models for OPV

  15. Process of quantitative evaluation of validity of rock cutting model

    Directory of Open Access Journals (Sweden)

    Jozef Futó

    2012-12-01

    Full Text Available Most complex technical systems, including the rock cutting process, are very difficult to describe mathematically due to limited human recognition abilities, which depend on the achieved state of natural sciences and technology. A confrontation between the conception (model) and the real system often arises in the investigation of the rock cutting process. Identification represents determination of the system based on its input and output within a specified system class, in a manner that obtains a determined system equivalent to the explored system. In the case of rock cutting, the qualities of the model derived from a conventional energy theory of rock cutting are compared to the qualities of non-standard models obtained by scanning of the acoustic signal, an accompanying effect of the surroundings in the rock cutting process, through calculated characteristics of the acoustic signal. The paper focuses on optimization using the specific cutting energy and the possibility of optimization using the accompanying acoustic signal, namely by one of its characteristics, i.e. the volume of the total signal M, representing the result of the system identification.

  16. Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.

    Science.gov (United States)

    Beattie, Bradley J; Thorek, Daniel L J; Schmidtlein, Charles R; Pentlow, Keith S; Humm, John L; Hielscher, Andreas H

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use.
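
    The production-efficiency side of such models can be sketched from the photon-number form of the Frank-Tamm relation, integrated over a wavelength band. The code below assumes a fixed particle speed β and refractive index n as placeholder inputs; the authors' full treatment, which also handles the β-particle energy spectrum and photon transport, is not reproduced.

        import math

        FINE_STRUCTURE = 1.0 / 137.036  # dimensionless fine-structure constant

        def cherenkov_photons_per_cm(beta, n, lam1_nm=400.0, lam2_nm=700.0):
            """Photons emitted per cm of path in [lam1, lam2], from
            d2N/(dx dlam) = 2*pi*alpha/lam**2 * (1 - 1/(beta**2 * n**2))."""
            if beta * n <= 1.0:
                return 0.0  # below the Cerenkov threshold
            lam1_cm, lam2_cm = lam1_nm * 1e-7, lam2_nm * 1e-7
            band = 1.0 / lam1_cm - 1.0 / lam2_cm
            return 2.0 * math.pi * FINE_STRUCTURE * (1.0 - 1.0 / (beta**2 * n**2)) * band

        # Fast electron (beta ~ 0.95) in water (n ~ 1.33); values are illustrative.
        print(cherenkov_photons_per_cm(0.95, 1.33))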

  17. Exploiting linkage disequilibrium in statistical modelling in quantitative genomics

    DEFF Research Database (Denmark)

    Wang, Lei

    Alleles at two loci are said to be in linkage disequilibrium (LD) when they are correlated or statistically dependent. Genomic prediction and gene mapping rely on the existence of LD between genetic markers and causal variants of complex traits. In the first part of the thesis, a novel method to quantify and visualize local variation in LD along chromosomes is described, and applied to characterize LD patterns at the local and genome-wide scale in three Danish pig breeds. In the second part, different ways of taking LD into account in genomic prediction models are studied. One approach is to use the recently proposed antedependence models, which treat neighbouring marker effects as correlated; another approach involves use of haplotype block information derived using the program Beagle. The overall conclusion is that taking LD information into account in genomic prediction models potentially improves...

  18. Quantitative phase-field modeling for wetting phenomena.

    Science.gov (United States)

    Badillo, Arnoldo

    2015-03-01

    A new phase-field model is developed for studying partial wetting. The introduction of a third phase representing a solid wall allows for the derivation of a new surface tension force that accounts for energy changes at the contact line. In contrast to other multi-phase-field formulations, the present model does not need the introduction of surface energies for the fluid-wall interactions. Instead, all wetting properties are included in a unique parameter known as the equilibrium contact angle θ_eq. The model requires the solution of a single elliptic phase-field equation, which, coupled to conservation laws for mass and linear momentum, admits the existence of steady and unsteady compact solutions (compactons). The representation of the wall by an additional phase field allows for the study of wetting phenomena on flat, rough, or patterned surfaces in a straightforward manner. The model contains only two free parameters, a measure of interface thickness W and β, which is used in the definition of the mixture viscosity μ = μ_l ϕ_l + μ_v ϕ_v + β μ_l ϕ_w. The former controls the convergence towards the sharp interface limit and the latter the energy dissipation at the contact line. Simulations on rough surfaces show that by taking values for β higher than 1, the model can reproduce, on average, the effects of pinning events of the contact line during its dynamic motion. The model is able to capture, in good agreement with experimental observations, many physical phenomena fundamental to wetting science, such as the wetting transition on micro-structured surfaces and droplet dynamics on solid substrates.

  19. A quantitative magnetospheric model derived from spacecraft magnetometer data

    Science.gov (United States)

    Mead, G. D.; Fairfield, D. H.

    1975-01-01

    The model is derived by making least squares fits to magnetic field measurements from four Imp satellites. It includes four sets of coefficients, representing different degrees of magnetic disturbance as determined by the range of Kp values. The data are fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the effects of seasonal north-south asymmetries are contained. The expansion is divergence-free, but unlike the usual scalar potential expansion, the model contains a nonzero curl representing currents distributed within the magnetosphere. The latitude at the earth separating open polar cap field lines from field lines closing on the day side is about 5 deg lower than that determined by previous theoretically derived models. At times of high Kp, additional high-latitude field lines extend back into the tail. Near solstice, the separation latitude can be as low as 75 deg in the winter hemisphere. The average northward component of the external field is much smaller than that predicted by theoretical models; this finding indicates the important effects of distributed currents in the magnetosphere.

  20. Quantitative Research: A Dispute Resolution Model for FTC Advertising Regulation.

    Science.gov (United States)

    Richards, Jef I.; Preston, Ivan L.

    Noting the lack of a dispute mechanism for determining whether an advertising practice is truly deceptive without generating the costs and negative publicity produced by traditional Federal Trade Commission (FTC) procedures, this paper proposes a model based upon early termination of the issues through jointly commissioned behavioral research. The…

  1. Essays on Quantitative Marketing Models and Monte Carlo Integration Methods

    NARCIS (Netherlands)

    R.D. van Oest (Rutger)

    2005-01-01

    textabstractThe last few decades have led to an enormous increase in the availability of large detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for addr

  2. New physical model design for Vapex experiments

    Energy Technology Data Exchange (ETDEWEB)

    Yazdani, A.; Maini, B.B. [Calgary Univ., AB (Canada)

    2004-07-01

    Solvent extraction is gaining much attention as an in-situ recovery method for difficult-to-produce heavy oil and tar sand deposits. Vapour extraction (VAPEX) is similar to the steam assisted gravity drainage (SAGD) process used in heavy oil production. In VAPEX, vaporized solvents are used instead of high temperature steam and the viscosity of the oil is reduced in situ. VAPEX is well suited for formations that are thin and where heat losses are unavoidable. It can be applied in the presence of overlying gas caps, bottom water aquifers, low thermal conductivity, high water saturation, clay swelling, and formation damage. Modelling studies that use rectangular-shaped models are limited at high reservoir pressures. This study presents a new design of physical models that overcomes this limitation. The annular space between two cylindrical pipes is used for developing slice-type and sand-filled models. This newly developed model is more compatible with high pressure. This paper compares results of VAPEX experiments using the cylindrical models and the rectangular models. The stabilized drainage rates from the newly developed cylindrical models are in very good agreement with those from the rectangular models. 16 refs., 3 tabs., 11 figs.

  3. Fuzzy Logic as a Computational Tool for Quantitative Modelling of Biological Systems with Uncertain Kinetic Data.

    Science.gov (United States)

    Bordon, Jure; Moskon, Miha; Zimic, Nikolaj; Mraz, Miha

    2015-01-01

    Quantitative modelling of biological systems has become an indispensable computational approach in the design of novel and the analysis of existing biological systems. However, kinetic data that describe the system's dynamics need to be known in order to obtain relevant results with conventional modelling techniques. These data are often hard or even impossible to obtain. Here, we present a quantitative fuzzy logic modelling approach that is able to cope with unknown kinetic data and thus produce relevant results even though kinetic data are incomplete or only vaguely defined. Moreover, the approach can be used in combination with existing state-of-the-art quantitative modelling techniques in only certain parts of the system, i.e., where kinetic data are missing. A case study of the proposed approach is performed on a model of the three-gene repressilator.
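
    The sketch below is a minimal illustration, not the authors' implementation, of how a vaguely known kinetic relationship can be replaced by fuzzy rules: triangular membership functions, an assumed two-rule base, and weighted-average defuzzification stand in for an unknown repression function. All membership bounds and output levels are invented for illustration.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function peaking at b on support [a, c]."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def production_rate(repressor):
        """Fuzzy rules replacing an unknown kinetic law:
        IF repressor is LOW  THEN production is HIGH
        IF repressor is HIGH THEN production is LOW
        Defuzzified as a weighted average of assumed output levels."""
        low = tri(repressor, -0.5, 0.0, 0.5)     # degree of membership in LOW
        high = tri(repressor, 0.5, 1.0, 1.5)     # degree of membership in HIGH
        rate_high, rate_low = 1.0, 0.05          # assumed output levels
        w = low + high
        if w > 0:
            return (low * rate_high + high * rate_low) / w
        return 0.5 * (rate_high + rate_low)      # fallback between the two sets

    for r in (0.0, 0.25, 0.5, 0.75, 1.0):        # normalised repressor level
        print(f"repressor = {r:.2f} -> production ~ {production_rate(r):.2f}")
    ```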

  4. Modeling variability in porescale multiphase flow experiments

    Energy Technology Data Exchange (ETDEWEB)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rate. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

  5. Modeling variability in porescale multiphase flow experiments

    Science.gov (United States)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.
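
    A minimal sketch of the kind of stochastic boundary condition described above: a nominal injection rate perturbed by smooth random fluctuations, here generated with an Ornstein-Uhlenbeck update. The 5% amplitude and 0.5 s correlation time are assumed values, not the pump characteristics reported in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed values: nominal rate, 5% fluctuation amplitude, 0.5 s correlation time.
    q_mean, rel_sigma, tau, dt, n = 1.0, 0.05, 0.5, 0.01, 1000

    q = np.empty(n)
    q[0] = q_mean
    for i in range(1, n):
        # Ornstein-Uhlenbeck update: relaxes towards q_mean, driven by white noise.
        q[i] = (q[i - 1]
                + (q_mean - q[i - 1]) * dt / tau
                + rel_sigma * q_mean * np.sqrt(2 * dt / tau) * rng.standard_normal())

    print(f"mean = {q.mean():.3f}, std = {q.std():.3f}")  # std ~ rel_sigma * q_mean
    ```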

  6. Case studies in quantitative biology: Biochemistry on a leash and a single-molecule Hershey-Chase experiment

    Science.gov (United States)

    Van Valen, David

    2011-12-01

    The last 50 years of biological research has seen a marked increase in the amount of quantitative data that describes living systems. This wealth of data provides a unique opportunity to recast the pictorial level descriptions of biological processes in the language of mathematics, with the hope that such an undertaking will lead to deeper insights into the behavior of living systems. To achieve this end, we have undertaken three case studies in physical biology. In the first case study, we used statistical mechanics and polymer physics to construct a simple model that describes how flexible chains of amino acids, referred to as tethers, influence the information processing properties of signaling proteins. In the second case study, we studied the DNA ejection process of phage lambda in vitro. In particular, we used bulk and single-molecule methods to study the control parameters that govern the force and kinematics of the ejection process in vitro. In the last case study, we studied the DNA ejection process of phage lambda in vivo. We developed an assay that allows real-time monitoring of DNA ejection in vivo at the single-molecule level. We also developed a parallel system that allows the simultaneous visualization of both phage capsids and phage DNA at the single-cell level, constituting a true single-molecule Hershey-Chase experiment. The work described in this thesis outlines new tools, both in theory and experiment, that can be used to study biological systems as well as a paradigm that can be employed to mathematicize the cartoons of biology.

  7. Modeling the Earth's radiation belts. A review of quantitative data based electron and proton models

    Science.gov (United States)

    Vette, J. I.; Teague, M. J.; Sawyer, D. M.; Chan, K. W.

    1979-01-01

    The evolution of quantitative models of the trapped radiation belts is traced to show how the knowledge of the various features has developed, or been clarified, by performing the required analysis and synthesis. The Starfish electron injection introduced problems in the time behavior of the inner zone, but this residue decayed away, and a good model of this depletion now exists. The outer zone electrons were handled statistically by a log normal distribution such that above 5 Earth radii there are no long term changes over the solar cycle. The transition region between the two zones presents the most difficulty, therefore the behavior of individual substorms as well as long term changes must be studied. The latest corrections to the electron environment based on new data are outlined. The proton models have evolved to the point where the solar cycle effect at low altitudes is included. Trends for new models are discussed; the feasibility of predicting substorm injections and solar wind high-speed streams make the modeling of individual events a topical activity.

  9. Quantitative comparisons of satellite observations and cloud models

    Science.gov (United States)

    Wang, Fang

    Microwave radiation interacts directly with precipitating particles and can therefore be used to compare microphysical properties found in models with those found in nature. Lower frequencies (minimization procedures but produce different CWP and RWP. The similarity in Tb can be attributed to comparable Total Water Path (TWP) between the two retrievals while the disagreement in the microphysics is caused by their different degrees of constraint of the cloud/rain ratio by the observations. This situation occurs frequently and takes up 46.9% in the one month 1D-Var retrievals examined. To attain better constrained cloud/rain ratios and improved retrieval quality, this study suggests the implementation of higher microwave frequency channels in the 1D-Var algorithm. Cloud Resolving Models (CRMs) offer an important pathway to interpret satellite observations of microphysical properties of storms. High frequency microwave brightness temperatures (Tbs) respond to precipitating-sized ice particles and can, therefore, be compared with simulated Tbs at the same frequencies. By clustering the Tb vectors at these frequencies, the scene can be classified into distinct microphysical regimes, in other words, cloud types. The properties for each cloud type in the simulated scene are compared to those in the observation scene to identify the discrepancies in microphysics within that cloud type. A convective storm over the Amazon observed by the Tropical Rainfall Measuring Mission (TRMM) is simulated using the Regional Atmospheric Modeling System (RAMS) in a semi-ideal setting, and four regimes are defined within the scene using cluster analysis: the 'clear sky/thin cirrus' cluster, the 'cloudy' cluster, the 'stratiform anvil' cluster and the 'convective' cluster. The relationship between Tb difference of 37 and 85 GHz and Tb at 85 GHz is found to contain important information of microphysical properties such as hydrometeor species and size distributions. Cluster

  10. Interdiffusion of the aluminum magnesium system. Quantitative analysis and numerical model; Interdiffusion des Aluminium-Magnesium-Systems. Quantitative Analyse und numerische Modellierung

    Energy Technology Data Exchange (ETDEWEB)

    Seperant, Florian

    2012-03-21

    Aluminum coatings are a promising approach to protect magnesium alloys against corrosion and thereby making them accessible to a variety of technical applications. Thermal treatment enhances the adhesion of the aluminium coating on magnesium by interdiffusion. For a deeper understanding of the diffusion process at the interface, a quantitative description of the Al-Mg system is necessary. On the basis of diffusion experiments with infinite reservoirs of aluminum and magnesium, the interdiffusion coefficients of the intermetallic phases of the Al-Mg-system are calculated with the Sauer-Freise method for the first time. To solve contradictions in the literature concerning the intrinsic diffusion coefficients, the possibility of a bifurcation of the Kirkendall plane is considered. Furthermore, a physico-chemical description of interdiffusion is provided to interpret the observed phase transitions. The developed numerical model is based on a temporally varied discretization of the space coordinate. It exhibits excellent quantitative agreement with the experimentally measured concentration profile. This confirms the validity of the obtained diffusion coefficients. Moreover, the Kirkendall shift in the Al-Mg system is simulated for the first time. Systems with thin aluminum coatings on magnesium also exhibit a good correlation between simulated and experimental concentration profiles. Thus, the diffusion coefficients are also valid for Al-coated systems. Hence, it is possible to derive parameters for a thermal treatment by simulation, resulting in an optimized modification of the magnesium surface for technical applications.
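
    The sketch below is not the thesis' scheme (which uses concentration-dependent coefficients and a temporally varied discretization of the space coordinate); it is a minimal constant-coefficient explicit finite-difference model of a diffusion couple, with a hypothetical interdiffusion coefficient, shown only to illustrate the kind of calculation involved.

    ```python
    import numpy as np

    # Minimal 1-D diffusion couple (Al | Mg) with a constant, hypothetical
    # interdiffusion coefficient.
    D = 1.0e-14           # m^2/s, assumed
    L, nx = 200e-6, 201   # 200 micron couple
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D  # explicit stability limit is 0.5*dx^2/D

    c = np.zeros(nx)
    c[: nx // 2] = 1.0    # mole fraction of Al: 1 on the left, 0 on the right

    t_end = 3600.0        # one-hour anneal, assumed
    for _ in range(int(t_end / dt)):
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])

    print("interface composition ~", c[nx // 2])  # tends to 0.5 for constant D
    ```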

  11. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    Science.gov (United States)

    Wu, Zujian; Pang, Wei; Coghill, George M

    2015-01-01

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework is feasible to learn the relationships between biochemical reactants qualitatively and to make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental study in wet laboratory. In this way, natural biochemical systems can be better understood.
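
    As a hedged sketch of the quantitative stage only, the code below optimises the kinetic rate of a placeholder one-parameter model against synthetic data with simulated annealing; the candidate model, step size and cooling schedule are assumptions for illustration, not the framework's actual components.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(rates):
        """Placeholder candidate model: a single exponential decay sampled at
        fixed times; in the framework this would be the reaction system produced
        by the qualitative learning step."""
        t = np.linspace(0, 10, 20)
        return np.exp(-rates[0] * t)

    target = simulate(np.array([0.7]))           # synthetic "experimental" data

    def cost(rates):
        return np.sum((simulate(rates) - target) ** 2)

    rates = np.array([0.1])                      # initial guess
    temperature = 1.0
    best, best_cost = rates.copy(), cost(rates)
    for step in range(2000):
        trial = np.clip(rates + 0.05 * rng.standard_normal(rates.shape), 1e-6, None)
        dc = cost(trial) - cost(rates)
        if dc < 0 or rng.random() < np.exp(-dc / temperature):
            rates = trial
            if cost(rates) < best_cost:
                best, best_cost = rates.copy(), cost(rates)
        temperature *= 0.995                     # geometric cooling, assumed
    print("recovered rate ~", best)              # should move towards 0.7
    ```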

  12. Quantitative Risk Modeling of Fire on the International Space Station

    Science.gov (United States)

    Castillo, Theresa; Haught, Megan

    2014-01-01

    The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.

  13. Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.

    Directory of Open Access Journals (Sweden)

    Bradley J Beattie

    Full Text Available There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use.
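
    A minimal sketch of the Frank-Tamm yield underlying such models: photons per unit path length in a fixed wavelength band for a single electron in water, assuming a constant refractive index and ignoring the β-spectrum averaging and photon transport handled by the authors' Monte Carlo models. The wavelength band and electron energy are illustrative choices.

    ```python
    import numpy as np

    alpha = 1.0 / 137.036          # fine-structure constant
    n = 1.33                       # refractive index of water, assumed constant
    lam1, lam2 = 400e-9, 700e-9    # visible band, metres

    def photons_per_metre(beta, z=1):
        """Frank-Tamm yield dN/dx for a particle of charge z*e over [lam1, lam2];
        zero below the Cerenkov threshold beta > 1/n."""
        if beta * n <= 1.0:
            return 0.0
        return 2 * np.pi * alpha * z**2 * (1/lam1 - 1/lam2) * (1 - 1/(beta**2 * n**2))

    # Kinetic energy -> beta for an electron (rest mass 0.511 MeV), e.g. 1 MeV:
    E_kin, m_e = 1.0, 0.511
    gamma = 1 + E_kin / m_e
    beta = np.sqrt(1 - 1/gamma**2)
    print(f"beta = {beta:.3f}, ~{photons_per_metre(beta):.0f} photons/m in 400-700 nm")
    ```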

  14. Software applications toward quantitative metabolic flux analysis and modeling.

    Science.gov (United States)

    Dandekar, Thomas; Fieselmann, Astrid; Majeed, Saman; Ahmed, Zeeshan

    2014-01-01

    Metabolites and their pathways are central for adaptation and survival. Metabolic modeling elucidates in silico all the possible flux pathways (flux balance analysis, FBA) and predicts the actual fluxes under a given situation, further refinement of these models is possible by including experimental isotopologue data. In this review, we initially introduce the key theoretical concepts and different analysis steps in the modeling process before comparing flux calculation and metabolite analysis programs such as C13, BioOpt, COBRA toolbox, Metatool, efmtool, FiatFlux, ReMatch, VANTED, iMAT and YANA. Their respective strengths and limitations are discussed and compared to alternative software. While data analysis of metabolites, calculation of metabolic fluxes, pathways and their condition-specific changes are all possible, we highlight the considerations that need to be taken into account before deciding on a specific software. Current challenges in the field include the computation of large-scale networks (in elementary mode analysis), regulatory interactions and detailed kinetics, and these are discussed in the light of powerful new approaches.
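
    To make the flux balance analysis step concrete, the sketch below solves a toy FBA problem (maximise a "biomass" flux subject to S v = 0 and flux bounds) with a generic linear-programming routine; the four-reaction network and bounds are invented for illustration and do not correspond to any of the reviewed tools.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: v1 uptake -> A, v2: A -> B, v3: A -> C (export), v4: B -> biomass.
    # Rows are the internal metabolites A and B; columns are reactions v1..v4.
    S = np.array([[ 1, -1, -1,  0],   # A
                  [ 0,  1,  0, -1]])  # B
    bounds = [(0, 10)] * 4            # assumed flux bounds
    c = np.array([0, 0, 0, -1])       # maximise v4 (biomass) => minimise -v4

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal fluxes:", res.x)   # expect v1 = v2 = v4 = 10, v3 = 0
    ```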

  15. Background modeling for the GERDA experiment

    CERN Document Server

    Becerici-Schmidt, N

    2013-01-01

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model, such as the expected background and its decomposition in the signal region. According to the model, the main background contributions around Q_ββ come from Bi-214, Th-228, K-42, Co-60 and alpha-emitting isotopes in the Ra-226 decay chain, with a fraction depending on the assumed source positions.

  16. Data Assimilation and Model Evaluation Experiment Datasets.

    Science.gov (United States)

    Lai, Chung-Chieng A.; Qian, Wen; Glenn, Scott M.

    1994-05-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need for data in the four phases of the experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: 1) collection of observational data; 2) analysis and interpretation; 3) interpolation using the Optimum Thermal Interpolation System package; 4) quality control and re-analysis; and 5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usage include 1) ocean modeling and data assimilation studies, 2) diagnostic and theoretical studies, and 3) comparisons with locally detailed observations.

  17. Can't Count or Won't Count? Embedding Quantitative Methods in Substantive Sociology Curricula: A Quasi-Experiment.

    Science.gov (United States)

    Williams, Malcolm; Sloan, Luke; Cheung, Sin Yi; Sutton, Carole; Stevens, Sebastian; Runham, Libby

    2016-06-01

    This paper reports on a quasi-experiment in which quantitative methods (QM) are embedded within a substantive sociology module. Through measuring student attitudes before and after the intervention alongside control group comparisons, we illustrate the impact that embedding has on the student experience. Our findings are complex and even contradictory. Whilst the experimental group were less likely to be distrustful of statistics and appreciate how QM inform social research, they were also less confident about their statistical abilities, suggesting that through 'doing' quantitative sociology the experimental group are exposed to the intricacies of method and their optimism about their own abilities is challenged. We conclude that embedding QM in a single substantive module is not a 'magic bullet' and that a wider programme of content and assessment diversification across the curriculum is preferential.

  18. Can’t Count or Won’t Count? Embedding Quantitative Methods in Substantive Sociology Curricula: A Quasi-Experiment

    Science.gov (United States)

    Williams, Malcolm; Sloan, Luke; Cheung, Sin Yi; Sutton, Carole; Stevens, Sebastian; Runham, Libby

    2015-01-01

    This paper reports on a quasi-experiment in which quantitative methods (QM) are embedded within a substantive sociology module. Through measuring student attitudes before and after the intervention alongside control group comparisons, we illustrate the impact that embedding has on the student experience. Our findings are complex and even contradictory. Whilst the experimental group were less likely to be distrustful of statistics and appreciate how QM inform social research, they were also less confident about their statistical abilities, suggesting that through ‘doing’ quantitative sociology the experimental group are exposed to the intricacies of method and their optimism about their own abilities is challenged. We conclude that embedding QM in a single substantive module is not a ‘magic bullet’ and that a wider programme of content and assessment diversification across the curriculum is preferential. PMID:27330225

  19. Spine curve modeling for quantitative analysis of spinal curvature.

    Science.gov (United States)

    Hay, Ori; Hershkovitz, Israel; Rivlin, Ehud

    2009-01-01

    Spine curvature and posture are important for sustaining a healthy back. Incorrect spine configuration can add strain to muscles and put stress on the spine, leading to low back pain (LBP). We propose a new method for analyzing spine curvature in 3D, using CT imaging. The proposed method is based on two novel concepts: the spine curvature is derived from the spinal canal centerline, and evaluation of the curve is carried out against a model based on healthy individuals. We show results of curvature analysis for a healthy population, pathological (scoliosis) patients, and patients with nonspecific chronic LBP.

  20. Modeling Cancer Metastasis using Global, Quantitative and Integrative Network Biology

    DEFF Research Database (Denmark)

    Schoof, Erwin; Erler, Janine

    phosphorylation dynamics in a given biological sample. In Chapter III, we move into Integrative Network Biology, where, by combining two fundamental technologies (MS & NGS), we can obtain more in-depth insights into the links between cellular phenotype and genotype. Article 4 describes the proof...... cancer networks using Network Biology. Technologies key to this, such as Mass Spectrometry (MS), Next-Generation Sequencing (NGS) and High-Content Screening (HCS) are briefly described. In Chapter II, we cover how signaling networks and mutational data can be modeled in order to gain a better...

  1. How plants manage food reserves at night: quantitative models and open questions

    Directory of Open Access Journals (Sweden)

    Antonio eScialdone

    2015-03-01

    Full Text Available In order to cope with night-time darkness, plants during the day allocate part of their photosynthate for storage, often as starch. This stored reserve is then degraded at night to sustain metabolism and growth. However, night-time starch degradation must be tightly controlled, as over-rapid turnover results in premature depletion of starch before dawn, leading to starvation. Recent experiments in Arabidopsis have shown that starch degradation proceeds at a constant rate during the night and is set such that starch reserves are exhausted almost precisely at dawn. Intriguingly, this pattern is robust with the degradation rate being adjusted to compensate for unexpected changes in the time of darkness onset. While a fundamental role for the circadian clock is well established, the underlying mechanisms controlling starch degradation remain poorly characterized. Here, we discuss recent quantitative models that have been proposed to explain how plants can compute the appropriate starch degradation rate, a process that requires an effective arithmetic division calculation. We review experimental confirmation of the models, and describe aspects that require further investigation. Overall, the process of night-time starch degradation necessitates a fundamental metabolic role for the circadian clock and, more generally, highlights how cells process information in order to optimally manage their resources.
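
    A minimal sketch of the division rule discussed above, under assumed values: at each instant the degradation rate is set to the remaining starch divided by the anticipated time to dawn, which produces a constant-rate decline that empties the reserve almost exactly at dawn. This illustrates the arithmetic only and is not a specific published model.

    ```python
    def starch_trajectory(S0=100.0, night_hours=12.0, dt=0.1):
        """Set the degradation rate to S / (time left until anticipated dawn)."""
        t, S, traj = 0.0, S0, []
        while t < night_hours - 1e-9:
            rate = S / (night_hours - t)      # arithmetic-division rule
            S = max(S - rate * dt, 0.0)
            t += dt
            traj.append((t, S))
        return traj

    traj = starch_trajectory()
    print(f"starch left at dawn: {traj[-1][1]:.2f}")   # ~0 after a linear decline
    ```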

  2. Quantitative Model of microRNA-mRNA interaction

    Science.gov (United States)

    Noorbakhsh, Javad; Lang, Alex; Mehta, Pankaj

    2012-02-01

    MicroRNAs are short RNA sequences that regulate gene expression and protein translation by binding to mRNA. Experimental data reveals the existence of a threshold linear output of protein based on the expression level of microRNA. To understand this behavior, we propose a mathematical model of the chemical kinetics of the interaction between mRNA and microRNA. Using this model we have been able to quantify the threshold linear behavior. Furthermore, we have studied the effect of internal noise, showing the existence of an intermediary regime where the expression level of mRNA and microRNA has the same order of magnitude. In this crossover regime the mRNA translation becomes sensitive to small changes in the level of microRNA, resulting in large fluctuations in protein levels. Our work shows that chemical kinetics parameters can be quantified by studying protein fluctuations. In the future, studying protein levels and their fluctuations can provide a powerful tool to study the competing endogenous RNA hypothesis (ceRNA), in which mRNA crosstalk occurs due to competition over a limited pool of microRNAs.
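
    A minimal kinetic sketch of the titration picture described above: mRNA and microRNA are produced, decay, and associate irreversibly, so free mRNA (a proxy for protein output) stays near zero until the mRNA production rate exceeds the microRNA supply and then rises roughly linearly. All rate constants are assumed, arbitrary-unit values, not fitted parameters.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    # Assumed rates: microRNA production, mRNA/microRNA decay, irreversible binding.
    k_mu, g_m, g_mu, k_on = 1.0, 0.1, 0.1, 1.0

    def rhs(y, t, k_m):
        m, mu = y
        assoc = k_on * m * mu                 # mRNA-microRNA association (both lost)
        return [k_m - g_m * m - assoc,
                k_mu - g_mu * mu - assoc]

    for k_m in (0.2, 0.5, 1.0, 1.5, 2.0, 3.0):
        m_ss, mu_ss = odeint(rhs, [0.0, 0.0], np.linspace(0, 500, 2000), args=(k_m,))[-1]
        print(f"k_m = {k_m:3.1f} -> free mRNA (protein proxy) ~ {m_ss:6.3f}")
    # Output rises sharply once k_m exceeds the microRNA supply k_mu = 1.0.
    ```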

  3. Quantitative Genetics and Functional-Structural Plant Growth Models: Simulation of Quantitative Trait Loci Detection for Model Parameters and Application to Potential Yield Optimization

    CERN Document Server

    Letort, Veronique; Cournède, Paul-Henry; De Reffye, Philippe; Courtois, Brigitte; 10.1093/aob/mcm197

    2010-01-01

    Background and Aims: Prediction of phenotypic traits from new genotypes under untested environmental conditions is crucial to build simulations of breeding strategies to improve target traits. Although the plant response to environmental stresses is characterized by both architectural and functional plasticity, recent attempts to integrate biological knowledge into genetics models have mainly concerned specific physiological processes or crop models without architecture, and thus may prove limited when studying genotype x environment interactions. Consequently, this paper presents a simulation study introducing genetics into a functional-structural growth model, which gives access to more fundamental traits for quantitative trait loci (QTL) detection and thus to promising tools for yield optimization. Methods: The GreenLab model was selected as a reasonable choice to link growth model parameters to QTL. Virtual genes and virtual chromosomes were defined to build a simple genetic model that drove the settings ...

  4. A quantitative and dynamic model for plant stem cell regulation.

    Directory of Open Access Journals (Sweden)

    Florian Geier

    Full Text Available Plants maintain pools of totipotent stem cells throughout their entire life. These stem cells are embedded within specialized tissues called meristems, which form the growing points of the organism. The shoot apical meristem of the reference plant Arabidopsis thaliana is subdivided into several distinct domains, which execute diverse biological functions, such as tissue organization, cell-proliferation and differentiation. The number of cells required for growth and organ formation changes over the course of a plants life, while the structure of the meristem remains remarkably constant. Thus, regulatory systems must be in place, which allow for an adaptation of cell proliferation within the shoot apical meristem, while maintaining the organization at the tissue level. To advance our understanding of this dynamic tissue behavior, we measured domain sizes as well as cell division rates of the shoot apical meristem under various environmental conditions, which cause adaptations in meristem size. Based on our results we developed a mathematical model to explain the observed changes by a cell pool size dependent regulation of cell proliferation and differentiation, which is able to correctly predict CLV3 and WUS over-expression phenotypes. While the model shows stem cell homeostasis under constant growth conditions, it predicts a variation in stem cell number under changing conditions. Consistent with our experimental data this behavior is correlated with variations in cell proliferation. Therefore, we investigate different signaling mechanisms, which could stabilize stem cell number despite variations in cell proliferation. Our results shed light onto the dynamic constraints of stem cell pool maintenance in the shoot apical meristem of Arabidopsis in different environmental conditions and developmental states.

  5. Application of non-quantitative modelling in the analysis of a network warfare environment

    CSIR Research Space (South Africa)

    Veerasamy, N

    2008-07-01

    Full Text Available of the various interacting components, a model to better understand the complexity in a network warfare environment would be beneficial. Non-quantitative modelling is a useful method to better characterize the field due to the rich ideas that can be generated...

  6. Mapping quantitative trait loci in a selectively genotyped outbred population using a mixture model approach

    NARCIS (Netherlands)

    Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van

    1999-01-01

    A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by

  7. Quantitative hardware prediction modeling for hardware/software co-design

    NARCIS (Netherlands)

    Meeuws, R.J.

    2012-01-01

    Hardware estimation is an important factor in Hardware/Software Co-design. In this dissertation, we present the Quipu Modeling Approach, a high-level quantitative prediction model for HW/SW Partitioning using statistical methods. Our approach uses linear regression between software complexity metric

  8. Evaluation of a quantitative phosphorus transport model for potential improvement of southern phosphorus indices

    Science.gov (United States)

    Due to a shortage of available phosphorus (P) loss data sets, simulated data from a quantitative P transport model could be used to evaluate a P-index. However, the model would need to accurately predict the P loss data sets that are available. The objective of this study was to compare predictions ...

  9. Development of probabilistic models for quantitative pathway analysis of plant pests introduction for the EU territory

    NARCIS (Netherlands)

    Douma, J.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Roques, A.; Werf, van der W.

    2015-01-01

    The aim of this report is to provide EFSA with probabilistic models for quantitative pathway analysis of plant pest introduction for the EU territory through non-edible plant products or plants. We first provide a conceptualization of two types of pathway models. The individual based PM simulates an

  10. Ballistic Response of Fabrics: Model and Experiments

    Science.gov (United States)

    Orphal, Dennis L.; Walker Anderson, James D., Jr.

    2001-06-01

    Walker (1999) developed an analytical model for the dynamic response of fabrics to ballistic impact. From this model, the force F applied to the projectile by the fabric is derived to be F = (8/9) E T* h^3 / R^2, where E is the Young's modulus of the fabric, T* is the "effective thickness" of the fabric, equal to the ratio of the areal density of the fabric to the fiber density, h is the displacement of the fabric on the axis of impact and R is the radius of the fabric deformation or "bulge". Ballistic tests against Zylon^TM fabric have been performed to measure h and R as a function of time. The results of these experiments are presented and analyzed in the context of the Walker model. Walker (1999), Proceedings of the 18th International Symposium on Ballistics, pp. 1231.
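
    A direct evaluation of the quoted expression, F = (8/9) E T* h^3 / R^2, is sketched below; the numerical inputs are illustrative assumptions, not the measured Zylon values from the tests.

    ```python
    def fabric_force(E, T_star, h, R):
        """Walker-model force on the projectile: F = (8/9) * E * T* * h^3 / R^2."""
        return (8.0 / 9.0) * E * T_star * h**3 / R**2

    # Assumed illustrative inputs (SI units), not measured values:
    E = 180e9        # fibre Young's modulus, Pa
    T_star = 2.0e-4  # effective thickness = areal density / fibre density, m
    h = 5e-3         # axial fabric displacement, m
    R = 30e-3        # bulge radius, m
    print(f"F = {fabric_force(E, T_star, h, R):.0f} N")
    ```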

  11. Some Experiences with Numerical Modelling of Overflows

    DEFF Research Database (Denmark)

    Larsen, Torben; Nielsen, L.; Jensen, B.

    2007-01-01

    Overflows are commonly applied in storm sewer systems to control flow and water surface level. Therefore overflows play a central role in the control of discharges of pollutants from sewer systems to the environment. The basic hydrodynamic principle of an overflow is the so-called critical flow...... across the edge of the overflow. To ensure critical flow across the edge, the upstream flow must be subcritical whereas the downstream flow is either supercritical or a free jet. Experimentally overflows are well studied. Based on laboratory experiments and Froude number scaling, numerous accurate...... the term for curvature of the water surface (the so-called Boussinesq approximation) 2. 2- and 3-dimensional so-called Volume of Fluid Models (VOF-models) based on the full Navier-Stokes equations (named NS3 and developed by DHI Water & Environment) As a general conclusion, the two numerical models show...

  12. Quantitative modeling of Escherichia coli chemotactic motion in environments varying in space and time.

    Directory of Open Access Journals (Sweden)

    Lili Jiang

    2010-04-01

    Full Text Available Escherichia coli chemotactic motion in spatiotemporally varying environments is studied by using a computational model based on a coarse-grained description of the intracellular signaling pathway dynamics. We find that the cell's chemotaxis drift velocity v_d is a constant in an exponential attractant concentration gradient [L] ∝ exp(Gx). v_d depends linearly on the exponential gradient G before it saturates when G is larger than a critical value G_C. We find that G_C is determined by the intracellular adaptation rate k_R with a simple scaling law: G_C ∝ k_R^(1/2). The linear dependence of v_d on G = d(ln[L])/dx directly demonstrates E. coli's ability to sense the derivative of the logarithmic attractant concentration. The existence of the limiting gradient G_C and its scaling with k_R are explained by the underlying intracellular adaptation dynamics and the flagellar motor response characteristics. For individual cells, we find that the overall average run length in an exponential gradient is longer than that in a homogeneous environment, which is caused by the constant kinase activity shift (decrease). The forward runs (up the gradient) are longer than the backward runs, as expected; and depending on the exact gradient, the (shorter) backward runs can be comparable to runs in a spatially homogeneous environment, consistent with previous experiments. In (spatial) ligand gradients that also vary in time, the chemotaxis motion is damped as the frequency ω of the time-varying spatial gradient becomes faster than a critical value ω_c, which is controlled by the cell's chemotaxis adaptation rate k_R. Finally, our model, with no adjustable parameters, agrees quantitatively with the classical capillary assay experiments where the attractant concentration changes both in space and time. Our model can thus be used to study E. coli chemotaxis behavior in arbitrary spatiotemporally varying environments. Further experiments are

  13. High-response piezoelectricity modeled quantitatively near a phase boundary

    Science.gov (United States)

    Newns, Dennis M.; Kuroda, Marcelo A.; Cipcigan, Flaviu S.; Crain, Jason; Martyna, Glenn J.

    2017-01-01

    Interconversion of mechanical and electrical energy via the piezoelectric effect is fundamental to a wide range of technologies. The discovery in the 1990s of giant piezoelectric responses in certain materials has therefore opened new application spaces, but the origin of these properties remains a challenge to our understanding. A key role is played by the presence of a structural instability in these materials at compositions near the "morphotropic phase boundary" (MPB) where the crystal structure changes abruptly and the electromechanical responses are maximal. Here we formulate a simple, unified theoretical description which accounts for extreme piezoelectric response, its observation at compositions near the MPB, accompanied by ultrahigh dielectric constant and mechanical compliances with rather large anisotropies. The resulting model, based upon a Landau free energy expression, is capable of treating the important domain engineered materials and is found to be predictive while maintaining simplicity. It therefore offers a general and powerful means of accounting for the full set of signature characteristics in these functional materials including volume conserving sum rules and strong substrate clamping effects.

  14. A quantitative confidence signal detection model: 1. Fitting psychometric functions.

    Science.gov (United States)

    Yi, Yongwoo; Merfeld, Daniel M

    2016-04-01

    Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. Copyright © 2016 the American Physiological Society.
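
    For context, the sketch below implements the conventional approach the paper improves upon: maximum-likelihood fitting of a cumulative-Gaussian psychometric function to roughly 100 simulated forced-choice trials. It is not the confidence signal detection model itself, and the simulated observer's bias and threshold are assumed values.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(2)

    # Simulated forced-choice data from an assumed observer (bias 0, sigma 2).
    stimuli = np.linspace(-6, 6, 9)
    x = np.repeat(stimuli, 12)                       # ~100 trials total
    resp = rng.random(x.size) < norm.cdf(x, loc=0.0, scale=2.0)

    def neg_log_lik(params):
        mu, log_sigma = params
        p = norm.cdf(x, loc=mu, scale=np.exp(log_sigma))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(resp * np.log(p) + (~resp) * np.log(1 - p))

    fit = minimize(neg_log_lik, x0=[0.5, np.log(1.0)], method="Nelder-Mead")
    mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
    print(f"bias ~ {mu_hat:.2f}, threshold (sigma) ~ {sigma_hat:.2f}")
    ```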

  15. Toward a quantitative model of metamorphic nucleation and growth

    Science.gov (United States)

    Gaidies, F.; Pattison, D. R. M.; de Capitani, C.

    2011-11-01

    The formation of metamorphic garnet during isobaric heating is simulated on the basis of the classical nucleation and reaction rate theories and Gibbs free energy dissipation in a multi-component model system. The relative influences of interfacial energy, chemical mobility at the surface of garnet clusters, heating rate and pressure on interface-controlled garnet nucleation and growth kinetics are studied. It is found that the interfacial energy controls the departure from equilibrium required to nucleate garnet if attachment and detachment processes at the surface of garnet limit the overall crystallization rate. The interfacial energy for nucleation of garnet in a metapelite of the aureole of the Nelson Batholith, BC, is estimated to range between 0.03 and 0.3 J/m2 at a pressure of ca. 3,500 bar. This corresponds to a thermal overstep of the garnet-forming reaction of ca. 30°C. The influence of the heating rate on thermal overstepping is negligible. A significant feedback is predicted between chemical fractionation associated with garnet formation and the kinetics of nucleation and crystal growth of garnet, giving rise to its lognormal-shaped crystal size distribution.

  16. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  17. The Geodynamo: Models and supporting experiments

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, U.; Stieglitz, R.

    2003-03-01

    The magnetic field is a characteristic feature of our planet Earth. It shields the biosphere against particle radiation from space and, through its direction, provides orientation to living creatures. The question of its origin has challenged scientists to find sound explanations. Major progress has been achieved during the last two decades in developing dynamo models and performing corroborating laboratory experiments to explain convincingly the principle of the Earth's magnetic field. The article reports some significant steps towards our present understanding of this subject and outlines in particular relevant experiments, which either substantiate crucial elements of self-excitation of magnetic fields or demonstrate dynamo action completely. The authors are aware that they have not addressed all aspects of geomagnetic studies; rather, they have selected the material from the huge amount of literature so as to motivate the recently growing interest in experimental dynamo research. (orig.)

  18. Argonne Bubble Experiment Thermal Model Development II

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at three beam power levels, 6, 12 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was observed. This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiations. The previous report described an initial analysis performed on a geometry that had not been updated to reflect the as-built solution vessel. Here, the as-built geometry is used. Monte-Carlo N-Particle (MCNP) calculations were performed on the updated geometry, and these results were used to define the power deposition profile for the CFD analyses, which were performed using Fluent, Ver. 16.2. CFD analyses were performed for the 12 and 15 kW irradiations, and further improvements to the model were incorporated, including the consideration of power deposition in nearby vessel components, gas mixture composition, and bubble size distribution. The temperature results of the CFD calculations are compared to experimental measurements.

  19. Blast Loading Experiments of Surrogate Models for Tbi Scenarios

    Science.gov (United States)

    Alley, M. D.; Son, S. F.

    2009-12-01

    This study aims to characterize the interaction of explosive blast waves through simulated anatomical models. We have developed physical models and a systematic approach for testing traumatic brain injury (TBI) mechanisms and occurrences. A simplified series of models consisting of spherical PMMA shells housing synthetic gelatins as brain simulants have been utilized. A series of experiments was conducted to compare the sensitivity of the system response to mechanical properties of the simulants under high strain-rate explosive blasts. Small explosive charges were directed at the models to produce a realistic blast wave in a scaled laboratory test cell setting. Blast profiles were measured and analyzed to compare system response severity. High-speed shadowgraph imaging captured blast wave interaction with the head model while particle tracking captured internal response for displacement and strain correlation. The results suggest amplification of shock waves inside the head near material interfaces due to impedance mismatches. In addition, significant relative displacement was observed between the interacting materials suggesting large strain values of nearly 5%. Further quantitative results were obtained through shadowgraph imaging of the blasts confirming a separation of time scales between blast interaction and bulk movement. These results lead to the conclusion that primary blast effects could cause TBI occurrences.

  20. Vulnerability of Russian regions to natural risk: experience of quantitative assessment

    OpenAIRE

    Petrova, E.

    2006-01-01

    International audience; One of the important tracks leading to natural risk prevention, disaster mitigation or the reduction of losses due to natural hazards is the vulnerability assessment of an "at-risk" region. The majority of researchers propose to assess vulnerability according to an expert evaluation of several qualitative characteristics, scoring each of them usually using three ratings: low, average, and high. Unlike these investigations, we attempted a quantitative vulnerability asse...

  1. Experiments for foam model development and validation.

    Energy Technology Data Exchange (ETDEWEB)

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F. (Honeywell Federal Manufacturing and Technologies, Kansas City Plant, Kansas City, MO); Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  2. Experience with the CMS Event Data Model

    Energy Technology Data Exchange (ETDEWEB)

    Elmer, P.; /Princeton U.; Hegner, B.; /CERN; Sexton-Kennedy, L.; /Fermilab

    2009-06-01

    The re-engineered CMS EDM was presented at CHEP in 2006. Since that time we have gained a lot of operational experience with the chosen model. We will present some of our findings, and attempt to evaluate how well it is meeting its goals. We will discuss some of the new features that have been added since 2006 as well as some of the problems that have been addressed. Also discussed is the level of adoption throughout CMS, which spans the trigger farm up to the final physics analysis. Future plans, in particular dealing with schema evolution and scaling, will be discussed briefly.

  3. Multiaxial behavior of foams - Experiments and modeling

    Science.gov (United States)

    Maheo, Laurent; Guérard, Sandra; Rio, Gérard; Donnard, Adrien; Viot, Philippe

    2015-09-01

    The behavior of cellular materials is strongly related to the pressure level inside the material. It is therefore important to use experiments which can highlight (i) the pressure-volume behavior and (ii) the shear-shape behavior at different pressure levels. The authors propose the use of hydrostatic compression, shear and combined pressure-shear tests to determine cellular material behavior. Finite element modeling must take these behavior specificities into account. The authors chose a behavior law with hyperelastic, viscous and hysteretic contributions. Specific developments have been performed on the hyperelastic contribution by separating the spherical and deviatoric parts, in order to account for the volume-change and shape-change characteristics of cellular materials.

  4. Quantitative Measurement of Cerebral Perfusion with Intravoxel Incoherent Motion in Acute Ischemia Stroke: Initial Clinical Experience

    Institute of Scientific and Technical Information of China (English)

    Li-Bao Hu; Nan Hong; Wen-Zhen Zhu

    2015-01-01

    Background: Intravoxel incoherent motion (IVIM) has the potential to provide both diffusion and perfusion information without an exogenous contrast agent, and its application to the brain is promising; however, feasibility studies are relatively scarce. The aim of this study is to assess the feasibility of IVIM perfusion in patients with acute ischemic stroke (AIS). Methods: Patients with suspected AIS were examined by magnetic resonance imaging within 24 h of symptom onset. Fifteen patients (mean age 68.7 ± 8.0 years) who underwent arterial spin labeling (ASL) and diffusion-weighted imaging (DWI) and were identified as having AIS with an ischemic penumbra were enrolled, where the ischemic penumbra refers to the mismatch area between ASL and DWI. Eleven different b-values were applied in the biexponential model. Regions of interest were selected in ischemic penumbras and contralateral normal brain regions. Fast apparent diffusion coefficients (ADCs) and ASL cerebral blood flow (CBF) were measured. The paired t-test was applied to compare ASL CBF, fast ADC, and slow ADC measurements between ischemic penumbras and contralateral normal brain regions. Linear regression and Pearson's correlation were used to evaluate the correlations among quantitative results. Results: The fast ADCs and ASL CBFs of ischemic penumbras were significantly lower than those of the contralateral normal brain regions (1.93 ± 0.78 μm2/ms vs. 3.97 ± 2.49 μm2/ms, P = 0.007; 13.5 ± 4.5 ml·100 g-1·min-1 vs. 29.1 ± 12.7 ml·100 g-1·min-1, P < 0.001, respectively). No significant difference was observed in slow ADCs between ischemic penumbras and contralateral normal brain regions (0.203 ± 0.090 μm2/ms vs. 0.198 ± 0.100 μm2/ms, P = 0.451). Compared with contralateral normal brain regions, both CBFs and fast ADCs decreased in ischemic penumbras while slow ADCs remained the same. A significant correlation was detected between fast ADCs and ASL CBFs (r = 0.416, P < 0.05). No statistically significant correlation was

  5. A quantitative quantum chemical model of the Dewar-Knott color rule for cationic diarylmethanes

    Science.gov (United States)

    Olsen, Seth

    2012-04-01

    We document the quantitative manifestation of the Dewar-Knott color rule in a four-electron, three-orbital state-averaged complete active space self-consistent field (SA-CASSCF) model of a series of bridge-substituted cationic diarylmethanes. We show that the lowest excitation energies calculated using multireference perturbation theory based on the model are linearly correlated with the development of hole density in an orbital localized on the bridge, and the depletion of pair density in the same orbital. We quantitatively express the correlation in the form of a generalized Hammett equation.

  6. A Quantitative bgl Operon Model for E. coli Requires BglF Conformational Change for Sugar Transport

    Science.gov (United States)

    Chopra, Paras; Bender, Andreas

    The bgl operon is responsible for the metabolism of β-glucoside sugars such as salicin or arbutin in E. coli. Its regulatory system involves both positive and negative feedback mechanisms and it can be assumed to be more complex than that of the more closely studied lac and trp operons. We have developed a quantitative model for the regulation of the bgl operon which is subject to in silico experiments investigating its behavior under different hypothetical conditions. Upon administration of 5 mM salicin as an inducer our model shows 80-fold induction, which compares well with the 60-fold induction measured experimentally. Under practical conditions 5-10 mM inducer are employed, which is in line with the minimum inducer concentration of 1 mM required by our model. The necessity of BglF conformational change for sugar transport has been hypothesized previously, and in line with those hypotheses our model shows only minor induction if conformational change is not allowed. Overall, this first quantitative model for the bgl operon gives reasonable predictions that are close to experimental results (where measured). It will be further refined as values of the parameters are determined experimentally. The model was developed in Systems Biology Markup Language (SBML) and it is available from the authors and from the Biomodels repository [www.ebi.ac.uk/biomodels].

  7. Forces between permanent magnets: experiments and model

    Science.gov (United States)

    González, Manuel I.

    2017-03-01

    This work describes a very simple, low-cost experimental setup designed for measuring the force between permanent magnets. The experiment consists of placing one of the magnets on a balance, attaching the other magnet to a vertical height gauge, aligning carefully both magnets and measuring the load on the balance as a function of the gauge reading. A theoretical model is proposed to compute the force, assuming uniform magnetisation and based on laws and techniques accessible to undergraduate students. A comparison between the model and the experimental results is made, and good agreement is found at all distances investigated. In particular, it is also found that the force behaves as r^-4 at large distances, as expected.
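
    The reported r^-4 behaviour at large separation is what the coaxial point-dipole approximation predicts, F = 3·mu0·m1·m2 / (2·pi·z^4). The sketch below evaluates that limit; the magnetic moments and separations are hypothetical, not values from the experiment.

        # Force between two aligned coaxial magnetic dipoles: F = 3*mu0*m1*m2 / (2*pi*z^4).
        # Point-dipole limit consistent with the r^-4 behaviour reported above;
        # the moments and separations below are hypothetical.
        import numpy as np

        MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

        def dipole_force(m1, m2, z):
            """Force (N) between aligned coaxial dipoles m1, m2 (A*m^2) separated by z (m)."""
            return 3.0 * MU0 * m1 * m2 / (2.0 * np.pi * z**4)

        for z_mm in (20, 40, 80):
            print(f"z = {z_mm} mm -> F = {dipole_force(0.5, 0.5, z_mm * 1e-3):.3e} N")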

  8. Tough and tunable adhesion of hydrogels: experiments and models

    Science.gov (United States)

    Zhang, Teng; Yuk, Hyunwoo; Lin, Shaoting; Parada, German A.; Zhao, Xuanhe

    2017-06-01

    As polymer networks infiltrated with water, hydrogels are major constituents of animal and plant bodies and have diverse engineering applications. While natural hydrogels can robustly adhere to other biological materials, such as bonding of tendons and cartilage on bones and adhesive plaques of mussels, it is challenging to achieve such tough adhesions between synthetic hydrogels and engineering materials. Recent experiments show that chemically anchoring long-chain polymer networks of tough synthetic hydrogels on solid surfaces create adhesions tougher than their natural counterparts, but the underlying mechanism has not been well understood. It is also challenging to tune systematically the adhesion of hydrogels on solids. Here, we provide a quantitative understanding of the mechanism for tough adhesions of hydrogels on solid materials via a combination of experiments, theory, and numerical simulations. Using a coupled cohesive-zone and Mullins-effect model validated by experiments, we reveal the interplays of intrinsic work of adhesion, interfacial strength, and energy dissipation in bulk hydrogels in order to achieve tough adhesions. We further show that hydrogel adhesion can be systematically tuned by tailoring the hydrogel geometry and silanization time of solid substrates, corresponding to the control of energy dissipation zone and intrinsic work of adhesion, respectively. The current work further provides a theoretical foundation for rational design of future biocompatible and underwater adhesives.

  9. A quantitative model of human DNA base excision repair. I. mechanistic insights

    OpenAIRE

    Sokhansanj, Bahrad A.; Rodrigue, Garry R.; Fitch, J. Patrick; David M Wilson

    2002-01-01

    Base excision repair (BER) is a multistep process involving the sequential activity of several proteins that cope with spontaneous and environmentally induced mutagenic and cytotoxic DNA damage. Quantitative kinetic data on single proteins of BER have been used here to develop a mathematical model of the BER pathway. This model was then employed to evaluate mechanistic issues and to determine the sensitivity of pathway throughput to altered enzyme kinetics. Notably, the model predicts conside...

  10. A Quantitative Geochemical Target for Modeling the Formation of the Earth and Moon

    Science.gov (United States)

    Boyce, Jeremy W.; Barnes, Jessica J.; McCubbin, Francis M.

    2017-01-01

    The past decade has been one of geochemical, isotopic, and computational advances that are bringing the laboratory measurements and computational modeling neighborhoods of the Earth-Moon community to ever closer proximity. We are now, however, in the position to become even better neighbors: modelers can generate testable hypotheses for geochemists, and geochemists can provide quantitative targets for modelers. Here we present a robust example of the latter based on Cl isotope measurements of mare basalts.

  11. Modeling of Carbon Migration During JET Injection Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Strachan, J. D.; Likonen, J.; Coad, P.; Rubel, M.; Widdowson, A.; Airila, M.; Andrew, P.; Brezinsek, S.; Corrigan, G.; Esser, H. G.; Jachmich, S.; Kallenbach, A.; Kirschner, A.; Kreter, A.; Matthews, G. F.; Philipps, V.; Pitts, R. A.; Spence, J.; Stamp, M.; Wiesen, S.

    2008-10-15

    JET has performed two dedicated carbon migration experiments on the final run day of separate campaigns (2001 and 2004) using 13CH4 methane injected into repeated discharges. The EDGE2D/NIMBUS code modelled the carbon migration in both experiments. This paper describes this modelling and identifies a number of important migration pathways: (1) deposition and erosion near the injection location, (2) migration through the main chamber SOL, (3) migration through the private flux region aided by E x B drifts, and (4) neutral migration originating near the strike points. In H-Mode, type I ELMs are calculated to influence the migration by enhancing erosion during the ELM peak and increasing the long-range migration immediately following the ELM. The erosion/re-deposition cycle along the outer target leads to a multistep migration of 13C towards the separatrix which is called 'walking'. This walking created carbon neutrals at the outer strike point and led to 13C deposition in the private flux region. Although several migration pathways have been identified, quantitative analyses are hindered by experimental uncertainty in divertor leakage, and the lack of measurements at locations such as gaps and shadowed regions.

  12. Rotating, hydromagnetic laboratory experiment modelling planetary cores

    Science.gov (United States)

    Kelley, Douglas H.

    2009-10-01

    This dissertation describes a series of laboratory experiments motivated by planetary cores and the dynamo effect, the mechanism by which the flow of an electrically conductive fluid can give rise to a spontaneous magnetic field. Our experimental apparatus, meant to be a laboratory model of Earth's core, contains liquid sodium between an inner, solid sphere and an outer, spherical shell. The fluid is driven by the differential rotation of these two boundaries, each of which is connected to a motor. Applying an axial, DC magnetic field, we use a collection of Hall probes to measure the magnetic induction that results from interactions between the applied field and the flowing, conductive fluid. We have observed and identified inertial modes, which are bulk oscillations of the fluid restored by the Coriolis force. Over-reflection at a shear layer is one mechanism capable of exciting such modes, and we have developed predictions of both onset boundaries and mode selection from over-reflection theory which are consistent with our observations. Also, motivated by previous experimental devices that used ferromagnetic boundaries to achieve dynamo action, we have studied the effects of a soft iron (ferromagnetic) inner sphere on our apparatus, again finding inertial waves. We also find that all behaviors are more broadband and generally more nonlinear in the presence of a ferromagnetic boundary. Our results with a soft iron inner sphere have implications for other hydromagnetic experiments with ferromagnetic boundaries, and are appropriate for comparison to numerical simulations as well. From our observations we conclude that inertial modes almost certainly occur in planetary cores and will occur in future rotating experiments. In fact, the predominance of inertial modes in our experiments and in other recent work leads to a new paradigm for rotating turbulence, starkly different from turbulence theories based on assumptions of isotropy and homogeneity, starting instead

  13. Ecology-Centered Experiences among Children and Adolescents: A Qualitative and Quantitative Analysis

    Science.gov (United States)

    Orton, Judy

    2013-01-01

    The present research involved two studies that considered "ecology-centered experiences" (i.e., experiences with living things) as a factor in children's environmental attitudes and behaviors and adolescents' ecological understanding. The first study (Study 1) examined how a community garden provides children in an urban setting the…

  15. Quantitative photoacoustic tomography using forward and adjoint Monte Carlo models of radiance

    CERN Document Server

    Hochuli, Roman; Arridge, Simon; Cox, Ben

    2016-01-01

    Forward and adjoint Monte Carlo (MC) models of radiance are proposed for use in model-based quantitative photoacoustic tomography. A 2D radiance MC model using a harmonic angular basis is introduced and validated against analytic solutions for the radiance in heterogeneous media. A gradient-based optimisation scheme is then used to recover 2D absorption and scattering coefficient distributions from simulated photoacoustic measurements. It is shown that the functional gradients, which are a challenge to compute efficiently using MC models, can be calculated directly from the coefficients of the harmonic angular basis used in the forward and adjoint models. This work establishes a framework for transport-based quantitative photoacoustic tomography that can fully exploit emerging highly parallel computing architectures.

  16. Development of life story experience (LSE) scales for migrant dentists in Australia: a sequential qualitative-quantitative study.

    Science.gov (United States)

    Balasubramanian, M; Spencer, A J; Short, S D; Watkins, K; Chrisopoulos, S; Brennan, D S

    2016-09-01

    The integration of qualitative and quantitative approaches introduces new avenues to bridge the strengths and address the weaknesses of both methods. To develop measure(s) for migrant dentist experiences in Australia through a mixed methods approach. The sequential qualitative-quantitative design involved first the harvesting of data items from a qualitative study, followed by a national survey of migrant dentists in Australia. Statements representing unique experiences in migrant dentists' life stories were deployed in the survey questionnaire, using a five-point Likert scale. Factor analysis was used to examine component factors. Eighty-two statements from 51 participants were harvested from the qualitative analysis. A total of 1,022 of 1,977 migrant dentists (response rate 54.5%) returned completed questionnaires. Factor analysis supported an initial eight-factor solution; further scale development and reliability analysis led to five scales with a final list of 38 life story experience (LSE) items. Three scales were based on home country events: health system and general lifestyle concerns (LSE1; 10 items), society and culture (LSE4; 4 items) and career development (LSE5; 4 items). Two scales included migrant experiences in Australia: appreciation towards the Australian way of life (LSE2; 13 items) and settlement concerns (LSE3; 7 items). The five life story experience scales provided the necessary conceptual clarity and empirical grounding to explore migrant dentist experiences in Australia. Being based on original migrant dentist narrations, these scales have the potential to offer in-depth insights for policy makers and support future research on dentist migration.

  17. Liquid Chromatographic Determination of Nitroanilines: An Experiment for the Quantitative Analysis Laboratory.

    Science.gov (United States)

    Cantwell, Frederick F.; Brown, David W.

    1981-01-01

    Describes a three-hour liquid chromatography experiment involving rapid separation of colored compounds in glass columns packed with a nonpolar absorbent. Includes apparatus design, sample preparation, experimental procedures, and advantages for this determination. (SK)

  18. A Novel Quantitative Analysis Model for Information System Survivability Based on Conflict Analysis

    Institute of Scientific and Technical Information of China (English)

    WANG Jian; WANG Huiqiang; ZHAO Guosheng

    2007-01-01

    This paper describes a novel quantitative analysis model for system survivability based on conflict analysis, which provides a direct-viewing survivable situation. Based on the three-dimensional state space of conflict, each player's efficiency matrix on its credible motion set can be obtained. The player whose desire is the strongest initiates the move, and the overall state transition matrix of the information system can then be obtained. In addition, the process of modeling and stability analysis of conflict can be converted into a Markov analysis process, so the obtained results, together with the occurrence probability of each feasible situation, help the players to quantitatively judge the probability of the situations they pursue in the conflict. Compared with the existing methods, which are limited to post-explanation of a system's survivable situation, the proposed model is suitable for quantitatively analyzing and forecasting the future development of system survivability. The experimental results show that the model may be effectively applied to quantitative analysis of survivability. Moreover, there is a good application prospect in practice.

  19. Quantitative genetics model as the unifying model for defining genomic relationship and inbreeding coefficient.

    Science.gov (United States)

    Wang, Chunkao; Da, Yang

    2014-01-01

    The traditional quantitative genetics model was used as the unifying approach to derive six existing and new definitions of genomic additive and dominance relationships. The theoretical differences of these definitions were in the assumptions of equal SNP effects (equivalent to across-SNP standardization), equal SNP variances (equivalent to within-SNP standardization), and expected or sample SNP additive and dominance variances. The six definitions of genomic additive and dominance relationships on average were consistent with the pedigree relationships, but had individual genomic specificity and large variations not observed from pedigree relationships. These large variations may allow finding least related genomes even within the same family for minimizing genomic relatedness among breeding individuals. The six definitions of genomic relationships generally had similar numerical results in genomic best linear unbiased predictions of additive effects (GBLUP) and similar genomic REML (GREML) estimates of additive heritability. Predicted SNP dominance effects and GREML estimates of dominance heritability were similar within definitions assuming equal SNP effects or within definitions assuming equal SNP variance, but had differences between these two groups of definitions. We proposed a new measure of genomic inbreeding coefficient based on parental genomic co-ancestry coefficient and genomic additive correlation as a genomic approach for predicting offspring inbreeding level. This genomic inbreeding coefficient had the highest correlation with pedigree inbreeding coefficient among the four methods evaluated for calculating genomic inbreeding coefficient in a Holstein sample and a swine sample.
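
    As a sketch of one widely used definition of the additive genomic relationship matrix (allele counts centred by twice the allele frequency, scaled by the expected SNP variance), the code below uses randomly generated toy genotypes; it illustrates only one of the several definitions the paper unifies.

        # Sketch of one common additive genomic relationship matrix, G = ZZ' / (2 * sum(p*(1-p))),
        # where Z holds 0/1/2 allele counts centred by 2p.  Toy random genotypes, illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        n_ind, n_snp = 10, 500
        M = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)   # allele counts per individual x SNP

        p = M.mean(axis=0) / 2.0                  # allele frequencies
        Z = M - 2.0 * p                           # centre each SNP
        G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

        print("diagonal elements (approx. 1 + genomic inbreeding):", np.round(np.diag(G)[:5], 3))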

  20. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    Science.gov (United States)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)-1, cardiac output = 3, 5, 8 L min-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5% and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This suggests that
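
    For orientation only, a much simpler one-compartment (Kety-type) sketch of how a tissue time-attenuation curve follows from an arterial input function is given below; this is not one of the two-compartment or distributed models compared in the study, and the input function and parameters are illustrative.

        # Minimal one-compartment perfusion sketch:
        # C_t(t) = MBF * integral( C_a(tau) * exp(-(MBF/lambda)*(t - tau)) dtau ).
        # Far simpler than the models compared in the study; all values are illustrative.
        import numpy as np

        dt = 0.5                                    # s
        t = np.arange(0, 60, dt)
        c_a = (t / 10.0)**3 * np.exp(-t / 4.0)      # toy gamma-variate arterial input (a.u.)

        mbf = 1.0 / 60.0                            # ml/(s*g)  (= 1 ml per min per g)
        lam = 0.5                                   # partition coefficient, ml/g

        impulse = mbf * np.exp(-(mbf / lam) * t)    # tissue impulse response
        c_t = np.convolve(c_a, impulse)[:t.size] * dt

        print("peak tissue enhancement (a.u.):", round(c_t.max(), 4))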

  1. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott; Morris, Richard V.; Ehlmann, Bethany; Dyar, M. Darby

    2017-03-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the laser-induced breakdown spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares (PLS) regression, is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
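
    A schematic sketch of the sub-model idea with scikit-learn PLS regression follows: separate models are trained on restricted composition ranges and their predictions combined. The synthetic spectra, the single split point and the simple blending rule are stand-ins for the published ChemCam procedure, not a reproduction of it.

        # Schematic "sub-model" blending with PLS regression; synthetic data, simplified blending.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        n, n_channels = 200, 300
        y = rng.uniform(0, 100, n)                                  # e.g. an oxide abundance, wt.%
        X = np.outer(y, rng.normal(size=n_channels)) + rng.normal(size=(n, n_channels))

        full = PLSRegression(n_components=5).fit(X, y)              # model over the full range
        low = PLSRegression(n_components=5).fit(X[y < 50], y[y < 50])
        high = PLSRegression(n_components=5).fit(X[y >= 50], y[y >= 50])

        def blended_prediction(x):
            """Use the full-range model to pick a range, then defer to the matching sub-model."""
            x = x.reshape(1, -1)
            first_guess = full.predict(x).ravel()[0]
            sub = low if first_guess < 50 else high
            return sub.predict(x).ravel()[0]

        print("blended estimate:", round(blended_prediction(X[0]), 1), "true:", round(y[0], 1))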

  2. Full-Scale Cookoff Model Validation Experiments

    Energy Technology Data Exchange (ETDEWEB)

    McClelland, M A; Rattanapote, M K; Heimdahl, E R; Erikson, W E; Curran, P O; Atwood, A I

    2003-11-25

    This paper presents the experimental results of the third and final phase of a cookoff model validation effort. In this phase of the work, two generic Heavy Wall Penetrators (HWP) were tested in two heating orientations. Temperature and strain gage data were collected over the entire test period. Predictions for time and temperature of reaction were made prior to release of the live data. Predictions were comparable to the measured values and were highly dependent on the established boundary conditions. Both HWP tests failed at a weld located near the aft closure of the device. More than 90 percent of unreacted explosive was recovered in the end heated experiment and less than 30 percent recovered in the side heated test.

  3. Multi-objective intelligent coordinating optimization blending system based on qualitative and quantitative synthetic model

    Institute of Scientific and Technical Information of China (English)

    WANG Ya-lin; MA Jie; GUI Wei-hua; YANG Chun-hua; ZHANG Chuan-fu

    2006-01-01

    A multi-objective intelligent coordinating optimization strategy based on a qualitative and quantitative synthetic model for the Pb-Zn sintering blending process was proposed to obtain the optimal mixture ratio. The mechanism and neural network quantitative models for predicting compositions and rule models for expert reasoning were constructed based on statistical data and empirical knowledge. An expert reasoning method based on these models was proposed to solve the blending optimization problem, including multi-objective optimization for the first blending process and area optimization for the second blending process, and to determine the optimal mixture ratio that meets the requirement of intelligent coordination. The results show that the qualified rates of agglomerate Pb, Zn and S compositions are increased by 7.1%, 6.5% and 6.9%, respectively, and the fluctuation of sintering permeability is reduced by 7.0%, which effectively stabilizes the agglomerate compositions and the permeability.

  4. The evolution and extinction of the ichthyosaurs from the perspective of quantitative ecospace modelling.

    Science.gov (United States)

    Dick, Daniel G; Maxwell, Erin E

    2015-07-01

    The role of niche specialization and narrowing in the evolution and extinction of the ichthyosaurs has been widely discussed in the literature. However, previous studies have concentrated on a qualitative discussion of these variables only. Here, we use the recently developed approach of quantitative ecospace modelling to provide a high-resolution quantitative examination of the changes in dietary and ecological niche experienced by the ichthyosaurs throughout their evolution in the Mesozoic. In particular, we demonstrate that despite recent discoveries increasing our understanding of taxonomic diversity among the ichthyosaurs in the Cretaceous, when viewed from the perspective of ecospace modelling, a clear trend of ecological contraction is visible as early as the Middle Jurassic. We suggest that this ecospace redundancy, if carried through to the Late Cretaceous, could have contributed to the extinction of the ichthyosaurs. Additionally, our results suggest a novel model to explain ecospace change, termed the 'migration model'.

  5. Nanofluid Drop Evaporation: Experiment, Theory, and Modeling

    Science.gov (United States)

    Gerken, William James

    Nanofluids, stable colloidal suspensions of nanoparticles in a base fluid, have potential applications in the heat transfer, combustion and propulsion, manufacturing, and medical fields. Experiments were conducted to determine the evaporation rate of room temperature, millimeter-sized pendant drops of ethanol laden with varying amounts (0-3% by weight) of 40-60 nm aluminum nanoparticles (nAl). Time-resolved high-resolution drop images were collected for the determination of early-time evaporation rate (D^2/D_0^2 > 0.75), shown to exhibit D-square law behavior, and surface tension. Results show an asymptotic decrease in pendant drop evaporation rate with increasing nAl loading. The evaporation rate decreases by approximately 15% at around 1% to 3% nAl loading relative to the evaporation rate of pure ethanol. Surface tension was observed to be unaffected by nAl loading up to 3% by weight. A model was developed to describe the evaporation of the nanofluid pendant drops based on D-square law analysis for the gas domain and a description of the reduction in liquid fraction available for evaporation due to nanoparticle agglomerate packing near the evaporating drop surface. Model predictions are in relatively good agreement with experiment, within a few percent of measured nanofluid pendant drop evaporation rate. The evaporation of pinned nanofluid sessile drops was also considered via modeling. It was found that the same mechanism for nanofluid evaporation rate reduction used to explain pendant drops could be used for sessile drops. That mechanism is a reduction in evaporation rate due to a reduction in available ethanol for evaporation at the drop surface caused by the packing of nanoparticle agglomerates near the drop surface. Comparisons of the present modeling predictions with sessile drop evaporation rate measurements reported for nAl/ethanol nanofluids by Sefiane and Bennacer [11] are in fairly good agreement. Portions of this abstract previously appeared as: W. J
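
    The D-square law referenced above states that the squared drop diameter decreases linearly in time, D(t)^2 = D0^2 - K·t. A small sketch is given below; the initial diameter and evaporation constant K are hypothetical, not values measured in the experiments.

        # D-square law sketch: D(t)^2 = D0^2 - K*t.  Initial diameter and K are hypothetical.
        import numpy as np

        d0 = 1.5e-3          # initial drop diameter, m
        k = 1.0e-8           # evaporation constant, m^2/s (illustrative)

        t = np.linspace(0.0, 0.5 * d0**2 / k, 6)   # stop while D^2/D0^2 > 0.5
        d_squared = d0**2 - k * t
        print("D^2/D0^2:", np.round(d_squared / d0**2, 3))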

  6. Quantitative Verification of a Force-based Model for Pedestrian Dynamics

    CERN Document Server

    Chraibi, Mohcine; Schadschneider, Andreas; Mackens, Wolfgang

    2009-01-01

    This paper introduces a spatially continuous force-based model for simulating pedestrian dynamics. The main intention of this work is the quantitative description of pedestrian movement through bottlenecks and in corridors. Measurements of flow and density at bottlenecks will be presented and compared with empirical data. Furthermore the fundamental diagram for the movement in a corridor is reproduced. The results of the proposed model show a good agreement with empirical data.
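
    To make the model class concrete, the sketch below shows a generic force-based update step (a driving term towards a desired velocity plus pairwise repulsion); it is not the specific force terms of the paper, and all parameter values are illustrative.

        # Generic force-based pedestrian update (one Euler step); illustrative parameters only.
        import numpy as np

        def step(pos, vel, goal, dt=0.05, v_desired=1.3, tau=0.5, repulsion=2.0, r_cut=1.0):
            """Advance all pedestrians by one step of a driving force plus pairwise repulsion."""
            n = len(pos)
            force = np.zeros_like(pos)
            # driving term: relax towards the desired velocity pointing at the goal
            direction = goal - pos
            direction /= np.linalg.norm(direction, axis=1, keepdims=True)
            force += (v_desired * direction - vel) / tau
            # pairwise repulsion between nearby pedestrians
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    d = pos[i] - pos[j]
                    dist = np.linalg.norm(d)
                    if dist < r_cut:
                        force[i] += repulsion * (r_cut - dist) * d / max(dist, 1e-6)
            vel = vel + force * dt
            pos = pos + vel * dt
            return pos, vel

        pos = np.array([[0.0, 0.0], [0.5, 0.1]])
        vel = np.zeros_like(pos)
        goal = np.array([[10.0, 0.0], [10.0, 0.0]])
        for _ in range(100):
            pos, vel = step(pos, vel, goal)
        print("positions after 5 s:", np.round(pos, 2))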

  7. ITER transient consequences for material damage: modelling versus experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bazylev, B [Forschungszentrum Karlsruhe, IHM, P O Box 3640, 76021 Karlsruhe (Germany); Janeschitz, G [Forschungszentrum Karlsruhe, Fusion, P O Box 3640, 76021 Karlsruhe (Germany); Landman, I [Forschungszentrum Karlsruhe, IHM, P O Box 3640, 76021 Karlsruhe (Germany); Pestchanyi, S [Forschungszentrum Karlsruhe, IHM, P O Box 3640, 76021 Karlsruhe (Germany); Loarte, A [EFDA Close Support Unit Garching, Boltmannstr 2, D-85748 Garching (Germany); Federici, G [ITER International Team, Garching Working Site, Boltmannstr 2, D-85748 Garching (Germany); Merola, M [ITER International Team, Garching Working Site, Boltmannstr 2, D-85748 Garching (Germany); Linke, J [Forschungszentrum Juelich, EURATOM-Association, D-52425 Juelich (Germany); Zhitlukhin, A [SRC RF TRINITI, Troitsk, 142190, Moscow Region (Russian Federation); Podkovyrov, V [SRC RF TRINITI, Troitsk, 142190, Moscow Region (Russian Federation); Klimov, N [SRC RF TRINITI, Troitsk, 142190, Moscow Region (Russian Federation); Safronov, V [SRC RF TRINITI, Troitsk, 142190, Moscow Region (Russian Federation)

    2007-03-15

    Carbon-fibre composite (CFC) and tungsten macrobrush armours are foreseen as PFC for the ITER divertor. In ITER the main mechanisms of metallic armour damage remain surface melting and melt motion erosion. In the case of CFC armour, due to the rather different heat conductivities of CFC fibres, a noticeable erosion of the PAN bundles may occur at rather small heat loads. Experiments carried out in the plasma gun facility QSPA-T for ITER-like edge localized mode (ELM) heat loads also demonstrated significant erosion of the frontal and lateral brush edges. Numerical simulations of the CFC and tungsten (W) macrobrush target damage accounting for the heat loads at the face and lateral brush edges were carried out for QSPA-T conditions using the three-dimensional (3D) code PHEMOBRID. The modelling results of CFC damage are in good qualitative and quantitative agreement with the experiments. Estimation of the droplet splashing caused by the Kelvin-Helmholtz (KH) instability was performed.

  8. Gravimetric Analysis of Bismuth in Bismuth Subsalicylate Tablets: A Versatile Quantitative Experiment for Undergraduate Laboratories

    Science.gov (United States)

    Davis, Eric; Cheung, Ken; Pauls, Steve; Dick, Jonathan; Roth, Elijah; Zalewski, Nicole; Veldhuizen, Christopher; Coeler, Joel

    2015-01-01

    In this laboratory experiment, lower- and upper-division students dissolved bismuth subsalicylate tablets in acid and precipitated the resultant Bi[superscript 3+] in solution with sodium phosphate for a gravimetric determination of bismuth subsalicylate in the tablets. With a labeled concentration of 262 mg/tablet, the combined data from three…
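
    The back-calculation from the weighed precipitate to bismuth subsalicylate content is a simple gravimetric-factor computation. The sketch assumes BiPO4 as the weighed form (consistent with precipitation by sodium phosphate); the molar masses are approximate and the precipitate mass per tablet is a hypothetical example, not data from the experiment.

        # Gravimetric back-calculation, assuming the weighed form is BiPO4 and the analyte is
        # bismuth subsalicylate (C7H5BiO4).  Molar masses approximate; precipitate mass hypothetical.
        M_BIPO4 = 303.95   # g/mol
        M_BSS   = 362.09   # g/mol, bismuth subsalicylate

        def bss_mass_mg(precipitate_mg):
            """Convert mass of BiPO4 precipitate to the equivalent bismuth subsalicylate mass."""
            return precipitate_mg * M_BSS / M_BIPO4

        print(f"{bss_mass_mg(220.0):.0f} mg bismuth subsalicylate per tablet")  # vs. ~262 mg label claim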

  9. Quantitative Investigations of Biodiesel Fuel Using Infrared Spectroscopy: An Instrumental Analysis Experiment for Undergraduate Chemistry Students

    Science.gov (United States)

    Ault, Andrew P.; Pomeroy, Robert

    2012-01-01

    Biodiesel has gained attention in recent years as a renewable fuel source due to its reduced greenhouse gas and particulate emissions, and it can be produced within the United States. A laboratory experiment designed for students in an upper-division undergraduate laboratory is described to study biodiesel production and biodiesel mixing with…

  11. A Classifier Model based on the Features Quantitative Analysis for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Amir Jamshidnezhad

    2011-01-01

    Full Text Available In recent decades computer technology has developed considerably in the use of intelligent systems for classification. The development of HCI systems is highly dependent on an accurate understanding of emotions. However, facial expressions are difficult to classify with mathematical models because of their natural quality. In this paper, quantitative analysis is used to find the most effective feature movements between the selected facial feature points. Therefore, the features are extracted not only on the basis of psychological studies, but also using quantitative methods, to raise the accuracy of recognition. Also in this model, fuzzy logic and a genetic algorithm are used to classify facial expressions. The genetic algorithm is an exclusive attribute of the proposed model and is used for tuning the membership functions and increasing the accuracy.

  12. Review on modelling aspects in reversed-phase liquid chromatographic quantitative structure-retention relationships

    Energy Technology Data Exchange (ETDEWEB)

    Put, R. [FABI, Department of Analytical Chemistry and Pharmaceutical Technology, Pharmaceutical Institute, Vrije Universiteit Brussel (VUB), Laarbeeklaan 103, B-1090 Brussels (Belgium); Vander Heyden, Y. [FABI, Department of Analytical Chemistry and Pharmaceutical Technology, Pharmaceutical Institute, Vrije Universiteit Brussel (VUB), Laarbeeklaan 103, B-1090 Brussels (Belgium)], E-mail: yvanvdh@vub.ac.be

    2007-10-29

    In the literature an increasing interest in quantitative structure-retention relationships (QSRR) can be observed. After a short introduction on QSRR and other strategies proposed to deal with the starting-point selection problem prior to method development in reversed-phase liquid chromatography, a number of interesting papers dealing with QSRR models for reversed-phase liquid chromatography are reviewed. The main focus of this review paper is on the different modelling methodologies applied and the molecular descriptors used in the QSRR approaches. Besides two semi-quantitative approaches (i.e. principal component analysis and decision trees), these methodologies include artificial neural networks, partial least squares, uninformative variable elimination partial least squares, stochastic gradient boosting for tree-based models, random forests, genetic algorithms, multivariate adaptive regression splines, and two-step multivariate adaptive regression splines.

  13. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  14. The JBEI quantitative metabolic modeling library (jQMM): a python library for modeling microbial metabolism.

    Science.gov (United States)

    Birkel, Garrett W; Ghosh, Amit; Kumar, Vinay S; Weaver, Daniel; Ando, David; Backman, Tyler W H; Arkin, Adam P; Keasling, Jay D; Martín, Héctor García

    2017-04-05

    Modeling of microbial metabolism is a topic of growing importance in biotechnology. Mathematical modeling helps provide a mechanistic understanding for the studied process, separating the main drivers from the circumstantial ones, bounding the outcomes of experiments and guiding engineering approaches. Among different modeling schemes, the quantification of intracellular metabolic fluxes (i.e. the rate of each reaction in cellular metabolism) is of particular interest for metabolic engineering because it describes how carbon and energy flow throughout the cell. In addition to flux analysis, new methods for the effective use of the ever more readily available and abundant -omics data (i.e. transcriptomics, proteomics and metabolomics) are urgently needed. The jQMM library presented here provides an open-source, Python-based framework for modeling internal metabolic fluxes and leveraging other -omics data for the scientific study of cellular metabolism and bioengineering purposes. Firstly, it presents a complete toolbox for simultaneously performing two different types of flux analysis that are typically disjoint: Flux Balance Analysis and (13)C Metabolic Flux Analysis. Moreover, it introduces the capability to use (13)C labeling experimental data to constrain comprehensive genome-scale models through a technique called two-scale (13)C Metabolic Flux Analysis (2S-(13)C MFA). In addition, the library includes a demonstration of a method that uses proteomics data to produce actionable insights to increase biofuel production. Finally, the use of the jQMM library is illustrated through the addition of several Jupyter notebook demonstration files that enhance reproducibility and provide the capability to be adapted to the user's specific needs. jQMM will facilitate the design and metabolic engineering of organisms for biofuels and other chemicals, as well as investigations of cellular metabolism and leveraging -omics data. As an open source software project, we hope it
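
    In its generic form, flux balance analysis is a linear program: maximize an objective flux subject to steady-state mass balance S·v = 0 and bounds on each flux. The sketch below uses scipy on a toy three-reaction network rather than the jQMM API, whose calls are not reproduced here.

        # Generic flux balance analysis on a toy network with scipy's LP solver (not the jQMM API):
        # maximize the biomass flux subject to S v = 0 and per-flux bounds.
        import numpy as np
        from scipy.optimize import linprog

        # toy stoichiometric matrix: metabolites (rows) x reactions [uptake, conversion, biomass]
        S = np.array([[ 1, -1,  0],     # metabolite A
                      [ 0,  1, -1]])    # metabolite B
        bounds = [(0, 10), (0, 10), (0, 10)]     # flux bounds (arbitrary units)
        c = np.array([0, 0, -1])                 # linprog minimizes, so negate the biomass flux

        res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
        print("optimal fluxes:", res.x, "-> biomass flux:", res.x[-1])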

  15. New Quantitative Structure-Activity Relationship Models Improve Predictability of Ames Mutagenicity for Aromatic Azo Compounds.

    Science.gov (United States)

    Manganelli, Serena; Benfenati, Emilio; Manganaro, Alberto; Kulkarni, Sunil; Barton-Maclaren, Tara S; Honma, Masamitsu

    2016-10-01

    Existing Quantitative Structure-Activity Relationship (QSAR) models have limited predictive capabilities for aromatic azo compounds. In this study, 2 new models were built to predict Ames mutagenicity of this class of compounds. The first one made use of descriptors based on simplified molecular input-line entry system (SMILES), calculated with the CORAL software. The second model was based on the k-nearest neighbors algorithm. The statistical quality of the predictions from single models was satisfactory. The performance further improved when the predictions from these models were combined. The prediction results from other QSAR models for mutagenicity were also evaluated. Most of the existing models were found to be good at finding toxic compounds but resulted in many false positive predictions. The 2 new models specific for this class of compounds avoid this problem thanks to a larger set of related compounds as training set and improved algorithms.
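
    A schematic k-nearest-neighbours classifier of the kind mentioned above is shown below on random "descriptor" data; the descriptors, labels and cross-validation scheme are toy stand-ins, not the curated azo-compound dataset or the published model.

        # Schematic k-nearest-neighbours mutagenicity classifier on toy descriptor data.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(7)
        X = rng.normal(size=(120, 10))                   # 120 "compounds" x 10 descriptors
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # 1 = Ames positive (toy labelling rule)

        knn = KNeighborsClassifier(n_neighbors=5)
        scores = cross_val_score(knn, X, y, cv=5)
        print("5-fold CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))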

  16. Vulnerability of Russian regions to natural risk: experience of quantitative assessment

    Science.gov (United States)

    Petrova, E.

    2006-01-01

    One of the important tracks leading to natural risk prevention, disaster mitigation or the reduction of losses due to natural hazards is the vulnerability assessment of an "at-risk" region. The majority of researchers propose to assess vulnerability according to an expert evaluation of several qualitative characteristics, scoring each of them usually using three ratings: low, average, and high. Unlike these investigations, we attempted a quantitative vulnerability assessment using multidimensional statistical methods. Cluster analysis for all 89 Russian regions revealed five different types of region, each characterized by a single (rarely two) prevailing factor causing an increase in vulnerability. These factors are: the sensitivity of the technosphere to unfavorable influences; a "human factor"; a high volume of stored toxic waste that increases the possibility of NDs with serious consequences; the low per capita GRP, which determines reduced prevention and protection costs; the heightened liability of regions to natural disasters, which can be complicated by unfavorable social processes. The proposed methods permitted us to find differences in the prevailing risk factor (vulnerability factor) for the region types, which helps to show in which direction risk management should focus.

  17. Vulnerability of Russian regions to natural risk: experience of quantitative assessment

    Directory of Open Access Journals (Sweden)

    E. Petrova

    2006-01-01

    Full Text Available One of the important tracks leading to natural risk prevention, disaster mitigation or the reduction of losses due to natural hazards is the vulnerability assessment of an 'at-risk' region. The majority of researchers propose to assess vulnerability according to an expert evaluation of several qualitative characteristics, scoring each of them usually using three ratings: low, average, and high. Unlike these investigations, we attempted a quantitative vulnerability assessment using multidimensional statistical methods. Cluster analysis for all 89 Russian regions revealed five different types of region, each characterized by a single (rarely two) prevailing factor causing an increase in vulnerability. These factors are: the sensitivity of the technosphere to unfavorable influences; a 'human factor'; a high volume of stored toxic waste that increases the possibility of NDs with serious consequences; the low per capita GRP, which determines reduced prevention and protection costs; the heightened liability of regions to natural disasters, which can be complicated by unfavorable social processes. The proposed methods permitted us to find differences in the prevailing risk factor (vulnerability factor) for the region types, which helps to show in which direction risk management should focus.

  18. Infrequent near death experiences in severe brain injury survivors - A quantitative and qualitative study

    Directory of Open Access Journals (Sweden)

    Yongmei Hou

    2013-01-01

    Full Text Available Background: Near death experiences (NDE) are receiving increasing attention from the scientific community because not only do they provide a glimpse of the complexity of the mind-brain interactions in 'near-death' circumstances but also because they have significant and long-lasting effects on various psychological aspects of the survivors. The overall incidence of NDEs reported in the literature has varied widely, from a modest figure of 10% to around 35%, even up to an incredible figure of 72%, in persons who have faced a close brush with death. Somewhat similar to this range of difference in incidences are the differences prevalent in the opinions that theorists and researchers around the world harbor for explaining this phenomenon. Nonetheless, objective evidence has supported physiological theories the most. A wide range of physiological processes have been targeted for explaining NDEs. These include cerebral anoxia, chemical alterations like hypercapnia, the presence of endorphins, ketamine, and serotonin, or abnormal activity of the temporal lobe or the limbic system. In spite of the fact that the physiological theories of NDEs have revolved around derangements in the brain, no study to date has taken up the task of evaluating the experiences of near-death in patients where the specific injury has been to the brain. Most have evaluated NDEs in cardiac-arrest patients. Post-traumatic coma is one such state regarding which the literature seriously lacks any information related to NDEs. Patients recollecting any memory of their post-traumatic coma are valuable assets for NDE researchers and need special attention. Materials and Methods: Our present study was aimed at collecting this valuable information from survivors of severe head injury after a prolonged coma. The study was conducted in the head injury department of Guangdong 999 Brain Hospital, Guangzhou, China. Patients included in the study were the ones recovered from the posttraumatic

  19. A quantitative theory-versus-experiment comparison for the intense laser dissociation of H2+

    CERN Document Server

    Serov, V; Atabek, O; Billy, N

    2003-01-01

    A detailed theory-versus-experiment comparison is worked out for H2+ intense laser dissociation, based on angularly resolved photodissociation spectra recently recorded in H. Figger's group. In contrast to other experimental setups, it is an electric discharge (and not an optical excitation) that prepares the molecular ion, with the advantage, for the theoretical approach, of neglecting without loss of accuracy the otherwise important ionization-dissociation competition. An Abel transformation relates the dissociation probability starting from a single ro-vibrational state to the probability of observing a hydrogen atom at a given pixel of the detector plate. Some statistics on initial ro-vibrational distributions, together with spatial averaging over the laser focus area, lead to photofragment kinetic spectra, with well separated peaks attributed to single vibrational levels. An excellent theory-versus-experiment agreement is reached not only for the kinetic spectra, but also for the angular distributions of fr...

  20. Quantitative 3D investigation of Neuronal network in mouse spinal cord model

    Science.gov (United States)

    Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.

    2017-01-01

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.

  1. Eulerian hydrocode modeling of a dynamic tensile extrusion experiment (u)

    Energy Technology Data Exchange (ETDEWEB)

    Burkett, Michael W [Los Alamos National Laboratory; Clancy, Sean P [Los Alamos National Laboratory

    2009-01-01

    Eulerian hydrocode simulations utilizing the Mechanical Threshold Stress flow stress model were performed to provide insight into a dynamic extrusion experiment. The dynamic extrusion response of copper (three different grain sizes) and tantalum spheres were simulated with MESA, an explicit, 2-D Eulerian continuum mechanics hydrocode and compared with experimental data. The experimental data consisted of high-speed images of the extrusion process, recovered extruded samples, and post test metallography. The hydrocode was developed to predict large-strain and high-strain-rate loading problems. Some of the features of MESA include a high-order advection algorithm, a material interface tracking scheme and van Leer monotonic advection-limiting. The Mechanical Threshold Stress (MTS) model was utilized to evolve the flow stress as a function of strain, strain rate and temperature for copper and tantalum. Plastic strains exceeding 300% were predicted in the extrusion of copper at 400 m/s, while plastic strains exceeding 800% were predicted for Ta. Quantitative comparisons between the predicted and measured deformation topologies and extrusion rate were made. Additionally, predictions of the texture evolution (based upon the deformation rate history and the rigid body rotations experienced by the copper during the extrusion process) were compared with the orientation imaging microscopy measurements. Finally, comparisons between the calculated and measured influence of the initial texture on the dynamic extrusion response of tantalum were performed.

  2. Experience economy meets business model design

    DEFF Research Database (Denmark)

    Gudiksen, Sune Klok; Smed, Søren Graakjær; Poulsen, Søren Bolvig

    2012-01-01

    companies automatically get a higher price when offering an experience setting to the customer, as illustrated by the coffee example. Organizations that offer experiences still have an advantage, but as an increasing number of organizations enter the experience economy the competition naturally gets tougher...

  3. Easy-to-use strategy for reference gene selection in quantitative real-time PCR experiments.

    Science.gov (United States)

    Klenke, Stefanie; Renckhoff, Kristina; Engler, Andrea; Peters, Jürgen; Frey, Ulrich H

    2016-12-01

    Real-time PCR is an indispensable technique for mRNA expression analysis but conclusions depend on appropriate reference gene selection. However, while reference gene selection has been a topic of publications, this issue is often disregarded when measuring target mRNA expression. Therefore, we (1) evaluated the frequency of appropriate reference gene selection, (2) suggest an easy-to-use tool for least variability reference gene selection, (3) demonstrate application of this tool, and (4) show effects on target gene expression profiles. All articles published in 2015 in Naunyn-Schmiedeberg's Archives of Pharmacology were screened for the use of quantitative real-time PCR analysis and selection of reference genes. Target gene expression (Vegfa, Grk2, Sirt4, and Timp3) in H9c2 cells was analyzed following various interventions (hypoxia, hyperglycemia, and/or isoflurane exposure with and without subsequent hypoxia) in relation to putative reference genes (Actb, Gapdh, B2m, Sdha, and Rplp1) using the least variability method vs. an arbitrarily selected but established reference gene. In the vast majority (18 of 21) of papers, no information was provided regarding selection of an appropriate reference gene. In only 1 of 21 papers, a method of appropriate reference gene selection was described and in 2 papers reference gene selection remains unclear. The method of reference gene selection had a major impact on interpretation of target gene expression. With hypoxia, for instance, the least variability gene was Rplp1 and target gene expression (Vegfa) showed a 2-fold up-regulation (p = 0.022) but no change (p = 0.3) when arbitrarily using Gapdh. Frequency of appropriate reference gene selection in this journal is low, and we propose our strategy for reference gene selection as an easy tool for proper target gene expression analysis.
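
    The reference gene choice feeds directly into the standard 2^(-ddCt) relative quantification; the sketch below shows how a reference gene that itself shifts with treatment distorts the apparent fold change. All Ct values are hypothetical.

        # Standard 2^(-ddCt) relative expression; hypothetical Ct values only.
        def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
            d_ct_treated = ct_target_treated - ct_ref_treated
            d_ct_control = ct_target_control - ct_ref_control
            return 2.0 ** (-(d_ct_treated - d_ct_control))

        # stable reference gene -> 2-fold up-regulation
        print("stable reference:  ", round(fold_change(24.0, 18.0, 25.0, 18.0), 2))
        # reference gene that itself shifts with treatment -> distorted apparent fold change
        print("unstable reference:", round(fold_change(24.0, 19.5, 25.0, 18.0), 2))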

  4. [Study on temperature correctional models of quantitative analysis with near infrared spectroscopy].

    Science.gov (United States)

    Zhang, Jun; Chen, Hua-cai; Chen, Xing-dan

    2005-06-01

    The effect of environment temperature on near infrared spectroscopic quantitative analysis was studied. The temperature correction model was calibrated with 45 wheat samples at different environment temperatures and with the temperature as an external variable. The constant temperature model was calibrated with 45 wheat samples at the same temperature. The predicted results of the two models for the protein contents of wheat samples at different temperatures were compared. The results showed that the mean standard error of prediction (SEP) of the temperature correction model was 0.333, but the SEP of the constant temperature (22 degrees C) model increased as the temperature difference enlarged, reaching 0.602 when using this model at 4 degrees C. It was suggested that the temperature correction model improves the analysis precision.

  5. Quantitative Safety: Linking Proof-Based Verification with Model Checking for Probabilistic Systems

    CERN Document Server

    Ndukwu, Ukachukwu

    2009-01-01

    This paper presents a novel approach for augmenting proof-based verification with performance-style analysis of the kind employed in state-of-the-art model checking tools for probabilistic systems. Quantitative safety properties, usually specified as probabilistic system invariants and modeled in proof-based environments, are evaluated using bounded model checking techniques. Our specific contributions include the statement of a theorem that is central to model checking safety properties of proof-based systems, the establishment of a procedure, and its full implementation in a prototype system (YAGA), which readily transforms a probabilistic model specified in a proof-based environment to its equivalent verifiable PRISM model equipped with reward structures. The reward structures capture the exact interpretation of the probabilistic invariants and can reveal succinct information about the model during experimental investigations. Finally, we demonstrate the novelty of the technique on a probabilistic library cas...

  6. Comparison of Quantitative Structure-Activity Relationship Model Performances on Carboquinone Derivatives

    Directory of Open Access Journals (Sweden)

    Sorana D. Bolboaca

    2009-01-01

    Full Text Available Quantitative structure-activity relationship (qSAR) models are used to understand how the structure and activity of chemical compounds relate. In the present study, 37 carboquinone derivatives were evaluated and two different qSAR models were developed using members of the Molecular Descriptors Family (MDF) and the Molecular Descriptors Family on Vertices (MDFV). The usual parameters of regression models and the following estimators were defined and calculated in order to analyze the validity and to compare the models: Akaike's information criteria (three parameters), Schwarz (or Bayesian) information criterion, Amemiya prediction criterion, Hannan-Quinn criterion, Kubinyi function, Steiger's Z test, and Akaike's weights. The MDF and MDFV models proved to have the same estimation ability of the goodness-of-fit according to Steiger's Z test. The MDFV model proved to be the best model for the considered carboquinone derivatives according to the defined information and prediction criteria, Kubinyi function, and Akaike's weights.
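
    As a generic sketch of how information criteria compare two regression models of the same data (assuming Gaussian residuals), the code below fits two toy polynomial models and reports AIC and BIC; the toy fits stand in for the MDF and MDFV descriptor models, which are not reproduced here.

        # Generic AIC/BIC comparison of two regression models of the same data (Gaussian errors).
        import numpy as np

        def aic_bic(y, y_hat, n_params):
            n = len(y)
            rss = np.sum((y - y_hat) ** 2)
            log_like = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)    # Gaussian MLE log-likelihood
            return 2 * n_params - 2 * log_like, n_params * np.log(n) - 2 * log_like

        rng = np.random.default_rng(3)
        x = np.linspace(0, 1, 37)                         # 37 "compounds", as in the study
        y = 1.0 + 2.0 * x + rng.normal(0, 0.1, x.size)

        fit1 = np.polyval(np.polyfit(x, y, 1), x)         # 2-parameter model
        fit2 = np.polyval(np.polyfit(x, y, 4), x)         # 5-parameter model
        print("model 1 (AIC, BIC):", tuple(round(v, 1) for v in aic_bic(y, fit1, 2)))
        print("model 2 (AIC, BIC):", tuple(round(v, 1) for v in aic_bic(y, fit2, 5)))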

  7. Toward a quantitative understanding of the Wnt/ β -catenin pathway through simulation and experiment

    KAUST Repository

    Lloyd-Lewis, Bethan

    2013-03-29

    Wnt signaling regulates cell survival, proliferation, and differentiation throughout development and is aberrantly regulated in cancer. The pathway is activated when Wnt ligands bind to specific receptors on the cell surface, resulting in the stabilization and nuclear accumulation of the transcriptional co-activator β-catenin. Mathematical and computational models have been used to study the spatial and temporal regulation of the Wnt/β-catenin pathway and to investigate the functional impact of mutations in key components. Such models range in complexity, from time-dependent, ordinary differential equations that describe the biochemical interactions between key pathway components within a single cell, to complex, multiscale models that incorporate the role of the Wnt/β-catenin pathway target genes in tissue homeostasis and carcinogenesis. This review aims to summarize recent progress in mathematical modeling of the Wnt pathway and to highlight new biological results that could form the basis for future theoretical investigations designed to increase the utility of theoretical models of Wnt signaling in the biomedical arena. © 2013 Wiley Periodicals, Inc.

  8. Modelling Activities In Kinematics Understanding quantitative relations with the contribution of qualitative reasoning

    Science.gov (United States)

    Orfanos, Stelios

    2010-01-01

    In Greek traditional teaching, a lot of significant concepts are introduced with a sequence that does not provide the students with all the necessary information required to comprehend them. We consider that understanding concepts and the relations among them is greatly facilitated by the use of modelling tools, taking into account that the modelling process forces students to change their vague, imprecise ideas into explicit causal relationships. It is not uncommon to find students who are able to solve problems by using complicated relations without getting a qualitative and in-depth grip on them. Researchers have already shown that students often have formal mathematical and physical knowledge without a qualitative understanding of basic concepts and relations. The aim of this communication is to present some of the results of our investigation into modelling activities related to kinematical concepts. For this purpose, we have used ModellingSpace, an environment that was especially designed to allow students from eleven to seventeen years old to express their ideas and gradually develop them. ModellingSpace enables students to build their own models and offers the choice of observing directly simulations of real objects and/or all the other alternative forms of representations (tables of values, graphic representations and bar-charts). The students, in order to answer the questions, formulate hypotheses, create models, compare their hypotheses with the representations of their models, and modify or create other models when their hypotheses do not agree with the representations. In traditional ways of teaching, students are educated to utilize formulas as the most important strategy. Often the students recall formulas in order to utilize them, without getting an in-depth understanding of them. Students commonly use the quantitative type of reasoning, since it is primarily used in teaching, although it may not be fully understood by them

  9. Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution.

    Science.gov (United States)

    Rohlfs, Rori V; Nielsen, Rasmus

    2015-09-01

    A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within- and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson Kreitman Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity for a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence

  10. The Quantitative Evaluation of Functional Neuroimaging Experiments: Mutual Information Learning Curves

    DEFF Research Database (Denmark)

    Kjems, Ulrik; Hansen, Lars Kai; Anderson, Jon

    2002-01-01

    Learning curves are presented as an unbiased means for evaluating the performance of models for neuroimaging data analysis. The learning curve measures the predictive performance in terms of the generalization or prediction error as a function of the number of independent examples (e.g., subjects...

  11. Business Scenario Evaluation Method Using Monte Carlo Simulation on Qualitative and Quantitative Hybrid Model

    Science.gov (United States)

    Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa

    We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on the propagation and combination of the effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and decide the effect on a destination node based on the divided ranges. In combining effects, we determine the effect of each arc using its contribution degree and sum all effects. Application to practical models confirms that there is no difference between results obtained from quantitative relations and results obtained by the proposed method at the 5% significance level. A minimal sketch of this propagation-and-combination step is given below.
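
    The sketch below illustrates the general idea under stated assumptions: factor values are sampled, mapped to qualitative effect levels via landmark-divided ranges, and combined with contribution weights. The factor names, landmark ranges and contribution degrees are hypothetical, not taken from the record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical business factors with landmark-divided ranges mapped to effect levels
# (-1: negative, 0: neutral, +1: positive).
landmarks = {
    "demand": [(-1.0, -0.2, -1), (-0.2, 0.2, 0), (0.2, 1.0, 1)],
    "cost":   [(-1.0, -0.3, 1), (-0.3, 0.3, 0), (0.3, 1.0, -1)],
}
contribution = {"demand": 0.7, "cost": 0.3}   # hypothetical contribution degrees per arc

def effect(name, value):
    """Map a sampled factor value to its qualitative effect using the landmarks."""
    for lo, hi, level in landmarks[name]:
        if lo <= value <= hi:
            return level
    return 0

def simulate(n=10_000):
    totals = []
    for _ in range(n):
        total = 0.0
        for name, weight in contribution.items():
            value = rng.uniform(-1.0, 1.0)          # sampled factor value
            total += weight * effect(name, value)   # combine effects over incoming arcs
        totals.append(total)
    return np.asarray(totals)

scores = simulate()
print(f"mean effect {scores.mean():+.3f}, 5%-95% range "
      f"[{np.quantile(scores, 0.05):+.2f}, {np.quantile(scores, 0.95):+.2f}]")
```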

  12. Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model

    Science.gov (United States)

    Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi

    2017-09-01

    Residues of harmful dyes such as Auramine O (AO) in herb and food products threaten the health of consumers, so fast and sensitive techniques for detecting these residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used in this paper for the quantitative determination of AO by combining it with an improved partial least-squares regression (PLSR) model. Absorbance of herbal samples with different concentrations was obtained by THz-TDS in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences between concentrations, provided a clear criterion for input interval selection, and improved the accuracy of the detection result. The experimental results indicated that the combination of THz spectroscopy and 2DCOS-PLSR is an excellent quantitative analysis method.
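
    A plain PLSR calibration on simulated absorbance spectra illustrates the quantitative step (the 2DCOS-guided band selection is omitted); the spectra, concentrations and the single absorption feature below are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)

# Simulated THz absorbance spectra, 0.2-1.6 THz, for samples of known AO concentration.
freqs = np.linspace(0.2, 1.6, 200)
conc = rng.uniform(0.0, 5.0, size=40)                     # hypothetical concentrations (mg/g)
peak = np.exp(-((freqs - 1.0) ** 2) / 0.02)               # one absorption feature near 1 THz
X = conc[:, None] * peak[None, :] + 0.02 * rng.standard_normal((40, freqs.size))
y = conc

pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()          # 5-fold cross-validated predictions
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmsecv:.3f}")
```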

  13. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China.

    Science.gov (United States)

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-04-01

    Ecological compensation is becoming one of key and multidiscipline issues in the field of resources and environmental management. Considering the change relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimate model for ecological compensation, using county as study unit, and determine standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences of the ecological compensation were significant among all the counties or districts. This model fills up the gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors.

  14. Quantitative mathematical modeling of PSA dynamics of prostate cancer patients treated with intermittent androgen suppression

    Institute of Scientific and Technical Information of China (English)

    Yoshito Hirata; Koichiro Akakura; Celestia S.Higano; Nicholas Bruchovsky; Kazuyuki Aihara

    2012-01-01

    If a mathematical model is to be used in the diagnosis, treatment, or prognosis of a disease, it must describe the inherent quantitative dynamics of the state. An ideal candidate disease is prostate cancer owing to the fact that it is characterized by an excellent biomarker, prostate-specific antigen (PSA), and also by a predictable response to treatment in the form of androgen suppression therapy. Despite a high initial response rate, the cancer will often relapse to a state of androgen independence which no longer responds to manipulations of the hormonal environment. In this paper, we present relevant background information and a quantitative mathematical model that potentially can be used in the optimal management of patients to cope with biochemical relapse as indicated by a rising PSA.

  15. Research on Quantitative Evaluation of e-Learning User Psychological Experience

    Institute of Scientific and Technical Information of China (English)

    吴茜媛; 张云强; 郑庆华; 付雁

    2012-01-01

    Current evaluations of computer-system service quality rarely examine it from the perspective of users' psychological experience. Taking e-Learning as the application background, this study investigates the quantitative analysis and evaluation of user psychological experience. A model of the overall user psychological experience is built, and the influence of features such as ease of use, usefulness and emotion on e-Learning user psychological experience is analyzed and quantified using indicators such as resource coverage and recommendation hit rate. Feature weight (comparison) matrices are constructed, and the overall evaluation model of user psychological experience is quantified with the analytic hierarchy process. Application to the e-Learning system of a Chinese university shows that the proposed model can effectively reveal shortcomings of the system with respect to user psychological experience. The proposed framework can therefore support further research on features such as emotion that influence user psychological experience and serve as a reference for building more complete quantitative evaluation methods.

  16. Quantitative Modeling of Microbial Population Responses to Chronic Irradiation Combined with Other Stressors

    OpenAIRE

    Igor Shuryak; Ekaterina Dadachova

    2016-01-01

    Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil co...

  17. Modelling Framework and the Quantitative Analysis of Distributed Energy Resources in Future Distribution Networks

    DEFF Research Database (Denmark)

    Han, Xue; Sandels, Claes; Zhu, Kun;

    2013-01-01

    The considered distributed energy resources (DER) comprise distributed generation, active demand and electric vehicles. Subsequently, quantitative analysis was made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation results show that, in general, DER deployment brings the possibility to reduce power losses and voltage drops by compensating power from local generation and optimizing local load profiles.

  18. Quantitative Mapping of Reversible Mitochondrial Complex I Cysteine Oxidation in a Parkinson Disease Mouse Model*

    OpenAIRE

    Danielson, Steven R.; Held, Jason M.; Oo, May; Riley, Rebeccah; Gibson, Bradford W.; Andersen, Julie K.

    2011-01-01

    Differential cysteine oxidation within mitochondrial Complex I has been quantified in an in vivo oxidative stress model of Parkinson disease. We developed a strategy that incorporates rapid and efficient immunoaffinity purification of Complex I followed by differential alkylation and quantitative detection using sensitive mass spectrometry techniques. This method allowed us to quantify the reversible cysteine oxidation status of 34 distinct cysteine residues out of a total 130 present in muri...

  19. Toxicity Mechanisms of the Food Contaminant Citrinin: Application of a Quantitative Yeast Model

    OpenAIRE

    Amparo Pascual-Ahuir; Elena Vanacloig-Pedros; Markus Proft

    2014-01-01

    Mycotoxins are important food contaminants and a serious threat for human nutrition. However, in many cases the mechanisms of toxicity for this diverse group of metabolites are poorly understood. Here we apply live cell gene expression reporters in yeast as a quantitative model to unravel the cellular defense mechanisms in response to the mycotoxin citrinin. We find that citrinin triggers a fast and dose dependent activation of stress responsive promoters such as GRE2 or SOD2. More specifical...

  20. Creating Research-Rich Learning Experiences and Quantitative Skills in a 1st Year Earth Systems Course

    Science.gov (United States)

    King, P. L.; Eggins, S.; Jones, S.

    2014-12-01

    We are creating a 1st year Earth Systems course at the Australian National University that is built around research-rich learning experiences and quantitative skills. The course has top students including ≤20% indigenous/foreign students; nonetheless, students' backgrounds in math and science vary considerably posing challenges for learning. We are addressing this issue and aiming to improve knowledge retention and deep learning by changing our teaching approach. In 2013-2014, we modified the weekly course structure to a 1hr lecture; a 2hr workshop with hands-on activities; a 2hr lab; an assessment piece covering all face-to-face activities; and a 1hr tutorial. Our new approach was aimed at: 1) building student confidence with data analysis and quantitative skills through increasingly difficult tasks in science, math, physics, chemistry, climate science and biology; 2) creating effective learning groups using name tags and a classroom with 8-person tiered tables; 3) requiring students to apply new knowledge to new situations in group activities, two 1-day field trips and assessment items; 4) using pre-lab and pre-workshop exercises to promote prior engagement with key concepts; 5) adding open-ended experiments to foster structured 'scientific play' or enquiry and creativity; and 6) aligning the assessment with the learning outcomes and ensuring that it contains authentic and challenging southern hemisphere problems. Students were asked to design their own ocean current experiment in the lab and we were astounded by their ingenuity: they simulated the ocean currents off Antarctica; varied water density to verify an equation; and examined the effect of wind and seafloor topography on currents. To evaluate changes in student learning, we conducted surveys in 2013 and 2014. In 2014, we found higher levels of student engagement with the course: >~80% attendance rates and >~70% satisfaction (20% neutral). The 2014 cohort felt that they were more competent in writing

  1. Integration of CFD codes and advanced combustion models for quantitative burnout determination

    Energy Technology Data Exchange (ETDEWEB)

    Javier Pallares; Inmaculada Arauzo; Alan Williams [University of Zaragoza, Zaragoza (Spain). Centre of Research for Energy Resources and Consumption (CIRCE)

    2007-10-15

    CFD codes and advanced kinetics combustion models are extensively used to predict coal burnout in large utility boilers. Modelling approaches based on CFD codes can accurately solve the fluid dynamics equations involved in the problem, but this is usually achieved by including simple combustion models. On the other hand, advanced kinetics combustion models can give a detailed description of coal combustion behaviour by using a simplified description of the flow field, usually obtained from a zone-method approach. Both approaches describe general trends in coal burnout correctly, but fail to predict quantitative values. In this paper a new methodology which takes advantage of both approaches is described. In the first instance, CFD solutions of the combustion conditions in the furnace of the Lamarmora power plant (ASM Brescia, Italy) were obtained for a number of different operating conditions and for three coals. These furnace conditions were then used as inputs for a more detailed chemical combustion model to predict coal burnout. In this, devolatilization was modelled using a commercial macromolecular network pyrolysis model (FG-DVC). For char oxidation, an intrinsic reactivity approach including thermal annealing, ash inhibition and maceral effects was used. Results from the simulations were compared against plant experimental values, showing a reasonable agreement in trends and quantitative values. 28 refs., 4 figs., 4 tabs.

  2. A quantitative model of human DNA base excision repair. I. Mechanistic insights.

    Science.gov (United States)

    Sokhansanj, Bahrad A; Rodrigue, Garry R; Fitch, J Patrick; Wilson, David M

    2002-04-15

    Base excision repair (BER) is a multistep process involving the sequential activity of several proteins that cope with spontaneous and environmentally induced mutagenic and cytotoxic DNA damage. Quantitative kinetic data on single proteins of BER have been used here to develop a mathematical model of the BER pathway. This model was then employed to evaluate mechanistic issues and to determine the sensitivity of pathway throughput to altered enzyme kinetics. Notably, the model predicts considerably less pathway throughput than observed in experimental in vitro assays. This finding, in combination with the effects of pathway cooperativity on model throughput, supports the hypothesis of cooperation during abasic site repair and between the apurinic/apyrimidinic (AP) endonuclease, Ape1, and the 8-oxoguanine DNA glycosylase, Ogg1. The quantitative model also predicts that for 8-oxoguanine and hydrolytic AP site damage, short-patch Polbeta-mediated BER dominates, with minimal switching to the long-patch subpathway. Sensitivity analysis of the model indicates that the Polbeta-catalyzed reactions have the most control over pathway throughput, although other BER reactions contribute to pathway efficiency as well. The studies within represent a first step in a developing effort to create a predictive model for BER cellular capacity.
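
    As a rough illustration of how a multistep repair pathway can be cast as kinetic equations, the sketch below integrates a mass-action ODE system for a simplified three-step scheme. The species, step structure and rate constants are placeholders, not the published BER kinetic parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative three-step repair scheme: damaged base D -> abasic site A -> nicked site N -> repaired R,
# each step treated here as a pseudo-first-order reaction catalysed by a different enzyme.
k1, k2, k3 = 0.05, 0.02, 0.10   # hypothetical rate constants (1/s)

def rhs(t, y):
    D, A, N, R = y
    return [-k1 * D,
            k1 * D - k2 * A,
            k2 * A - k3 * N,
            k3 * N]

sol = solve_ivp(rhs, (0, 600), [1000.0, 0.0, 0.0, 0.0], dense_output=True)
D, A, N, R = sol.sol(600)
print(f"after 10 min: {R:.0f} of 1000 lesions repaired; throughput limited by the slowest step (k2)")
```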

  3. A quantitative comparison of the TERA modeling and DFT magnetic resonance image reconstruction techniques.

    Science.gov (United States)

    Smith, M R; Nichols, S T; Constable, R T; Henkelman, R M

    1991-05-01

    The resolution of magnetic resonance images reconstructed using the discrete Fourier transform (DFT) algorithm is limited by the effective window generated by the finite data length. The transient error reconstruction approach (TERA) is an alternative reconstruction method based on autoregressive moving average (ARMA) modeling techniques. Quantitative measurements comparing the truncation artifacts present during DFT and TERA image reconstruction show that the modeling method substantially reduces these artifacts on "full" (256 × 256), "truncated" (256 × 192), and "severely truncated" (256 × 128) data sets without introducing the global amplitude distortion found in other modeling techniques. Two global measures for determining the success of modeling are suggested. Problem areas for one-dimensional modeling are examined and reasons for considering two-dimensional modeling are discussed. Analyses of both medical and phantom data reconstructions are presented.

  4. Estimating marginal properties of quantitative real-time PCR data using nonlinear mixed models

    DEFF Research Database (Denmark)

    Gerhard, Daniel; Bremer, Melanie; Ritz, Christian

    2014-01-01

    A unified modeling framework based on a set of nonlinear mixed models is proposed for flexible modeling of gene expression in real-time PCR experiments. Focus is on estimating the marginal or population-based derived parameters: cycle thresholds and ΔΔc(t), but retaining the conditional mixed mod...

  5. Quantitative nucleation and growth kinetics of gold nanoparticles via model-assisted dynamic spectroscopic approach.

    Science.gov (United States)

    Zhou, Yao; Wang, Huixuan; Lin, Wenshuang; Lin, Liqin; Gao, Yixian; Yang, Feng; Du, Mingming; Fang, Weiping; Huang, Jiale; Sun, Daohua; Li, Qingbiao

    2013-10-01

    Lacking of quantitative experimental data and/or kinetic models that could mathematically depict the redox chemistry and the crystallization issue, bottom-to-up formation kinetics of gold nanoparticles (GNPs) remains a challenge. We measured the dynamic regime of GNPs synthesized by l-ascorbic acid (representing a chemical approach) and/or foliar aqueous extract (a biogenic approach) via in situ spectroscopic characterization and established a redox-crystallization model which allows quantitative and separate parameterization of the nucleation and growth processes. The main results were simplified as the following aspects: (I) an efficient approach, i.e., the dynamic in situ spectroscopic characterization assisted with the redox-crystallization model, was established for quantitative analysis of the overall formation kinetics of GNPs in solution; (II) formation of GNPs by the chemical and the biogenic approaches experienced a slow nucleation stage followed by a growth stage which behaved as a mixed-order reaction, and different from the chemical approach, the biogenic method involved heterogeneous nucleation; (III) also, biosynthesis of flaky GNPs was a kinetic-controlled process favored by relatively slow redox chemistry; and (IV) though GNPs formation consists of two aspects, namely the redox chemistry and the crystallization issue, the latter was the rate-determining event that controls the dynamic regime of the whole physicochemical process.

  6. A quantitative approach for comparing modeled biospheric carbon flux estimates across regional scales

    Directory of Open Access Journals (Sweden)

    D. N. Huntzinger

    2010-10-01

    Full Text Available Given the large differences between biospheric model estimates of regional carbon exchange, there is a need to understand and reconcile the predicted spatial variability of fluxes across models. This paper presents a set of quantitative tools that can be applied for comparing flux estimates in light of the inherent differences in model formulation. The presented methods include variogram analysis, variable selection, and geostatistical regression. These methods are evaluated in terms of their ability to assess and identify differences in spatial variability in flux estimates across North America among a small subset of models, as well as differences in the environmental drivers that appear to have the greatest control over the spatial variability of predicted fluxes. The examined models are the Simple Biosphere (SiB) 3.0, Carnegie Ames Stanford Approach (CASA), and CASA coupled with the Global Fire Emissions Database (CASA GFEDv2), and the analyses are performed on model-predicted net ecosystem exchange, gross primary production, and ecosystem respiration. Variogram analysis reveals consistent seasonal differences in spatial variability among modeled fluxes at a 1°×1° spatial resolution. However, significant differences are observed in the overall magnitude of the carbon flux spatial variability across models, in both net ecosystem exchange and component fluxes. Results of the variable selection and geostatistical regression analyses suggest fundamental differences between the models in terms of the factors that control the spatial variability of predicted flux. For example, carbon flux is more strongly correlated with percent land cover in CASA GFEDv2 than in SiB or CASA. Some of these factors can be linked back to model formulation, and would have been difficult to identify simply by comparing net fluxes between models. Overall, the quantitative approach presented here provides a set of tools for comparing predicted grid-scale fluxes across
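
    An empirical variogram of the kind used for this flux comparison can be computed directly from gridded values; the 1-D synthetic "flux" transect below is a stand-in for the actual gridded NEE fields.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 1-D flux transect with ~10-cell spatial correlation, standing in for gridded NEE.
n = 200
white = rng.standard_normal(n + 20)
flux = np.convolve(white, np.ones(10) / 10, mode="valid")[:n]
coords = np.arange(n, dtype=float)          # grid spacing of 1 degree (hypothetical)

def empirical_variogram(z, x, lags):
    """Average half-squared differences of all pairs falling into each lag bin."""
    d = np.abs(x[:, None] - x[None, :])
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        mask = (d >= lo) & (d < hi)
        gamma.append(sq[mask].mean())
    return np.asarray(gamma)

lags = np.arange(0, 31, 2.0)
gamma = empirical_variogram(flux, coords, lags)
for h, g in zip(lags[1:], gamma):
    print(f"lag < {h:4.1f}: semivariance {g:.4f}")
```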

  7. Bridging the qualitative-quantitative divide: Experiences from conducting a mixed methods evaluation in the RUCAS programme.

    Science.gov (United States)

    Makrakis, Vassilios; Kostoulas-Makrakis, Nelly

    2016-02-01

    Quantitative and qualitative approaches to planning and evaluation in education for sustainable development have often been treated by practitioners from a single research paradigm. This paper discusses the utility of mixed method evaluation designs which integrate qualitative and quantitative data through a sequential transformative process. Sequential mixed method data collection strategies involve collecting data in an iterative process whereby data collected in one phase contribute to data collected in the next. This is done through examples from a programme addressing the 'Reorientation of University Curricula to Address Sustainability (RUCAS): A European Commission Tempus-funded Programme'. It is argued that the two approaches are complementary and that there are significant gains from combining both. Using methods from both research paradigms does not, however, mean that the inherent differences among epistemologies and methodologies should be neglected. Based on this experience, it is recommended that using a sequential transformative mixed method evaluation can produce more robust results than could be accomplished using a single approach in programme planning and evaluation focussed on education for sustainable development. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Quantitative and qualitative insights into the experiences of children with Rett syndrome and their families.

    Science.gov (United States)

    Downs, Jenny; Leonard, Helen

    2016-09-01

    Rett syndrome is a rare neurodevelopmental disorder caused by a mutation in the MECP2 gene. It is associated with severe functional impairments and medical comorbidities such as scoliosis and poor growth. The population-based and longitudinal Australian Rett Syndrome Database was established in 1993 and has supported investigations of the natural history of Rett syndrome and effectiveness of treatments, as well as a suite of qualitative studies to identify deeper meanings. This paper describes the early presentation of Rett syndrome, including regression and challenges for families seeking a diagnosis. We discuss the importance of implementing strategies to enhance daily communication and movement, describe difficulties interpreting the presence of pain and discomfort, and argue for a stronger evidence base in relation to management. Finally, we outline a framework for understanding quality of life in Rett syndrome and suggest areas of life to which we can direct efforts in order to improve quality of life. Each of these descriptions is illustrated with vignettes of child and family experiences. Clinicians and researchers must continue to build this framework of knowledge and understanding with efforts committed to providing more effective treatments and supporting the best quality of life for those affected.

  9. Quantitative Regression Models for the Prediction of Chemical Properties by an Efficient Workflow.

    Science.gov (United States)

    Yin, Yongmin; Xu, Congying; Gu, Shikai; Li, Weihua; Liu, Guixia; Tang, Yun

    2015-10-01

    Rapid safety assessment is more and more needed for the increasing chemicals both in chemical industries and regulators around the world. The traditional experimental methods couldn't meet the current demand any more. With the development of the information technology and the growth of experimental data, in silico modeling has become a practical and rapid alternative for the assessment of chemical properties, especially for the toxicity prediction of organic chemicals. In this study, a quantitative regression workflow was built by KNIME to predict chemical properties. With this regression workflow, quantitative values of chemical properties can be obtained, which is different from the binary-classification model or multi-classification models that can only give qualitative results. To illustrate the usage of the workflow, two predictive models were constructed based on datasets of Tetrahymena pyriformis toxicity and aqueous solubility. The q²cv and q²test values from 5-fold cross-validation and external validation for both types of models were greater than 0.7, which implies that our models are robust and reliable, and the workflow is very convenient and efficient in prediction of various chemical properties. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
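
    The cross-validated q² reported above can be reproduced for any regression model in a few lines; the random-forest model and the synthetic descriptor matrix here are placeholders for the KNIME workflow and its datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)

# Synthetic descriptor matrix (300 compounds x 20 descriptors) and a toxicity-like endpoint.
X = rng.standard_normal((300, 20))
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.standard_normal(300)

def q2_cv(model, X, y, folds=5):
    """Cross-validated q2: 1 - PRESS / total sum of squares."""
    press, ss_tot = 0.0, np.sum((y - y.mean()) ** 2)
    for train, test in KFold(folds, shuffle=True, random_state=0).split(X):
        model.fit(X[train], y[train])
        press += np.sum((y[test] - model.predict(X[test])) ** 2)
    return 1.0 - press / ss_tot

model = RandomForestRegressor(n_estimators=200, random_state=0)
print(f"q2(cv) = {q2_cv(model, X, y):.3f}")
```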

  10. Forward and adjoint radiance Monte Carlo models for quantitative photoacoustic imaging

    Science.gov (United States)

    Hochuli, Roman; Powell, Samuel; Arridge, Simon; Cox, Ben

    2015-03-01

    In quantitative photoacoustic imaging, the aim is to recover physiologically relevant tissue parameters such as chromophore concentrations or oxygen saturation. Obtaining accurate estimates is challenging due to the non-linear relationship between the concentrations and the photoacoustic images. Nonlinear least squares inversions designed to tackle this problem require a model of light transport, the most accurate of which is the radiative transfer equation. This paper presents a highly scalable Monte Carlo model of light transport that computes the radiance in 2D using a Fourier basis to discretise in angle. The model was validated against a 2D finite element model of the radiative transfer equation, and was used to compute gradients of an error functional with respect to the absorption and scattering coefficient. It was found that adjoint-based gradient calculations were much more robust to inherent Monte Carlo noise than a finite difference approach. Furthermore, the Fourier angular discretisation allowed very efficient gradient calculations as sums of Fourier coefficients. These advantages, along with the high parallelisability of Monte Carlo models, makes this approach an attractive candidate as a light model for quantitative inversion in photoacoustic imaging.

  11. Curating and Preparing High-Throughput Screening Data for Quantitative Structure-Activity Relationship Modeling.

    Science.gov (United States)

    Kim, Marlene T; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2016-01-01

    Publicly available bioassay data often contains errors. Curating massive bioassay data, especially high-throughput screening (HTS) data, for Quantitative Structure-Activity Relationship (QSAR) modeling requires the assistance of automated data curation tools. Using automated data curation tools are beneficial to users, especially ones without prior computer skills, because many platforms have been developed and optimized based on standardized requirements. As a result, the users do not need to extensively configure the curation tool prior to the application procedure. In this chapter, a freely available automatic tool to curate and prepare HTS data for QSAR modeling purposes will be described.

  12. Quantitative phase-field modeling of nonisothermal solidification in dilute multicomponent alloys with arbitrary diffusivities.

    Science.gov (United States)

    Ohno, Munekazu

    2012-11-01

    A quantitative phase-field model is developed for simulating microstructural pattern formation in nonisothermal solidification in dilute multicomponent alloys with arbitrary thermal and solutal diffusivities. By performing the matched asymptotic analysis, it is shown that the present model with antitrapping current terms reproduces the free-boundary problem of interest in the thin-interface limit. Convergence of the simulation outcome with decreasing the interface thickness is demonstrated for nonisothermal free dendritic growth in binary alloys and isothermal and nonisothermal free dendritic growth in a ternary alloy.

  13. Modeling of titration experiments by a reactive transport model

    Institute of Scientific and Technical Information of China (English)

    Ma Hongyun; Samper Javier; Xin Xin

    2011-01-01

    Acid mine drainage (AMD) is commonly treated by neutralization with alkaline substances. This treatment is supported by titration experiments that illustrate the buffering mechanisms and estimate the base neutralization capacity (BNC) of the AMD. Detailed explanation of titration curves requires modeling with a hydro-chemical model. In this study the titration curves of water samples from the drainage of the As Pontes mine and the corresponding dumps have been investigated and six buffers are selected by analyzing those curves. Titration curves have been simulated by a reactive transport model to discover the detailed buffering mechanisms. These simulations show seven regions involving different buffering mechanisms. The BNC is primarily from buffers of dissolved Fe, Al and hydrogen sulfate. The BNC can be approximated by BNC = 3(C_Fe + C_Al) + 0.05 C_sulfate, where all concentrations are in mol/L. The BNC of the sample from the mine is 9.25 × 10⁻³ mol/L and that of the dumps sample is 1.28 × 10⁻² mol/L.
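
    The stated approximation can be checked numerically with the short sketch below; the Fe, Al and sulfate concentrations used are hypothetical, not the As Pontes values.

```python
def bnc(c_fe, c_al, c_sulfate):
    """Approximate base neutralization capacity (mol/L) from dissolved Fe, Al and sulfate (mol/L)."""
    return 3.0 * (c_fe + c_al) + 0.05 * c_sulfate

# Hypothetical AMD composition (mol/L)
print(f"BNC = {bnc(c_fe=2.0e-3, c_al=0.8e-3, c_sulfate=2.0e-2):.2e} mol/L")
```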

  14. Multicomponent quantitative spectroscopic analysis without reference substances based on ICA modelling.

    Science.gov (United States)

    Monakhova, Yulia B; Mushtakova, Svetlana P

    2017-05-01

    A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries generally between 95% and 105% were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on analysis of vitamins and caffeine in energy drinks and aromatic hydrocarbons in motor fuel with 10% error. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence/inaccessibility of reference materials.
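
    A FastICA decomposition of mixed spectra illustrates the resolution step described above; the two Gaussian "component spectra", their assignments and the mixing ratios below are synthetic assumptions, not the paper's calibration data.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)

# Two synthetic component spectra and a set of mixtures with known proportions.
wl = np.linspace(200, 400, 300)
s1 = np.exp(-((wl - 260) ** 2) / 200)        # e.g., a caffeine-like band (hypothetical)
s2 = np.exp(-((wl - 330) ** 2) / 400)        # e.g., a vitamin-like band (hypothetical)
mix = rng.uniform(0.1, 1.0, size=(25, 2))    # concentrations of the two analytes
spectra = mix @ np.vstack([s1, s2]) + 0.005 * rng.standard_normal((25, wl.size))

ica = FastICA(n_components=2, random_state=0)
scores = ica.fit_transform(spectra)          # per-sample contributions of each component
print("correlation of ICA scores with true concentrations:")
for i in range(2):
    corr = max(abs(np.corrcoef(scores[:, i], mix[:, j])[0, 1]) for j in range(2))
    print(f"  component {i}: |r| = {corr:.3f}")
```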

  15. Quantitative analysis of markers of podocyte injury in the rat puromycin aminonucleoside nephropathy model.

    Science.gov (United States)

    Kakimoto, Tetsuhiro; Okada, Kinya; Fujitaka, Keisuke; Nishio, Masashi; Kato, Tsuyoshi; Fukunari, Atsushi; Utsumi, Hiroyuki

    2015-02-01

    Podocytes are an essential component of the renal glomerular filtration barrier, their injury playing an early and important role in progressive renal dysfunction. This makes quantification of podocyte marker immunoreactivity important for early detection of glomerular histopathological changes. Here we have specifically applied a state-of-the-art automated computational method of glomerulus recognition, which we have recently developed, to study quantitatively podocyte markers in a model with selective podocyte injury, namely the rat puromycin aminonucleoside (PAN) nephropathy model. We also retrospectively investigated mRNA expression levels of these markers in glomeruli which were isolated from the same formalin-fixed, paraffin-embedded kidney samples by laser microdissection. Among the examined podocyte markers, the immunopositive area and mRNA expression level of both podoplanin and synaptopodin were decreased in PAN glomeruli. The immunopositive area of podocin showed a slight decrease in PAN glomeruli, while its mRNA level showed no change. We have also identified a novel podocyte injury marker β-enolase, which was increased exclusively by podocytes in PAN glomeruli, similarly to another widely used marker, desmin. Thus, we have shown the specific application of a state-of-the-art computational method and retrospective mRNA expression analysis to quantitatively study the changes of various podocyte markers. The proposed methods will open new avenues for quantitative elucidation of renal glomerular histopathology. Copyright © 2014 Elsevier GmbH. All rights reserved.

  16. Sensitive quantitative assays for tau and phospho-tau in transgenic mouse models

    Science.gov (United States)

    Acker, Christopher M.; Forest, Stefanie K.; Zinkowski, Ray; Davies, Peter; d’Abramo, Cristina

    2012-01-01

    Transgenic mouse models have been an invaluable resource in elucidating the complex roles of Aβ and tau in Alzheimer’s disease. While many laboratories rely on qualitative or semi-quantitative techniques when investigating tau pathology, we have developed four Low-Tau Sandwich ELISAs that quantitatively assess different epitopes of tau relevant to Alzheimer’s disease: total tau, pSer-202, pThr-231, pSer-396/404. In this study, after comparing our assays to commercially available ELISAs, we demonstrate our assays high specificity and quantitative capabilities using brain homogenates from tau transgenic mice, htau, JNPL3, tau KO mice. All four ELISAs show excellent specificity for mouse and human tau, with no reactivity to tau KO animals. An age dependent increase of serum tau in both tau transgenic models was also seen. Taken together, these assays are valuable methods to quantify tau and phospho-tau levels in transgenic animals, by examining tau levels in brain and measuring tau as a potential serum biomarker. PMID:22727277

  17. A hierarchical statistical model for estimating population properties of quantitative genes

    Directory of Open Access Journals (Sweden)

    Wu Rongling

    2002-06-01

    Full Text Available Background: Earlier methods for detecting major genes responsible for a quantitative trait rely critically upon a well-structured pedigree in which the segregation pattern of genes exactly follows Mendelian inheritance laws. However, for many outcrossing species such pedigrees are not available, and genes also display population properties. Results: In this paper, a hierarchical statistical model is proposed to monitor the existence of a major gene based on its segregation and transmission across two successive generations. The model is implemented with an EM algorithm to provide maximum likelihood estimates for genetic parameters of the major locus. This new method is successfully applied to identify an additive gene having a large effect on stem height growth of aspen trees. The estimates of population genetic parameters for this major gene can be generalized to the original breeding population from which the parents were sampled. A simulation study is presented to evaluate finite sample properties of the model. Conclusions: A hierarchical model was derived for detecting major genes affecting a quantitative trait based on progeny tests of outcrossing species. The new model takes into account the population genetic properties of genes and is expected to enhance the accuracy, precision and power of gene detection.
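
    At its core, detecting a major gene from progeny phenotypes amounts to fitting a finite mixture of genotype classes to the trait. The sketch below runs a bare-bones EM algorithm for a two-component normal mixture on simulated heights; it is an illustration of the EM idea only, not the paper's two-generation hierarchical model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated stem heights: a hypothetical major gene splits progeny into two genotype classes.
y = np.concatenate([rng.normal(10.0, 1.0, 150), rng.normal(14.0, 1.0, 50)])

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# EM for a two-component normal mixture.
p, mu1, mu2, sd1, sd2 = 0.5, y.min(), y.max(), y.std(), y.std()
for _ in range(200):
    w = p * norm_pdf(y, mu2, sd2)
    r = w / (w + (1 - p) * norm_pdf(y, mu1, sd1))      # E-step: posterior of carrying the allele
    p = r.mean()                                        # M-step updates
    mu1, mu2 = np.average(y, weights=1 - r), np.average(y, weights=r)
    sd1 = np.sqrt(np.average((y - mu1) ** 2, weights=1 - r))
    sd2 = np.sqrt(np.average((y - mu2) ** 2, weights=r))

print(f"mixing proportion {p:.2f}, genotype-class means {mu1:.2f} / {mu2:.2f}")
```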

  18. Downscaling SSPs in the GBM Delta - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    Science.gov (United States)

    Allan, Andrew; Barbour, Emily; Salehin, Mashfiqus; Munsur Rahman, Md.; Hutton, Craig; Lazar, Attila

    2016-04-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.

  19. Downscaling SSPs in Bangladesh - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    Science.gov (United States)

    Allan, A.; Barbour, E.; Salehin, M.; Hutton, C.; Lázár, A. N.; Nicholls, R. J.; Rahman, M. M.

    2015-12-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.

  20. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    Science.gov (United States)

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to environmental research centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article.

  1. A training set selection strategy for a universal near-infrared quantitative model.

    Science.gov (United States)

    Jia, Yan-Hua; Liu, Xu-Ping; Feng, Yan-Chun; Hu, Chang-Qin

    2011-06-01

    The purpose of this article is to propose an empirical solution to the problem of how many clusters of complex samples should be selected to construct the training set for a universal near infrared quantitative model based on the Naes method. The sample spectra were hierarchically classified into clusters by Ward's algorithm and Euclidean distance. If the sample spectra were classified into two clusters, the 1/50 of the largest Heterogeneity value in the cluster with larger variation was set as the threshold to determine the total number of clusters. One sample was then randomly selected from each cluster to construct the training set, and the number of samples in training set equaled the number of clusters. In this study, 98 batches of rifampicin capsules with API contents ranging from 50.1% to 99.4% were studied with this strategy. The root mean square errors of cross validation and prediction were 2.54% and 2.31% for the model for rifampicin capsules, respectively. Then, we evaluated this model in terms of outlier diagnostics, accuracy, precision, and robustness. We also used the strategy of training set sample selection to revalidate the models for cefradine capsules, roxithromycin tablets, and erythromycin ethylsuccinate tablets, and the results were satisfactory. In conclusion, all results showed that this training set sample selection strategy assisted in the quick and accurate construction of quantitative models using near-infrared spectroscopy.
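
    The clustering-based training-set selection can be sketched with SciPy's hierarchical clustering. The spectra below are synthetic, and the 1/50-of-largest-heterogeneity threshold is applied here simply as a distance cutoff on the dendrogram, which is one interpretation of the rule rather than the authors' exact implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(6)

# Synthetic NIR-like spectra for 98 batches (stand-ins for the rifampicin capsule spectra).
spectra = rng.standard_normal((98, 120)).cumsum(axis=1)     # smooth, spectrum-like curves

Z = linkage(spectra, method="ward", metric="euclidean")      # Ward's algorithm, Euclidean distance
threshold = Z[:, 2].max() / 50.0                             # interpretation of the 1/50 heterogeneity rule
clusters = fcluster(Z, t=threshold, criterion="distance")

# One randomly chosen sample per cluster forms the training set.
training = [rng.choice(np.where(clusters == c)[0]) for c in np.unique(clusters)]
print(f"{clusters.max()} clusters -> {len(training)} training samples out of {len(spectra)}")
```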

  2. A semi-quantitative model for risk appreciation and risk weighing

    DEFF Research Database (Denmark)

    Bos, Peter M.J.; Boon, Polly E.; van der Voet, Hilko

    2009-01-01

    Risk managers need detailed information on (1) the type of effect, (2) the size (severity) of the expected effect(s) and (3) the fraction of the population at risk to decide on well-balanced risk reduction measures. A previously developed integrated probabilistic risk assessment (IPRA) model provides quantitative information on these three parameters. A semi-quantitative tool is presented that combines information on these parameters into easy-readable charts that will facilitate risk evaluations of exposure situations and decisions on risk reduction measures. This tool is based on a concept of health impact categorization that has been successfully in force for several years within several emergency planning programs. Four health impact categories are distinguished: No-Health Impact, Low-Health Impact, Moderate-Health Impact and Severe-Health Impact. Two different charts are presented

  3. MODEL IMPROVEMENT AND EXPERIMENT VALIDATION OF PNEUMATIC ARTIFICIAL MUSCLES

    Institute of Scientific and Technical Information of China (English)

    Zhou Aiguo; Shi Guanglin; Zhong Tingxiu

    2004-01-01

    To address deficiencies in the existing model of pneumatic artificial muscles (PAM), a serial model is built up from the PAM's essential working principle using elastic theory. It is validated by quasi-static and dynamic experimental results obtained from two experiment systems. The experimental and simulation results illustrate that the serial model is a clear improvement over Chou's model and describes the force characteristics of PAM more precisely. A compensation term accounting for the braid's elasticity and Coulomb damping is added to the serial model based on analysis of the experimental results. The dynamic experiment shows that the viscous damping of the PAM can be ignored in order to simplify the model. Finally, an improved serial model of PAM is obtained.

  4. Designing Experiments for Nonlinear Models - An Introduction

    OpenAIRE

    Johnson, Rachel T.; Montgomery, Douglas C.

    2009-01-01

    The article of record as published may be found at http://dx.doi.org/10.1002/qre.1063 We illustrate the construction of Bayesian D-optimal designs for nonlinear models and compare the relative efficiency of standard designs with these designs for several models and prior distributions on the parameters. Through a relative efficiency analysis, we show that standard designs can perform well in situations where the nonlinear model is intrinsically linear. However, if the model is non...

  5. Application Experience of Chinese Herbal Pieces in Quantitative Small Packages

    Institute of Scientific and Technical Information of China (English)

    王卡珂

    2015-01-01

    Chinese herbal pieces in quantitative small packages are qualified, processed herbal pieces dispensed into packages of different specifications according to commonly used clinical doses. This is a new dispensing model for Chinese herbal pieces: it requires no weighing, speeds up dispensing, saves resources, improves hygiene, allows direct dispensing by the pharmacist, and provides accurate doses and efficient formulation. Based on these advantages, this paper summarizes practical experience with the use of quantitatively small-packaged Chinese herbal pieces.

  6. Freight modelling: an overview of international experiences

    NARCIS (Netherlands)

    Tavasszy, L.A.

    2008-01-01

    Compared to passenger transportation modelling, the field of freight modelling is relatively young and developing quickly into different directions all over the world. The objective of this paper is to summarize the international state of the art in freight modelling, with a focus on developments in

  7. Linking antisocial behavior, substance use, and personality: an integrative quantitative model of the adult externalizing spectrum.

    Science.gov (United States)

    Krueger, Robert F; Markon, Kristian E; Patrick, Christopher J; Benning, Stephen D; Kramer, Mark D

    2007-11-01

    Antisocial behavior, substance use, and impulsive and aggressive personality traits often co-occur, forming a coherent spectrum of personality and psychopathology. In the current research, the authors developed a novel quantitative model of this spectrum. Over 3 waves of iterative data collection, 1,787 adult participants selected to represent a range across the externalizing spectrum provided extensive data about specific externalizing behaviors. Statistical methods such as item response theory and semiparametric factor analysis were used to model these data. The model and assessment instrument that emerged from the research shows how externalizing phenomena are organized hierarchically and cover a wide range of individual differences. The authors discuss the utility of this model for framing research on the correlates and the etiology of externalizing phenomena.

  8. Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models

    Science.gov (United States)

    Rieger, TR; Musante, CJ

    2016-01-01

    Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed “virtual patients.” In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777
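
    One simple way to realize the generate-then-select idea is acceptance sampling of virtual patients against an observed target distribution. The one-output toy model, parameter ranges and target statistics below are illustrative assumptions, not the authors' algorithm or model.

```python
import numpy as np

rng = np.random.default_rng(7)

def model_output(params):
    """Toy mechanistic model: one observable derived from two physiological parameters."""
    k_el, v_d = params
    return 100.0 / (k_el * v_d)          # e.g., a steady-state concentration (hypothetical)

# Step 1: generate plausible virtual patients by sampling the physiological parameter space.
patients = np.column_stack([rng.uniform(0.05, 0.5, 20_000),    # elimination rate (hypothetical)
                            rng.uniform(10.0, 60.0, 20_000)])  # volume of distribution (hypothetical)
outputs = np.apply_along_axis(model_output, 1, patients)

# Step 2: select (rather than weight) a virtual population whose outputs match a clinical
# target distribution, accepting each patient with probability ~ target / proposal density.
target_mu, target_sd = 25.0, 8.0
target = np.exp(-0.5 * ((outputs - target_mu) / target_sd) ** 2) / (target_sd * np.sqrt(2 * np.pi))
hist, edges = np.histogram(outputs, bins=50, density=True)
proposal = hist[np.clip(np.digitize(outputs, edges) - 1, 0, hist.size - 1)]
ratio = target / np.maximum(proposal, 1e-12)
accept = rng.uniform(0, 1, outputs.size) < ratio / ratio.max()
vpop = outputs[accept]
print(f"selected {accept.sum()} virtual patients; mean {vpop.mean():.1f}, sd {vpop.std():.1f}")
```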

  9. Precise Quantitative Analysis of Probabilistic Business Process Model and Notation Workflows

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2013-01-01

    We present a framework for modeling and analysis of real-world business workflows. We present a formalized core subset of the business process modeling and notation (BPMN) and then proceed to extend this language with probabilistic nondeterministic branching and general-purpose reward annotations. We present an algorithm for the translation of such models into Markov decision processes (MDP) expressed in the syntax of the PRISM model checker. This enables precise quantitative analysis of business processes for the following properties: transient and steady-state probabilities, the timing, occurrence and ordering of events, reward-based properties, and best- and worst-case scenarios. We develop a simple example of medical workflow and demonstrate the utility of this analysis in accurate provisioning of drug stocks. Finally, we suggest a path to building upon these techniques to cover

  10. Quantitative performance metrics for stratospheric-resolving chemistry-climate models

    Directory of Open Access Journals (Sweden)

    D. W. Waugh

    2008-06-01

    Full Text Available A set of performance metrics is applied to stratospheric-resolving chemistry-climate models (CCMs) to quantify their ability to reproduce key processes relevant for stratospheric ozone. The same metrics are used to assign a quantitative measure of performance ("grade") to each model-observations comparison shown in Eyring et al. (2006). A wide range of grades is obtained, both for different diagnostics applied to a single model and for the same diagnostic applied to different models, highlighting the wide range in ability of the CCMs to simulate key processes in the stratosphere. No model scores high or low on all tests, but differences in the performance of models can be seen, especially for transport processes where several models get low grades on multiple tests. The grades are used to assign relative weights to the CCM projections of 21st century total ozone. However, only small differences are found between weighted and unweighted multi-model mean total ozone projections. This study raises several issues with the grading and weighting of CCMs that need further examination, but it does provide a framework that will enable quantification of model improvements and assignment of relative weights to the model projections.
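
    The weighting step can be written down directly: each model's projection is weighted by its normalized grade. The grades and projections below are made-up numbers used only to show the arithmetic.

```python
import numpy as np

# Hypothetical performance grades (0-1) and 21st-century total-ozone projections (DU) for 5 CCMs.
grades      = np.array([0.8, 0.5, 0.9, 0.3, 0.6])
projections = np.array([305.0, 298.0, 310.0, 290.0, 302.0])

weights = grades / grades.sum()
weighted_mean = weights @ projections
print(f"unweighted mean {projections.mean():.1f} DU, grade-weighted mean {weighted_mean:.1f} DU")
```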

  11. Quantitative model of cell cycle arrest and cellular senescence in primary human fibroblasts.

    Directory of Open Access Journals (Sweden)

    Sascha Schäuble

    Full Text Available Primary human fibroblasts in tissue culture undergo a limited number of cell divisions before entering a non-replicative "senescent" state. At early population doublings (PD), fibroblasts are proliferation-competent, displaying exponential growth. During further cell passaging, an increasing number of cells become cell cycle arrested and finally senescent. This transition from proliferating to senescent cells is driven by a number of endogenous and exogenous stress factors. Here, we have developed a new quantitative model for the stepwise transition from proliferating human fibroblasts (P) via reversibly cell cycle arrested (C) to irreversibly arrested senescent cells (S). In this model, the transition from P to C and to S is driven by a stress function γ and a cellular stress response function F which describes the time-delayed cellular response to experimentally induced irradiation stress. The application of this model, based on senescence marker quantification at the single-cell level, allowed us to discriminate between the cellular states P, C, and S and delivers the transition rates between the P, C and S states for different human fibroblast cell types. Model-derived quantification unexpectedly revealed significant differences in the stress response of different fibroblast cell lines. Evaluating marker specificity, we found that SA-β-Gal is a good quantitative marker for cellular senescence in WI-38 and BJ cells, however much less so in MRC-5 cells. Furthermore we found that WI-38 cells are more sensitive to stress than BJ and MRC-5 cells. Thus, the explicit separation of stress induction from the cellular stress response, and the differentiation between three cellular states P, C and S, allows for the first time to quantitatively assess the response of primary human fibroblasts towards endogenous and exogenous stress during cellular ageing.
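
    The three-state transition scheme maps onto a small linear ODE system. The sketch below integrates such a system with constant transition rates; the rate values are placeholders, and the stress function γ and time-delayed response F of the published model are deliberately omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# P (proliferating) -> C (reversibly arrested) -> S (senescent), with recovery C -> P.
k_pc, k_cp, k_cs = 0.05, 0.02, 0.03   # hypothetical transition rates (1/day)

def rhs(t, y):
    P, C, S = y
    return [-k_pc * P + k_cp * C,
             k_pc * P - (k_cp + k_cs) * C,
             k_cs * C]

sol = solve_ivp(rhs, (0, 120), [1.0, 0.0, 0.0], t_eval=[0, 30, 60, 120])
for t, (P, C, S) in zip(sol.t, sol.y.T):
    print(f"day {t:5.0f}: P={P:.2f}  C={C:.2f}  S={S:.2f}")
```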

  12. Quantitative inverse modelling of a cylindrical object in the laboratory using ERT: An error analysis

    Science.gov (United States)

    Korteland, Suze-Anne; Heimovaara, Timo

    2015-03-01

    Electrical resistivity tomography (ERT) is a geophysical technique that can be used to obtain three-dimensional images of the bulk electrical conductivity of the subsurface. Because the electrical conductivity is strongly related to properties of the subsurface and the flow of water it has become a valuable tool for visualization in many hydrogeological and environmental applications. In recent years, ERT is increasingly being used for quantitative characterization, which requires more detailed prior information than a conventional geophysical inversion for qualitative purposes. In addition, the careful interpretation of measurement and modelling errors is critical if ERT measurements are to be used in a quantitative way. This paper explores the quantitative determination of the electrical conductivity distribution of a cylindrical object placed in a water bath in a laboratory-scale tank. Because of the sharp conductivity contrast between the object and the water, a standard geophysical inversion using a smoothness constraint could not reproduce this target accurately. Better results were obtained by using the ERT measurements to constrain a model describing the geometry of the system. The posterior probability distributions of the parameters describing the geometry were estimated with the Markov chain Monte Carlo method DREAM(ZS). Using the ERT measurements this way, accurate estimates of the parameters could be obtained. The information quality of the measurements was assessed by a detailed analysis of the errors. Even for the uncomplicated laboratory setup used in this paper, errors in the modelling of the shape and position of the electrodes and the shape of the domain could be identified. The results indicate that the ERT measurements have a high information content which can be accessed by the inclusion of prior information and the consideration of measurement and modelling errors.
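
    The posterior estimation step can be illustrated with a generic random-walk Metropolis sampler for one geometric parameter given noisy data. The "forward model" below is a simple stand-in, not an ERT solver, and plain Metropolis replaces the DREAM(ZS) sampler used in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

def forward(radius):
    """Stand-in forward model: predicted readings for a cylindrical object of given radius."""
    positions = np.linspace(0.0, 1.0, 16)
    return 100.0 - 40.0 * np.exp(-((positions - 0.5) ** 2) / (2 * radius ** 2))

true_radius, sigma = 0.12, 1.0
data = forward(true_radius) + sigma * rng.standard_normal(16)

def log_post(r):
    if not 0.01 < r < 0.5:                      # uniform prior on the radius
        return -np.inf
    return -0.5 * np.sum((data - forward(r)) ** 2) / sigma**2

# Random-walk Metropolis sampling of the posterior.
chain, r = [], 0.25
lp = log_post(r)
for _ in range(20_000):
    prop = r + 0.02 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        r, lp = prop, lp_prop
    chain.append(r)
samples = np.array(chain[5_000:])
print(f"posterior radius: {samples.mean():.3f} ± {samples.std():.3f} (true {true_radius})")
```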

  13. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    Full Text Available The SDN-based controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on the security mechanism, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. From an analysis of the controller threat model, we derive formal models of the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN-based controllers: POX, OpenDaylight, Floodlight, and Ryu. The test, whose outcomes are similar to those of traditional qualitative analysis, demonstrates that our approach yields specific security values for different controllers and produces more accurate results.

  14. Flow assignment model for quantitative analysis of diverting bulk freight from road to railway.

    Science.gov (United States)

    Liu, Chang; Lin, Boliang; Wang, Jiaxi; Xiao, Jie; Liu, Siqi; Wu, Jianping; Li, Jian

    2017-01-01

    Since railway transport offers high volume and low carbon emissions, diverting some freight from road to railway helps reduce the negative environmental impacts associated with transport. This paper develops a flow assignment model for quantitative analysis of diverting truck freight to railway. First, a general network covering road transportation, railway transportation, handling and transferring is established according to all the steps of the whole transportation process. Then, generalized cost functions are formulated that embody the factors shippers consider when choosing mode and path, including road congestion cost and the capacity constraints of railways and freight stations. Based on the general network and the generalized cost functions, a user equilibrium flow assignment model is developed to simulate the flow distribution on the network under the condition that all shippers choose transportation mode and path independently. Since the model is nonlinear and difficult to solve, we linearize it using tangent lines that form an envelope curve. Finally, a numerical example is presented to test the model and to show how a quantitative analysis of the bulk freight modal shift between road and railway can be made.

  15. Growth mixture modeling as an exploratory analysis tool in longitudinal quantitative trait loci analysis.

    Science.gov (United States)

    Chang, Su-Wei; Choi, Seung Hoan; Li, Ke; Fleur, Rose Saint; Huang, Chengrui; Shen, Tong; Ahn, Kwangmi; Gordon, Derek; Kim, Wonkuk; Wu, Rongling; Mendell, Nancy R; Finch, Stephen J

    2009-12-15

    We examined the properties of growth mixture modeling in finding longitudinal quantitative trait loci in a genome-wide association study. Two software packages are commonly used in these analyses: Mplus and the SAS TRAJ procedure. We analyzed the 200 replicates of the simulated data with these programs using three tests: the likelihood-ratio test statistic, a direct test of genetic model coefficients, and the chi-square test classifying subjects based on the trajectory model's posterior Bayesian probability. The Mplus program was not effective in this application due to its computational demands. The distributions of these tests applied to genes not related to the trait were sensitive to departures from Hardy-Weinberg equilibrium. The likelihood-ratio test statistic was not usable in this application because its distribution was far from the expected asymptotic distributions when applied to markers with no genetic relation to the quantitative trait. The other two tests were satisfactory. Power was still substantial when we used markers near the gene rather than the gene itself. That is, growth mixture modeling may be useful in genome-wide association studies. For markers near the actual gene, there was somewhat greater power for the direct test of the coefficients and lesser power for the posterior Bayesian probability chi-square test.

  16. Experimental Research on Quantitative Inversion Models of Suspended Sediment Concentration Using Remote Sensing Technology

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Research on quantitative models of suspended sediment concentration (SSC) using remote sensing technology is very important for understanding scouring and siltation variations in harbors and water channels. Based on a laboratory study of the relationship between different suspended sediment concentrations and synchronously measured reflectance spectra, quantitative inversion models of SSC based on a single factor, on band ratios and on a sediment parameter were developed, providing an effective method to retrieve SSC from satellite images. Results show that bands b1 (430-500 nm) and b3 (670-735 nm) are the optimal wavelengths for estimating lower SSC, while band b4 (780-835 nm) is optimal for estimating higher SSC. Furthermore, the band ratio B2/B3 better reproduces the variation of lower SSC, and B4/B1 estimates higher SSC accurately. The inversion models based on sediment parameters for higher and lower SSCs also achieve higher accuracy than the single-factor and band-ratio models.
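
    As an illustration of how such a band-ratio inversion model might be calibrated, the sketch below fits a log-linear relation between a reflectance band ratio and SSC. The band-ratio idea follows the abstract, but the regression form and all numbers are invented for the example.

```python
# Illustrative calibration of a band-ratio model for suspended sediment
# concentration (SSC). The band-ratio idea follows the abstract; the
# exponential regression form and the numbers below are made-up examples.
import numpy as np

# Hypothetical lab measurements: reflectance band ratio vs. SSC (mg/L).
band_ratio = np.array([0.55, 0.70, 0.85, 1.00, 1.20, 1.45])
ssc = np.array([20.0, 45.0, 95.0, 180.0, 340.0, 700.0])

# Fit ln(SSC) = a + b * ratio  (i.e. SSC = exp(a) * exp(b * ratio)).
b, a = np.polyfit(band_ratio, np.log(ssc), 1)

def predict_ssc(ratio):
    return np.exp(a + b * ratio)

print(f"SSC = {np.exp(a):.1f} * exp({b:.2f} * ratio)")
print(f"predicted SSC at ratio 1.1: {predict_ssc(1.1):.0f} mg/L")
```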

  17. Research on Quantitative Models of Electric Vehicle Charging Stations Based on Principle of Energy Equivalence

    Directory of Open Access Journals (Sweden)

    Zhenpo Wang

    2013-01-01

    Full Text Available In order to meet the matching and planning requirements of charging stations during the marketization of electric vehicles (EVs), and drawing on layout theories for gas stations, a location model of charging stations is established based on electricity consumption along the roads between cities, and a quantitative model of charging stations is presented based on the conversion of oil sales in a given area. Both models rest on the principle of energy-consumption equivalence when traditional vehicles are replaced with EVs. Defined data are used in an example analysis of the two numerical case models, and the influence of factors such as the proportion of vehicle types and EV energy consumption on charging station layout and quantity is analyzed. The results show that the quantitative model of charging stations is reasonable and feasible. The number of EVs and their energy consumption have a more significant impact on the number of charging stations than the vehicle type proportion, which provides a basis for decision making on the construction and layout of charging stations in practice.

  18. Quantitative analysis of anaerobic oxidation of methane (AOM) in marine sediments: A modeling perspective

    Science.gov (United States)

    Regnier, P.; Dale, A. W.; Arndt, S.; LaRowe, D. E.; Mogollón, J.; Van Cappellen, P.

    2011-05-01

    Recent developments in the quantitative modeling of methane dynamics and anaerobic oxidation of methane (AOM) in marine sediments are critically reviewed. The first part of the review begins with a comparison of alternative kinetic models for AOM. The roles of bioenergetic limitations, intermediate compounds and biomass growth are highlighted. Next, the key transport mechanisms in multi-phase sedimentary environments affecting AOM and methane fluxes are briefly treated, while attention is also given to additional controls on methane and sulfate turnover, including organic matter mineralization, sulfur cycling and methane phase transitions. In the second part of the review, the structure, forcing functions and parameterization of published models of AOM in sediments are analyzed. The six-orders-of-magnitude range in rate constants reported for the widely used bimolecular rate law for AOM emphasizes the limited transferability of this simple kinetic model and, hence, the need for more comprehensive descriptions of the AOM reaction system. The derivation and implementation of more complete reaction models, however, are limited by the availability of observational data. In this context, we attempt to rank the relative benefits of potential experimental measurements that should help to better constrain AOM models. The last part of the review presents a compilation of reported depth-integrated AOM rates (ΣAOM). These rates reveal the extreme variability of ΣAOM in marine sediments. The model results are further used to derive quantitative relationships between ΣAOM and the magnitude of externally impressed fluid flow, as well as between ΣAOM and the depth of the sulfate-methane transition zone (SMTZ). This review contributes to an improved understanding of the global significance of the AOM process, and helps identify outstanding questions and future directions in the modeling of methane cycling and AOM in marine sediments.
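
    The bimolecular rate law mentioned above can be made concrete with a small worked example: the sketch below evaluates R_AOM = k[CH4][SO4^2-] over an assumed depth profile and integrates it to obtain a depth-integrated rate. The rate constant and concentration profiles are placeholders, not values from the review.

```python
# The widely used bimolecular rate law for anaerobic oxidation of methane,
# R_AOM = k * [CH4] * [SO4^2-], evaluated over an assumed depth profile.
# Rate constant and concentration profiles are illustrative placeholders;
# the review notes that reported k values span about six orders of magnitude.
import numpy as np

z = np.linspace(0.0, 2.0, 201)                      # depth below seafloor (m)
ch4 = 2.0 * np.clip((z - 0.8) / 1.2, 0.0, None)     # mM, increasing below the SMTZ
so4 = 28.0 * np.clip(1.0 - z / 1.0, 0.0, None)      # mM, depleted with depth

k = 1.0e-3                                           # (mM yr)^-1, assumed value
rate = k * ch4 * so4                                 # mM / yr at each depth

# Depth-integrated AOM rate (trapezoidal integration over the profile).
sigma_aom = np.trapz(rate, z)                        # mM·m/yr, i.e. mol m^-2 yr^-1
print(f"peak AOM rate {rate.max():.4f} mM/yr at z = {z[np.argmax(rate)]:.2f} m")
print(f"depth-integrated AOM = {sigma_aom:.4f} mM·m/yr")
```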

  19. The Framed Standard Model (II) - A first Test against Experiment

    CERN Document Server

    Chan, HM

    2015-01-01

    Apart from the qualitative features described in \\cite{chm}, the renormalization group equation derived for the rotation of the fermion mass matrices is amenable to quantitative study. The equation depends on a coupling and a fudge factor and, on integration, on 3 integration constants. Its application to data analysis, however, requires the input from experiment of the heaviest generation masses $m_t, m_b, m_\\tau, m_{\

  20. Cohesive mixed mode fracture modelling and experiments

    DEFF Research Database (Denmark)

    Walter, Rasmus; Olesen, John Forbes

    2008-01-01

    A nonlinear mixed mode model originally developed by Wernersson [Wernersson H. Fracture characterization of wood adhesive joints. Report TVSM-1006, Lund University, Division of Structural Mechanics; 1994], based on nonlinear fracture mechanics, is discussed and applied to model interfacial cracking...... in a steel–concrete interface. The model is based on the principles of Hillerborg's fictitious crack model; however, the Mode I softening description is modified to take into account the influence of shear. The model couples normal and shear stresses for a given combination of Mode I and II fracture...... curves, which may be interpreted using the nonlinear mixed mode model. The interpretation of test results is carried out in a two-step inverse analysis applying numerical optimization tools. It is demonstrated how to perform the inverse analysis, which couples the assumed individual experimental load...

  1. Quantitative modelling and analysis of a Chinese smart grid: a stochastic model checking case study

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    2014-01-01

    Cyber-physical systems integrate information and communication technology with the physical elements of a system, mainly for monitoring and controlling purposes. The conversion of traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues...... that require novel methods and applications. One of the important issues in this context is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese smart grid implementation as a case study and address the verification problem for performance and energy...

  2. Bone quantitative susceptibility mapping using a chemical species-specific R2* signal model with ultrashort and conventional echo data.

    Science.gov (United States)

    Dimov, Alexey V; Liu, Zhe; Spincemaille, Pascal; Prince, Martin R; Du, Jiang; Wang, Yi

    2017-03-05

    To develop quantitative susceptibility mapping (QSM) of bone using an ultrashort echo time (UTE) gradient echo (GRE) sequence for signal acquisition and a bone-specific effective transverse relaxation rate (R2*) to model water-fat MR signals for field mapping. Three-dimensional radial UTE data (echo times ≥ 40 μs) were acquired on a 3 Tesla scanner and fitted with a bone-specific signal model to map the chemical species and susceptibility field. Experiments were performed ex vivo on a porcine hoof and in vivo on healthy human subjects (n = 7). For water-fat separation, a bone-specific model assigning R2* decay mostly to water was compared with the standard models that assigned the same decay to both fat and water. In the ex vivo experiment, bone QSM was correlated with CT. Compared with standard models, the bone-specific R2* method significantly reduced errors in the fat fraction within the cortical bone in all tested data sets, leading to reduced artifacts in QSM. Good correlation was found between bone CT and QSM values in the porcine hoof (R² = 0.77). Bone QSM was successfully generated in all subjects. The QSM of bone is feasible using UTE with a conventional echo time GRE acquisition and a bone-specific R2* signal model. © 2017 International Society for Magnetic Resonance in Medicine.

  3. A Quantitative Model of Keyhole Instability Induced Porosity in Laser Welding of Titanium Alloy

    Science.gov (United States)

    Pang, Shengyong; Chen, Weidong; Wang, Wen

    2014-06-01

    Quantitative prediction of the porosity defects in deep penetration laser welding has generally been considered as a very challenging task. In this study, a quantitative model of porosity defects induced by keyhole instability in partial penetration CO2 laser welding of a titanium alloy is proposed. The three-dimensional keyhole instability, weld pool dynamics, and pore formation are determined by direct numerical simulation, and the results are compared to prior experimental results. It is shown that the simulated keyhole depth fluctuations could represent the variation trends in the number and average size of pores for the studied process conditions. Moreover, it is found that it is possible to use the predicted keyhole depth fluctuations as a quantitative measure of the average size of porosity. The results also suggest that due to the shadowing effect of keyhole wall humps, the rapid cooling of the surface of the keyhole tip before keyhole collapse could lead to a substantial decrease in vapor pressure inside the keyhole tip, which is suggested to be the mechanism by which shielding gas enters into the porosity.

  4. Three-dimensional modeling and quantitative analysis of gap junction distributions in cardiac tissue.

    Science.gov (United States)

    Lackey, Daniel P; Carruth, Eric D; Lasher, Richard A; Boenisch, Jan; Sachse, Frank B; Hitchcock, Robert W

    2011-11-01

    Gap junctions play a fundamental role in intercellular communication in cardiac tissue. Various types of heart disease including hypertrophy and ischemia are associated with alterations of the spatial arrangement of gap junctions. Previous studies applied two-dimensional optical and electron-microscopy to visualize gap junction arrangements. In normal cardiomyocytes, gap junctions were primarily found at cell ends, but can be found also in more central regions. In this study, we extended these approaches toward three-dimensional reconstruction of gap junction distributions based on high-resolution scanning confocal microscopy and image processing. We developed methods for quantitative characterization of gap junction distributions based on analysis of intensity profiles along the principal axes of myocytes. The analyses characterized gap junction polarization at cell ends and higher-order statistical image moments of intensity profiles. The methodology was tested in rat ventricular myocardium. Our analysis yielded novel quantitative data on gap junction distributions. In particular, the analysis demonstrated that the distributions exhibit significant variability with respect to polarization, skewness, and kurtosis. We suggest that this methodology provides a quantitative alternative to current approaches based on visual inspection, with applications in particular in characterization of engineered and diseased myocardium. Furthermore, we propose that these data provide improved input for computational modeling of cardiac conduction.

  5. Turbulent Boundary Layers - Experiments, Theory and Modelling

    Science.gov (United States)

    1980-01-01

    AGARD Conference Proceedings No. 271: Turbulent Boundary Layers - Experiments, Theory and Modelling (North Atlantic Treaty Organization / Organisation du Traité de l'Atlantique Nord); only fragmentary scanned text is available for this record.

  6. Nuclear reaction modeling, verification experiments, and applications

    Energy Technology Data Exchange (ETDEWEB)

    Dietrich, F.S.

    1995-10-01

    This presentation summarized the recent accomplishments and future promise of the neutron nuclear physics program at the Manuel Lujan Jr. Neutron Scatter Center (MLNSC) and the Weapons Neutron Research (WNR) facility. The unique capabilities of the spallation sources enable a broad range of experiments in weapons-related physics, basic science, nuclear technology, industrial applications, and medical physics.

  7. Multivariable wavelet finite element-based vibration model for quantitative crack identification by using particle swarm optimization

    Science.gov (United States)

    Zhang, Xingwu; Gao, Robert X.; Yan, Ruqiang; Chen, Xuefeng; Sun, Chuang; Yang, Zhibo

    2016-08-01

    Cracks are one of the crucial causes of structural failure. A methodology for quantitative crack identification is proposed in this paper based on the multivariable wavelet finite element method and particle swarm optimization. First, the structure with a crack is modeled by the multivariable wavelet finite element method (MWFEM) so that the first three natural frequencies for arbitrary crack conditions can be obtained; this is the forward problem. Second, the cracked structure is tested to obtain the first three natural frequencies by modal testing and advanced vibration signal processing. Then, the analyzed and measured first three natural frequencies are combined to obtain the location and size of the crack by using particle swarm optimization. Compared with the traditional wavelet finite element method, MWFEM achieves more accurate vibration analysis results because it interpolates all the solving variables at one time, which enables the MWFEM-based method to improve the accuracy of quantitative crack identification. Finally, the validity and superiority of the proposed method are verified by experiments on both a cantilever beam and a simply supported beam.
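
    The inverse step of the approach (searching for the crack location and size whose predicted natural frequencies match the measured ones) can be sketched with a generic particle swarm optimizer. In the example below the forward model is a crude stand-in for the MWFEM solver, and all frequencies, sensitivities and PSO settings are assumed for illustration.

```python
# Sketch of the inverse step: a particle swarm optimizer (PSO) searching for the
# crack location and size whose predicted natural frequencies match "measured"
# ones. The forward model is a crude stand-in for the MWFEM solver of the paper.
import numpy as np

rng = np.random.default_rng(1)

def forward_frequencies(location, size):
    """Placeholder forward model: first three natural frequencies of a beam with
    a crack at `location` (0..0.5, normalized distance from the nearer support)
    and relative depth `size` (0..1). All numbers are assumed."""
    base = np.array([50.0, 180.0, 390.0])                    # Hz, uncracked beam
    sens = np.array([np.sin(np.pi * location) ** 2,
                     np.sin(2.0 * np.pi * location) ** 2,
                     np.sin(3.0 * np.pi * location) ** 2])   # assumed mode sensitivities
    return base * (1.0 - 0.2 * size * sens)

true_loc, true_size = 0.35, 0.4
measured = forward_frequencies(true_loc, true_size)          # stands in for modal testing

def cost(x):
    return np.sum((forward_frequencies(x[0], x[1]) - measured) ** 2)

# Standard global-best PSO over (location, size).
lo, hi = np.array([0.0, 0.0]), np.array([0.5, 1.0])
n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(lo, hi, (n, 2))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    costs = np.array([cost(p) for p in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print(f"estimated crack location {gbest[0]:.2f} (true {true_loc}), "
      f"size {gbest[1]:.2f} (true {true_size})")
```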

  8. Mathematical Models for Quantitative Assessment of Bioluminescence Resonance Energy Transfer (BRET: Application to Seven Transmembrane Receptors (7TMRs Oligomerization

    Directory of Open Access Journals (Sweden)

    Luka eDrinovec

    2012-08-01

    Full Text Available The idea that seven transmembrane receptors (7TMRs; also designated G-protein coupled receptors (GPCRs)) might form dimers or higher order oligomeric complexes was formulated more than 20 years ago and has been intensively studied since then. In the last decade, bioluminescence resonance energy transfer (BRET) has been one of the most frequently used biophysical methods for studying 7TMR oligomerization. This technique enables monitoring of physical interactions between protein partners fused to donor and acceptor moieties in living cells. It relies on non-radiative transfer of energy between donor and acceptor, depending on their intermolecular distance (1–10 nm) and relative orientation. Results derived from BRET-based techniques are very persuasive; however, they need appropriate controls and critical interpretation. To overcome concerns about the specificity of BRET-derived results, a set of experiments has been proposed, including a negative control with a non-interacting receptor or protein, and BRET dilution, saturation and competition assays. This article presents the theoretical background behind BRET assays, then outlines mathematical models for quantitative interpretation of BRET saturation and competition assay results, gives examples of their utilization and discusses the possibilities of quantitative analysis of data generated with other RET-based techniques.
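
    A common way to analyse the BRET saturation assays mentioned above is to fit the BRET ratio as a hyperbolic function of the acceptor/donor ratio, characterized by BRETmax and BRET50. The sketch below shows such a fit on invented data; it is a generic illustration rather than the specific models derived in the article.

```python
# Sketch of a BRET saturation assay analysis: a specific (hyperbolic) component
# rising to BRETmax with half-saturation BRET50, omitting any nonspecific linear
# term. Data points and starting values are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def bret_saturation(ad_ratio, bret_max, bret50):
    return bret_max * ad_ratio / (bret50 + ad_ratio)

# Hypothetical acceptor/donor expression ratios and measured BRET ratios.
ad = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
bret = np.array([0.02, 0.05, 0.08, 0.11, 0.14, 0.155, 0.165])

popt, pcov = curve_fit(bret_saturation, ad, bret, p0=[0.2, 1.0])
bret_max, bret50 = popt
print(f"BRETmax = {bret_max:.3f}, BRET50 = {bret50:.2f}")
```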

  9. Quantitative model for the generic 3D shape of ICMEs at 1 AU

    Science.gov (United States)

    Démoulin, P.; Janvier, M.; Masías-Meza, J. J.; Dasso, S.

    2016-10-01

    Context. Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs), while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, that is, along a 1D cut. The data therefore only give a partial view of the 3D structures of ICMEs. Aims: By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. Methods: In a first approach we theoretically obtained the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compared their compatibility with observed distributions. In a second approach we used the shock normal and the flux rope axis orientations together with the impact parameter to provide statistical information across the spacecraft trajectory. Results: The study of different 3D shock models shows that the observations are compatible with a shock that is symmetric around the Sun-apex line as well as with an asymmetry up to an aspect ratio of around 3. Moreover, flat or dipped shock surfaces near their apex can only be rare cases. Next, the sheath thickness and the ICME velocity have no global trend along the ICME front. Finally, regrouping all these new results and those of our previous articles, we provide a quantitative ICME generic 3D shape, including the global shape of the shock, the sheath, and the flux rope. Conclusions: The obtained quantitative generic ICME shape will have implications for several aims. For example, it constrains the output of typical ICME numerical simulations. It is also a base for studying the transport of high-energy solar and cosmic particles during an ICME propagation as well as for modeling and forecasting space weather conditions near Earth.

  10. Experiments and modelling on vertical flame spread

    Energy Technology Data Exchange (ETDEWEB)

    Keski-Rahkonen, O.; Mangs, J. [VTT Building and Transport, Espoo (Finland)

    2004-07-01

    The principle and some preliminary results of a new vertical flame spread modelling effort are shown. Quick experimental screenings of relevant phenomena are made, some models are evaluated, and a new set of needed measuring instruments is proposed. Finally, a single example of an FRNC cable is shown as an application of the methods. (orig.)

  11. Impact Assessment of Abiotic Resources in LCA: Quantitative Comparison of Selected Characterization Models

    DEFF Research Database (Denmark)

    Rørbech, Jakob Thaysen; Vadenbo, Carl; Hellweg, Stefanie

    2014-01-01

    Resources have received significant attention in recent years resulting in development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment...... results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247...... groups, according to method focus and modeling approach, to aid method selection within LCA....

  12. Multi-factor models and signal processing techniques application to quantitative finance

    CERN Document Server

    Darolles, Serges; Jay, Emmanuelle

    2013-01-01

    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices.This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere

  13. An equivalent magnetic dipoles model for quantitative damage recognition of broken wire

    Institute of Scientific and Technical Information of China (English)

    TAN Ji-wen; ZHAN Wei-xia; LI Chun-jing; WEN Yan; SHU Jie

    2005-01-01

    By representing a saturation-magnetized wire rope as magnetic dipoles of equal magnetic field strength, an equivalent magnetic dipoles model is developed and the measuring principle for recognizing broken-wire damage is presented. The relevant calculation formulas are also deduced, and a composite solution method for the nonlinear optimization is given. An example illustrates the use of the equivalent magnetic dipoles method for quantitative damage recognition and demonstrates that its results are consistent with the real situation, so the method is valid and practical.

  14. Model development for quantitative evaluation of proliferation resistance of nuclear fuel cycles

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Won Il; Kim, Ho Dong; Yang, Myung Seung

    2000-07-01

    This study addresses the quantitative evaluation of proliferation resistance, which is an important factor of alternative nuclear fuel cycle systems. A model was developed to quantitatively evaluate the proliferation resistance of nuclear fuel cycles. The proposed models were then applied to the Korean environment as a sample study to provide better references for the determination of the future nuclear fuel cycle system in Korea. In order to quantify the proliferation resistance of a nuclear fuel cycle, a proliferation resistance index was defined by analogy with an electrical circuit with an electromotive force and various electrical resistance components. The analysis of the proliferation resistance of nuclear fuel cycles has shown that the resistance index as defined herein can be used as an international measure of the relative risk of nuclear proliferation if the motivation index is appropriately defined. It has also shown that the proposed model can include political as well as technical issues relevant to proliferation resistance, and can consider all facilities and activities in a specific nuclear fuel cycle (from mining to disposal). In addition, sensitivity analyses in the sample study indicate that the direct disposal option in a country with high nuclear propensity may give rise to a higher risk of nuclear proliferation than the reprocessing option in a country with low nuclear propensity.
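
    The electrical-circuit analogy described above can be illustrated with a toy calculation in which the motivation index plays the role of the electromotive force and the individual barriers act as series resistances. The barrier names and all numerical values below are invented; only the Ohm's-law structure of the index follows the abstract.

```python
# Toy illustration of a proliferation-resistance index defined by analogy with
# an electrical circuit: a "motivation" acts like an electromotive force and
# technical/institutional barriers act like series resistances, so the implied
# "proliferation current" falls as total resistance rises. All values invented.
barriers = {
    "material attractiveness": 4.0,
    "safeguards and accounting": 3.5,
    "facility accessibility": 2.5,
    "technical difficulty of diversion": 5.0,
}

motivation_index = 1.2  # plays the role of the electromotive force

total_resistance = sum(barriers.values())
proliferation_current = motivation_index / total_resistance  # Ohm's-law analogy
resistance_index = 1.0 / proliferation_current

print(f"total resistance = {total_resistance:.1f}")
print(f"relative proliferation risk = {proliferation_current:.3f}")
print(f"proliferation resistance index = {resistance_index:.2f}")
```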

  15. Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao

    2015-08-01

    Meta-analysis of genetic data must account for differences among studies including study designs, markers genotyped, and covariates. The effects of genetic variants may differ from population to population, i.e., heterogeneity. Thus, meta-analysis of combining data of multiple studies is difficult. Novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies.

  16. WOMBAT: a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML).

    Science.gov (United States)

    Meyer, Karin

    2007-11-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear, mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from (http://agbu.une.edu.au/~kmeyer/wombat.html).

  17. Modeling of microfluidic microbial fuel cells using quantitative bacterial transport parameters

    Science.gov (United States)

    Mardanpour, Mohammad Mahdi; Yaghmaei, Soheila; Kalantar, Mohammad

    2017-02-01

    The objective of the present study is to analyze the dynamic modeling of bioelectrochemical processes and to improve the performance of previous models using quantitative data on bacterial transport parameters. The main deficiency of previous MFC models concerning the spatial distribution of biocatalysts is the assumption of an initial distribution of attached/suspended bacteria on the electrode or in the anolyte bulk, which is the foundation for biofilm formation. To remedy this imperfection, chemotactic motility is quantified numerically in order to understand how suspended microorganisms distribute in the anolyte and/or attach to the anode surface to extend the biofilm. The spatial and temporal distributions of the bacteria, as well as the dynamic behavior of the anolyte and biofilm, are simulated. The performance of the microfluidic MFC as a chemotaxis assay is assessed by analyzing bacterial activity, substrate variation, bioelectricity production rate and the influence of the external resistance on the biofilm and anolyte features.

  18. Quantitative Hydraulic Models Of Early Land Plants Provide Insight Into Middle Paleozoic Terrestrial Paleoenvironmental Conditions

    Science.gov (United States)

    Wilson, J. P.; Fischer, W. W.

    2010-12-01

    Fossil plants provide useful proxies of Earth’s climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves--even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 / MPa * s, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptations for transport safety, placing the plant’s vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon’s tracheids suggests that environmental conditions of reduced relative

  19. Toxicity challenges in environmental chemicals: Prediction of human plasma protein binding through quantitative structure-activity relationship (QSAR) models

    Science.gov (United States)

    The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...

  20. Modeling Morphogenesis in silico and in vitro: Towards Quantitative, Predictive, Cell-based Modeling

    NARCIS (Netherlands)

    R.M.H. Merks (Roeland); P. Koolwijk

    2009-01-01

    Cell-based, mathematical models help make sense of morphogenesis—i.e. cells organizing into shape and pattern—by capturing cell behavior in simple, purely descriptive models. Cell-based models then predict the tissue-level patterns the cells produce collectively. The first

  1. Model Experiments for the Determination of Airflow in Large Spaces

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    Model experiments are one of the methods used for the determination of airflow in large spaces. This paper will discuss the formation of the governing dimensionless numbers. It is shown that experiments with a reduced scale often will necessitate a fully developed turbulence level of the flow....... Details of the flow from supply openings are very important for the determination of room air distribution. It is in some cases possible to make a simplified supply opening for the model experiment....

  2. Quantitative computational models of molecular self-assembly in systems biology

    Science.gov (United States)

    Thomas, Marcus; Schwartz, Russell

    2017-06-01

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.

  3. Development and assessment of quantitative structure-activity relationship models for bioconcentration factors of organic pollutants

    Institute of Scientific and Technical Information of China (English)

    QIN Hong; CHEN JingWen; WANG Ying; WANG Bin; LI XueHua; LI Fei; WANG YaNan

    2009-01-01

    Bioconcentration factors (BCFs) are of great importance for ecological risk assessment of organic chemicals. In this study, a quantitative structure-activity relationship (QSAR) model for fish BCFs of 8 groups of compounds was developed employing partial least squares (PLS) regression, based on linear solvation energy relationship (LSER) theory and theoretical molecular structural descriptors. The guidelines for development and validation of QSAR models proposed by the Organization for Economic Co-operation and Development (OECD) were followed. The model results show that the main factors governing logBCF are Connolly molecular area (CMA), average molecular polarizability (α) and molecular weight (Mw). Thus molecular size plays a critical role in affecting the bioconcentration of organic pollutants in fish. For the established model, the squared multiple correlation coefficient R²Y = 0.868, the root mean square error (RMSE) = 0.553 log units, and the leave-many-out cross-validated Q²CUM = 0.860, indicating good goodness-of-fit and robustness. The model predictivity was evaluated by external validation, with the external explained variance Q²EXT = 0.755 and RMSE = 0.647 log units. Moreover, the applicability domain of the developed model was assessed and visualized by the Williams plot. The developed QSAR model can be used to predict fish logBCF for organic chemicals within the application domain.
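
    A minimal sketch of this kind of PLS-based QSAR workflow is shown below: a PLS regression of logBCF on the three descriptors named in the abstract, followed by external validation reporting R² and RMSE. The data are synthetic and scikit-learn's PLSRegression is used as a stand-in for whatever software the authors employed.

```python
# Sketch of a PLS-based QSAR workflow: regress logBCF on molecular descriptors
# and report R^2 and RMSE on held-out data. The descriptor names follow the
# abstract (CMA, polarizability, molecular weight); all data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)
n = 120
X = np.column_stack([
    rng.normal(250, 60, n),    # Connolly molecular area (CMA), arbitrary units
    rng.normal(15, 4, n),      # average molecular polarizability
    rng.normal(220, 50, n),    # molecular weight
])
# Synthetic response: logBCF loosely increasing with molecular size plus noise.
y = 0.004 * X[:, 0] + 0.05 * X[:, 1] + 0.002 * X[:, 2] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=2).fit(X_tr, y_tr)
y_pred = pls.predict(X_te).ravel()

print(f"external R^2  = {r2_score(y_te, y_pred):.3f}")
print(f"external RMSE = {np.sqrt(mean_squared_error(y_te, y_pred)):.3f} log units")
```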

  4. Quantitative ultrasound molecular imaging by modeling the binding kinetics of targeted contrast agent.

    Science.gov (United States)

    Turco, Simona; Tardy, Isabelle; Frinking, Peter; Wijkstra, Hessel; Mischi, Massimo

    2017-03-21

    Ultrasound molecular imaging (USMI) is an emerging technique to monitor diseases at the molecular level by the use of novel targeted ultrasound contrast agents (tUCA). These consist of microbubbles functionalized with targeting ligands with high-affinity for molecular markers of specific disease processes, such as cancer-related angiogenesis. Among the molecular markers of angiogenesis, the vascular endothelial growth factor receptor 2 (VEGFR2) is recognized to play a major role. In response, the clinical-grade tUCA BR55 was recently developed, consisting of VEGFR2-targeting microbubbles which can flow through the entire circulation and accumulate where VEGFR2 is over-expressed, thus causing selective enhancement in areas of active angiogenesis. Discrimination between bound and free microbubbles is crucial to assess cancer angiogenesis. Currently, this is done non-quantitatively by looking at the late enhancement, about 10 min after injection, or by calculation of the differential targeted enhancement, requiring the application of a high-pressure ultrasound (US) burst to destroy all the microbubbles in the acoustic field and isolate the signal coming only from bound microbubbles. In this work, we propose a novel method based on mathematical modeling of the binding kinetics during the tUCA first pass, thus reducing the acquisition time and with no need for a destructive US burst. Fitting time-intensity curves measured with USMI by the proposed model enables the assessment of cancer angiogenesis at both the vascular and molecular levels. This is achieved by estimation of quantitative parameters related to the microvascular architecture and microbubble binding. The proposed method was tested in 11 prostate-tumor bearing rats by performing USMI after injection of BR55, and showed good agreement with current USMI methods. The novel information provided by the proposed method, possibly combined with the current non-quantitative methods, may bring deeper insight into

  6. Modelling plant interspecific interactions from experiments of perennial crop mixtures to predict optimal combinations.

    Science.gov (United States)

    Halty, Virginia; Valdés, Matías; Tejera, Mauricio; Picasso, Valentín; Fort, Hugo

    2017-07-28

    The contribution of plant species richness to productivity and ecosystem functioning is a long-standing issue in Ecology, with relevant implications for both conservation and agriculture. Both experiments and quantitative modelling are fundamental to the design of sustainable agroecosystems and the optimization of crop production. We modelled communities of perennial crop mixtures by using a generalized Lotka-Volterra model, i.e. a model in which the interspecific interactions are more general than purely competitive. We estimated the model parameters (carrying capacities and interaction coefficients) from, respectively, the observed biomass of monocultures and bicultures measured in a large diversity experiment of seven perennial forage species in Iowa, United States. The sign and absolute value of the interaction coefficients showed that the biological interactions between species pairs included amensalism, competition, and parasitism (asymmetric positive-negative interaction), with various degrees of intensity. We tested the model fit by simulating combinations of more than two species and comparing them with the polyculture experimental data. Overall, theoretical predictions are in good agreement with the experiments. Using this model, we also simulated species combinations that were not sown. From all possible mixtures (sown and not sown) we identified the most productive species combinations. Our results demonstrate that a combination of experiments and modelling can contribute to the design of sustainable agricultural systems in general and to the optimization of crop production in particular. This article is protected by copyright. All rights reserved.
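
    A minimal sketch of the modelling approach is given below: a generalized Lotka-Volterra system parameterized by carrying capacities (as would come from monocultures) and pairwise interaction coefficients (as would come from bicultures), integrated forward to predict mixture biomass. All parameter values are illustrative, not the estimates from the Iowa experiment.

```python
# Sketch of predicting mixture biomass with a generalized Lotka-Volterra model.
# Carrying capacities K (as estimated from monocultures) and the interaction
# matrix B (off-diagonals as estimated from bicultures) are illustrative numbers.
import numpy as np
from scipy.integrate import solve_ivp

K = np.array([6.0, 4.5, 3.0])        # carrying capacities, e.g. Mg/ha (assumed)
r = np.array([1.0, 0.8, 1.2])        # intrinsic growth rates per year (assumed)

# B[i, j] is the per-capita effect of species j on species i, relative to
# self-limitation (diagonal = 1). Positive off-diagonals mean competition,
# negative ones facilitation; asymmetric pairs give parasitism-like interactions.
B = np.array([
    [1.0,  0.4, -0.1],
    [0.6,  1.0,  0.2],
    [0.0,  0.3,  1.0],
])

def glv(t, x):
    # In monoculture this reduces to logistic growth towards K_i.
    return r * x * (1.0 - (B @ x) / K)

sol = solve_ivp(glv, (0.0, 50.0), [0.1, 0.1, 0.1])
final = sol.y[:, -1]
print("predicted mixture biomass per species:", np.round(final, 2))
print(f"predicted total biomass: {final.sum():.2f}")
```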

  7. Silicon Carbide Derived Carbons: Experiments and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kertesz, Miklos [Georgetown University, Washington DC 20057

    2011-02-28

    The main results of the computational modeling were: 1. Development of a new genealogical algorithm to generate vacancy clusters in diamond starting from monovacancies, combined with energy criteria based on TBDFT energetics. The method revealed that for smaller vacancy clusters the energetically optimal shapes are compact, but larger clusters tend to show graphitized regions; in fact, clusters as small as 12 vacancies already show signatures of this graphitization. The modeling gives a firm basis for the slit-pore modeling of porous carbon materials and explains some of their properties. 2. We discovered small vacancy clusters and their physical characteristics that can be used to identify them spectroscopically. 3. We found low-barrier pathways for vacancy migration in diamond-like materials by obtaining, for the first time, optimized reaction pathways.

  8. Quantitative evaluation and modeling of two-dimensional neovascular network complexity: the surface fractal dimension

    Directory of Open Access Journals (Sweden)

    Franceschini Barbara

    2005-02-01

    Full Text Available Background: Modeling the complex development and growth of tumor angiogenesis using mathematics and biological data is a burgeoning area of cancer research. Architectural complexity is the main feature of every anatomical system, including organs, tissues, cells and sub-cellular entities. The vascular system is a complex network whose geometrical characteristics cannot be properly defined using the principles of Euclidean geometry, which is only capable of interpreting regular and smooth objects that are almost impossible to find in Nature. However, fractal geometry is a more powerful means of quantifying the spatial complexity of real objects. Methods: This paper introduces the surface fractal dimension (Ds) as a numerical index of the two-dimensional (2-D) geometrical complexity of tumor vascular networks, and of their behavior during computer-simulated changes in vessel density and distribution. Results: We show that Ds significantly depends on the number of vessels and their pattern of distribution. This demonstrates that the quantitative evaluation of the 2-D geometrical complexity of tumor vascular systems can be useful not only to measure their complex architecture, but also to model their development and growth. Conclusions: Studying the fractal properties of neovascularity induces reflections upon the real significance of the complex form of branched anatomical structures, in an attempt to define more appropriate methods of describing them quantitatively. This knowledge can be used to predict the aggressiveness of malignant tumors and to design compounds that can halt the process of angiogenesis and influence tumor growth.
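
    Surface fractal dimensions of binary 2-D patterns are commonly estimated by box counting. The sketch below applies a box-counting estimate to a synthetic random-walk pattern standing in for a vascular network; it illustrates the general idea rather than the specific procedure of the article.

```python
# Box-counting estimate of the fractal dimension of a 2-D binary pattern,
# in the spirit of quantifying the geometric complexity of a vascular network.
# The "network" below is a synthetic random walk, used only for illustration.
import numpy as np

rng = np.random.default_rng(3)

# Build a synthetic binary image from a 2-D random walk ("vessel" pixels = 1).
size = 256
img = np.zeros((size, size), dtype=bool)
x = y = size // 2
for _ in range(20000):
    img[x % size, y % size] = True
    x += rng.integers(-1, 2)
    y += rng.integers(-1, 2)

def box_count(image, box):
    """Number of boxes of side `box` that contain at least one foreground pixel."""
    s = image.shape[0] // box * box
    blocks = image[:s, :s].reshape(s // box, box, s // box, box)
    return np.count_nonzero(blocks.any(axis=(1, 3)))

boxes = np.array([2, 4, 8, 16, 32, 64])
counts = np.array([box_count(img, b) for b in boxes])

# Slope of log(count) vs log(1/box size) gives the box-counting dimension.
D = np.polyfit(np.log(1.0 / boxes), np.log(counts), 1)[0]
print(f"estimated box-counting dimension D = {D:.2f}")
```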

  9. Energy-dependent fitness: a quantitative model for the evolution of yeast transcription factor binding sites.

    Science.gov (United States)

    Mustonen, Ville; Kinney, Justin; Callan, Curtis G; Lässig, Michael

    2008-08-26

    We present a genomewide cross-species analysis of regulation for broad-acting transcription factors in yeast. Our model for binding site evolution is founded on biophysics: the binding energy between transcription factor and site is a quantitative phenotype of regulatory function, and selection is given by a fitness landscape that depends on this phenotype. The model quantifies conservation, as well as loss and gain, of functional binding sites in a coherent way. Its predictions are supported by direct cross-species comparison between four yeast species. We find ubiquitous compensatory mutations within functional sites, such that the energy phenotype and the function of a site evolve in a significantly more constrained way than does its sequence. We also find evidence for substantial evolution of regulatory function involving point mutations as well as sequence insertions and deletions within binding sites. Genes lose their regulatory link to a given transcription factor at a rate similar to the neutral point mutation rate, from which we infer a moderate average fitness advantage of functional over nonfunctional sites. In a wider context, this study provides an example of inference of selection acting on a quantitative molecular trait.

  10. A genome-screen experiment to detect quantitative trait loci affecting resistance to facial eczema disease in sheep.

    Science.gov (United States)

    Phua, S H; Dodds, K G; Morris, C A; Henry, H M; Beattie, A E; Garmonsway, H G; Towers, N R; Crawford, A M

    2009-02-01

    Facial eczema (FE) is a secondary photosensitization disease arising from liver cirrhosis caused by the mycotoxin sporidesmin. The disease affects sheep, cattle, deer and goats, and costs the New Zealand sheep industry alone an estimated NZ$63M annually. A long-term sustainable solution to this century-old FE problem is to breed for disease-resistant animals by marker-assisted selection. As a step towards finding a diagnostic DNA test for FE sensitivity, we have conducted a genome-scan experiment to screen for quantitative trait loci (QTL) affecting this trait in Romney sheep. Four F(1) sires, obtained from reciprocal matings of FE resistant and susceptible selection-line animals, were used to generate four outcross families. The resulting half-sib progeny were artificially challenged with sporidesmin to phenotype their FE traits measured in terms of their serum levels of liver-specific enzymes, namely gamma-glutamyl transferase and glutamate dehydrogenase. In a primary screen using selective genotyping on extreme progeny of each family, a total of 244 DNA markers uniformly distributed over all 26 ovine autosomes (with an autosomal genome coverage of 79-91%) were tested for linkage to the FE traits. Data were analysed using Haley-Knott regression. The primary screen detected one significant and one suggestive QTL on chromosomes 3 and 8 respectively. Both the significant and suggestive QTL were followed up in a secondary screen where all progeny were genotyped and analysed; the QTL on chromosome 3 was significant in this analysis.

  11. Quantitative studies and taste re-engineering experiments toward the decoding of the nonvolatile sensometabolome of Gouda cheese.

    Science.gov (United States)

    Toelstede, Simone; Hofmann, Thomas

    2008-07-09

    The first comprehensive quantitative determination of 49 putative taste-active metabolites and mineral salts in 4- and 44-week-ripened Gouda cheese, respectively, has been performed; the ranking of these compounds in their sensory impact based on dose-over-threshold (DoT) factors, followed by the confirmation of their sensory relevance by taste reconstruction and omission experiments enabled the decoding of the nonvolatile sensometabolome of Gouda cheese. The bitterness of the cheese matured for 44 weeks was found to be induced by CaCl2 and MgCl2, as well as various bitter-tasting free amino acids, whereas bitter peptides were found to influence more the bitterness quality rather than the bitter intensity of the cheese. The DoT factors determined for the individual bitter peptides gave strong evidence that their sensory contribution is mainly due to the decapeptide YPFPGPIHNS and the nonapeptides YPFPGPIPN and YPFPGPIHN, assigned to the casein sequences beta-CN(60-69) and beta-CN(60-68), respectively, as well as the tetrapeptide LPQE released from alphas1-CN(11-14). Lactic acid and hydrogen phosphate were identified to play the key role for the sourness of Gouda cheese, whereas umami taste was found to be due to monosodium L-glutamate and sodium lactate. Moreover, saltiness was induced by sodium chloride and sodium phosphate and was demonstrated to be significantly enhanced by L-arginine.

  12. Quantitative assessment of changes in landslide risk using a regional scale run-out model

    Science.gov (United States)

    Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone

    2015-04-01

    The risk of landslide hazard continuously changes in time and space and is rarely a static or constant phenomenon in an affected area. However, one of the main challenges of quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires the modeling of the landslide intensity (e.g. flow depth, velocities or impact pressures) affecting the elements at risk. Such a quantitative approach is often lacking in medium to regional scale studies in the scientific literature or is left out altogether. In this research we modelled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the North Eastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional scale model. In order to determine debris flow intensities, we used a linear relationship found between back-calibrated, physically based Flo-2D simulations (local scale models of five debris flows from 2003) and the probability values of the Flow-R software. This allowed us to assign flow depth to a total of 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature, and one curve derived specifically for our case study area, were used to determine the damage for the different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) by the building area and number of floors.
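
    The risk chain described above (scenario probability, vulnerability at the modelled intensity, and exposed value) can be illustrated with a toy calculation. In the sketch below the vulnerability curve, building values, flow depths and scenario probabilities are all invented; only the structure of the computation follows the abstract.

```python
# Toy version of the quantitative risk chain: annual scenario probability x
# vulnerability(flow depth) x exposed building value, summed over buildings
# and scenarios. The vulnerability curve and all numbers are invented.
import numpy as np

def vulnerability(flow_depth_m):
    """Assumed debris-flow vulnerability curve: degree of loss in [0, 1]."""
    return np.clip(1.0 - np.exp(-0.9 * flow_depth_m), 0.0, 1.0)

# Hypothetical buildings: market value (EUR) and modelled flow depth (m)
# for two hazard scenarios with different annual probabilities.
building_values = np.array([250_000.0, 400_000.0, 180_000.0])
depth_scenario = {
    0.01:  np.array([0.5, 1.2, 0.0]),   # 1/100-year event
    0.002: np.array([1.5, 2.4, 0.8]),   # 1/500-year event
}

annual_risk = 0.0
for p_annual, depths in depth_scenario.items():
    loss = np.sum(vulnerability(depths) * building_values)
    annual_risk += p_annual * loss
    print(f"scenario p={p_annual}: expected loss EUR {loss:,.0f}")

print(f"total annualized risk: EUR {annual_risk:,.0f}")
```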

  13. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    Energy Technology Data Exchange (ETDEWEB)

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study was comprised of four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.

  14. Evaporation experiments and modelling for glass melts

    NARCIS (Netherlands)

    Limpt, J.A.C. van; Beerkens, R.G.C.

    2007-01-01

    A laboratory test facility has been developed to measure evaporation rates of different volatile components from commercial and model glass compositions. In the set-up the furnace atmosphere, temperature level, gas velocity and batch composition are controlled. Evaporation rates have been measured

  15. Experiments with the Danish Eulerian Model

    DEFF Research Database (Denmark)

    Zlatev, Z.; Dimov, I.; Georgiev, K.

    1995-01-01

    Contains the materials of the school for young scientists, which were available before the BIOMATH-95 Conference (International Symposium and Young Scientists School on Mathematical Modelling and Information Systems in Biology, Ecology and Medicine, Sofia, Bulgaria, August 23-27, 1995). The abstracts of the BIOMATH-95 conference are published in a separate volume.

  17. Modelling and Quantitative Analysis of LTRACK–A Novel Mobility Management Algorithm

    Directory of Open Access Journals (Sweden)

    Benedek Kovács

    2006-01-01

    This paper discusses the improvements and parameter optimization issues of LTRACK, a recently proposed mobility management algorithm. Mathematical modelling of the algorithm and the behavior of the Mobile Node (MN) are used to optimize the parameters of LTRACK. A numerical method is given to determine the optimal values of the parameters. Markov chains are used to model both the base algorithm and the so-called loop removal effect. An extended qualitative and quantitative analysis is carried out to compare LTRACK to existing handover mechanisms such as MIP, Hierarchical Mobile IP (HMIP), Dynamic Hierarchical Mobility Management Strategy (DHMIP), Telecommunication Enhanced Mobile IP (TeleMIP), Cellular IP (CIP) and HAWAII. LTRACK is sensitive to network topology and MN behavior so MN movement modelling is also introduced and discussed with different topologies. The techniques presented here can not only be used to model the LTRACK algorithm, but other algorithms too. There are many discussions and calculations to support our mathematical model to prove that it is adequate in many cases. The model is valid on various network levels, scalable vertically in the ISO-OSI layers and also scales well with the number of network elements.
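
    The central tool named in the abstract, a discrete-time Markov chain over mobile-node states, can be illustrated with a generic stationary-distribution and cost calculation. The three states, transition probabilities and per-state signalling costs below are invented for illustration; they are not the LTRACK chain from the paper.

```python
import numpy as np

# Generic illustration: stationary distribution of a small discrete-time
# Markov chain, the kind of object used to weigh the signalling cost of a
# mobility-management scheme. States and probabilities are invented.
P = np.array([
    [0.90, 0.08, 0.02],   # idle     -> idle / tracked handover / location update
    [0.50, 0.45, 0.05],   # tracked  -> ...
    [0.70, 0.20, 0.10],   # updating -> ...
])

# The stationary distribution pi solves pi P = pi with sum(pi) = 1:
# take the left eigenvector of P associated with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

cost_per_state = np.array([0.0, 1.0, 5.0])   # assumed signalling cost per step
print("stationary distribution:", np.round(pi, 3))
print("mean signalling cost per step:", float(pi @ cost_per_state))
```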

  18. Quantitative Circulatory Physiology: an integrative mathematical model of human physiology for medical education.

    Science.gov (United States)

    Abram, Sean R; Hodnett, Benjamin L; Summers, Richard L; Coleman, Thomas G; Hester, Robert L

    2007-06-01

    We have developed Quantitative Circulatory Physiology (QCP), a mathematical model of integrative human physiology containing over 4,000 variables of biological interactions. This model provides a teaching environment that mimics clinical problems encountered in the practice of medicine. The model structure is based on documented physiological responses within peer-reviewed literature and serves as a dynamic compendium of physiological knowledge. The model is solved using a desktop, Windows-based program, allowing students to calculate time-dependent solutions and interactively alter over 750 parameters that modify physiological function. The model can be used to understand proposed mechanisms of physiological function and the interactions among physiological variables that may not be otherwise intuitively evident. In addition to open-ended or unstructured simulations, we have developed 30 physiological simulations, including heart failure, anemia, diabetes, and hemorrhage. Additional simulations include 29 patients in which students are challenged to diagnose the pathophysiology based on their understanding of integrative physiology. In summary, QCP allows students to examine, integrate, and understand a host of physiological factors without causing harm to patients. This model is available as a free download for Windows computers at http://physiology.umc.edu/themodelingworkshop.

  19. An integrated qualitative and quantitative modeling framework for computer‐assisted HAZOP studies

    DEFF Research Database (Denmark)

    Wu, Jing; Zhang, Laibin; Hu, Jinqiu

    2014-01-01

    The multilevel flow modeling (MFM) methodology is used to represent the plant goals and functions. First, means-end analysis is used to identify and formulate the intention of the process design in terms of components, functions ... safety critical operations, its causes and consequences. The outcome is a qualitative hazard analysis of selected process deviations from normal operations and their consequences as input to a traditional HAZOP table. The list of unacceptable high risk deviations identified by the qualitative HAZOP analysis is used as input for rigorous analysis and evaluation by the quantitative analysis part of the framework. To this end, dynamic first-principles modeling is used to simulate the system behavior and thereby complement the results of the qualitative analysis part. The practical framework for computer-assisted HAZOP studies is ... and validated on a case study concerning a three-phase separation process.

  20. A system for quantitative morphological measurement and electronic modelling of neurons: three-dimensional reconstruction.

    Science.gov (United States)

    Stockley, E W; Cole, H M; Brown, A D; Wheal, H V

    1993-04-01

    A system for accurately reconstructing neurones from optical sections taken at high magnification is described. Cells are digitised on a 68000-based microcomputer to form a database consisting of a series of linked nodes each consisting of x, y, z coordinates and an estimate of dendritic diameter. This database is used to generate three-dimensional (3-D) displays of the neurone and allows quantitative analysis of the cell volume, surface area and dendritic length. Images of the cell can be manipulated locally or transferred to an IBM 3090 mainframe where a wireframe model can be displayed on an IBM 5080 graphics terminal and rotated interactively in real time, allowing visualisation of the cell from all angles. Space-filling models can also be produced. Reconstructions can also provide morphological data for passive electrical simulations of hippocampal pyramidal cells.
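
    Given the node database described above (linked x, y, z coordinates plus a diameter estimate), dendritic length, surface area and volume follow from summing over the segments between connected nodes. The sketch below treats each segment as a truncated cone, which is one common convention; the system's actual formulas may differ, and the coordinates are made up.

```python
import math

# Each node: xyz position (micrometres), diameter estimate, and a parent link.
# Segments run between a node and its parent; values below are invented.
nodes = {
    1: dict(xyz=(0.0, 0.0, 0.0), diam=3.0, parent=None),
    2: dict(xyz=(10.0, 0.0, 0.0), diam=2.0, parent=1),
    3: dict(xyz=(18.0, 5.0, 0.0), diam=1.2, parent=2),
}

total_len = total_area = total_vol = 0.0
for node in nodes.values():
    if node["parent"] is None:
        continue
    parent = nodes[node["parent"]]
    L = math.dist(node["xyz"], parent["xyz"])
    r1, r2 = parent["diam"] / 2.0, node["diam"] / 2.0
    slant = math.sqrt(L * L + (r1 - r2) ** 2)
    total_len += L
    total_area += math.pi * (r1 + r2) * slant                      # lateral area of a frustum
    total_vol += math.pi * L * (r1 * r1 + r1 * r2 + r2 * r2) / 3.0  # frustum volume

print(f"dendritic length {total_len:.1f} um, surface {total_area:.1f} um^2, "
      f"volume {total_vol:.1f} um^3")
```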

  1. Quantitative model for the generic 3D shape of ICMEs at 1 AU

    CERN Document Server

    Démoulin, P; Masías-Meza, J J; Dasso, S

    2016-01-01

    Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs) while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, so along a 1D cut. As such, the data only give a partial view of their 3D structures. By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. In a first approach we theoretically obtain the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compare their compatibility with observed distributions. In a second approach we use the shock normal and the flux rope axis orientations, as well as the impact parameter, to provide statistical information across the spacecraft trajectory. The study of different 3D shock models shows that the observations are compatible with a ...

  2. Model exploration and analysis for quantitative safety refinement in probabilistic B

    CERN Document Server

    Ndukwu, Ukachukwu; 10.4204/EPTCS.55.7

    2011-01-01

    The role played by counterexamples in standard system analysis is well known; but less common is a notion of counterexample in probabilistic systems refinement. In this paper we extend previous work using counterexamples to inductive invariant properties of probabilistic systems, demonstrating how they can be used to extend the technique of bounded model checking-style analysis for the refinement of quantitative safety specifications in the probabilistic B language. In particular, we show how the method can be adapted to cope with refinements incorporating probabilistic loops. Finally, we demonstrate the technique on pB models summarising a one-step refinement of a randomised algorithm for finding the minimum cut of undirected graphs, and that for the dependability analysis of a controller design.

  3. Partial least squares modeling and genetic algorithm optimization in quantitative structure-activity relationships.

    Science.gov (United States)

    Hasegawa, K; Funatsu, K

    2000-01-01

    Quantitative structure-activity relationship (QSAR) studies based on chemometric techniques are reviewed. Partial least squares (PLS) is introduced as a novel robust method to replace classical methods such as multiple linear regression (MLR). Advantages of PLS compared to MLR are illustrated with typical applications. Genetic algorithm (GA) is a novel optimization technique which can be used as a search engine in variable selection. A novel hybrid approach comprising GA and PLS for variable selection developed in our group (GAPLS) is described. The more advanced method for comparative molecular field analysis (CoMFA) modeling called GA-based region selection (GARGS) is described as well. Applications of GAPLS and GARGS to QSAR and 3D-QSAR problems are shown with some representative examples. GA can be hybridized with nonlinear modeling methods such as artificial neural networks (ANN) for providing useful tools in chemometric and QSAR.
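
    A compact sketch of the GA-plus-PLS idea follows: a genetic algorithm searches over descriptor subsets, and each subset is scored by cross-validated PLS regression. It uses scikit-learn on random data; the population size, mutation rate and fitness definition are arbitrary choices for illustration, not the settings of the GAPLS method reviewed here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 30))                 # 60 compounds, 30 descriptors (synthetic)
y = X[:, :3] @ np.array([1.5, -2.0, 1.0]) + rng.normal(scale=0.5, size=60)

def fitness(mask):
    """Cross-validated R^2 of a 2-component PLS model on the selected descriptors."""
    if mask.sum() < 2:
        return -np.inf
    pls = PLSRegression(n_components=2)
    return cross_val_score(pls, X[:, mask], y, cv=5, scoring="r2").mean()

pop = rng.random((20, X.shape[1])) < 0.3      # initial population of descriptor masks
for generation in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]     # keep the best half
    children = []
    for _ in range(10):                              # one-point crossover + bit-flip mutation
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(X.shape[1]) < 0.02
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected descriptors:", np.flatnonzero(best), "fitness:", round(fitness(best), 3))
```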

  4. Estimation of financial loss ratio for E-insurance:a quantitative model

    Institute of Scientific and Technical Information of China (English)

    钟元生; 陈德人; 施敏华

    2002-01-01

    In view of the risks of E-commerce and the insurance industry's response to them, this paper addresses one important point of insurance, namely the estimation of the financial loss ratio, which is one of the most difficult problems facing the E-insurance industry. The paper proposes a quantitative model for estimating the E-insurance financial loss ratio, based on gross income per enterprise and the CSI/FBI computer crime and security survey. The results presented are reasonable and valuable for both the insurer and the insured and thus can be accepted by both of them. It should be pointed out that, under our assumptions, the financial loss ratio varied very little, 0.233% in 1999 and 0.236% in 2000, although there was much variation in the main data of the CSI/FBI survey.
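
    The quantity being estimated is essentially an aggregate loss ratio: survey-derived annual losses divided by the gross income of the enterprises at risk. A toy version of that arithmetic, with invented figures rather than the CSI/FBI survey data, looks as follows.

```python
# Toy financial-loss-ratio estimate; all numbers are invented.
surveyed_enterprises = 500
reporting_fraction = 0.4              # fraction able to quantify their losses
mean_reported_loss = 1.2e6            # average annual loss per reporting enterprise (USD)
mean_gross_income = 250e6             # average gross income per enterprise (USD)

total_loss = surveyed_enterprises * reporting_fraction * mean_reported_loss
total_income = surveyed_enterprises * mean_gross_income
loss_ratio = total_loss / total_income

print(f"estimated financial loss ratio: {loss_ratio:.3%}")
```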

  5. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    Science.gov (United States)

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case.
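
    The approximate F-tests named above (Pillai-Bartlett, Hotelling-Lawley, Wilks) are the standard multivariate linear model statistics, and statsmodels exposes all of them. The sketch below runs them on simulated data with a crude genotype-dosage-plus-covariate design; it does not implement the functional linear model basis expansion used in the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 300
geno = rng.binomial(2, 0.3, size=n).astype(float)   # additive genotype dosage 0/1/2
age = rng.normal(50, 10, size=n)

# Three correlated quantitative traits sharing a (pleiotropic) genetic effect.
shared = 0.4 * geno
traits = pd.DataFrame({
    "trait1": shared + 0.02 * age + rng.normal(size=n),
    "trait2": 0.8 * shared + rng.normal(size=n),
    "trait3": 0.5 * shared + 0.01 * age + rng.normal(size=n),
    "geno": geno,
    "age": age,
})

mv = MANOVA.from_formula("trait1 + trait2 + trait3 ~ geno + age", data=traits)
print(mv.mv_test())   # reports Wilks, Pillai, Hotelling-Lawley and Roy statistics per term
```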

  6. A quantitative quasispecies theory-based model of virus escape mutation under immune selection.

    Science.gov (United States)

    Woo, Hyung-June; Reifman, Jaques

    2012-08-07

    Viral infections involve a complex interplay of the immune response and escape mutation of the virus quasispecies inside a single host. Although fundamental aspects of such a balance of mutation and selection pressure have been established by the quasispecies theory decades ago, its implications have largely remained qualitative. Here, we present a quantitative approach to model the virus evolution under cytotoxic T-lymphocyte immune response. The virus quasispecies dynamics are explicitly represented by mutations in the combined sequence space of a set of epitopes within the viral genome. We stochastically simulated the growth of a viral population originating from a single wild-type founder virus and its recognition and clearance by the immune response, as well as the expansion of its genetic diversity. Applied to the immune escape of a simian immunodeficiency virus epitope, model predictions were quantitatively comparable to the experimental data. Within the model parameter space, we found two qualitatively different regimes of infectious disease pathogenesis, each representing alternative fates of the immune response: It can clear the infection in finite time or eventually be overwhelmed by viral growth and escape mutation. The latter regime exhibits the characteristic disease progression pattern of human immunodeficiency virus, while the former is bounded by maximum mutation rates that can be suppressed by the immune response. Our results demonstrate that, by explicitly representing epitope mutations and thus providing a genotype-phenotype map, the quasispecies theory can form the basis of a detailed sequence-specific model of real-world viral pathogens evolving under immune selection.
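
    A heavily simplified, discrete-generation caricature of the kind of stochastic simulation described: a virus population mutates at a per-epitope rate while the immune response preferentially clears genomes that still carry recognized (wild-type) epitopes. All parameter values and the clearance rule are invented for illustration.

```python
import random

random.seed(0)

N_EPITOPES = 3        # epitopes tracked per genome
MU = 0.02             # per-epitope escape-mutation probability per replication
R = 1.8               # offspring per virion per generation (before clearance)
CLEAR_P = 0.5         # probability an immune-recognized virion is cleared
GENERATIONS = 40
CAP = 50_000          # crude carrying capacity to keep the run bounded

# A genome is a tuple of booleans: True means the epitope has escaped recognition.
population = [(False,) * N_EPITOPES]   # single wild-type founder

for gen in range(GENERATIONS):
    offspring = []
    for genome in population:
        n_children = int(R) + (random.random() < R - int(R))
        for _ in range(n_children):
            offspring.append(tuple(e or (random.random() < MU) for e in genome))
    # Immune clearance: virions with any recognized (non-escaped) epitope
    # are removed with probability CLEAR_P.
    survivors = [g for g in offspring if all(g) or random.random() > CLEAR_P]
    population = random.sample(survivors, CAP) if len(survivors) > CAP else survivors
    if not population:
        print(f"generation {gen}: infection cleared")
        break
else:
    escaped = sum(all(g) for g in population) / len(population)
    print(f"final population {len(population)}, fully escaped fraction {escaped:.2f}")
```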

  7. [Quantitative models between canopy hyperspectrum and its component features at apple tree prosperous fruit stage].

    Science.gov (United States)

    Wang, Ling; Zhao, Geng-xing; Zhu, Xi-cun; Lei, Tong; Dong, Fang

    2010-10-01

    Hyperspectral techniques have become the basis of quantitative remote sensing. The hyperspectrum of the apple tree canopy at the prosperous fruit stage combines the complex information of fruits, leaves, stems, soil and reflecting films, and is mostly affected by the component features of the canopy at this stage. First, the hyperspectra of 18 sample apple trees with reflecting films were compared with those of 44 trees without reflecting films. The impact of the reflecting films on reflectance was obvious, so the sample trees with ground reflecting films were analyzed separately from those without. Secondly, nine indexes of canopy components were built based on classified digital photos of the 44 apple trees without ground films. Thirdly, the correlation between the nine indexes and canopy reflectance, including several kinds of transformed data, was analyzed. The results showed that the correlation between reflectance and the fruit-to-leaf ratio was the strongest, with a maximum coefficient of 0.815, and the correlation between reflectance and the leaf ratio was slightly better than that between reflectance and fruit density. Models based on correlation analysis, linear regression, BP neural networks and support vector regression were then used to describe the quantitative relationship between hyperspectral reflectance and the fruit-to-leaf ratio, using the DPS and LIBSVM software. All four models in the 611-680 nm characteristic band were feasible for prediction, while the accuracy of the BP neural network and support vector regression models was better than that of the one-variable and multi-variable linear regressions, and the support vector regression model was the most accurate. This study can serve as a reliable theoretical reference for apple yield estimation based on remote sensing data.

  8. Flexible robot control: Modeling and experiments

    Science.gov (United States)

    Oppenheim, Irving J.; Shimoyama, Isao

    1989-01-01

    Described here is a model and its use in experimental studies of flexible manipulators. The analytical model uses the equivalent of Rayleigh's method to approximate the displaced shape of a flexible link as the static elastic displacement which would occur under end rotations as applied at the joints. The generalized coordinates are thereby expressly compatible with joint motions and rotations in serial link manipulators, because the amplitude variables are simply the end rotations between the flexible link and the chord connecting the end points. The equations for the system dynamics are quite simple and can readily be formulated for the multi-link, three-dimensional case. When the flexible links possess mass and (polar moment of) inertia which are small compared to the concentrated mass and inertia at the joints, the analytical model is exact and displays the additional advantage of reduction in system dimension for the governing equations. Four series of pilot tests have been completed. Studies on a planar single-link system were conducted at Carnegie-Mellon University, and tests conducted at Toshiba Corporation on a planar two-link system were then incorporated into the study. A single link system under three-dimensional motion, displaying biaxial flexure, was then tested at Carnegie-Mellon.

  9. Pushing the Frontier of Data-Oriented Geodynamic Modeling: from Qualitative to Quantitative to Predictive

    Science.gov (United States)

    Liu, L.; Hu, J.; Zhou, Q.

    2016-12-01

    The rapid accumulation of geophysical and geological data sets poses an increasing demand for the development of geodynamic models to better understand the evolution of the solid Earth. Consequently, the earlier qualitative physical models are no long satisfying. Recent efforts are focusing on more quantitative simulations and more efficient numerical algorithms. Among these, a particular line of research is on the implementation of data-oriented geodynamic modeling, with the purpose of building an observationally consistent and physically correct geodynamic framework. Such models could often catalyze new insights into the functioning mechanisms of the various aspects of plate tectonics, and their predictive nature could also guide future research in a deterministic fashion. Over the years, we have been working on constructing large-scale geodynamic models with both sequential and variational data assimilation techniques. These models act as a bridge between different observational records, and the superposition of the constraining power from different data sets help reveal unknown processes and mechanisms of the dynamics of the mantle and lithosphere. We simulate the post-Cretaceous subduction history in South America using a forward (sequential) approach. The model is constrained using past subduction history, seafloor age evolution, tectonic architecture of continents, and the present day geophysical observations. Our results quantify the various driving forces shaping the present South American flat slabs, which we found are all internally torn. The 3-D geometry of these torn slabs further explains the abnormal seismicity pattern and enigmatic volcanic history. An inverse (variational) model simulating the late Cenozoic western U.S. mantle dynamics with similar constraints reveals a different mechanism for the formation of Yellowstone-related volcanism from traditional understanding. Furthermore, important insights on the mantle density and viscosity structures

  10. Unbiased Quantitative Models of Protein Translation Derived from Ribosome Profiling Data.

    Directory of Open Access Journals (Sweden)

    Alexey A Gritsenko

    2015-08-01

    Translation of RNA to protein is a core process for any living organism. While for some steps of this process the effect on protein production is understood, a holistic understanding of translation still remains elusive. In silico modelling is a promising approach for elucidating the process of protein synthesis. Although a number of computational models of the process have been proposed, their application is limited by the assumptions they make. Ribosome profiling (RP), a relatively new sequencing-based technique capable of recording snapshots of the locations of actively translating ribosomes, is a promising source of information for deriving unbiased data-driven translation models. However, quantitative analysis of RP data is challenging due to high measurement variance and the inability to discriminate between the number of ribosomes measured on a gene and their speed of translation. We propose a solution in the form of a novel multi-scale interpretation of RP data that allows for deriving models with translation dynamics extracted from the snapshots. We demonstrate the usefulness of this approach by simultaneously determining for the first time per-codon translation elongation and per-gene translation initiation rates of Saccharomyces cerevisiae from RP data for two versions of the Totally Asymmetric Exclusion Process (TASEP) model of translation. We do this in an unbiased fashion, by fitting the models using only RP data with a novel optimization scheme based on Monte Carlo simulation to keep the problem tractable. The fitted models match the data significantly better than existing models and their predictions show better agreement with several independent protein abundance datasets than existing models. Results additionally indicate that the tRNA pool adaptation hypothesis is incomplete, with evidence suggesting that tRNA post-transcriptional modifications and codon context may play a role in determining codon elongation rates.

  11. Unbiased Quantitative Models of Protein Translation Derived from Ribosome Profiling Data.

    Science.gov (United States)

    Gritsenko, Alexey A; Hulsman, Marc; Reinders, Marcel J T; de Ridder, Dick

    2015-08-01

    Translation of RNA to protein is a core process for any living organism. While for some steps of this process the effect on protein production is understood, a holistic understanding of translation still remains elusive. In silico modelling is a promising approach for elucidating the process of protein synthesis. Although a number of computational models of the process have been proposed, their application is limited by the assumptions they make. Ribosome profiling (RP), a relatively new sequencing-based technique capable of recording snapshots of the locations of actively translating ribosomes, is a promising source of information for deriving unbiased data-driven translation models. However, quantitative analysis of RP data is challenging due to high measurement variance and the inability to discriminate between the number of ribosomes measured on a gene and their speed of translation. We propose a solution in the form of a novel multi-scale interpretation of RP data that allows for deriving models with translation dynamics extracted from the snapshots. We demonstrate the usefulness of this approach by simultaneously determining for the first time per-codon translation elongation and per-gene translation initiation rates of Saccharomyces cerevisiae from RP data for two versions of the Totally Asymmetric Exclusion Process (TASEP) model of translation. We do this in an unbiased fashion, by fitting the models using only RP data with a novel optimization scheme based on Monte Carlo simulation to keep the problem tractable. The fitted models match the data significantly better than existing models and their predictions show better agreement with several independent protein abundance datasets than existing models. Results additionally indicate that the tRNA pool adaptation hypothesis is incomplete, with evidence suggesting that tRNA post-transcriptional modifications and codon context may play a role in determining codon elongation rates.
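
    The TASEP picture referenced in both records above can be illustrated with a minimal stochastic simulation: ribosomes initiate at one rate, hop codon by codon at per-site rates, and exclude one another over their footprint. The transcript length, footprint and rates below are arbitrary, and the step of fitting such a model to ribosome profiling data is omitted entirely.

```python
import random

random.seed(42)

L = 120                     # codons in the toy transcript
FOOT = 10                   # ribosome footprint in codons (exclusion range)
INIT_RATE = 0.1             # initiation attempts per time step
STEPS = 200_000
elong = [random.uniform(0.3, 1.0) for _ in range(L)]   # per-codon hop probabilities

ribosomes = []              # codon positions of ribosomes, kept in ascending order
completed = 0
density = [0] * L

for t in range(STEPS):
    # Move ribosomes from the 3' end backwards so each sees its downstream neighbour.
    for i in reversed(range(len(ribosomes))):
        pos = ribosomes[i]
        blocked = (i + 1 < len(ribosomes)) and (ribosomes[i + 1] - pos <= FOOT)
        if not blocked and random.random() < elong[pos]:
            ribosomes[i] += 1
            if ribosomes[i] >= L:            # termination: release the protein
                ribosomes.pop(i)
                completed += 1
    # Initiation, if the start of the transcript is not covered.
    if random.random() < INIT_RATE and (not ribosomes or ribosomes[0] > FOOT):
        ribosomes.insert(0, 0)
    for pos in ribosomes:
        density[pos] += 1

print("protein output per time step:", completed / STEPS)
print("mean ribosome density (ribosomes per codon):", sum(density) / (L * STEPS))
```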

  12. Quantitative Agent Based Model of Opinion Dynamics: Polish Elections of 2015.

    Directory of Open Access Journals (Sweden)

    Pawel Sobkowicz

    We present results of an abstract, agent based model of opinion dynamics simulations based on the emotion/information/opinion (E/I/O) approach, applied to a strongly polarized society, corresponding to the Polish political scene between 2005 and 2015. Under certain conditions the model leads to metastable coexistence of two subcommunities of comparable size (supporting the corresponding opinions), which corresponds to the bipartisan split found in Poland. Spurred by the recent breakdown of this political duopoly, which occurred in 2015, we present a model extension that describes both the long term coexistence of the two opposing opinions and a rapid, transitory change due to the appearance of a third party alternative. We provide quantitative comparison of the model with the results of polls and elections in Poland, testing the assumptions related to the modeled processes and the parameters used in the simulations. It is shown that, when the propaganda messages of the two incumbent parties differ in emotional tone, the political status quo may be unstable. The asymmetry of the emotions within the support bases of the two parties allows one of them to be 'invaded' by a newcomer third party very quickly, while the second remains immune to such invasion.

  13. Quantitative MR application in depression model of rats: a preliminary study

    Institute of Scientific and Technical Information of China (English)

    Wei Wang; Wenxun Li; Fang Fang; Hao Lei; Xiaoping Yin; Jianpin Qi; Baiseng Wang; Chengyuan Wang

    2005-01-01

    Objective: To investigate the findings and value of quantitative MR in a rat model of depression. Methods: Twenty male SD rats were randomly divided into a model group and a control group (10 rats in each group). The rat depression model was established by separation and chronic unpredictable stress. Rat behavior was assessed by the open-field test and sucrose consumption. MR images of brain tissue were acquired in vivo with T2- and diffusion-weighted imaging. Changes in body weight and behavior score and the T2 and ADC values of the ROIs were compared between the two groups. Histological verification of hippocampal neuron damage was also performed by ultramicroscopy. Results: Compared with the control group, T2 values in the hippocampus were prolonged by 5.5% (P < 0.05), and ADC values in the hippocampus and temporal lobe cortex decreased by 11.7% and 10.9% (P < 0.01), respectively, in the model group. Histologic data confirmed severe neuronal damage in the hippocampus of the model group. Conclusion: This study capitalized on diffusion-weighted imaging as a sensitive technique for the identification of neuronal damage in depression, and it provides experimental evidence for MRI in depression research and clinical application.

  14. Tree Root System Characterization and Volume Estimation by Terrestrial Laser Scanning and Quantitative Structure Modeling

    Directory of Open Access Journals (Sweden)

    Aaron Smith

    2014-12-01

    The accurate characterization of three-dimensional (3D) root architecture, volume, and biomass is important for a wide variety of applications in forest ecology and to better understand tree and soil stability. Technological advancements have led to increasingly more digitized and automated procedures, which have been used to more accurately and quickly describe the 3D structure of root systems. Terrestrial laser scanners (TLS) have successfully been used to describe aboveground structures of individual trees and stand structure, but have only recently been applied to the 3D characterization of whole root systems. In this study, 13 recently harvested Norway spruce root systems were mechanically pulled from the soil, cleaned, and their volumes were measured by displacement. The root systems were suspended, scanned with TLS from three different angles, and the root surfaces from the co-registered point clouds were modeled with the 3D Quantitative Structure Model to determine root architecture and volume. The modeling procedure facilitated the rapid derivation of root volume, diameters, break point diameters, linear root length, cumulative percentages, and root fraction counts. The modeled root systems underestimated root system volume by 4.4%. The modeling procedure is widely applicable and easily adapted to derive other important topological and volumetric root variables.

  15. A bivariate quantitative genetic model for a linear Gaussian trait and a survival trait

    Directory of Open Access Journals (Sweden)

    Damgaard Lars

    2005-12-01

    With the increasing use of survival models in animal breeding to address the genetic aspects of mainly longevity of livestock but also disease traits, the need for methods to infer genetic correlations and to do multivariate evaluations of survival traits and other types of traits has become increasingly important. In this study we derived and implemented a bivariate quantitative genetic model for a linear Gaussian and a survival trait that are genetically and environmentally correlated. For the survival trait, we considered the Weibull log-normal animal frailty model. A Bayesian approach using Gibbs sampling was adopted. Model parameters were inferred from their marginal posterior distributions. The required fully conditional posterior distributions were derived and issues on implementation are discussed. The two Weibull baseline parameters were updated jointly using a Metropolis-Hastings step. The remaining model parameters with non-normalized fully conditional distributions were updated univariately using adaptive rejection sampling. Simulation results showed that the estimated marginal posterior distributions covered well and placed high density to the true parameter values used in the simulation of data. In conclusion, the proposed method allows inferring additive genetic and environmental correlations, and doing multivariate genetic evaluation of a linear Gaussian trait and a survival trait.

  16. A bivariate quantitative genetic model for a linear Gaussian trait and a survival trait.

    Science.gov (United States)

    Damgaard, Lars Holm; Korsgaard, Inge Riis

    2006-01-01

    With the increasing use of survival models in animal breeding to address the genetic aspects of mainly longevity of livestock but also disease traits, the need for methods to infer genetic correlations and to do multivariate evaluations of survival traits and other types of traits has become increasingly important. In this study we derived and implemented a bivariate quantitative genetic model for a linear Gaussian and a survival trait that are genetically and environmentally correlated. For the survival trait, we considered the Weibull log-normal animal frailty model. A Bayesian approach using Gibbs sampling was adopted. Model parameters were inferred from their marginal posterior distributions. The required fully conditional posterior distributions were derived and issues on implementation are discussed. The two Weibull baseline parameters were updated jointly using a Metropolis-Hasting step. The remaining model parameters with non-normalized fully conditional distributions were updated univariately using adaptive rejection sampling. Simulation results showed that the estimated marginal posterior distributions covered well and placed high density to the true parameter values used in the simulation of data. In conclusion, the proposed method allows inferring additive genetic and environmental correlations, and doing multivariate genetic evaluation of a linear Gaussian trait and a survival trait.

  17. Antiproliferative Pt(IV) complexes: synthesis, biological activity, and quantitative structure-activity relationship modeling.

    Science.gov (United States)

    Gramatica, Paola; Papa, Ester; Luini, Mara; Monti, Elena; Gariboldi, Marzia B; Ravera, Mauro; Gabano, Elisabetta; Gaviglio, Luca; Osella, Domenico

    2010-09-01

    Several Pt(IV) complexes of the general formula [Pt(L)2(L')2(L'')2] [axial ligands L are Cl-, RCOO-, or OH-; equatorial ligands L' are two am(m)ine or one diamine; and equatorial ligands L'' are Cl- or glycolato] were rationally designed and synthesized in the attempt to develop a predictive quantitative structure-activity relationship (QSAR) model. Numerous theoretical molecular descriptors were used alongside physicochemical data (i.e., reduction peak potential, Ep, and partition coefficient, log Po/w) to obtain a validated QSAR between in vitro cytotoxicity (half maximal inhibitory concentrations, IC50, on A2780 ovarian and HCT116 colon carcinoma cell lines) and some features of Pt(IV) complexes. In the resulting best models, a lipophilic descriptor (log Po/w or the number of secondary sp3 carbon atoms) plus an electronic descriptor (Ep, the number of oxygen atoms, or the topological polar surface area expressed as the N,O polar contribution) is necessary for modeling, supporting the general finding that the biological behavior of Pt(IV) complexes can be rationalized on the basis of their cellular uptake, the Pt(IV)-->Pt(II) reduction, and the structure of the corresponding Pt(II) metabolites. Novel compounds were synthesized on the basis of their predicted cytotoxicity in the preliminary QSAR model, and were experimentally tested. A final QSAR model, based solely on theoretical molecular descriptors to ensure its general applicability, is proposed.

  18. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    Science.gov (United States)

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cells response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms.

  19. An experience model for anyspeed motion

    CERN Document Server

    Fraundorf, P B

    2001-01-01

    Simple airtrack simulations, like those now possible with web-based 3D environments, can provide explorers of any age with experiential data sufficient to formulate their own models of motion. In particular, with a compressible spring, two gliders, a moving clock and two gate pairs with timers (pre and post collision), Newton's laws (or one's own version thereof) may emerge (or be tested) in the lab. With a high speed simulation, one may find instead Minkowski's spacetime version of Pythagoras' theorem (the metric equation) in the data, along with "anyspeed" expressions for momentum and kinetic energy.

  20. Rapid Method Development in Hydrophilic Interaction Liquid Chromatography for Pharmaceutical Analysis Using a Combination of Quantitative Structure-Retention Relationships and Design of Experiments.

    Science.gov (United States)

    Taraji, Maryam; Haddad, Paul R; Amos, Ruth I J; Talebi, Mohammad; Szucs, Roman; Dolan, John W; Pohl, Chris A

    2017-02-07

    A design-of-experiment (DoE) model was developed, able to describe the retention times of a mixture of pharmaceutical compounds in hydrophilic interaction liquid chromatography (HILIC) under all possible combinations of acetonitrile content, salt concentration, and mobile-phase pH with R² > 0.95. Further, a quantitative structure-retention relationship (QSRR) model was developed to predict retention times for new analytes, based only on their chemical structures, with a root-mean-square error of prediction (RMSEP) as low as 0.81%. A compound classification based on the concept of similarity was applied prior to QSRR modeling. Finally, we utilized a combined QSRR-DoE approach to propose an optimal design space in a quality-by-design (QbD) workflow to facilitate the HILIC method development. The mathematical QSRR-DoE model was shown to be highly predictive when applied to an independent test set of unseen compounds in unseen conditions with a RMSEP value of 5.83%. The QSRR-DoE computed retention time of pharmaceutical test analytes and subsequently calculated separation selectivity was used to optimize the chromatographic conditions for efficient separation of targets. A Monte Carlo simulation was performed to evaluate the risk of uncertainty in the model's prediction, and to define the design space where the desired quality criterion was met. Experimental realization of peak selectivity between targets under the selected optimal working conditions confirmed the theoretical predictions. These results demonstrate how discovery of optimal conditions for the separation of new analytes can be accelerated by the use of appropriate theoretical tools.

  1. Daphnia and fish toxicity of (benzo)triazoles: validated QSAR models, and interspecies quantitative activity-activity modelling.

    Science.gov (United States)

    Cassani, Stefano; Kovarich, Simona; Papa, Ester; Roy, Partha Pratim; van der Wal, Leon; Gramatica, Paola

    2013-08-15

    Due to their chemical properties synthetic triazoles and benzo-triazoles ((B)TAZs) are mainly distributed to the water compartments in the environment, and because of their wide use the potential effects on aquatic organisms are cause of concern. Non testing approaches like those based on quantitative structure-activity relationships (QSARs) are valuable tools to maximize the information contained in existing experimental data and predict missing information while minimizing animal testing. In the present study, externally validated QSAR models for the prediction of acute (B)TAZs toxicity in Daphnia magna and Oncorhynchus mykiss have been developed according to the principles for the validation of QSARs and their acceptability for regulatory purposes, proposed by the Organization for Economic Co-operation and Development (OECD). These models are based on theoretical molecular descriptors, and are statistically robust, externally predictive and characterized by a verifiable structural applicability domain. They have been applied to predict acute toxicity for over 300 (B)TAZs without experimental data, many of which are in the pre-registration list of the REACH regulation. Additionally, a model based on quantitative activity-activity relationships (QAAR) has been developed, which allows for interspecies extrapolation from daphnids to fish. The importance of QSAR/QAAR, especially when dealing with specific chemical classes like (B)TAZs, for screening and prioritization of pollutants under REACH, has been highlighted. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Examination of Modeling Languages to Allow Quantitative Analysis for Model-Based Systems Engineering

    Science.gov (United States)

    2014-06-01

  3. Experiments and Valve Modelling in Thermoacoustic Device

    Science.gov (United States)

    Duthil, P.; Baltean Carlès, D.; Bétrancourt, A.; François, M. X.; Yu, Z. B.; Thermeau, J. P.

    2006-04-01

    In a so-called heat-driven thermoacoustic refrigerator, using either a pulse tube or a lumped boost configuration, heat pumping is induced by Stirling-type thermodynamic cycles within the regenerator. The time phase between acoustic pressure and flow rate throughout must then be close to that of a purely progressive wave. The study presented here reports the experimental characterization of passive elements such as valves, tubes and tanks, which are likely to act on this phase relationship when included in the propagation line of the wave resonator. In order to characterize these elements acoustically, systematic measurements of the acoustic field are performed while varying several parameters: mean pressure, oscillation frequency and supplied heat power. The acoustic waves are generated by a thermoacoustic prime mover driving a pulse tube refrigerator. The experimental results are then compared with the solutions obtained with various one-dimensional linear models including nonlinear correction factors. It turns out that, when using a non-symmetrical valve and for large dissipative effects, the measurements disagree with the linear modelling, and the nonlinear behaviour of this particular element is demonstrated.

  4. Indian Consortia Models: FORSA Libraries' Experiences

    Science.gov (United States)

    Patil, Y. M.; Birdie, C.; Bawdekar, N.; Barve, S.; Anilkumar, N.

    2007-10-01

    With increases in prices of journals, shrinking library budgets and cuts in subscriptions to journals over the years, there has been a big challenge facing Indian library professionals to cope with the proliferation of electronic information resources. There have been sporadic efforts by different groups of libraries in forming consortia at different levels. The types of consortia identified are generally based on various models evolved in India in a variety of forms depending upon the participants' affiliations and funding sources. Indian astronomy library professionals have formed a group called Forum for Resource Sharing in Astronomy and Astrophysics (FORSA), which falls under `Open Consortia', wherein participants are affiliated to different government departments. This is a model where professionals willingly come forward and actively support consortia formation; thereby everyone benefits. As such, FORSA has realized four consortia, viz. Nature Online Consortium; Indian Astrophysics Consortium for physics/astronomy journals of Springer/Kluwer; Consortium for Scientific American Online Archive (EBSCO); and Open Consortium for Lecture Notes in Physics (Springer), which are discussed briefly.

  5. The OECI model: the CRO Aviano experience.

    Science.gov (United States)

    Da Pieve, Lucia; Collazzo, Raffaele; Masutti, Monica; De Paoli, Paolo

    2015-01-01

    In 2012, the "Centro di Riferimento Oncologico" (CRO) National Cancer Institute joined the accreditation program of the Organisation of European Cancer Institutes (OECI) and was one of the first institutes in Italy to receive recognition as a Comprehensive Cancer Center. At the end of the project, a strengths, weaknesses, opportunities, and threats (SWOT) analysis aimed at identifying the pros and cons, both for the institute and of the accreditation model in general, was performed. The analysis shows significant strengths, such as the affinity with other improvement systems and current regulations, and the focus on a multidisciplinary approach. The proposed suggestions for improvement concern mainly the structure of the standards and aim to facilitate the assessment, benchmarking, and sharing of best practices. The OECI accreditation model provided a valuable executive tool and a framework in which we can identify several important development projects. An additional impact for our institute is the participation in the project BenchCan, of which the OECI is lead partner.

  6. MIQE précis: Practical implementation of minimum standard guidelines for fluorescence-based quantitative real-time PCR experiments

    NARCIS (Netherlands)

    Bustin, S.A.; Beaulieu, J.F.; Huggett, J.; Jaggi, R.; Kibenge, F.S.; Olsvik, P.A.; Penning, L.C.; Toegel, S.

    2010-01-01

  7. Benchmarking the Sandbox: Quantitative Comparisons of Numerical and Analogue Models of Brittle Wedge Dynamics (Invited)

    Science.gov (United States)

    Buiter, S.; Schreurs, G.; Geomod2008 Team

    2010-12-01

    When numerical and analogue models are used to investigate the evolution of deformation processes in crust and lithosphere, they face specific challenges related to, among others, large contrasts in material properties, the heterogeneous character of continental lithosphere, the presence of a free surface, the occurrence of large deformations including viscous flow and offset on shear zones, and the observation that several deformation mechanisms may be active simultaneously. These pose specific demands on numerical software and laboratory models. By combining the two techniques, we can utilize the strengths of each individual method and test the model-independence of our results. We can perhaps even consider our findings to be more robust if we find similar-to-same results irrespective of the modeling method that was used. To assess the role of modeling method and to quantify the variability among models with identical setups, we have performed a direct comparison of results of 11 numerical codes and 15 analogue experiments. We present three experiments that describe shortening of brittle wedges and that resemble setups frequently used by especially analogue modelers. Our first experiment translates a non-accreting wedge with a stable surface slope. In agreement with critical wedge theory, all models maintain their surface slope and do not show internal deformation. This experiment serves as a reference that allows for testing against analytical solutions for taper angle, root-mean-square velocity and gravitational rate of work. The next two experiments investigate an unstable wedge, which deforms by inward translation of a mobile wall. The models accommodate shortening by formation of forward and backward shear zones. We compare surface slope, rate of dissipation of energy, root-mean-square velocity, and the location, dip angle and spacing of shear zones. All models show similar cross-sectional evolutions that demonstrate reproducibility to first order. However

  8. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper;

    2003-01-01

    Experimental modelling is an important tool for study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models, and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied for improvement of the reliability of physical model results. This paper demonstrates by examples that numerical modelling benefits in various ways from experimental studies (in large and small laboratory facilities). The examples range from very general hydrodynamic descriptions of wave phenomena to specific hydrodynamic interaction with structures. The examples also show that numerical model development benefits from international co-operation and sharing of high quality results.

  9. Antiangiogenic effects of pazopanib in xenograft hepatocellular carcinoma models: evaluation by quantitative contrast-enhanced ultrasonography

    Directory of Open Access Journals (Sweden)

    Wu Wei-Zhong

    2011-01-01

    Background: Antiangiogenesis is a promising therapy for advanced hepatocellular carcinoma (HCC), but its effects are difficult to evaluate. Pazopanib (GW786034B) is a pan-vascular endothelial growth factor receptor inhibitor whose antitumor and antiangiogenic effects have not been investigated in HCC. Methods: In vitro direct effects of pazopanib on human HCC cell lines and endothelial cells were evaluated. In vivo antitumor effects were evaluated in three xenograft nude mouse models. In the subcutaneous HCCLM3 model, intratumoral blood perfusion was detected by contrast-enhanced ultrasonography (CEUS), and serial quantitative parameters were profiled from the time-intensity curves of the ultrasonograms. Results: In vitro proliferation of various HCC cell lines was not inhibited by pazopanib. Pazopanib significantly inhibited migration and invasion and induced apoptosis in two HCC cell lines, HCCLM3 and PLC/PRF/5. Proliferation, migration, and tubule formation of human umbilical vein endothelial cells were inhibited by pazopanib in a dose-dependent manner. In vivo tumor growth was significantly inhibited by pazopanib in HCCLM3, HepG2, and PLC/PRF/5 xenograft models. Various intratumoral perfusion parameters changed over time, and the signal intensity was significantly impaired in the treated tumors before the treatment effect on tumor size could be observed. The mean transit time of the contrast media in hotspot areas of the tumors was inversely correlated with intratumoral microvessel density. Conclusions: Antitumor effects of pazopanib in HCC xenografts may be due to its antiangiogenic effects, and these in vivo antiangiogenic effects could be evaluated by quantitative CEUS.

  10. Quantitative utilization of prior biological knowledge in the Bayesian network modeling of gene expression data

    Directory of Open Access Journals (Sweden)

    Gao Shouguo

    2011-08-01

    Background: Bayesian Network (BN) modeling is a powerful approach to reconstructing genetic regulatory networks from gene expression data. However, expression data by itself suffers from high noise and lack of power. Incorporating prior biological knowledge can improve the performance. As each type of prior knowledge on its own may be incomplete or limited by quality issues, integrating multiple sources of prior knowledge to utilize their consensus is desirable. Results: We introduce a new method to incorporate the quantitative information from multiple sources of prior knowledge. It first uses a Naïve Bayesian classifier to assess the likelihood of functional linkage between gene pairs based on prior knowledge; in this study we included co-citation in PubMed and semantic similarity in Gene Ontology annotation. A candidate network edge reservoir is then created in which the copy number of each edge is proportional to the estimated likelihood of linkage between the two corresponding genes. In the network simulation the Markov Chain Monte Carlo sampling algorithm is adopted, and it samples from this reservoir at each iteration to generate new candidate networks. We evaluated the new algorithm using both simulated and real gene expression data, including data from a yeast cell cycle and a mouse pancreas development/growth study. Incorporating prior knowledge led to a ~2-fold increase in the number of known transcription regulations recovered, without significant change in the false positive rate. In contrast, without the prior knowledge BN modeling is not always better than a random selection, demonstrating the necessity in network modeling to supplement the gene expression data with additional information. Conclusion: Our new development provides a statistical means to utilize the quantitative information in prior biological knowledge in the BN modeling of gene expression data, which significantly improves the performance.
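
    The key quantitative step described above, turning several noisy knowledge sources into a per-edge prior via a Naïve Bayes combination and then stocking an edge reservoir in proportion to that prior, can be sketched as follows. The likelihood ratios, base rate and example evidence are invented; the paper's classifier was trained on curated data.

```python
import random

random.seed(3)

# Hypothetical per-source likelihood ratios P(evidence | linked) / P(evidence | not linked).
LR = {"cocitation": 4.0, "go_similarity": 2.5}
PRIOR_LINKED = 0.05            # assumed base rate of a regulatory link between two genes

def posterior_linked(evidence):
    """Naive-Bayes combination of independent evidence sources into P(link)."""
    odds = PRIOR_LINKED / (1.0 - PRIOR_LINKED)
    for source, present in evidence.items():
        odds *= LR[source] if present else 1.0 / LR[source]
    return odds / (1.0 + odds)

candidate_edges = {
    ("geneA", "geneB"): {"cocitation": True,  "go_similarity": True},
    ("geneA", "geneC"): {"cocitation": True,  "go_similarity": False},
    ("geneB", "geneD"): {"cocitation": False, "go_similarity": False},
}

# Edge reservoir: copy number proportional to the estimated linkage probability,
# to be sampled during MCMC structure search to propose new candidate networks.
reservoir = []
for edge, evidence in candidate_edges.items():
    p = posterior_linked(evidence)
    reservoir.extend([edge] * max(1, round(100 * p)))
    print(edge, "P(link) =", round(p, 3))

print("proposed edge for next MCMC move:", random.choice(reservoir))
```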

  11. Mooring Model Experiment and Mooring Line Force Calculation

    Institute of Scientific and Technical Information of China (English)

    向溢; 谭家华; 杨建民; 张承懿

    2001-01-01

    Mooring model experiments and mooring line tension determination are of significance to the design of mooring systems and berthing structures. This paper mainly involves: (a) description and analysis of a mooring model experiment; (b) derivation of static equilibrium equations for a moored ship subjected to wind, current and waves; (c) solution of the mooring equations with the Monte Carlo method; (d) qualitative analysis of the effects of pier piles on mooring line forces. Special emphasis is placed on the derivation of the static equilibrium equations, the solution method and the mooring model experiment.
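
    A toy version of the computational part, a static horizontal force balance for a moored ship solved numerically inside a Monte Carlo loop over environmental loads, is sketched below. The four-line tension-only spring model and the load statistics are placeholders; the paper's equilibrium equations include wind, current and wave terms with realistic line geometry.

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(7)

K_LINE = 5.0e5                                   # axial stiffness per line (N/m), assumed
ANCHORS = [np.array([np.cos(t), np.sin(t)])      # unit vectors from ship towards 4 bollards
           for t in np.deg2rad([45, 135, 225, 315])]

def line_tensions(d):
    """Tension in each line for ship displacement d; lines carry tension only."""
    return [K_LINE * max(-float(d @ a), 0.0) for a in ANCHORS]

def residual(d, env_force):
    """Horizontal force balance: environmental load plus restoring line forces."""
    total = np.array(env_force, dtype=float)
    for a, t in zip(ANCHORS, line_tensions(d)):
        total += t * a            # a taut line pulls the ship towards its bollard
    return total

peak = []
for _ in range(1000):             # Monte Carlo over wind/current loads (assumed statistics)
    env = rng.normal(loc=[2.0e5, 0.5e5], scale=[5.0e4, 2.0e4])
    d = fsolve(residual, x0=[0.1, 0.1], args=(env,))
    peak.append(max(line_tensions(d)))

print(f"95th-percentile peak line tension: {np.percentile(peak, 95) / 1e3:.0f} kN")
```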

  12. A Quantitative, Time-Dependent Model of Oxygen Isotopes in the Solar Nebula: Step one

    Science.gov (United States)

    Nuth, J. A.; Paquette, J. A.; Farquhar, A.; Johnson, N. M.

    2011-01-01

    The remarkable discovery that oxygen isotopes in primitive meteorites were fractionated along a line of slope 1 rather than along the typical slope 0.52 terrestrial fractionation line occurred almost 40 years ago. However, a satisfactory, quantitative explanation for this observation has yet to be found, though many different explanations have been proposed. The first of these explanations proposed that the observed line represented the final product produced by mixing molecular cloud dust with a nucleosynthetic component, rich in O-16, possibly resulting from a nearby supernova explosion. Donald Clayton suggested that Galactic Chemical Evolution would gradually change the oxygen isotopic composition of the interstellar grain population by steadily producing O-16 in supernovae, then producing the heavier isotopes as secondary products in lower mass stars. Thiemens and collaborators proposed a chemical mechanism that relied on the availability of additional active rotational and vibrational states in otherwise-symmetric molecules, such as CO2, O3 or SiO2, containing two different oxygen isotopes, and a second, photochemical process that suggested that differential photochemical dissociation processes could fractionate oxygen. This second line of research has been pursued by several groups, though none of the current models is quantitative.

  13. Climate change and dengue: a critical and systematic review of quantitative modelling approaches.

    Science.gov (United States)

    Naish, Suchithra; Dale, Pat; Mackenzie, John S; McBride, John; Mengersen, Kerrie; Tong, Shilu

    2014-03-26

    Many studies have found associations between climatic conditions and dengue transmission. However, there is a debate about the future impacts of climate change on dengue transmission. This paper reviewed epidemiological evidence on the relationship between climate and dengue with a focus on quantitative methods for assessing the potential impacts of climate change on global dengue transmission. A literature search was conducted in October 2012, using the electronic databases PubMed, Scopus, ScienceDirect, ProQuest, and Web of Science. The search focused on peer-reviewed journal articles published in English from January 1991 through October 2012. Sixteen studies met the inclusion criteria and most studies showed that the transmission of dengue is highly sensitive to climatic conditions, especially temperature, rainfall and relative humidity. Studies on the potential impacts of climate change on dengue indicate increased climatic suitability for transmission and an expansion of the geographic regions at risk during this century. A variety of quantitative modelling approaches were used in the studies. Several key methodological issues and current knowledge gaps were identified through this review. It is important to assemble spatio-temporal patterns of dengue transmission compatible with long-term data on climate and other socio-ecological changes and this would advance projections of dengue risks associated with climate change.

  14. The effects of nutrition labeling on consumer food choice: a psychological experiment and computational model.

    Science.gov (United States)

    Helfer, Peter; Shultz, Thomas R

    2014-12-01

    The widespread availability of calorie-dense food is believed to be a contributing cause of an epidemic of obesity and associated diseases throughout the world. One possible countermeasure is to empower consumers to make healthier food choices with useful nutrition labeling. An important part of this endeavor is to determine the usability of existing and proposed labeling schemes. Here, we report an experiment on how four different labeling schemes affect the speed and nutritional value of food choices. We then apply decision field theory, a leading computational model of human decision making, to simulate the experimental results. The psychology experiment shows that quantitative, single-attribute labeling schemes have greater usability than multiattribute and binary ones, and that they remain effective under moderate time pressure. The computational model simulates these psychological results and provides explanatory insights into them. This work shows how experimental psychology and computational modeling can contribute to the evaluation and improvement of nutrition-labeling schemes.
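
    As a rough illustration of the decision field theory mechanism referred to above (a stripped-down sketch with hypothetical food options and attention weights, not the authors' actual model or parameters), preference states can be accumulated under stochastic attention switching until one option crosses a decision threshold:

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical options scored on two attributes (taste, healthiness), scaled 0-1
        M = np.array([[0.9, 0.2],    # candy bar
                      [0.6, 0.6],    # granola bar
                      [0.3, 0.9]])   # fruit salad
        w = np.array([0.5, 0.5])     # attention weights; a salient nutrition label would raise w[1]
        theta = 2.0                  # decision threshold

        def dft_choice(M, w, theta, max_steps=1000):
            n = M.shape[0]
            C = np.full((n, n), -1.0 / (n - 1))
            np.fill_diagonal(C, 1.0)                # contrast matrix: each option vs. the mean of the others
            P = np.zeros(n)                         # preference state
            for t in range(1, max_steps + 1):
                attr = rng.choice(len(w), p=w)      # attention switches stochastically between attributes
                P = P + C @ M[:, attr]              # accumulate momentary valences
                if P.max() >= theta:
                    return int(np.argmax(P)), t     # chosen option and decision time
            return int(np.argmax(P)), max_steps

        choices = [dft_choice(M, w, theta)[0] for _ in range(2000)]
        print("choice shares:", np.bincount(choices, minlength=3) / len(choices))

    Raising w[1] (more attention to healthiness) shifts the choice shares toward the healthier option, and lowering the threshold shortens decision times, which is the qualitative pattern that labeling and time-pressure manipulations probe.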

  15. Hydrodynamics of Explosion Experiments and Models

    CERN Document Server

    Kedrinskii, Valery K

    2005-01-01

    Hydrodynamics of Explosion presents the research results for the problems of underwater explosions and contains a detailed analysis of the structure and the parameters of the wave fields generated by explosions of cord and spiral charges, a description of the formation mechanisms for a wide range of cumulative flows at underwater explosions near the free surface, and the relevant mathematical models. Shock-wave transformation in bubbly liquids, shock-wave amplification due to collision and focusing, and the formation of bubble detonation waves in reactive bubbly liquids are studied in detail. Particular emphasis is placed on the investigation of wave processes in cavitating liquids, which incorporates the concepts of the strength of real liquids containing natural microinhomogeneities, the relaxation of tensile stress, and the cavitation fracture of a liquid as the inversion of its two-phase state under impulsive (explosive) loading. The problems are classed among essentially nonlinear processes that occur unde...

  16. Model building experiences using Garp3: problems, patterns and debugging

    NARCIS (Netherlands)

    Liem, J.; Linnebank, F.E.; Bredeweg, B.; Žabkar, J.; Bratko, I.

    2009-01-01

    Capturing conceptual knowledge in QR models is becoming of interest to a larger audience of domain experts. Consequently, we have been training several groups to effectively create QR models during the last few years. In this paper we describe our teaching experiences, the issues the modellers encou

  17. "ABC's Earthquake" (Experiments and models in seismology)

    Science.gov (United States)

    Almeida, Ana

    2017-04-01

    Ana Almeida, Escola Básica e Secundária Dr. Vieira de Carvalho, Moreira da Maia, Portugal. The purpose of this presentation, in poster format, is to describe an activity that I planned and carried out in a school in the north of Portugal, using a kit of simple, easy-to-use materials - the sismo-box. The activity "ABC's Earthquake" was developed within the discipline of Natural Sciences, with 7th-grade students, geosciences teachers and teachers from other areas. The possibility of working with the sismo-box was seen as an exciting and promising opportunity: to promote science, and seismology more specifically; to do science, using the models in the box to apply the scientific method; to work on and consolidate content and skills in Natural Sciences; and to share these materials with classmates and with teachers from different areas. Throughout the development of the activity, with both students and teachers, it was possible to see the admiration for the models presented in the earthquake box, as well as the interest and enthusiasm in wanting to handle them and to understand the results obtained after the procedure proposed in the script. With this activity we managed to promote: educational success in this subject; a "school culture" of active participation, with quality, rules, discipline and citizenship values; full integration of students with special educational needs; a stronger role for the school as a cultural, informational and formative institution; up-to-date and innovative activities; and knowledge of "being and doing", contributing to a moment of joy and discovery. Learn by doing!

  18. Comparing Simulation Output Accuracy of Discrete Event and Agent Based Models: A Quantitative Approach

    CERN Document Server

    Majid, Mazlina Abdul; Siebers, Peer-Olaf

    2010-01-01

    In our research we investigate the output accuracy of discrete event simulation models and agent-based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, as it is possible in both modelling approaches to implement human reactive behaviour in the model by using standard methods. As a case study we have chosen the retail sector, and here in particular the operations of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation through modelling the reactive behaviour of staff and customers of the department. First, we have carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step a multi-scenario experimen...

  19. Characteristics of the Nordic Seas overflows in a set of Norwegian Earth System Model experiments

    Science.gov (United States)

    Guo, Chuncheng; Ilicak, Mehmet; Bentsen, Mats; Fer, Ilker

    2016-08-01

    Global ocean models with an isopycnic vertical coordinate are advantageous in representing overflows, as they do not suffer from topography-induced spurious numerical mixing commonly seen in geopotential coordinate models. In this paper, we present a quantitative diagnosis of the Nordic Seas overflows in four configurations of the Norwegian Earth System Model (NorESM) family that features an isopycnic ocean model. For intercomparison, two coupled ocean-sea ice and two fully coupled (atmosphere-land-ocean-sea ice) experiments are considered. Each pair consists of a (non-eddying) 1° and a (eddy-permitting) 1/4° horizontal resolution ocean model. In all experiments, overflow waters remain dense and descend to the deep basins, entraining ambient water en route. Results from the 1/4° pair show similar behavior in the overflows, whereas the 1° pair show distinct differences, including temperature/salinity properties, volume transport (Q), and large scale features such as the strength of the Atlantic Meridional Overturning Circulation (AMOC). The volume transport of the overflows and degree of entrainment are underestimated in the 1° experiments, whereas in the 1/4° experiments, there is a two-fold downstream increase in Q, which matches observations well. In contrast to the 1/4° experiments, the coarse 1° experiments do not capture the inclined isopycnals of the overflows or the western boundary current off the Flemish Cap. In all experiments, the pathway of the Iceland-Scotland Overflow Water is misrepresented: a major fraction of the overflow proceeds southward into the West European Basin, instead of turning westward into the Irminger Sea. This discrepancy is attributed to excessive production of Labrador Sea Water in the model. The mean state and variability of the Nordic Seas overflows have significant consequences on the response of the AMOC, hence their correct representations are of vital importance in global ocean and climate modelling.

  20. FIELD EXPERIMENTS AND MODELING AT CDG AIRPORTS

    Science.gov (United States)

    Ramaroson, R.

    2009-12-01

    Richard Ramaroson (1,4), Klaus Schaefer (2), Stefan Emeis (2), Carsten Jahn (2), Gregor Schürmann (2), Maria Hoffmann (2), Mikhael Zatevakhin (3), Alexandre Ignatyev (3). (1) ONERA, Châtillon, France; (4) SEAS, Harvard University, Cambridge, USA; (2) FZK, Garmisch, Germany; (3) FSUE SPbAEP, St Petersburg, Russia. Two-month field campaigns were organized at CDG airports in autumn 2004 and summer 2005. Air quality and ground air-traffic emissions were monitored continuously at terminals and taxi-runways, along with meteorological parameters measured onboard trucks and with a SODAR. This paper analyses the characteristics of commercial engine emissions at airports and their effects on gaseous pollutants and airborne particles, coupled to meteorology. LES model results for PM dispersion coupled to microphysics in the PBL are compared to measurements. Winds and temperature at the surface and their vertical profiles have been recorded, together with turbulence. SODAR observations show the time development of the mixing-layer depth and turbulent mixing in summer up to 800 m. Active low-level jets and their regional extent have been observed and analyzed. PM number and mass size distributions, morphology and chemical contents are investigated. Formation of new ultrafine volatile (UFV) particles in the ambient plume downstream of running engines is observed. Soot particles are observed at significant levels mostly at high power thrusts, at take-off (TO) and on touch-down, whereas at the lower thrusts used at taxi and aprons the UFV PM emissions become higher. Ambient airborne PM1/2.5 is closely correlated with air traffic volume and shows a maximum beside runways. The PM number distribution at airports is composed mainly of volatile ultrafine PM, most abundant at the aprons. Ambient PM mass in autumn is higher than in summer. The expected differences between TO and taxi emissions are confirmed for NO, NO2, speciated VOC and CO. NO/NO2 emissions are larger at runways due to higher power. Reactive VOC and CO are more produced at low powers during idling at

  1. Quantitative Models of the Dose-Response and Time Course of Inhalational Anthrax in Humans

    Science.gov (United States)

    Schell, Wiley A.; Bulmahn, Kenneth; Walton, Thomas E.; Woods, Christopher W.; Coghill, Catherine; Gallegos, Frank; Samore, Matthew H.; Adler, Frederick R.

    2013-01-01

    Anthrax poses a community health risk due to accidental or intentional aerosol release. Reliable quantitative dose-response analyses are required to estimate the magnitude and timeline of potential consequences and the effect of public health intervention strategies under specific scenarios. Analyses of available data from exposures and infections of humans and non-human primates are often contradictory. We review existing quantitative inhalational anthrax dose-response models in light of criteria we propose for a model to be useful and defensible. To satisfy these criteria, we extend an existing mechanistic competing-risks model to create a novel Exposure–Infection–Symptomatic illness–Death (EISD) model and use experimental non-human primate data and human epidemiological data to optimize parameter values. The best fit to these data leads to estimates of a dose leading to infection in 50% of susceptible humans (ID50) of 11,000 spores (95% confidence interval 7,200–17,000), ID10 of 1,700 (1,100–2,600), and ID1 of 160 (100–250). These estimates suggest that use of a threshold to human infection of 600 spores (as suggested in the literature) underestimates the infectivity of low doses, while an existing estimate of a 1% infection rate for a single spore overestimates low dose infectivity. We estimate the median time from exposure to onset of symptoms (incubation period) among untreated cases to be 9.9 days (7.7–13.1) for exposure to ID50, 11.8 days (9.5–15.0) for ID10, and 12.1 days (9.9–15.3) for ID1. Our model is the first to provide incubation period estimates that are independently consistent with data from the largest known human outbreak. This model refines previous estimates of the distribution of early onset cases after a release and provides support for the recommended 60-day course of prophylactic antibiotic treatment for individuals exposed to low doses. PMID:24058320
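
    The published EISD model is a mechanistic competing-risks model; as a simple consistency check (an illustration only, not the authors' model), the three reported ID values line up almost exactly with a one-parameter exponential (single-hit) dose-response curve calibrated to the ID50:

        import numpy as np

        # Single-hit model: P(infection | dose d) = 1 - exp(-p * d)
        p = np.log(2) / 11_000                     # per-spore probability implied by ID50 = 11,000 spores
        for target in (0.10, 0.01):
            d = -np.log(1.0 - target) / p
            print(f"ID{int(target * 100):>2} ~ {d:,.0f} spores")
        # prints roughly 1,672 spores for ID10 (reported 1,700) and 159 spores for ID1 (reported 160)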

  2. Bayesian model choice and search strategies for mapping interacting quantitative trait Loci.

    Science.gov (United States)

    Yi, Nengjun; Xu, Shizhong; Allison, David B

    2003-01-01

    Most complex traits of animals, plants, and humans are influenced by multiple genetic and environmental factors. Interactions among multiple genes play fundamental roles in the genetic control and evolution of complex traits. Statistical modeling of interaction effects in quantitative trait loci (QTL) analysis must accommodate a very large number of potential genetic effects, which presents a major challenge to determining the genetic model with respect to the number of QTL, their positions, and their genetic effects. In this study, we use the methodology of Bayesian model and variable selection to develop strategies for identifying multiple QTL with complex epistatic patterns in experimental designs with two segregating genotypes. Specifically, we develop a reversible jump Markov chain Monte Carlo algorithm to determine the number of QTL and to select main and epistatic effects. With the proposed method, we can jointly infer the genetic model of a complex trait and the associated genetic parameters, including the number, positions, and main and epistatic effects of the identified QTL. Our method can map a large number of QTL with any combination of main and epistatic effects. Utility and flexibility of the method are demonstrated using both simulated data and a real data set. Sensitivity of posterior inference to prior specifications of the number and genetic effects of QTL is investigated. PMID:14573494

  3. Investigation and prediction of protein precipitation by polyethylene glycol using quantitative structure-activity relationship models.

    Science.gov (United States)

    Hämmerling, Frank; Ladd Effio, Christopher; Andris, Sebastian; Kittelmann, Jörg; Hubbuch, Jürgen

    2017-01-10

    Precipitation of proteins is considered to be an effective purification method for proteins and has proven its potential to replace costly chromatography processes. Besides salts and polyelectrolytes, polymers, such as polyethylene glycol (PEG), are commonly used for precipitation applications under mild conditions. Process development, however, for protein precipitation steps still is based mainly on heuristic approaches and high-throughput experimentation due to a lack of understanding of the underlying mechanisms. In this work we apply quantitative structure-activity relationships (QSARs) to model two parameters, the discontinuity point m* and the β-value, that describe the complete precipitation curve of a protein under defined conditions. The generated QSAR models are sensitive to the protein type, pH, and ionic strength. It was found that the discontinuity point m* is mainly dependent on protein molecular structure properties and electrostatic surface properties, whereas the β-value is influenced by the variance in electrostatics and hydrophobicity on the protein surface. The models for m* and the β-value exhibit a good correlation between observed and predicted data with a coefficient of determination of R(2)≥0.90 and, hence, are able to accurately predict precipitation curves for proteins. The predictive capabilities were demonstrated for a set of combinations of protein type, pH, and ionic strength not included in the generation of the models and good agreement between predicted and experimental data was achieved.
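
    A precipitation curve of this kind is often summarized with a piecewise form in which the soluble fraction stays constant up to the discontinuity point m* and then decays log-linearly with slope β; the exact parameterization used by the authors may differ, so the sketch below (with made-up screening data) is only illustrative:

        import numpy as np
        from scipy.optimize import curve_fit

        def precipitation_curve(c_peg, m_star, beta, s0=1.0):
            """Soluble protein fraction vs. PEG concentration: constant up to m*, then log-linear decay."""
            return np.where(c_peg <= m_star, s0, s0 * 10.0 ** (-beta * (c_peg - m_star)))

        # Hypothetical high-throughput screening data (% PEG vs. fraction left in the supernatant)
        c_peg = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
        frac  = np.array([1.0, 1.0, 0.98, 0.80, 0.45, 0.20, 0.08, 0.03])

        (m_star, beta), _ = curve_fit(precipitation_curve, c_peg, frac, p0=[4.0, 0.2])
        print(f"m* = {m_star:.2f} % PEG, beta = {beta:.3f}")

    A QSAR layer then regresses the fitted m* and β values on structural and surface descriptors, pH and ionic strength, so that curves for unseen conditions can be predicted without running the screen.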

  4. Quantitative structure-property relationship modeling of Grätzel solar cell dyes.

    Science.gov (United States)

    Venkatraman, Vishwesh; Åstrand, Per-Olof; Alsberg, Bjørn Kåre

    2014-01-30

    With fossil fuel reserves on the decline, there is increasing focus on the design and development of low-cost organic photovoltaic devices, in particular, dye-sensitized solar cells (DSSCs). The power conversion efficiency (PCE) of a DSSC is heavily influenced by the chemical structure of the dye. However, as far as we know, no predictive quantitative structure-property relationship models for DSSCs with PCE as one of the response variables have been reported. Thus, we report for the first time the successful application of comparative molecular field analysis (CoMFA) and vibrational frequency-based eigenvalue (EVA) descriptors to model molecular structure-photovoltaic performance relationships for a set of 40 coumarin derivatives. The results show that the models obtained provide statistically robust predictions of important photovoltaic parameters such as PCE, the open-circuit voltage (V(OC)), short-circuit current (J(SC)) and the peak absorption wavelength λ(max). Some of our findings based on the analysis of the models are in accordance with those reported in the literature. These structure-property relationships can be applied to the rational structural design and evaluation of new photovoltaic materials.

  5. Nonlinear quantitative radiation sensitivity prediction model based on NCI-60 cancer cell lines.

    Science.gov (United States)

    Zhang, Chunying; Girard, Luc; Das, Amit; Chen, Sun; Zheng, Guangqiang; Song, Kai

    2014-01-01

    We proposed a nonlinear model to perform a novel quantitative radiation sensitivity prediction. We used the NCI-60 panel, which consists of nine different cancer types, as the platform to train our model. Important radiation therapy (RT) related genes were selected by significance analysis of microarrays (SAM). Orthogonal latent variables (LVs) were then extracted by the partial least squares (PLS) method as the new compressive input variables. Finally, support vector machine (SVM) regression model was trained with these LVs to predict the SF2 (the surviving fraction of cells after a radiation dose of 2 Gy γ-ray) values of the cell lines. Comparison with the published results showed significant improvement of the new method in various ways: (a) reducing the root mean square error (RMSE) of the radiation sensitivity prediction model from 0.20 to 0.011; and (b) improving prediction accuracy from 62% to 91%. To test the predictive performance of the gene signature, three different types of cancer patient datasets were used. Survival analysis across these different types of cancer patients strongly confirmed the clinical potential utility of the signature genes as a general prognosis platform. The gene regulatory network analysis identified six hub genes that are involved in canonical cancer pathways.
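
    The SAM, PLS and SVM stages can be sketched roughly as follows (synthetic data; a simple correlation filter stands in for SAM, and no cross-validation is shown, so this is not the authors' exact pipeline):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.svm import SVR
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 500))          # hypothetical expression matrix (cell lines x genes)
        sf2 = rng.uniform(0.2, 0.9, size=60)    # hypothetical SF2 values

        # Step 1 (stand-in for SAM): keep the genes most correlated with SF2
        corr = np.abs([np.corrcoef(X[:, j], sf2)[0, 1] for j in range(X.shape[1])])
        selected = np.argsort(corr)[-100:]

        # Step 2: compress the selected genes into orthogonal latent variables with PLS
        pls = PLSRegression(n_components=5)
        lv_scores = pls.fit(X[:, selected], sf2).transform(X[:, selected])

        # Step 3: train an SVM regressor on the latent variables to predict SF2
        svr = SVR(kernel="rbf", C=1.0, epsilon=0.05).fit(lv_scores, sf2)
        pred = svr.predict(lv_scores)
        print("training RMSE:", mean_squared_error(sf2, pred) ** 0.5)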

  6. Nonlinear Quantitative Radiation Sensitivity Prediction Model Based on NCI-60 Cancer Cell Lines

    Directory of Open Access Journals (Sweden)

    Chunying Zhang

    2014-01-01

    Full Text Available We proposed a nonlinear model to perform a novel quantitative radiation sensitivity prediction. We used the NCI-60 panel, which consists of nine different cancer types, as the platform to train our model. Important radiation therapy (RT) related genes were selected by significance analysis of microarrays (SAM). Orthogonal latent variables (LVs) were then extracted by the partial least squares (PLS) method as the new compressive input variables. Finally, a support vector machine (SVM) regression model was trained with these LVs to predict the SF2 (the surviving fraction of cells after a radiation dose of 2 Gy γ-ray) values of the cell lines. Comparison with the published results showed significant improvement of the new method in various ways: (a) reducing the root mean square error (RMSE) of the radiation sensitivity prediction model from 0.20 to 0.011; and (b) improving prediction accuracy from 62% to 91%. To test the predictive performance of the gene signature, three different types of cancer patient datasets were used. Survival analysis across these different types of cancer patients strongly confirmed the clinical potential utility of the signature genes as a general prognosis platform. The gene regulatory network analysis identified six hub genes that are involved in canonical cancer pathways.

  7. The JBEI quantitative metabolic modeling library (jQMM): a python library for modeling microbial metabolism

    DEFF Research Database (Denmark)

    Birkel, Garrett W.; Ghosh, Amit; Kumar, Vinay S.

    2017-01-01

    analysis, new methods for the effective use of the ever more readily available and abundant -omics data (i.e. transcriptomics, proteomics and metabolomics) are urgently needed. Results: The jQMM library presented here provides an open-source, Python-based framework for modeling internal metabolic fluxes...
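
    Flux models of the kind such a library handles reduce, in their simplest form, to a linear program over the stoichiometric matrix (flux balance analysis). The toy network below is illustrative only and does not use the jQMM API:

        import numpy as np
        from scipy.optimize import linprog

        # Toy network: A_ext -> A -> B -> biomass
        #   r0: uptake of A, r1: A -> B, r2: B -> biomass (objective)
        S = np.array([
            [1, -1,  0],   # metabolite A
            [0,  1, -1],   # metabolite B
        ])
        bounds = [(0, 10), (0, 1000), (0, 1000)]      # flux bounds (uptake capped at 10)
        c = np.array([0, 0, -1])                      # maximize r2 == minimize -r2

        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        print("optimal steady-state fluxes:", res.x)  # expected: [10, 10, 10]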

  8. Using the ACT-R architecture to specify 39 quantitative process models of decision making

    Directory of Open Access Journals (Sweden)

    Julian N. Marewski

    2011-08-01

    Full Text Available Hypotheses about decision processes are often formulated qualitatively and remain silent about the interplay of decision, memorial, and other cognitive processes. At the same time, existing decision models are specified at varying levels of detail, making it difficult to compare them. We provide a methodological primer on how detailed cognitive architectures such as ACT-R allow remedying these problems. To make our point, we address a controversy, namely, whether noncompensatory or compensatory processes better describe how people make decisions from the accessibility of memories. We specify 39 models of accessibility-based decision processes in ACT-R, including the noncompensatory recognition heuristic and various other popular noncompensatory and compensatory decision models. Additionally, to illustrate how such models can be tested, we conduct a model comparison, fitting the models to one experiment and letting them generalize to another. Behavioral data are best accounted for by race models. These race models embody the noncompensatory recognition heuristic and compensatory models as a race between competing processes, dissolving the dichotomy between existing decision models.

  9. USAGE OF INTERVAL CAUSE-EFFECT RELATIONSHIP COEFFICIENTS IN THE QUANTITATIVE MODEL OF STRATEGIC PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Dmitry M. Yershov

    2012-12-01

    Full Text Available This paper proposes a method to obtain the values of the coefficients of cause-effect relationships between strategic objectives in the form of intervals and to use them in solving the problem of the optimal allocation of an organization's resources. We suggest taking advantage of the interval analytic hierarchy process for obtaining the intervals. The quantitative model of strategic performance developed by M. Hell, S. Vidučić and Ž. Garača is employed for finding the optimal resource allocation. The uncertainty that arises in the optimization problem as a result of the interval character of the cause-effect relationship coefficients is eliminated through the application of maximax and maximin criteria. It is shown that the problem of finding the optimal maximin, maximax, and compromise resource allocation can be represented as a mixed 0-1 linear programming problem. Finally, a numerical example and directions for further research are given.
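
    To make the two criteria concrete, the following sketch enumerates binary funding decisions under a budget, scoring each feasible allocation with the lower interval bounds (maximin) or the upper bounds (maximax) of hypothetical cause-effect coefficients; the paper formulates this as a mixed 0-1 linear program rather than by brute force, and all names and numbers below are made up:

        from itertools import product

        # Hypothetical initiatives: cost and interval-valued contribution [lo, hi]
        initiatives = {
            "train_staff": {"cost": 3, "impact": (2.0, 4.0)},
            "new_crm":     {"cost": 5, "impact": (1.5, 6.0)},
            "open_branch": {"cost": 7, "impact": (3.0, 5.0)},
            "marketing":   {"cost": 2, "impact": (0.5, 2.5)},
        }
        budget = 10
        names = list(initiatives)

        def best_allocation(bound):  # bound = 0 -> pessimistic (maximin), 1 -> optimistic (maximax)
            best, best_val = None, float("-inf")
            for choice in product((0, 1), repeat=len(names)):
                cost = sum(x * initiatives[n]["cost"] for x, n in zip(choice, names))
                if cost > budget:
                    continue
                val = sum(x * initiatives[n]["impact"][bound] for x, n in zip(choice, names))
                if val > best_val:
                    best, best_val = choice, val
            return {n for x, n in zip(best, names) if x}, best_val

        print("maximin:", best_allocation(0))
        print("maximax:", best_allocation(1))

    With these made-up numbers the pessimistic and optimistic criteria select different portfolios, which is exactly the ambiguity a compromise criterion is meant to resolve.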

  10. Nanoindentation shape effect: experiments, simulations and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Calabri, L [CNR-INFM-National Research Center on nanoStructures and bioSystems at Surfaces (S3), Via Campi 213/a, 41100 Modena (Italy); Pugno, N [Department of Structural Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin (Italy); Rota, A [CNR-INFM-National Research Center on nanoStructures and bioSystems at Surfaces (S3), Via Campi 213/a, 41100 Modena (Italy); Marchetto, D [CNR-INFM-National Research Center on nanoStructures and bioSystems at Surfaces (S3), Via Campi 213/a, 41100 Modena (Italy); Valeri, S [CNR-INFM-National Research Center on nanoStructures and bioSystems at Surfaces (S3), Via Campi 213/a, 41100 Modena (Italy)

    2007-10-03

    AFM nanoindentation is nowadays commonly used for the study of mechanical properties of materials at the nanoscale. The investigation of surface hardness of a material using AFM means that the probe has to be able to indent the surface, but also to image it. Usually standard indenters are not sharp enough to obtain high-resolution images, but on the other hand measuring the hardness behaviour of a material with a non-standard sharp indenter gives only comparative results affected by a significant deviation from the commonly used hardness scales. In this paper we try to understand how the shape of the indenter affects the hardness measurement, in order to find a relationship between the measured hardness of a material and the corner angle of a pyramidal indenter. To achieve this we performed a full experimental campaign, indenting the same material with three focused ion beam (FIB) nanofabricated probes with a highly altered corner angle. We then compared the results obtained experimentally with those obtained by numerical simulations, using the finite element method (FEM), and by theoretical models, using a general scaling law for nanoindentation available for indenters with a variable size and shape. The comparison between these three approaches (experimental, numerical and theoretical approaches) reveals a good agreement and allowed us to find a theoretical relationship which links the measured hardness value with the shape of the indenter. The same theoretical approach has also been used to fit the hardness experimental results considering the indentation size effect. In this case we compare the measured data, changing the applied load.

  11. The Aqueous Alteration of CR Chondrites: Experiments and Geochemical Modeling

    Science.gov (United States)

    Perronnet, M.; Berger, G.; Zolensky, M. E.; Toplis, M. J.; Kolb, V. M.; Bajagic, M.

    2007-03-01

    Laboratory alteration experiments were performed on mineralogical assemblages having the unaltered CR composition. The mineralogy of reaction products was compared to that of Renazzo and GRO 95577 and to predictions of geochemical modeling.

  12. A Community Mentoring Model for STEM Undergraduate Research Experiences

    Science.gov (United States)

    Kobulnicky, Henry A.; Dale, Daniel A.

    2016-01-01

    This article describes a community mentoring model for UREs that avoids some of the common pitfalls of the traditional paradigm while harnessing the power of learning communities to provide young scholars a stimulating collaborative STEM research experience.

  13. Lattice Boltzmann modeling of directional wetting: Comparing simulations to experiments

    NARCIS (Netherlands)

    Jansen, H.P.; Sotthewes, K.; Swigchem, van J.; Zandvliet, H.J.W.; Kooij, E.S.

    2013-01-01

    Lattice Boltzmann Modeling (LBM) simulations were performed on the dynamic behavior of liquid droplets on chemically striped patterned surfaces, ultimately with the aim to develop a predictive tool enabling reliable design of future experiments. The simulations accurately mimic experimental results,

  14. Wind Tunnel Experiments with Active Control of Bridge Section Model

    DEFF Research Database (Denmark)

    Hansen, Henriette I.; Thoft-Christensen, Palle

    This paper describes results of wind tunnel experiments with a bridge section model where movable flaps are integrated in the bridge girder so that each flap is the streamlined part of the edge of the girder. This active control flap system is patented by COWIconsult and may be used to increase the flutter wind velocity for future ultra-long span suspension bridges. The purpose of the wind tunnel experiments is to investigate the principle of using this active flap control system. The bridge section model used in the experiments is therefore not a model of a specific bridge, but it is realistic compared with a real bridge. Five flap configurations are investigated during the wind tunnel experiments, and depending on the actual flap configuration it is possible to decrease or increase the flutter wind velocity for the model.

  15. A quantitative model for the in vivo assessment of drug binding sites with positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Mintun, M.A.; Raichle, M.E.; Kilbourn, M.R.; Wooten, G.F.; Welch, M.J.

    1984-03-01

    We propose an in vivo method for use with positron emission tomography (PET) that results in a quantitative characterization of neuroleptic binding sites using radiolabeled spiperone. The data are analyzed using a mathematical model that describes transport, nonspecific binding, and specific binding in the brain. The model demonstrates that the receptor quantities Bmax (i.e., the number of binding sites) and KD-1 (i.e., the binding affinity) are not separably ascertainable with tracer methodology in human subjects. We have, therefore, introduced a new term, the binding potential, equivalent to the product BmaxKD-1, which reflects the capacity of a given tissue, or region of a tissue, for ligand-binding site interaction. The procedure for obtaining these measurements is illustrated with data from sequential PET scans of baboons after intravenous injection of carrier-added (18F)spiperone. From these data we estimate the brain tissue nonspecific binding of spiperone to be in the range of 94.2 to 95.3%, and the regional brain spiperone permeability (measured as the permeability-surface area product) to be in the range of 0.025 to 0.036 cm3/(s X ml). The binding potential of the striatum ranged from 17.4 to 21.6; these in vivo estimates compare favorably to in vitro values in the literature. To our knowledge this represents the first direct evidence that PET can be used to characterize quantitatively, locally and in vivo, drug binding sites in brain. The ability to make such measurements with PET should permit the detailed investigation of diseases thought to result from disorders of receptor function.
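
    The identifiability argument can be written out briefly. At equilibrium the Langmuir isotherm gives the bound ligand B as a function of the free ligand F; at tracer concentrations (F much smaller than KD) only the ratio Bmax/KD survives:

        \[
          B \;=\; \frac{B_{\max}\, F}{K_D + F}
            \;\approx\; \frac{B_{\max}}{K_D}\, F \quad (F \ll K_D),
          \qquad \mathrm{BP} \;=\; B_{\max}\, K_D^{-1}
        \]

    so a tracer-dose PET study constrains only the product Bmax·KD^-1, which is why the binding potential is reported rather than Bmax and KD separately.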

  16. Authorship of scientific articles within an ethical-legal framework: quantitative model

    Directory of Open Access Journals (Sweden)

    Martha Y. Vallejo

    2012-12-01

    Full Text Available Determining authorship and the order of authorship in scientific papers, in modern interdisciplinary and interinstitutional science, has become complex at a legal and ethical level. Failure to define authorship before or during the research creates subsequent problems for those considered authors of a publication or lead authors of a work, particularly once the project or manuscript is completed. This article proposes a quantitative and qualitative model to determine authorship within a scientific, ethical and legal frame. The principles used for the construction of this design are based on two criteria: (a) the stages of research and the scientific method, involving: 1. planning and development of the research project, 2. design and data collection, 3. presentation of results, 4. interpretation of results, 5. manuscript preparation to disseminate new knowledge to the scientific community, and 6. administration and management; and (b) weighting coefficients for each phase, used to decide on authorship and ownership of the work. The model also considers and distinguishes whether the level and activity performed during the creation of the work and the diffusion of knowledge is an intellectual or a practical contribution; this distinction both contrasts with and complements the elements protected by copyright laws. The format can be applied a priori and a posteriori to the completion of a project or manuscript and can conform to any research and publication. The use of this format will quantitatively resolve: 1. the order of authorship (first author and co-author order), 2. the inclusion and exclusion of contributors, taking into account ethical and legal principles, and 3. the percentages of economic rights for each author.

  17. A user experience model for tangible interfaces for children

    NARCIS (Netherlands)

    Reidsma, Dennis; van Dijk, Elisabeth M.A.G.; van der Sluis, Frans; Volpe, G; Camurri, A.; Perloy, L.M.; Nijholt, Antinus

    2015-01-01

    Tangible user interfaces allow children to take advantage of their experience in the real world when interacting with digital information. In this paper we describe a model for tangible user interfaces specifically for children that focuses mainly on the user experience during interaction and on how

  18. Quantitative approaches in developmental biology.

    Science.gov (United States)

    Oates, Andrew C; Gorfinkiel, Nicole; González-Gaitán, Marcos; Heisenberg, Carl-Philipp

    2009-08-01

    The tissues of a developing embryo are simultaneously patterned, moved and differentiated according to an exchange of information between their constituent cells. We argue that these complex self-organizing phenomena can only be fully understood with quantitative mathematical frameworks that allow specific hypotheses to be formulated and tested. The quantitative and dynamic imaging of growing embryos at the molecular, cellular and tissue level is the key experimental advance required to achieve this interaction between theory and experiment. Here we describe how mathematical modelling has become an invaluable method to integrate quantitative biological information across temporal and spatial scales, serving to connect the activity of regulatory molecules with the morphological development of organisms.

  19. Capillary Discharge Thruster Experiments and Modeling (Briefing Charts)

    Science.gov (United States)

    2016-06-01

    Briefing charts on propulsion models and experiments for spacecraft-propulsion-relevant plasmas, from Hall thrusters to plumes and fluxes on components, including complex reaction physics. The charts define symbols for the conductivity, the enthalpy h, the sound speed Cs and the wall energy flux Θ. Reference: Pekker, 40th AIAA Plasmadynamics and Laser Conference, 2009. R.S. MARTIN (ERC INC

  20. Engineering teacher training models and experiences

    Science.gov (United States)

    González-Tirados, R. M.

    2009-04-01

    Education Area, we renewed the programme, content and methodology, teaching the course under the name of "Initial Teacher Training Course within the framework of the European Higher Education Area". Continuous Training means learning throughout one's life as an Engineering teacher. They are actions designed to update and improve teaching staff, and are systematically offered on the current issues of: Teaching Strategies, training for research, training for personal development, classroom innovations, etc. They are activities aimed at conceptual change, changing the way of teaching and bringing teaching staff up-to-date. At the same time, the Institution is at the disposal of all teaching staff as a meeting point to discuss issues in common, attend conferences, department meetings, etc. In this Congress we present a justification of both training models and their design together with some results obtained on: training needs, participation, how it is developing and to what extent students are profiting from it.

  1. Effect of arterial deprivation on growing femoral epiphysis: Quantitative magnetic resonance imaging using a piglet model

    Energy Technology Data Exchange (ETDEWEB)

    Cheon, Jung Eun; Yoo, Won Joon; Kim, In One; Kim, Woo Sun; Choi, Young Hun [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2015-06-15

    To investigate the usefulness of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and diffusion MRI for the evaluation of femoral head ischemia. Unilateral femoral head ischemia was induced by selective embolization of the medial circumflex femoral artery in 10 piglets. All MRIs were performed immediately (1 hour) after embolization and at 1, 2, and 4 weeks after embolization. Apparent diffusion coefficients (ADCs) were calculated for the femoral head. The estimated pharmacokinetic parameters (Kep and Ve from a two-compartment model) and semi-quantitative parameters, including peak enhancement, time-to-peak (TTP), and contrast washout, were evaluated. The epiphyseal ADC values of the ischemic hip decreased immediately (1 hour) after embolization. However, they increased rapidly at 1 week after embolization and remained elevated until 4 weeks after embolization. Perfusion MRI of ischemic hips showed decreased epiphyseal perfusion with decreased Kep immediately after embolization. Signal intensity-time curves showed delayed TTP with limited contrast washout immediately post-embolization. At 1-2 weeks after embolization, spontaneous reperfusion was observed in ischemic epiphyses. The changes in ADC (p = 0.043) and Kep (p = 0.043) between immediately (1 hour) after embolization and 1 week post-embolization were significant. Diffusion MRI and the pharmacokinetic model obtained from DCE-MRI are useful in depicting early changes of perfusion and tissue damage in this model of femoral head ischemia in skeletally immature piglets.
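
    For reference, the standard two-compartment (Tofts-type) formulation behind parameters such as Kep and Ve is sketched below; whether the authors used exactly this parameterization is an assumption here:

        \[
          \frac{dC_t}{dt} \;=\; K^{\mathrm{trans}} C_p(t) \;-\; k_{ep}\, C_t(t),
          \qquad k_{ep} \;=\; \frac{K^{\mathrm{trans}}}{v_e},
          \qquad C_t(t) \;=\; K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}(t-\tau)}\, d\tau
        \]

    where C_p is the arterial input (plasma) concentration and C_t the tissue concentration.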

  2. Impact assessment of abiotic resources in LCA: quantitative comparison of selected characterization models.

    Science.gov (United States)

    Rørbech, Jakob T; Vadenbo, Carl; Hellweg, Stefanie; Astrup, Thomas F

    2014-10-07

    Resources have received significant attention in recent years resulting in development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247 individual market inventory data sets covering a wide range of societal activities (ecoinvent database v3.0). Log-linear regression analysis was carried out for all pairwise combinations of the 11 methods for identification of correlations in CFs (resources) and total impacts (inventory data sets) between methods. Significant differences in resource coverage were observed (9-73 resources) revealing a trade-off between resource coverage and model complexity. High correlation in CFs between methods did not necessarily manifest in high correlation in total impacts. This indicates that also resource coverage may be critical for impact assessment results. Although no consistent correlations between methods applying similar assessment models could be observed, all methods showed relatively high correlation regarding the assessment of energy resources. Finally, we classify the existing methods into three groups, according to method focus and modeling approach, to aid method selection within LCA.

  3. A quantitative model for using acridine orange as a transmembrane pH gradient probe.

    Science.gov (United States)

    Clerc, S; Barenholz, Y

    1998-05-15

    Monitoring the acidification of the internal space of membrane vesicles by proton pumps can be achieved easily with optical probes. Transmembrane pH gradients cause a blue-shift in the absorbance spectrum and the quenching of the fluorescence of the cationic dye acridine orange. It has been postulated that these changes are caused by accumulation and aggregation of the dye inside the vesicles. We tested this hypothesis using liposomes with transmembrane concentration gradients of ammonium sulfate as model system. Fluorescence intensity of acridine orange solutions incubated with liposomes was affected by magnitude of the gradient, volume trapped by vesicles, and temperature. These experimental data were compared to a theoretical model describing the accumulation of acridine orange monomers in the vesicles according to the inside-to-outside ratio of proton concentrations, and the intravesicular formation of sandwich-like piles of acridine orange cations. This theoretical model predicted quantitatively the relationship between the transmembrane pH gradients and spectral changes of acridine orange. Therefore, adequate characterization of aggregation of dye in the lumen of biological vesicles provides the theoretical basis for using acridine orange as an optical probe to quantify transmembrane pH gradients.
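
    The quantitative link exploited by such probes is the weak-base distribution relation: the neutral form permeates the membrane while the protonated form is trapped, so at equilibrium the accumulation ratio mirrors the proton ratio (shown here in its standard textbook form; the authors' full model additionally accounts for intravesicular dye stacking):

        \[
          \frac{[\mathrm{AOH}^+]_{\mathrm{in}}}{[\mathrm{AOH}^+]_{\mathrm{out}}}
          \;\approx\;
          \frac{[\mathrm{H}^+]_{\mathrm{in}}}{[\mathrm{H}^+]_{\mathrm{out}}}
          \qquad\Longrightarrow\qquad
          \Delta\mathrm{pH} \;=\; \mathrm{pH}_{\mathrm{out}} - \mathrm{pH}_{\mathrm{in}}
          \;=\; \log_{10}\!\left(\frac{[\mathrm{AOH}^+]_{\mathrm{in}}}{[\mathrm{AOH}^+]_{\mathrm{out}}}\right)
        \]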

  4. NetLand: quantitative modeling and visualization of Waddington's epigenetic landscape using probabilistic potential.

    Science.gov (United States)

    Guo, Jing; Lin, Feng; Zhang, Xiaomeng; Tanavde, Vivek; Zheng, Jie

    2017-05-15

    Waddington's epigenetic landscape is a powerful metaphor for cellular dynamics driven by gene regulatory networks (GRNs). Its quantitative modeling and visualization, however, remains a challenge, especially when there are more than two genes in the network. A software tool for Waddington's landscape has not been available in the literature. We present NetLand, an open-source software tool for modeling and simulating the kinetic dynamics of GRNs, and visualizing the corresponding Waddington's epigenetic landscape in three dimensions without restriction on the number of genes in a GRN. With an interactive and graphical user interface, NetLand can facilitate the knowledge discovery and experimental design in the study of cell fate regulation (e.g. stem cell differentiation and reprogramming). NetLand can run under operating systems including Windows, Linux and OS X. The executive files and source code of NetLand as well as a user manual, example models etc. can be downloaded from http://netland-ntu.github.io/NetLand/ . zhengjie@ntu.edu.sg. Supplementary data are available at Bioinformatics online.

  5. Quantitative analysis of surface deformation and ductile flow in complex analogue geodynamic models based on PIV method.

    Science.gov (United States)

    Krýza, Ondřej; Lexa, Ondrej; Závada, Prokop; Schulmann, Karel; Gapais, Denis; Cosgrove, John

    2017-04-01

    PIV (particle image velocimetry) is an optical analysis method widely used in many technical branches where the visualization and quantification of material flow are important. Typical examples are studies of liquid flow through complex channel systems, gas spreading or combustion problems. In our current research we used this method to investigate two types of complex analogue geodynamic and tectonic experiments. The first class of experiments aims to model large-scale oroclinal buckling as an analogue of the late Paleozoic to early Mesozoic evolution of the Central Asian Orogenic Belt (CAOB), resulting from northward drift of the North China craton towards the Siberian craton. Here we studied the relationship between lower crustal and lithospheric mantle flows and upper crustal deformation, respectively. A second class of experiments is focused on a more general study of lower crustal flow in indentation systems, which represent a major component of some large hot orogens (e.g. the Bohemian massif). Most of the simulations in both cases show a strong dependency of the shape of brittle structures in the upper crust on the folding style of the middle and lower ductile layers, which is influenced by the rheological, geometrical and thermal conditions of different parts across the shortened domain. The purpose of the PIV application is to quantify material redistribution in critical domains of the model. The derivation of flow directions and the calculation of strain-rate and total displacement fields in analogue experiments are generally difficult and time-consuming, and are often performed only on the basis of visual evaluations. The PIV method operates on a set of images in which small tracer particles are seeded within the modeled domain and are assumed to faithfully follow the material flow. On the basis of estimated pixel coordinates, the material displacement field, velocity field, strain rate, vorticity, tortuosity, etc. are calculated. In our experiments we used velocity field divergence to
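
    The core of a PIV evaluation, pairing interrogation windows between two frames by cross-correlation and reading the displacement off the correlation peak, can be sketched as follows (a minimal illustration; production PIV codes add sub-pixel peak fitting, window overlap and outlier filtering):

        import numpy as np
        from scipy.signal import fftconvolve

        def window_displacement(win_a, win_b):
            """Pixel displacement of features from window A to window B via FFT cross-correlation."""
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = fftconvolve(b, a[::-1, ::-1], mode="same")
            peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
            centre_y, centre_x = np.array(corr.shape) // 2
            return peak_x - centre_x, peak_y - centre_y          # (dx, dy)

        def piv_field(frame_a, frame_b, win=32):
            """One displacement vector per non-overlapping interrogation window."""
            ny, nx = frame_a.shape[0] // win, frame_a.shape[1] // win
            u, v = np.zeros((ny, nx)), np.zeros((ny, nx))
            for j in range(ny):
                for i in range(nx):
                    sl = (slice(j * win, (j + 1) * win), slice(i * win, (i + 1) * win))
                    u[j, i], v[j, i] = window_displacement(frame_a[sl], frame_b[sl])
            return u, v

        # Synthetic check: a speckle pattern shifted uniformly by 3 px in x and 1 px in y
        rng = np.random.default_rng(0)
        frame_a = rng.random((128, 128))
        frame_b = np.roll(frame_a, shift=(1, 3), axis=(0, 1))
        u, v = piv_field(frame_a, frame_b)
        print(u.mean(), v.mean())                                # ~3 and ~1
        # Vorticity and divergence then follow from np.gradient(u) and np.gradient(v).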

  6. Model validation for karst flow using sandbox experiments

    Science.gov (United States)

    Ye, M.; Pacheco Castro, R. B.; Tao, X.; Zhao, J.

    2015-12-01

    The study of flow in karst is complex due to the high heterogeneity of the porous media. Several approaches have been proposed in the literature to overcome the natural complexity of karst. Some of those methods are the single continuum, the double continuum, and the discrete network of conduits coupled with the single continuum. Several mathematical and computational models are available in the literature for each approach. In this study one computer model has been selected for each category to validate its usefulness for modeling flow in karst using a sandbox experiment. The models chosen are Modflow 2005, Modflow CFPV1 and Modflow CFPV2. A sandbox experiment was implemented in such a way that all the parameters required for each model can be measured. The sandbox experiment was repeated several times under different conditions. The model validation will be carried out by comparing the results of the model simulations with the real data. This validation will allow us to compare the accuracy of each model and its applicability in karst. We will also be able to evaluate whether the results of the complex models improve substantially compared to the simple models, especially because some models require complex parameters that are difficult to measure in the real world.

  7. Quantitative models of hydrothermal fluid-mineral reaction: The Ischia case

    Science.gov (United States)

    Di Napoli, Rossella; Federico, Cinzia; Aiuppa, Alessandro; D'Antonio, Massimo; Valenza, Mariano

    2013-03-01

    The intricate pathways of fluid-mineral reactions occurring beneath active hydrothermal systems are explored in this study by applying reaction path modelling to the Ischia case study. Ischia Island, in Southern Italy, hosts a well-developed and structurally complex hydrothermal system which, because of its heterogeneity in chemical and physical properties, is an ideal test site for evaluating the potential and limitations of quantitative geochemical models of hydrothermal reactions. We used the EQ3/6 software package, version 7.2b, to model the reaction of infiltrating waters (mixtures of meteoric water and seawater in variable proportions) with Ischia's reservoir rocks (the Mount Epomeo Green Tuff units; MEGT). The mineral assemblage and composition of the MEGT units were initially characterised by ad hoc designed optical microscopy and electron microprobe analysis, showing that phenocrysts (dominantly alkali feldspars and plagioclase) are set in a pervasively altered groundmass (with abundant clay minerals and zeolites). Reaction of infiltrating waters with MEGT minerals was simulated over a range of temperatures (95-260 °C) and CO2 fugacities (10^-0.2 to 10^0.5 bar) realistic for Ischia. During the model runs, a set of secondary minerals (selected based on independent information from studies of alteration minerals) was allowed to precipitate from model solutions when saturation was achieved. The compositional evolution of the model solutions obtained in the 95-260 °C runs was finally compared with the compositions of Ischia's thermal groundwaters, demonstrating an overall agreement. Our simulations, in particular, reproduce well the Mg-depleting maturation path of hydrothermal solutions, and have end-of-run model solutions whose Na-K-Mg compositions reflect attainment of full-equilibrium conditions at run temperature. High-temperature (180-260 °C) model runs are those best matching the Na-K-Mg compositions of Ischia's most chemically mature water samples

  8. Enriching the Student Experience Through a Collaborative Cultural Learning Model.

    Science.gov (United States)

    McInally, Wendy; Metcalfe, Sharon; Garner, Bonnie

    2015-01-01

    This article provides a knowledge and understanding of an international, collaborative, cultural learning model for students from the United States and Scotland. Internationalizing the student experience has been instrumental for student learning for the past eight years. Both countries have developed programs that have enriched and enhanced the overall student learning experience, mainly through the sharing of evidence-based care in both hospital and community settings. Student learning is at the heart of this international model, and through practice learning, leadership, and reflective practice, student immersion in global health care and practice is immense. Moving forward, we are seeking new opportunities to explore learning partnerships to provide this collaborative cultural learning experience.

  9. Quantitative seafloor characterization using angular backscatter data of the multi-beam echo-sounding system - Use of models and model free techniques

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.

    For quantitative seafloor roughness characterization and classification using multi-beam processed backscatter data, a good correlation is indicated among the power law parameters (composite roughness model) and hybrid ANN architecture results...

  10. Quantitative research.

    Science.gov (United States)

    Watson, Roger

    2015-04-01

    This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.

  11. Analysis of Water Conflicts across Natural and Societal Boundaries: Integration of Quantitative Modeling and Qualitative Reasoning

    Science.gov (United States)

    Gao, Y.; Balaram, P.; Islam, S.

    2009-12-01

    , the knowledge generated from these studies cannot be easily generalized or transferred to other basins. Here, we present an approach to integrate the quantitative and qualitative methods to study water issues and capture the contextual knowledge of water management- by combining the NSSs framework and an area of artificial intelligence called qualitative reasoning. Using the Apalachicola-Chattahoochee-Flint (ACF) River Basin dispute as an example, we demonstrate how quantitative modeling and qualitative reasoning can be integrated to examine the impact of over abstraction of water from the river on the ecosystem and the role of governance in shaping the evolution of the ACF water dispute.

  12. Serious overestimation in quantitative PCR by circular (supercoiled) plasmid standard: microalgal pcna as the model gene.

    Directory of Open Access Journals (Sweden)

    Yubo Hou

    Full Text Available Quantitative real-time PCR (qPCR) has become a gold standard for the quantification of nucleic acids and microorganism abundances, in which plasmid DNA carrying the target genes is most commonly used as the standard. A recent study showed that the supercoiled circular conformation of DNA appeared to suppress PCR amplification. However, the extent to which different structural types of DNA (circular versus linear) used as the standard may affect quantification accuracy has not been evaluated. In this study, we quantitatively compared qPCR accuracies based on circular plasmid (mostly in supercoiled form) and linear DNA standards (linearized plasmid DNA or PCR amplicons), using the proliferating cell nuclear antigen gene (pcna), a ubiquitous eukaryotic gene, in five marine microalgae as the model gene. We observed that PCR using circular plasmids as template gave 2.65-4.38 more threshold cycles than did equimolar linear standards. While the documented genome sequence of the diatom Thalassiosira pseudonana shows a single copy of pcna, qPCR using the circular plasmid as standard yielded an estimate of 7.77 copies of pcna per genome, whereas that using the linear standard gave 1.02 copies per genome. We conclude that circular plasmid DNA is unsuitable as a standard, and linear DNA should be used instead, in absolute qPCR. The serious overestimation by the circular plasmid standard is likely due to the undetected lower efficiency of its amplification in the early stage of PCR, when the supercoiled plasmid is the dominant template.
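
    The size of the bias follows directly from the reported Ct shift. Assuming roughly 100% amplification efficiency (doubling per cycle, an assumption for this back-of-the-envelope check), a standard that starts amplifying ΔCt cycles late inflates the apparent copy number by a factor of 2^ΔCt:

        delta_ct = (2.65, 4.38)                 # reported Ct shift of circular vs. linear standards
        fold = [2 ** d for d in delta_ct]       # ~6.3x to ~20.8x overestimation
        print([round(f, 1) for f in fold])
        print(round(7.77 / 1.02, 1))            # observed discrepancy in pcna copies per genome: ~7.6x

    The observed 7.6-fold discrepancy falls inside that range.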

  13. QSTR modeling for qualitative and quantitative toxicity predictions of diverse chemical pesticides in honey bee for regulatory purposes.

    Science.gov (United States)

    Singh, Kunwar P; Gupta, Shikha; Basant, Nikita; Mohan, Dinesh

    2014-09-15

    Pesticides are designed toxic chemicals for specific purposes and can harm nontarget species as well. The honey bee is considered a nontarget test species for toxicity evaluation of chemicals. Global QSTR (quantitative structure-toxicity relationship) models were established for qualitative and quantitative toxicity prediction of pesticides in honey bee (Apis mellifera) based on the experimental toxicity data of 237 structurally diverse pesticides. Structural diversity of the chemical pesticides and nonlinear dependence in the toxicity data were evaluated using the Tanimoto similarity index and Brock-Dechert-Scheinkman statistics. Probabilistic neural network (PNN) and generalized regression neural network (GRNN) QSTR models were constructed for classification (two and four categories) and function optimization problems using the toxicity end point in honey bees. The predictive power of the QSTR models was tested through rigorous validation performed using the internal and external procedures employing a wide series of statistical checks. In complete data, the PNN-QSTR model rendered a classification accuracy of 96.62% (two-category) and 95.57% (four-category), while the GRNN-QSTR model yielded a correlation (R(2)) of 0.841 between the measured and predicted toxicity values with a mean squared error (MSE) of 0.22. The results suggest the appropriateness of the developed QSTR models for reliably predicting qualitative and quantitative toxicities of pesticides in honey bee. Both the PNN and GRNN based QSTR models constructed here can be useful tools in predicting the qualitative and quantitative toxicities of the new chemical pesticides for regulatory purposes.
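
    GRNN and PNN are both kernel (Parzen-window) estimators, so a minimal version needs only a few lines; the descriptor matrix and toxicity values below are placeholders, not the study's data:

        import numpy as np

        def grnn_predict(X_train, y_train, X_query, sigma=0.5):
            """GRNN regression: kernel-weighted average of training targets (Specht's estimator)."""
            out = []
            for x in X_query:
                d2 = np.sum((X_train - x) ** 2, axis=1)
                w = np.exp(-d2 / (2.0 * sigma ** 2))
                out.append(np.dot(w, y_train) / w.sum())
            return np.array(out)

        def pnn_predict(X_train, labels, X_query, sigma=0.5):
            """PNN classification: class whose training points give the largest summed kernel density."""
            classes = np.unique(labels)
            out = []
            for x in X_query:
                d2 = np.sum((X_train - x) ** 2, axis=1)
                k = np.exp(-d2 / (2.0 * sigma ** 2))
                out.append(classes[np.argmax([k[labels == c].sum() for c in classes])])
            return np.array(out)

        # Placeholder molecular descriptors (rows = pesticides) and toxicity endpoints
        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 8))
        log_ld50 = X[:, 0] * 0.7 + rng.normal(scale=0.2, size=40)   # hypothetical continuous endpoint
        tox_class = (log_ld50 < 0).astype(int)                      # hypothetical binary category

        print(grnn_predict(X[:30], log_ld50[:30], X[30:])[:5])
        print(pnn_predict(X[:30], tox_class[:30], X[30:])[:5])

    The smoothing parameter sigma plays the role of the single tunable width in both networks and would normally be optimized by cross-validation.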

  14. Teaching Structures with Models: Experiences from Chile and the Netherlands

    NARCIS (Netherlands)

    Morales Beltran, M.G.; Borgart, A.

    2012-01-01

    This paper states the importance of using scaled models for the teaching of structures in the curricula of Architecture and Structural Engineering studies. Based on 10 years’ experience working with models for different purposes, with a variety of materials and constructions methods, the authors wil

  15. Modelling the non-gravitational acceleration during Cassini's gravitation experiments

    CERN Document Server

    Bertolami, O; Gil, P J S; Páramos, J

    2014-01-01

    In this work we present a computation of the thermally generated acceleration on the Cassini probe during its solar conjunction experiment, obtained from a model of the spacecraft. We use a point-like source method to build a thermal model of the vehicle and find that the results are in close agreement with the estimates of this effect performed through Doppler data analysis.

  16. Human strategic reasoning in dynamic games: Experiments, logics, cognitive models

    NARCIS (Netherlands)

    Ghosh, Sujata; Halder, Tamoghna; Sharma, Khyati; Verbrugge, Rineke

    2015-01-01

    © Springer-Verlag Berlin Heidelberg 2015. This article provides a three-way interaction between experiments, logic and cognitive modelling so as to bring out a shared perspective among these diverse areas, aiming towards better understanding and better modelling of human strategic reasoning in dynamic games.

  17. Model experiments related to outdoor propagation over an earth berm

    DEFF Research Database (Denmark)

    Rasmussen, Karsten Bo

    1994-01-01

    A series of scale model experiments related to outdoor propagation over an earth berm is described. The measurements are performed with a triggered spark source. The results are compared with data from an existing calculation model based upon uniform diffraction theory. Comparisons are made...

  1. Development of Experience-based Learning about Atmospheric Environment with Quantitative Viewpoint aimed at Education for Sustainable Development

    Science.gov (United States)

    Saitoh, Y.; Tago, H.

    2014-12-01

    The word "ESD (Education for Sustainable Development)" has spread over the world in UN decade (2005 - 2014), and the momentum of the educational innovation aimed at ESD also has grown in the world. Especially, environmental educations recognized as one of the most important ESD have developed in many countries including Japan, but most of those are still mainly experiences in nature. Those could develop "Respect for Environment" of the educational targets of ESD, however we would have to take a further step in order to enhance "Ability of analysis and thinking logically about the environment" which are also targets of ESD.Thus, we developed experienced-learning program about atmospheric particulate matter (PM2.5), for understanding the state of the environment objectively based on quantitative data. PM2.5 is known for harmful, and various human activities are considered a source of it, therefore environmental standards for PM2.5 have been established in many countries. This program was tested on junior high school students of 13 - 15 years old, and the questionnaire survey also was conducted to them before and after the program for evaluating educational effects. Students experienced to measure the concentration of PM2.5 at 5 places around their school in a practical manner. The measured concentration of PM2.5 ranged from 19 to 41 μg/m3/day, that value at the most crowded roadside exceeded Japan's environmental standard (35 μg/m3/day). Many of them expressed "Value of PM2.5 is high" in their individual discussion notes. As a consistent with that, the answer "Don't know" to the question "What do you think about the state of the air?" markedly decreased after the program, on the other hand the answer "Pollution" to the same question increased instead. From above-mentioned, it was considered that they could judge the state of the air objectively. Consequently, the questionnaire result "Concern about Air Pollution" increased significantly after the program compared

  2. Establishment of Quantitative Severity Evaluation Model for Spinal Cord Injury by Metabolomic Fingerprinting

    Science.gov (United States)

    Yang, Hao; Cohen, Mitchell Jay; Chen, Wei; Sun, Ming-Wei; Lu, Charles Damien

    2014-01-01

    Spinal cord injury (SCI) is a devastating event with limited hope for recovery and represents an enormous public health issue. It is crucial to understand the disturbances in the metabolic network after SCI to identify injury mechanisms and opportunities for treatment intervention. Through plasma 1H-nuclear magnetic resonance (NMR) screening, we identified 15 metabolites that made up an “Eigen-metabolome” capable of distinguishing rats with severe SCI from healthy control rats. Forty enzymes regulated these 15 metabolites in the metabolic network. We also found that 16 metabolites regulated by 130 enzymes in the metabolic network impacted neurobehavioral recovery. Using the Eigen-metabolome, we established a linear discrimination model to cluster rats with severe and mild SCI and control rats into separate groups and to identify the interactive relationships between metabolic biomarkers in the global metabolic network. We identified 10 clusters in the global metabolic network and defined them as distinct metabolic disturbance domains of SCI. These included pathways such as retinal metabolism, glycerophospholipid metabolism, arachidonic acid metabolism, NAD–NADPH conversion, tyrosine metabolism, and cadaverine and putrescine metabolism. In summary, we present a novel interdisciplinary method that integrates metabolomics and global metabolic network analysis to visualize metabolic network disturbances after SCI. Our study demonstrates a systems-biology paradigm in which the integration of 1H-NMR metabolomics and global metabolic network analysis is useful for visualizing complex metabolic disturbances after severe SCI. Furthermore, our findings may provide a new quantitative injury severity evaluation model for clinical use. PMID:24727691
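    The discrimination step described above (separating severe SCI, mild SCI and control animals from a reduced metabolite panel) is, generically, a linear discriminant analysis. The sketch below shows that generic workflow with scikit-learn on simulated data; the group shifts and metabolite values are invented for illustration and are not the study's NMR measurements.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(1)
        # Toy plasma profiles: 15 metabolites per animal, 20 animals per group,
        # with group-dependent mean shifts standing in for real metabolic differences.
        groups = {"control": 0.0, "mild_SCI": 0.8, "severe_SCI": 1.6}
        X = np.vstack([rng.normal(loc=shift, size=(20, 15)) for shift in groups.values()])
        y = np.repeat(list(groups), 20)

        lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
        print("training accuracy:", lda.score(X, y))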

  3. Physiologically Based Pharmacokinetic Modeling Framework for Quantitative Prediction of an Herb–Drug Interaction

    Science.gov (United States)

    Brantley, S J; Gufford, B T; Dua, R; Fediuk, D J; Graf, T N; Scarlett, Y V; Frederick, K S; Fisher, M B; Oberlies, N H; Paine, M F

    2014-01-01

    Herb–drug interaction predictions remain challenging. Physiologically based pharmacokinetic (PBPK) modeling was used to improve prediction accuracy of potential herb–drug interactions using the semipurified milk thistle preparation, silibinin, as an exemplar herbal product. Interactions between silibinin constituents and the probe substrates warfarin (CYP2C9) and midazolam (CYP3A) were simulated. A low silibinin dose (160 mg/day × 14 days) was predicted to increase midazolam area under the curve (AUC) by 1%, which was corroborated with external data; a higher dose (1,650 mg/day × 7 days) was predicted to increase midazolam and (S)-warfarin AUC by 5% and 4%, respectively. A proof-of-concept clinical study confirmed minimal interaction between high-dose silibinin and both midazolam and (S)-warfarin (9 and 13% increase in AUC, respectively). Unexpectedly, (R)-warfarin AUC decreased (by 15%), but this is unlikely to be clinically important. Application of this PBPK modeling framework to other herb–drug interactions could facilitate development of guidelines for quantitative prediction of clinically relevant interactions. PMID:24670388

  4. Gas chromatographic quantitative analysis of methanol in wine: operative conditions, optimization and calibration model choice.

    Science.gov (United States)

    Caruso, Rosario; Gambino, Grazia Laura; Scordino, Monica; Sabatino, Leonardo; Traulo, Pasqualino; Gagliano, Giacomo

    2011-12-01

    The influence of the wine distillation process on methanol content has been determined by quantitative analysis using gas chromatography with flame ionization detection (GC-FID). A comparative study between direct injection of diluted wine and injection of distilled wine was performed. The distillation process does not affect methanol quantification in wines in proportions higher than 10%. While quantification performed on distilled samples gives more reliable results, a screening method based on wine injection after a 1:5 water dilution could be employed. The proposed technique was found to be a compromise between the time-consuming distillation process and direct wine injection. In the studied calibration range, the stability of the volatile compounds in the reference solution is concentration-dependent. The stability is higher in the less concentrated reference solution. To shorten the operation time, a steeper temperature ramp and a higher carrier flow rate were employed. Under these conditions, helium consumption and column thermal stress were increased. However, detection limits, calibration limits, and analytical method performance are not affected substantially by changing from normal to forced GC conditions. Statistical data evaluation was performed using both ordinary (OLS) and bivariate least squares (BLS) calibration models. Further confirmation was obtained that limit of detection (LOD) values calculated according to the 3σ approach are lower than those obtained with the Hubaux-Vos (H-V) calculation method. The H-V LOD depends upon background noise, calibration parameters and the number of reference standard solutions employed in producing the calibration curve. These remarks are confirmed by both calibration models used.
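    For orientation, the classic 3σ detection limit mentioned here is derived from an ordinary least-squares calibration as three times the residual (or blank) standard deviation divided by the slope. The Python sketch below works through that calculation on made-up calibration points; the concentrations and peak areas are illustrative placeholders, not the paper's data.

        import numpy as np

        # Illustrative methanol calibration: concentration (mg/L) vs. peak area (arbitrary units).
        conc = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
        area = np.array([1.1e3, 2.6e3, 5.2e3, 10.3e3, 20.9e3])

        slope, intercept = np.polyfit(conc, area, 1)         # ordinary least squares (OLS) fit
        residuals = area - (slope * conc + intercept)
        s_res = residuals.std(ddof=2)                         # residual standard deviation of the fit

        lod_3sigma = 3.0 * s_res / slope                      # classic 3-sigma detection limit
        print(f"slope = {slope:.2f}, intercept = {intercept:.1f}, LOD ≈ {lod_3sigma:.2f} mg/L")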

  5. CORAL: quantitative structure-activity relationship models for estimating toxicity of organic compounds in rats.

    Science.gov (United States)

    Toropova, A P; Toropov, A A; Benfenati, E; Gini, G; Leszczynska, D; Leszczynski, J

    2011-09-01

    For six random splits, one-variable models of rat toxicity (negative decimal logarithm of the 50% lethal dose [pLD50], oral exposure) have been calculated with CORAL software (http://www.insilico.eu/coral/). The total number of considered compounds is 689. New additional global attributes of the simplified molecular input line entry system (SMILES) have been examined for improvement of the optimal SMILES-based descriptors. These global SMILES attributes represent the presence of certain chemical elements and different kinds of chemical bonds (double, triple, and stereochemical). The "classic" scheme of building up quantitative structure-property/activity relationships was compared with the balance of correlations (BC) with ideal slopes. For all six random splits, the best predictions were obtained when the aforementioned BC, along with the global SMILES attributes, was included in the modeling process. The average statistical characteristics for the external test set are the following: n = 119 ± 6.4, R(2) = 0.7371 ± 0.013, and root mean square error = 0.360 ± 0.037. Copyright © 2011 Wiley Periodicals, Inc.
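    The "global SMILES attributes" referred to above are simple presence flags read directly off the SMILES string (which elements occur, whether double, triple or stereo bonds are present). The snippet below is our own minimal illustration of deriving such flags; it is not the CORAL implementation and deliberately ignores bracket atoms and other SMILES subtleties.

        def global_smiles_attributes(smiles):
            # Presence flags for selected chemical elements and bond symbols in a SMILES string.
            # Purely illustrative: a real parser must handle bracket atoms, aromatic symbols, etc.
            s = smiles.upper()
            return {
                "has_N": "N" in s,
                "has_O": "O" in s,
                "has_S": "S" in s,
                "has_halogen": any(h in smiles for h in ("F", "Cl", "Br", "I")),
                "has_double_bond": "=" in smiles,
                "has_triple_bond": "#" in smiles,
                "has_stereo_bond": "/" in smiles or "\\" in smiles,
            }

        print(global_smiles_attributes("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin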

  6. Modeling and Quantitative Analysis of GNSS/INS Deep Integration Tracking Loops in High Dynamics

    Directory of Open Access Journals (Sweden)

    Yalong Ban

    2017-09-01

    Full Text Available To meet the requirements of global navigation satellite systems (GNSS) precision applications in high dynamics, this paper describes a study on the carrier phase tracking technology of the GNSS/inertial navigation system (INS) deep integration system. The error propagation models of INS-aided carrier tracking loops are modeled in detail for high dynamics. Additionally, quantitative analysis of carrier phase tracking errors caused by INS error sources is carried out under uniform high-dynamic linear acceleration of 100 g. Results show that the major INS error sources affecting the carrier phase tracking accuracy in high dynamics include initial attitude errors, accelerometer scale factors, gyro noise and gyro g-sensitivity errors. The initial attitude errors usually combine with the receiver acceleration to impact the tracking loop performance, which can easily cause the failure of carrier phase tracking. The main INS error factors vary with the vehicle motion direction and the relative position of the receiver and the satellites. The analysis results also indicate that low-cost micro-electro-mechanical system (MEMS) inertial measurement units (IMUs) have the ability to maintain GNSS carrier phase tracking in high dynamics.

  7. Quantitative Modeling of Microbial Population Responses to Chronic Irradiation Combined with Other Stressors.

    Science.gov (United States)

    Shuryak, Igor; Dadachova, Ekaterina

    2016-01-01

    Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil contaminated by nuclear waste at the Hanford site (USA); (2) fungi isolated from the Chernobyl nuclear-power plant (Ukraine) buildings after the accident; (3) yeast subjected to continuous γ-irradiation in the laboratory, where radiation dose rate and cell removal rate were independently varied. We applied generalized linear mixed-effects models to describe the first two data sets, whereas the third data set was amenable to mechanistic modeling using differential equations. Machine learning and information-theoretic approaches were used to select the best-supported formalism(s) among biologically-plausible alternatives. Our analysis suggests the following: (1) Both radionuclides and co-occurring chemical contaminants (e.g. NO2) are important for explaining microbial responses to radioactive contamination. (2) Radionuclides may produce non-monotonic dose responses: stimulation of microbial growth at low concentrations vs. inhibition at higher ones. (3) The extinction-defining critical radiation dose rate is dramatically lowered by additional stressors. (4) Reproduction suppression by radiation can be more important for determining the critical dose rate, than radiation-induced cell mortality. In conclusion, the modeling approaches used here on three diverse data sets provide insight into explaining and predicting multi-stressor effects on microbial communities: (1) the most severe effects (e.g. extinction) on microbial populations may occur when unfavorable environmental
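    For the laboratory yeast data set mentioned above, the mechanistic description amounts to a population balance in which growth competes with radiation-induced mortality and continuous cell removal. The differential-equation sketch below is a generic, simplified stand-in for such a model; the logistic growth form, the linear dose-rate term and all parameter values are our own assumptions for illustration, not the fitted model from the study.

        import numpy as np
        from scipy.integrate import odeint

        def dndt(n, t, r_growth, k_rad, dose_rate, removal):
            # Logistic growth minus radiation-induced mortality and continuous cell removal.
            carrying_capacity = 1e7
            growth = r_growth * n * (1.0 - n / carrying_capacity)
            death = (k_rad * dose_rate + removal) * n
            return growth - death

        t = np.linspace(0.0, 100.0, 200)                          # hours
        n = odeint(dndt, 1e4, t, args=(0.3, 0.002, 50.0, 0.05))   # dose rate of 50 (arbitrary units)
        print("final population size:", n[-1, 0])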

  8. Hydrologic connectivity: Quantitative assessments of hydrologic-enforced drainage structures in an elevation model

    Science.gov (United States)

    Poppenga, Sandra; Worstell, Bruce B.

    2016-01-01

    Elevation data derived from light detection and ranging present challenges for hydrologic modeling as the elevation surface includes bridge decks and elevated road features overlaying culvert drainage structures. In reality, water is carried through these structures; however, in the elevation surface these features impede modeled overland surface flow. Thus, a hydrologically-enforced elevation surface is needed for hydrodynamic modeling. In the Delaware River Basin, hydrologic-enforcement techniques were used to modify elevations to simulate how constructed drainage structures allow overland surface flow. By calculating residuals between unfilled and filled elevation surfaces, artificially pooled depressions that formed upstream of constructed drainage structure features were defined, and elevation values were adjusted by generating transects at the location of the drainage structures. An assessment of each hydrologically-enforced drainage structure was conducted using field-surveyed culvert and bridge coordinates obtained from numerous public agencies, but it was discovered the disparate drainage structure datasets were not comprehensive enough to assess all remotely located depressions in need of hydrologic-enforcement. Alternatively, orthoimagery was interpreted to define drainage structures near each depression, and these locations were used as reference points for a quantitative hydrologic-enforcement assessment. The orthoimagery-interpreted reference points resulted in a larger corresponding sample size than the assessment between hydrologic-enforced transects and field-surveyed data. This assessment demonstrates the viability of rules-based hydrologic-enforcement that is needed to achieve hydrologic connectivity, which is valuable for hydrodynamic models in sensitive coastal regions. Hydrologic-enforced elevation data are also essential for merging with topographic/bathymetric elevation data that extend over vulnerable urbanized areas and dynamic coastal
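    The depression-detection step described here boils down to differencing a hydrologically filled elevation surface against the original surface and flagging cells where the fill added material, i.e. where water artificially pools upstream of a bridge deck or culvert. The toy numpy sketch below illustrates only that residual test; the grid values and the 0.5 m threshold are invented for illustration.

        import numpy as np

        unfilled = np.array([[10.0, 10.0, 10.0],
                             [10.0,  7.5, 10.0],      # artificial pool upstream of a culvert
                             [10.0, 10.0,  9.0]])
        filled = np.array([[10.0, 10.0, 10.0],
                           [10.0, 10.0, 10.0],
                           [10.0, 10.0,  9.0]])

        residual = filled - unfilled                   # positive where the fill raised the surface
        depressions = residual > 0.5                   # flag cells that may need hydro-enforcement
        print(residual)
        print("cells flagged for hydrologic enforcement:", np.argwhere(depressions))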

  9. Generating quantitative models describing the sequence specificity of biological processes with the stabilized matrix method

    Directory of Open Access Journals (Sweden)

    Sette Alessandro

    2005-05-01

    Full Text Available Abstract Background Many processes in molecular biology involve the recognition of short sequences of nucleic or amino acids, such as the binding of immunogenic peptides to major histocompatibility complex (MHC) molecules. From experimental data, a model of the sequence specificity of these processes can be constructed, such as a sequence motif, a scoring matrix or an artificial neural network. The purpose of these models is two-fold. First, they can provide a summary of experimental results, allowing for a deeper understanding of the mechanisms involved in sequence recognition. Second, such models can be used to predict the experimental outcome for yet untested sequences. In the past we reported the development of a method to generate such models called the Stabilized Matrix Method (SMM). This method has been successfully applied to predicting peptide binding to MHC molecules, peptide transport by the transporter associated with antigen presentation (TAP) and proteasomal cleavage of protein sequences. Results Herein we report the implementation of the SMM algorithm as a publicly available software package. Specific features determining the type of problems the method is most appropriate for are discussed. Advantageous features of the package are: (1) the output generated is easy to interpret, (2) input and output are both quantitative, (3) specific computational strategies to handle experimental noise are built in, (4) the algorithm is designed to effectively handle bounded experimental data, (5) experimental data from randomized peptide libraries and conventional peptides can easily be combined, and (6) it is possible to incorporate pair interactions between positions of a sequence. Conclusion Making the SMM method publicly available enables bioinformaticians and experimental biologists to easily access it, to compare its performance to other prediction methods, and to extend it to other applications.
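    When such a scoring matrix is applied, the predicted value for a peptide is simply the sum of the position-specific matrix entries for the residues it contains (plus optional pair terms). The sketch below shows only that scoring step in Python; the 9-mer matrix is filled with random placeholders rather than real SMM output.

        import numpy as np

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
        rng = np.random.default_rng(2)
        matrix = rng.normal(size=(9, 20))      # one row per peptide position, one column per residue

        def score_peptide(peptide, matrix):
            # Sum position-specific contributions; in SMM-style matrices lower scores
            # typically correspond to stronger predicted binding.
            return sum(matrix[i, AMINO_ACIDS.index(aa)] for i, aa in enumerate(peptide))

        print(score_peptide("SIINFEKLM", matrix))    # an arbitrary 9-mer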

  10. A bivariate quantitative genetic model for a threshold trait and a survival trait

    Directory of Open Access Journals (Sweden)

    Damgaard Lars

    2006-11-01

    Full Text Available Abstract Many of the functional traits considered in animal breeding can be analyzed as threshold traits or survival traits with examples including disease traits, conformation scores, calving difficulty and longevity. In this paper we derive and implement a bivariate quantitative genetic model for a threshold character and a survival trait that are genetically and environmentally correlated. For the survival trait, we considered the Weibull log-normal animal frailty model. A Bayesian approach using Gibbs sampling was adopted in which model parameters were augmented with unobserved liabilities associated with the threshold trait. The fully conditional posterior distributions associated with parameters of the threshold trait reduced to well known distributions. For the survival trait the two baseline Weibull parameters were updated jointly by a Metropolis-Hastings step. The remaining model parameters with non-normalized fully conditional distributions were updated univariately using adaptive rejection sampling. The Gibbs sampler was tested in a simulation study and illustrated in a joint analysis of calving difficulty and longevity of dairy cattle. The simulation study showed that the estimated marginal posterior distributions covered well and placed high density to the true values used in the simulation of data. The data analysis of calving difficulty and longevity showed that genetic variation exists for both traits. The additive genetic correlation was moderately favorable with marginal posterior mean equal to 0.37 and 95% central posterior credibility interval ranging between 0.11 and 0.61. Therefore, this study suggests that selection for improving one of the two traits will be beneficial for the other trait as well.

  11. A bivariate quantitative genetic model for a threshold trait and a survival trait.

    Science.gov (United States)

    Damgaard, Lars Holm; Korsgaard, Inge Riis

    2006-01-01

    Many of the functional traits considered in animal breeding can be analyzed as threshold traits or survival traits with examples including disease traits, conformation scores, calving difficulty and longevity. In this paper we derive and implement a bivariate quantitative genetic model for a threshold character and a survival trait that are genetically and environmentally correlated. For the survival trait, we considered the Weibull log-normal animal frailty model. A Bayesian approach using Gibbs sampling was adopted in which model parameters were augmented with unobserved liabilities associated with the threshold trait. The fully conditional posterior distributions associated with parameters of the threshold trait reduced to well known distributions. For the survival trait the two baseline Weibull parameters were updated jointly by a Metropolis-Hastings step. The remaining model parameters with non-normalized fully conditional distributions were updated univariately using adaptive rejection sampling. The Gibbs sampler was tested in a simulation study and illustrated in a joint analysis of calving difficulty and longevity of dairy cattle. The simulation study showed that the estimated marginal posterior distributions covered well and placed high density to the true values used in the simulation of data. The data analysis of calving difficulty and longevity showed that genetic variation exists for both traits. The additive genetic correlation was moderately favorable with marginal posterior mean equal to 0.37 and 95% central posterior credibility interval ranging between 0.11 and 0.61. Therefore, this study suggests that selection for improving one of the two traits will be beneficial for the other trait as well.
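    A central ingredient of the Gibbs sampler described here is the data-augmentation step for the threshold trait: given the current conditional mean for an animal and its observed category, the latent liability is drawn from a normal distribution truncated at the threshold. The snippet below illustrates just that one conditional draw with scipy; it is a generic sketch, not the authors' sampler, and the threshold and mean are arbitrary.

        import numpy as np
        from scipy.stats import truncnorm

        def sample_liability(mu, observed_category, threshold=0.0, size=1):
            # Draw the latent liability given its conditional mean and the observed binary outcome.
            if observed_category == 1:         # e.g. difficult calving: liability above the threshold
                a, b = threshold - mu, np.inf
            else:                              # easy calving: liability below the threshold
                a, b = -np.inf, threshold - mu
            return truncnorm.rvs(a, b, loc=mu, scale=1.0, size=size)

        print(sample_liability(mu=0.3, observed_category=1, size=5))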

  12. A generalised individual-based algorithm for modelling the evolution of quantitative herbicide resistance in arable weed populations.

    Science.gov (United States)

    Liu, Chun; Bridges, Melissa E; Kaundun, Shiv S; Glasgow, Les; Owen, Micheal Dk; Neve, Paul

    2017-02-01

    Simulation models are useful tools for predicting and comparing the risk of herbicide resistance in weed populations under different management strategies. Most existing models assume a monogenic mechanism governing herbicide resistance evolution. However, growing evidence suggests that herbicide resistance is often inherited in a polygenic or quantitative fashion. Therefore, we constructed a generalised modelling framework to simulate the evolution of quantitative herbicide resistance in summer annual weeds. Real-field management parameters based on Amaranthus tuberculatus (Moq.) Sauer (syn. rudis) control with glyphosate and mesotrione in Midwestern US maize-soybean agroecosystems demonstrated that the model can represent evolved herbicide resistance in realistic timescales. Sensitivity analyses showed that genetic and management parameters were impactful on the rate of quantitative herbicide resistance evolution, whilst biological parameters such as emergence and seed bank mortality were less important. The simulation model provides a robust and widely applicable framework for predicting the evolution of quantitative herbicide resistance in summer annual weed populations. The sensitivity analyses identified weed characteristics that would favour herbicide resistance evolution, including high annual fecundity, large resistance phenotypic variance and pre-existing herbicide resistance. Implications for herbicide resistance management and potential use of the model are discussed. © 2016 Society of Chemical Industry.
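    At its simplest, an individual-based model of quantitative resistance tracks a polygenic trait in each generation and lets the herbicide act as truncation selection on the phenotype, so that the mean breeding value of survivors drifts upwards over years. The sketch below is a deliberately stripped-down illustration of that loop under assumed parameters (fixed heritability, normally distributed trait, a single yearly application); it is not the published model.

        import numpy as np

        rng = np.random.default_rng(3)

        def simulate_resistance(generations=15, pop_size=5000, h2=0.3, survival_threshold=1.0):
            # Track the mean breeding value of a quantitative resistance trait under yearly selection.
            mean_g = 0.0
            genetic_sd = 1.0
            env_sd = np.sqrt(genetic_sd ** 2 * (1.0 / h2 - 1.0))      # so that Vg / (Vg + Ve) = h2
            history = []
            for _ in range(generations):
                g = rng.normal(mean_g, genetic_sd, pop_size)          # breeding values
                phenotype = g + rng.normal(0.0, env_sd, pop_size)     # add environmental noise
                survivors = g[phenotype > survival_threshold]         # herbicide kills susceptible plants
                if survivors.size == 0:
                    break
                mean_g = survivors.mean()                             # survivors parent the next generation
                history.append(round(mean_g, 3))
            return history

        print(simulate_resistance())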

  13. Using molecular markers to map multiple quantitative trait loci: models for backcross, recombinant inbred, and doubled haploid progeny.

    Science.gov (United States)

    Knapp, S J

    1991-03-01

    To maximize parameter estimation efficiency and statistical power and to estimate epistasis, the parameters of multiple quantitative trait loci (QTLs) must be simultaneously estimated. If multiple QTL affect a trait, then estimates of means of QTL genotypes from individual locus models are statistically biased. In this paper, I describe methods for estimating means of QTL genotypes and recombination frequencies between marker and quantitative trait loci using multilocus backcross, doubled haploid, recombinant inbred, and testcross progeny models. Expected values of marker genotype means were defined using no double or multiple crossover frequencies and flanking markers for linked and unlinked quantitative trait loci. The expected values for a particular model comprise a system of nonlinear equations that can be solved using an iterative algorithm, e.g., the Gauss-Newton algorithm. The solutions are maximum likelihood estimates when the errors are normally distributed. A linear model for estimating the parameters of unlinked quantitative trait loci was found by transforming the nonlinear model. Recombination frequency estimators were defined using this linear model. Certain means of linked QTLs are less efficiently estimated than means of unlinked QTLs.

  14. Enthalpy benchmark experiments for numerical ice sheet models

    Directory of Open Access Journals (Sweden)

    T. Kleiner

    2014-06-01

    Full Text Available We present benchmark experiments to test the implementation of enthalpy and the corresponding boundary conditions in numerical ice sheet models. The first experiment tests in particular the functionality of the boundary condition scheme and the basal melt rate calculation during transient simulations. The second experiment addresses the steady-state enthalpy profile and the resulting position of the cold–temperate transition surface (CTS). For both experiments we assume ice flow in a parallel-sided slab decoupled from the thermal regime. Since we impose several assumptions on the experiment design, analytical solutions can be formulated for the proposed numerical experiments. We compare simulation results achieved by three different ice flow models with these analytical solutions. The models agree well with the analytical solutions, if the change in conductivity between cold and temperate ice is properly considered in the model. In particular, the enthalpy gradient at the cold side of the CTS vanishes in the limit of vanishing conductivity in the temperate ice part, as required by the physical jump conditions at the CTS.

  15. The use of electromagnetic induction methods for establishing quantitative permafrost models in West Greenland

    Science.gov (United States)

    Ingeman-Nielsen, Thomas; Brandt, Inooraq

    2010-05-01

    permafrozen sediments is generally not available in Greenland, and mobilization costs are therefore considerable thus limiting the use of geotechnical borings to larger infrastructure and construction projects. To overcome these problems, we have tested the use of shallow Transient ElectroMagnetic (TEM) measurements, to provide constraints in terms of depth to and resistivity of the conductive saline layer. We have tested such a setup at two field sites in the Ilulissat area (mid-west Greenland), one with available borehole information (site A), the second without (site C). VES and TEM soundings were collected at each site and the respective data sets subsequently inverted using a mutually constrained inversion scheme. At site A, the TEM measurements (20x20m square loop, in-loop configuration) show substantial and repeatable negative amplitude segments, and therefore it has not presently been possible to provide a quantitative interpretation for this location. Negative segments are typically a sign of Induced Polarization or cultural effects. Forward modeling based on inversion of the VES data constrained with borehole information has indicated that IP effects could indeed be the cause of the observed anomaly, although such effects are not normally expected in permafrost or saline deposits. Data from site C has shown that jointly inverting the TEM and VES measurements does provide well determined estimates for all layer parameters except the thickness of the active layer and resistivity of the bedrock. The active layer thickness may be easily probed to provide prior information on this parameter, and the bedrock resistivity is of limited interest in technical applications. Although no confirming borehole information is available at this site, these results indicate that joint or mutually constrained inversion of TEM and VES data is feasible and that this setup may provide a fast and cost effective method for establishing quantitative interpretations of permafrost structure in

  16. Absolute quantitation of myocardial blood flow with {sup 201}Tl and dynamic SPECT in canine: optimisation and validation of kinetic modelling

    Energy Technology Data Exchange (ETDEWEB)

    Iida, Hidehiro; Kim, Kyeong-Min; Nakazawa, Mayumi; Sohlberg, Antti; Zeniya, Tsutomu; Hayashi, Takuya; Watabe, Hiroshi [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita City, Osaka (Japan); Eberl, Stefan [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita City, Osaka (Japan); Royal Prince Alfred Hospital, PET and Nuclear Medicine Department, Camperdown, NSW (Australia); Tamura, Yoshikazu [Akita Kumiai General Hospital, Department of Cardiology, Akita City (Japan); Ono, Yukihiko [Akita Research Institute of Brain, Akita City (Japan)

    2008-05-15

    {sup 201}Tl has been extensively used for myocardial perfusion and viability assessment. Unlike {sup 99m}Tc-labelled agents, such as {sup 99m}Tc-sestamibi and {sup 99m}Tc-tetrofosmine, the regional concentration of {sup 201}Tl varies with time. This study is intended to validate a kinetic modelling approach for in vivo quantitative estimation of regional myocardial blood flow (MBF) and volume of distribution of {sup 201}Tl using dynamic SPECT. Dynamic SPECT was carried out on 20 normal canines after the intravenous administration of {sup 201}Tl using a commercial SPECT system. Seven animals were studied at rest, nine during adenosine infusion, and four after beta-blocker administration. Quantitative images were reconstructed with a previously validated technique, employing OS-EM with attenuation-correction, and transmission-dependent convolution subtraction scatter correction. Measured regional time-activity curves in myocardial segments were fitted to two- and three-compartment models. Regional MBF was defined as the influx rate constant (K{sub 1}) with corrections for the partial volume effect, haematocrit and limited first-pass extraction fraction, and was compared with that determined from radio-labelled microspheres experiments. Regional time-activity curves responded well to pharmacological stress. Quantitative MBF values were higher with adenosine and decreased after beta-blocker compared to a resting condition. MBFs obtained with SPECT (MBF{sub SPECT}) correlated well with the MBF values obtained by the radio-labelled microspheres (MBF{sub MS}) (MBF{sub SPECT} = -0.067 + 1.042 x MBF{sub MS}, p < 0.001). The three-compartment model provided better fit than the two-compartment model, but the difference in MBF values between the two methods was small and could be accounted for with a simple linear regression. Absolute quantitation of regional MBF, for a wide physiological flow range, appears to be feasible using {sup 201}Tl and dynamic SPECT. (orig.)
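    The kinetic modelling step described above fits compartment models to the measured time-activity curves, with regional MBF taken (after corrections) from the influx rate constant K1. As a generic, much-simplified illustration, the sketch below fits a one-tissue compartment model dCt/dt = K1*Cp(t) - k2*Ct(t) to a simulated noisy curve; the input function, noise level and parameter values are arbitrary, and the study itself used two- and three-compartment formulations with additional corrections.

        import numpy as np
        from scipy.integrate import odeint
        from scipy.optimize import curve_fit

        t = np.linspace(0.0, 60.0, 61)                       # minutes
        cp = 100.0 * np.exp(-t / 10.0)                       # illustrative arterial input function

        def tissue_curve(times, K1, k2):
            # One-tissue compartment model: dCt/dt = K1*Cp(t) - k2*Ct(t).
            def dct(ct, ti):
                return K1 * np.interp(ti, t, cp) - k2 * ct
            return odeint(dct, 0.0, times).ravel()

        true_curve = tissue_curve(t, 0.8, 0.1)
        noisy = true_curve + np.random.default_rng(4).normal(scale=2.0, size=t.size)
        (K1_hat, k2_hat), _ = curve_fit(tissue_curve, t, noisy, p0=[0.5, 0.05])
        print(f"estimated K1 = {K1_hat:.3f} /min, k2 = {k2_hat:.3f} /min")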

  17. Numerical Simulation and Cold Modeling experiments on Centrifugal Casting

    Science.gov (United States)

    Keerthiprasad, Kestur Sadashivaiah; Murali, Mysore Seetharam; Mukunda, Pudukottah Gopaliengar; Majumdar, Sekhar

    2011-02-01

    In a centrifugal casting process, the fluid flow eventually determines the quality and characteristics of the final product. It is difficult to study the fluid behavior here because of the opaque nature of melt and mold. In the current investigation, numerical simulations of the flow field and visualization experiments on cold models have been carried out for a centrifugal casting system using horizontal molds and fluids of different viscosities to study the effect of different process variables on the flow pattern. The effects of the thickness of the cylindrical fluid annulus formed inside the mold and the effects of fluid viscosity, diameter, and rotational speed of the mold on the hollow fluid cylinder formation process have been investigated. The numerical simulation results are compared with corresponding data obtained from the cold modeling experiments. The influence of rotational speed in a real-life centrifugal casting system has also been studied using an aluminum-silicon alloy. Cylinders of different thicknesses are cast at different rotational speeds, and the flow patterns observed visually in the actual castings are found to be similar to those recorded in the corresponding cold modeling experiments. Reasonable agreement is observed between the results of numerical simulation and the results of cold modeling experiments with different fluids. The visualization study on the hollow cylinders produced in an actual centrifugal casting process also confirm the conclusions arrived at from the cold modeling experiments and numerical simulation in a qualitative sense.

  18. Designing experiments to discriminate families of logic models

    Directory of Open Access Journals (Sweden)

    Santiago eVidela

    2015-09-01

    Full Text Available Logic models of signaling pathways are a promising way of building effective in silico functional models of a cell, in particular of signaling pathways. The automated learning of Boolean logic models describing signaling pathways can be achieved by training to phosphoproteomics data, which is particularly useful if it is measured upon different combinations of perturbations in a high-throughput fashion. However, in practice the number and type of allowed perturbations is not exhaustive. Moreover, experimental data are unavoidably subject to noise. As a result, the learning process yields a family of feasible logical networks rather than a single model. This family is composed of logic models implementing different internal wirings for the system, and therefore the predictions of experiments from this family may present a significant level of variability, and hence uncertainty. In this paper, we introduce a method based on Answer Set Programming to propose an optimal experimental design that aims to narrow down the variability (in terms of input-output behaviors) within families of logical models learned from experimental data. We study how the fitness with respect to the data can be improved after an optimal selection of signaling perturbations and how we learn optimal logic models with a minimal number of experiments. The methods are applied to signaling pathways in human liver cells and phosphoproteomics experimental data. Using 25% of the experiments, we obtained logical models with fitness scores (mean squared error) within 15% of the ones obtained using all experiments, illustrating the impact that our approach can have on the design of experiments for efficient model calibration.

  19. A quantitative exposure model simulating human norovirus transmission during preparation of deli sandwiches.

    Science.gov (United States)

    Stals, Ambroos; Jacxsens, Liesbeth; Baert, Leen; Van Coillie, Els; Uyttendaele, Mieke

    2015-03-02

    Human noroviruses (HuNoVs) are a major cause of foodborne gastroenteritis worldwide. They are often transmitted via infected and shedding food handlers manipulating foods such as deli sandwiches. The presented study aimed to simulate HuNoV transmission during the preparation of deli sandwiches in a sandwich bar. A quantitative exposure model was developed by combining the GoldSim® and @Risk® software packages. Input data were collected from scientific literature and from a two-week observational study performed at two sandwich bars. The model included three food handlers working during a three-hour shift on a shared working surface where deli sandwiches are prepared. The model consisted of three components. The first component simulated the preparation of the deli sandwiches and contained the HuNoV reservoirs, locations within the model allowing the accumulation of NoV and the working of intervention measures. The second component covered the contamination sources, being (1) the initial HuNoV-contaminated lettuce used on the sandwiches and (2) HuNoV originating from a shedding food handler. The third component included four possible intervention measures to reduce HuNoV transmission: hand and surface disinfection during preparation of the sandwiches, hand gloving and hand washing after a restroom visit. A single HuNoV-shedding food handler could cause mean levels of 43±18, 81±37 and 18±7 HuNoV particles present on the deli sandwiches, hands and working surfaces, respectively. Introduction of contaminated lettuce as the only source of HuNoV resulted in the presence of 6.4±0.8 and 4.3±0.4 HuNoV on the food and hand reservoirs. The inclusion of hand and surface disinfection and hand gloving as a single intervention measure was not effective in the model, as only marginal reductions of HuNoV levels were noticeable in the different reservoirs. High compliance with hand washing after a restroom visit did reduce HuNoV presence substantially on all reservoirs. The ...
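    The transmission part of such an exposure model is essentially stochastic bookkeeping: at each hand contact a fraction of the virus particles on the hands is transferred to the food or the working surface. The Monte Carlo sketch below conveys only that idea; the transfer probabilities, contact counts and starting load are invented placeholders and do not come from the study's input data.

        import numpy as np

        rng = np.random.default_rng(5)

        def simulate_shift(n_runs=5000, initial_on_hands=10_000, contacts=50,
                           p_hand_to_food=0.07, p_hand_to_surface=0.05):
            # Repeated stochastic simulation of one shift; returns mean and SD of virus on food.
            food_counts = []
            for _ in range(n_runs):
                hands, food, surface = initial_on_hands, 0, 0
                for _ in range(contacts):
                    to_food = rng.binomial(hands, p_hand_to_food)
                    to_surface = rng.binomial(hands - to_food, p_hand_to_surface)
                    hands -= to_food + to_surface
                    food += to_food
                    surface += to_surface
                food_counts.append(food)
            return np.mean(food_counts), np.std(food_counts)

        print(simulate_shift())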

  20. Quantitative assessment of uncertainties for a model of tropospheric ethene oxidation using the European Photoreactor (EUPHORE)

    Science.gov (United States)

    Zádor, Judit; Wagner, Volker; Wirtz, Klaus; Pilling, Michael J.

    Methods of uncertainty analysis were used for comparison of the Master Chemical Mechanism version 3 (MCMv3) with measurements made in the European Photoreactor (EUPHORE) at Valencia (Spain) to investigate model-measurement discrepancies and to obtain information on the importance of wall effects. Two EUPHORE smog chamber measurements of ethene oxidation, under high and low NOx conditions, were analysed by the following methods: (i) local uncertainty analysis, (ii) the global screening method of Morris and (iii) Monte Carlo (MC) analysis with Latin hypercube sampling. For both experiments, ozone (by 25% and 30%, respectively) and formaldehyde (by 34% and 40%, respectively) are significantly over-predicted by the model calculations, while the disagreement for other species is less substantial. According to the local uncertainty analysis and the Morris method, the most important contributor to ozone uncertainty under low NOx conditions is HOCH2CH2O2 + NO → HOCH2CH2O + NO2, while under high NOx conditions OH + NO2 → HNO3 is the main contributor. The MC simulations give an estimate of the 2σ uncertainty for ozone of ~20% in both scenarios at the end of the experiment. The results suggest systematic disagreement between measurements and model calculations, although the origin of this is not clear. It seems that chamber effects alone are not responsible for the observed discrepancies.
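    The Monte Carlo/Latin hypercube step used here amounts to drawing rate coefficients from stratified uncertainty ranges and propagating each draw through the chemical mechanism to obtain an output distribution (summarised above as a 2σ uncertainty). The sketch below shows that sampling-and-propagation pattern with scipy's quasi-Monte Carlo module; the two-parameter "model" is a trivial stand-in, not MCMv3, and the ±30% ranges are assumed for illustration.

        import numpy as np
        from scipy.stats import qmc

        def toy_ozone_model(k):
            # Stand-in for a chamber box-model run: maps two scaled rate coefficients
            # to a final ozone mixing ratio in ppb.
            return 80.0 * k[0] / (k[0] + 0.5 * k[1])

        sampler = qmc.LatinHypercube(d=2, seed=6)
        unit_samples = sampler.random(n=500)
        # Scale unit samples to assumed +/-30% uncertainty ranges around nominal (=1) rate coefficients.
        k_samples = qmc.scale(unit_samples, l_bounds=[0.7, 0.7], u_bounds=[1.3, 1.3])

        ozone = np.array([toy_ozone_model(k) for k in k_samples])
        two_sigma = 2.0 * ozone.std()
        print(f"mean = {ozone.mean():.1f} ppb, 2-sigma = {two_sigma:.1f} ppb "
              f"({100.0 * two_sigma / ozone.mean():.0f}%)")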