WorldWideScience

Sample records for existing models quantitatively

  1. A selection of proper regression models from their existing set: qualitative, quantitative, graphical and logical criteria

    Directory of Open Access Journals (Sweden)

    Benková Marta

    1999-09-01

The contribution presents an approach to processing experimental data of various origin using methods of regression and correlation analysis for two- and three-dimensional relations between variables. It concentrates on calculation procedures based on the least-squares method, and on other means of obtaining continual information about the quality of the processed data as well as of the resultant regression models.
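As a minimal sketch of the least-squares fits described above, for a two-dimensional (y on x) and a three-dimensional (y on x1, x2) relation, with R² as a simple quality indicator; the data and variable names are invented for illustration, not taken from the paper:

```python
import numpy as np

# Two-dimensional relation: fit y = b0 + b1*x by ordinary least squares.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
A2 = np.column_stack([np.ones_like(x), x])
coef2, *_ = np.linalg.lstsq(A2, y, rcond=None)

# Three-dimensional relation: fit y = b0 + b1*x1 + b2*x2.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([0.5, 1.0, 1.0, 2.0, 2.5])
A3 = np.column_stack([np.ones_like(x1), x1, x2])
coef3, *_ = np.linalg.lstsq(A3, y, rcond=None)

# Coefficient of determination R^2 as a simple fit-quality measure.
def r_squared(design, target, coef):
    resid = target - design @ coef
    total = target - target.mean()
    return 1.0 - (resid @ resid) / (total @ total)

print(coef2, r_squared(A2, y, coef2))
print(coef3, r_squared(A3, y, coef3))
```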

  2. Comment on “Can existing models quantitatively describe the mixing behavior of acetone with water?” [J. Chem. Phys. 130, 124516 (2009)]

    Science.gov (United States)

    Kang, Myungshim; Perera, Aurelien; Smith, Paul E.

    2009-01-01

A recent publication indicated that simulations of acetone-water mixtures using the KBFF model for acetone exhibit demixing at acetone mole fractions below 0.28, in disagreement with experiment and with two previously published studies. Here, we point out some inconsistencies in that study which could help to explain these differences. PMID:20568888

  3. A quantitative analysis of the marked asymmetry existing between ...

    African Journals Online (AJOL)

    The pelvic girdle musculature of eleven of the eighteen southern African Mabuya species described by Branch (1988) was examined, using differences in mass to emphasize the marked asymmetry existing between partners of certain muscle pairs. The lighter muscles expressed as indices of their heavier partners gave a ...

  4. Modeling Truth Existence in Truth Discovery.

    Science.gov (United States)

    Zhi, Shi; Zhao, Bo; Tong, Wenzhu; Gao, Jing; Yu, Dian; Ji, Heng; Han, Jiawei

    2015-08-01

When integrating information from multiple sources, it is common to encounter conflicting answers to the same question. Truth discovery aims to infer the most accurate and complete integrated answers from conflicting sources. In some cases, there exist questions for which the true answers are excluded from the candidate answers provided by all sources. Without any prior knowledge, these questions, named no-truth questions, are difficult to distinguish from the questions that have true answers, named has-truth questions. In particular, these no-truth questions degrade the precision of the answer integration system. We address this challenge by introducing source quality, which is made up of three fine-grained measures: silent rate, false spoken rate and true spoken rate. By incorporating these three measures, we propose a probabilistic graphical model which simultaneously infers truth as well as source quality without any a priori training involving ground truth answers. Moreover, since inference in this graphical model requires tuning the prior of truth, we propose an initialization scheme based upon a quantity named the truth existence score, which synthesizes two indicators, namely participation rate and consistency rate. Compared with existing methods, our method can effectively filter out no-truth questions, which results in more accurate source quality estimation. Consequently, our method provides more accurate and complete answers to both has-truth and no-truth questions. Experiments on three real-world datasets illustrate the notable advantage of our method over existing state-of-the-art truth discovery methods.
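The abstract names but does not define the three source-quality rates or the truth existence score; the toy sketch below uses plausible, illustrative formulas (participation times consistency for the score), which should not be read as the paper's exact definitions:

```python
from collections import Counter

# Hypothetical answer table: answers[source][question] = answer, or None (silent).
answers = {
    "s1": {"q1": "a", "q2": "b", "q3": None},
    "s2": {"q1": "a", "q2": "c", "q3": "d"},
    "s3": {"q1": None, "q2": "b", "q3": "d"},
}
questions = ["q1", "q2", "q3"]

def source_rates(src, truth):
    """Silent / false-spoken / true-spoken rates of one source, given a
    (possibly partial) truth assignment. Illustrative definitions only."""
    silent = spoken_true = spoken_false = 0
    for q in questions:
        a = answers[src][q]
        if a is None:
            silent += 1
        elif truth.get(q) is not None and a == truth[q]:
            spoken_true += 1
        else:
            spoken_false += 1  # includes answering a no-truth question
    n = len(questions)
    return silent / n, spoken_false / n, spoken_true / n

def truth_existence_score(q):
    """Combine participation rate (how many sources answer) with
    consistency rate (how concentrated the given answers are)."""
    given = [answers[s][q] for s in answers if answers[s][q] is not None]
    participation = len(given) / len(answers)
    consistency = Counter(given).most_common(1)[0][1] / len(given) if given else 0.0
    return participation * consistency

truth = {"q1": "a", "q2": "b", "q3": None}  # q3: assume a no-truth question
for s in answers:
    print(s, source_rates(s, truth))
for q in questions:
    print(q, round(truth_existence_score(q), 2))
```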

  5. University Students' Research Orientations: Do Negative Attitudes Exist toward Quantitative Methods?

    Science.gov (United States)

    Murtonen, Mari

    2005-01-01

    This paper examines university social science and education students' views of research methodology, especially asking whether a negative research orientation towards quantitative methods exists. Finnish (n = 196) and US (n = 122) students answered a questionnaire concerning their views on quantitative, qualitative, empirical, and theoretical…

  6. Compositional and Quantitative Model Checking

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2010-01-01

This paper gives a survey of a compositional model-checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based...

  7. The pastor as model for peaceful existence

    Directory of Open Access Journals (Sweden)

    Terence Cooke

    2011-06-01

Many people are disillusioned in the democratic South Africa, because they started from the assumption that with the dawn of democracy violence would disappear. Unfortunately this did not happen. As with most things in life it is not an either-or but a both-and scenario: in fact, violence is part of the democratic system. Real peace between men and powers can only be the peace of God, the peace which alone heals all disorder. The peace of the world is at best peaceful coexistence, not peace. In South Africa we have a negotiated agreement on peaceful coexistence, and sometimes, for example after the miracle of the 1994 election and the euphoria of the World Cups of 1995, 2007 and 2010, we may even think we have achieved real peace. It is indeed in these times of euphoria that the people of South Africa may be tempted to lower our aim and settle for second best, thinking that we have arrived. 'Model' is used not in the sense of the pastor being an example of a peaceful existence to be followed; it is rather used in the sense that a pastor in his or her professional capacity has knowledge of the meaning of the term 'peaceful existence' and also the hermeneutic competency to apply that knowledge in concrete situations. This opens the exciting possibility that pastors can become travel companions on the road to real peace. The different aspects of being a pastor (office bearer, professional and person) each contribute to the pastor being a model for peace. It must be emphasised that these aspects always work together as a unity, and the strength of the pastor as a model for a peaceful existence lies in their simultaneous application in the context in which the pastor lives.

  8. Quantitative reactive modeling and verification.

    Science.gov (United States)

    Henzinger, Thomas A

    Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.

  9. Existing Model Metrics and Relations to Model Quality

    OpenAIRE

    Mohagheghi, Parastoo; Dehlen, Vegard

    2009-01-01

This paper presents quality goals for models and provides a state-of-the-art analysis regarding model metrics. While model-based software development often requires assessing the quality of models at different abstraction and precision levels and developed for multiple purposes, existing work on model metrics does not reflect this need. Model size metrics are descriptive and may be used for comparing models, but their relation to model quality is not well-defined. Code metrics are proposed to be ...

  10. Percolation Model for the Existence of a Mitochondrial Eve

    CERN Document Server

    Neves, A G M

    2005-01-01

    We look at the process of inheritance of mitochondrial DNA as a percolation model on trees equivalent to the Galton-Watson process. The model is exactly solvable for its percolation threshold $p_c$ and percolation probability critical exponent. In the approximation of small percolation probability, and assuming limited progeny number, we are also able to find the maximum and minimum percolation probabilities over all probability distributions for the progeny number constrained to a given $p_c$. As a consequence, we can relate existence of a mitochondrial Eve to quantitative knowledge about demographic evolution of early mankind. In particular, we show that a mitochondrial Eve may exist even in an exponentially growing population, provided that the average number of children per individual is constrained to a small range depending on the probability $p$ that a newborn child is a female.
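A minimal Monte Carlo sketch of the underlying branching-process picture: a maternal (mtDNA) lineage persists only through daughters, so with mean progeny number m and probability p of a female birth it is supercritical only when mp > 1. The progeny distribution below is an invented example, not the paper's:

```python
import random

def lineage_survives(p, n_children_dist, generations=50):
    """Follow one maternal (mtDNA) lineage: each female has a random
    number of children, each of which is female with probability p."""
    females = 1
    for _ in range(generations):
        daughters = 0
        for _ in range(females):
            for _ in range(n_children_dist()):
                if random.random() < p:
                    daughters += 1
        if daughters == 0:
            return False
        females = min(daughters, 10_000)  # cap to keep runtime bounded
    return True

# Illustrative progeny distribution: 0..4 children, mean m = 2.
dist = lambda: random.choice([0, 1, 2, 3, 4])

p = 0.5       # probability a newborn is female; here m*p = 1 (critical case)
trials = 2000
survival = sum(lineage_survives(p, dist) for _ in range(trials)) / trials
print(f"estimated lineage survival probability: {survival:.3f}")
```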

  11. COMPARATIVE ANALYSIS OF SOME EXISTING KINETIC MODELS ...

    African Journals Online (AJOL)

In terms of highest values of R², the first proposed model accounted for 46.7%, the pseudo-second-order kinetic model for 40%, while the Elovich, Webber-Morris and second proposed kinetic models each accounted for 6.7% of the total results for biosorption of the three heavy metals by five selected microorganisms. But based ...

  12. THE FLAT TAX - A COMPARATIVE STUDY OF THE EXISTING MODELS

    Directory of Open Access Journals (Sweden)

Schiau (Macavei) Laura-Liana

    2011-07-01

In the last two decades flat tax systems have spread around the globe, from East and Central Europe to Asia and Central America. Many specialists consider this phenomenon a real fiscal revolution, but others see it as a mistake as long as the new systems are just a feint of the true flat tax designed by the famous Stanford University professors Robert Hall and Alvin Rabushka. In this context, this paper tries to determine which of the existing flat tax systems resemble the true flat tax model by comparing and contrasting their main characteristics with the features of the model proposed by Hall and Rabushka. The research also underlines the common features and the differences between the existing models. The idea of this kind of study is not really new; others have done it, but the comparison was limited to one country. For example, Emil Kalchev from New Bulgarian University has assessed the Bulgarian income system by comparing it with the flat tax, concluding that taxation in Bulgaria is not simple, neutral and non-distortive. Our research is based on several case studies and on compare-and-contrast qualitative and quantitative methods. The study starts from the fiscal design drawn by the two American professors in the book The Flat Tax. Four main characteristics of the flat tax system were chosen in order to build the comparison: fiscal design, simplicity, avoidance of double taxation and uniformity of the tax rates. The jurisdictions chosen for the case study are countries around the globe with fiscal systems which are considered flat tax systems. The results obtained show that the fiscal design of Hong Kong is the only flat tax model which is built following an economic logic rather than a legal sense, being at the same time a simple and transparent system. Other countries such as Slovakia, Albania and Macedonia in Central and Eastern Europe fulfill the requirement regarding the uniformity of taxation. Other jurisdictions avoid the double

  13. Quantitative criteria to benchmark new and existing bio-inks for cell compatibility.

    Science.gov (United States)

    Dubbin, Karen; Tabet, Anthony; Heilshorn, Sarah C

    2017-09-01

    protocols offer a convenient means to quantitatively benchmark their performance against existing inks.

  14. Quantitative structure - mesothelioma potency model ...

    Science.gov (United States)

Cancer potencies of mineral and synthetic elongated particle (EP) mixtures, including asbestos fibers, are influenced by changes in fiber dose composition, bioavailability, and biodurability in combination with relevant cytotoxic dose-response relationships. A unique and comprehensive rat intra-pleural (IP) dose characterization data set with a wide variety of EP size, shape, crystallographic, chemical, and bio-durability properties facilitated extensive statistical analyses of 50 rat IP exposure test results for evaluation of alternative dose pleural mesothelioma response models. Utilizing logistic regression, maximum likelihood evaluations of thousands of alternative dose metrics based on hundreds of individual EP dimensional variations within each test sample, four major findings emerged: (1) data for simulations of short-term EP dose changes in vivo (mild acid leaching) provide superior predictions of tumor incidence compared to non-acid leached data; (2) sum of the EP surface areas (ΣSA) from these mildly acid-leached samples provides the optimum holistic dose response model; (3) progressive removal of dose associated with very short and/or thin EPs significantly degrades resultant ΣEP or ΣSA dose-based predictive model fits, as judged by Akaike's Information Criterion (AIC); and (4) alternative, biologically plausible model adjustments provide evidence for reduced potency of EPs with length/width (aspect) ratios 80 µm. Regar

  15. A coupled chemotaxis-fluid model: Global existence

    KAUST Repository

    Liu, Jian-Guo

    2011-09-01

We consider a model arising from biology, consisting of chemotaxis equations coupled to viscous incompressible fluid equations through transport and external forcing. Global existence of solutions to the Cauchy problem is investigated under certain conditions. Precisely, for the chemotaxis-Navier-Stokes system in two space dimensions, we obtain global existence for large data. In three space dimensions, we prove global existence of weak solutions for the chemotaxis-Stokes system with nonlinear diffusion for the cell density. © 2011 Elsevier Masson SAS. All rights reserved.
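For orientation, the coupled system studied in this line of work is commonly written as follows (our transcription, not quoted from the paper; n is the cell density, c the chemoattractant concentration, u the fluid velocity, φ a gravitational potential; κ = 1 gives the Navier-Stokes case and κ = 0 the Stokes case, with m > 1 for the nonlinear cell diffusion):

$$
\begin{aligned}
\partial_t n + u\cdot\nabla n &= \Delta n^{m} - \nabla\cdot\big(n\,\chi(c)\,\nabla c\big),\\
\partial_t c + u\cdot\nabla c &= \Delta c - n f(c),\\
\partial_t u + \kappa\,(u\cdot\nabla)u + \nabla P &= \Delta u - n\,\nabla\phi,\\
\nabla\cdot u &= 0.
\end{aligned}
$$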

  16. Quantitative Analysis of Hohlraum Energetics Modeling

    Science.gov (United States)

    Patel, Mehul V.; Mauche, Christopher W.; Jones, Odgen S.; Scott, Howard A.

    2016-10-01

New 1D/2D hohlraum models have been developed to enable quantitative studies of ICF hohlraum energetics. The models employ sufficient numerical resolution (spatial and temporal discretization, radiation energy groups, laser rays, IMC photons) to satisfy a priori convergence criteria on the observables to be compared. For example, we aim for numerical errors of less than 5% in the predicted X-ray flux. Post-shot simulations using the new models provide quantitative assessments of the accuracy of energetics modeling across a range of ICF platforms. The models have also been used to reexamine physics sensitivities in the modeling of the NLTE wall plasma. This work is guiding improvements in the underlying DCA atomic physics models and the radiation hydrodynamics code (HYDRA). Prepared by LLNL under Contract DE-AC52-07NA27344.

  17. The interaction of de novo and pre-existing aortic regurgitation after TAVI: insights from a new quantitative aortographic technique

    NARCIS (Netherlands)

    Tateishi, Hiroki; Abdelghani, Mohammad; Cavalcante, Rafael; Miyazaki, Yosuke; Campos, Carlos M.; Collet, Carlos; Slots, Tristan L. B.; Leite, Rogério S.; Mangione, José A.; Abizaid, Alexandre; Soliman, Osama I. I.; Spitzer, Ernest; Onuma, Yoshinobu; Serruys, Patrick W.; Lemos, Pedro A.; de Brito, Fabio S.

    2017-01-01

    The aim of this study was to evaluate the intermediate-term clinical impact of aortic regurgitation (AR) after transcatheter aortic valve implantation (TAVI) using a novel quantitative angiographic method taking into account the influence of pre-existing AR. AR after TAVI was quantified in 338

  18. Quantitative Modeling of Earth Surface Processes

    Science.gov (United States)

    Pelletier, Jon D.

This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.

  19. Existence of global attractor for the Trojan Y Chromosome model

    Directory of Open Access Journals (Sweden)

    Xiaopeng Zhao

    2012-04-01

This paper is concerned with the long-time behavior of solutions of the equations derived from the Trojan Y Chromosome (TYC) model with spatial spread. Based on regularity estimates for the semigroups and the classical existence theorem for global attractors, we prove that these equations possess a global attractor in the space $H^k(\Omega)^4$ $(k\geq 0)$.

  20. Quantitative system validation in model driven design

    DEFF Research Database (Denmark)

    Hermanns, Hilger; Larsen, Kim Guldstrand; Raskin, Jean-Francois

    2010-01-01

The European STREP project Quasimodo develops theory, techniques and tool components for handling quantitative constraints in model-driven development of real-time embedded systems, covering in particular real-time, hybrid and stochastic aspects. This tutorial highlights the advances made, focus...

  1. Recent trends in social systems quantitative theories and quantitative models

    CERN Document Server

    Hošková-Mayerová, Šárka; Soitu, Daniela-Tatiana; Kacprzyk, Janusz

    2017-01-01

The papers collected in this volume focus on new perspectives on individuals, society, and science, specifically in the field of socio-economic systems. The book is the result of a scientific collaboration among experts from “Alexandru Ioan Cuza” University of Iaşi (Romania), “G. d’Annunzio” University of Chieti-Pescara (Italy), "University of Defence" of Brno (Czech Republic), and "Pablo de Olavide" University of Sevilla (Spain). The heterogeneity of the contributions presented in this volume reflects the variety and complexity of social phenomena. The book is divided into four Sections. The first Section deals with recent trends in social decisions, aiming to understand their driving forces. The second Section focuses on the social and public sphere, and is oriented toward recent developments in social systems and control. Trends in quantitative theories and models are described in Section 3, where many new formal, mathematical-statistical to...

  2. Quantitative Analysis of Existing Conditions and Production Strategies for the Baca Geothermal System, New Mexico

    Science.gov (United States)

    Faust, Charles R.; Mercer, James W.; Thomas, Stephen D.; Balleau, W. Pete

    1984-05-01

    The Baca geothermal reservoir and adjacent aquifers in the Jemez Mountains of New Mexico comprise an integrated hydrogeologic system. Analysis of the geothermal reservoir either under natural conditions or subject to proposed development should account for the mass (water) and energy (heat) balances of adjacent aquifers as well as the reservoir itself. A three-dimensional model based on finite difference approximations is applied to this integrated system. The model simulates heat transport associated with the flow of steam and water through an equivalent porous medium. The Baca geothermal reservoir is dominated by flow in fractures and distinct strata, but at the scale of application the equivalent porous media concept is appropriate. The geothermal reservoir and adjacent aquifers are simulated under both natural conditions and proposed production strategies. Simulation of natural conditions compares favorably with observed pressure, temperature, and thermal discharge data. The history matching simulations show that the results used for comparison are most sensitive to vertical permeability and the area of an assumed high-permeability zone connecting the reservoir to a deep hydrothermal source. Simulations using proposed production strategies and optimistic estimates of certain hydrologic parameters and reservoir extent indicate that a 50-MW power plant could be maintained for a period greater than 30 years. This production, however, will result in significant decreases in the total water discharge to the Jemez River.

  3. Expert judgement models in quantitative risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Rosqvist, T. [VTT Automation, Helsinki (Finland); Tuominen, R. [VTT Automation, Tampere (Finland)

    1999-12-01

    Expert judgement is a valuable source of information in risk management. Especially, risk-based decision making relies significantly on quantitative risk assessment, which requires numerical data describing the initiator event frequencies and conditional probabilities in the risk model. This data is seldom found in databases and has to be elicited from qualified experts. In this report, we discuss some modelling approaches to expert judgement in risk modelling. A classical and a Bayesian expert model is presented and applied to real case expert judgement data. The cornerstone in the models is the log-normal distribution, which is argued to be a satisfactory choice for modelling degree-of-belief type probability distributions with respect to the unknown parameters in a risk model. Expert judgements are qualified according to bias, dispersion, and dependency, which are treated differently in the classical and Bayesian approaches. The differences are pointed out and related to the application task. Differences in the results obtained from the different approaches, as applied to real case expert judgement data, are discussed. Also, the role of a degree-of-belief type probability in risk decision making is discussed.
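The report's exact classical and Bayesian formulations are not reproduced in the abstract; purely as an illustration of why the log-normal is convenient here, the sketch below pools hypothetical expert judgements (median plus multiplicative error factor) by averaging in log space, which again yields a log-normal:

```python
import math

# Hypothetical expert judgements of an event frequency (per year):
# each expert gives a median and a multiplicative error factor EF,
# read as a log-normal with sigma = ln(EF)/1.645 (90% band convention).
experts = [(1e-4, 10.0), (3e-4, 3.0), (5e-5, 10.0)]

mus = [math.log(median) for median, _ in experts]
sigmas = [math.log(ef) / 1.645 for _, ef in experts]

# Equal-weight pooling in log space (one simple classical scheme);
# the average of normals in log space is again normal.
mu_pool = sum(mus) / len(mus)
var_pool = sum(s**2 for s in sigmas) / len(sigmas) ** 2

print("pooled median :", math.exp(mu_pool))
print("pooled mean   :", math.exp(mu_pool + var_pool / 2))
```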

  4. Existence of Periodic Solutions for a Modified Growth Solow Model

    Science.gov (United States)

    Fabião, Fátima; Borges, Maria João

    2010-10-01

In this paper we analyze the dynamics of the Solow growth model with a Cobb-Douglas production function. For this purpose, we consider that the labour growth rate, L'(t)/L(t), is a T-periodic function, for a fixed positive real number T. We obtain closed-form solutions for the fundamental Solow equation with the new description of L(t). Using notions of the qualitative theory of ordinary differential equations and nonlinear functional analysis, we prove that there exists a T-periodic solution of the Solow equation. From the economic point of view this is a new result which allows a more realistic interpretation of the stylized facts.
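In intensive form with Cobb-Douglas technology, the equation in question is standardly written as (our transcription, with k capital per worker, s the savings rate, δ depreciation, and n(t) = L'(t)/L(t) the T-periodic labour growth rate):

$$
\dot k(t) = s\,k(t)^{\alpha} - \big(n(t) + \delta\big)\,k(t), \qquad 0<\alpha<1, \quad n(t+T)=n(t).
$$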

  5. Global quantitative modeling of chromatin factor interactions.

    Directory of Open Access Journals (Sweden)

    Jian Zhou

    2014-03-01

Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the "chromatin codes") remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled making various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles; we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions.

  6. Global Quantitative Modeling of Chromatin Factor Interactions

    Science.gov (United States)

    Zhou, Jian; Troyanskaya, Olga G.

    2014-01-01

Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the “chromatin codes”) remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled making various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles; we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896
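As a concrete, if toy, illustration of the technique named in both records (pairwise maximum-entropy modeling with regularization-based structure learning), the sketch below uses L1-penalized per-factor logistic regressions, a common pseudo-likelihood surrogate for Ising-model fitting; the binary "chromatin profile" data are simulated, not modENCODE:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_bins, n_factors = 5000, 8                      # genomic bins x factors
X = (rng.random((n_bins, n_factors)) < 0.3).astype(int)
X[:, 1] = X[:, 0] ^ (rng.random(n_bins) < 0.1)   # plant one dependency

# Neighborhood selection: regress each factor on all others with an
# L1 penalty; nonzero weights approximate pairwise couplings.
couplings = np.zeros((n_factors, n_factors))
for j in range(n_factors):
    others = np.delete(np.arange(n_factors), j)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(X[:, others], X[:, j])
    couplings[j, others] = clf.coef_[0]

print(np.round(couplings, 2))   # large |weight| suggests an interaction
```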

  7. A transformative model for undergraduate quantitative biology education.

    Science.gov (United States)

    Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions.

  8. Quantitative risk assessment modeling for nonhomogeneous urban road tunnels.

    Science.gov (United States)

    Meng, Qiang; Qu, Xiaobo; Wang, Xinchang; Yuanita, Vivi; Wong, Siew Chee

    2011-03-01

Urban road tunnels provide an increasingly cost-effective engineering solution, especially in compact cities like Singapore. For some urban road tunnels, characteristics such as tunnel configuration, geometry, provision of tunnel electrical and mechanical systems, traffic volume, etc. may vary from one section to another. Urban road tunnels with such nonuniform characteristics are referred to as nonhomogeneous urban road tunnels. In this study, a novel quantitative risk assessment (QRA) model is proposed for nonhomogeneous urban road tunnels, because the existing QRA models for road tunnels are inapplicable to assess the risks in these road tunnels. This model uses a tunnel segmentation principle whereby a nonhomogeneous urban road tunnel is divided into various homogeneous sections. Individual risk for road tunnel sections as well as integrated risk indices for the entire road tunnel are defined. The article then proceeds to develop a new QRA model for each of the homogeneous sections. Compared to the existing QRA models for road tunnels, this section-based model incorporates one additional top event (toxic gases due to traffic congestion) and employs the Poisson regression method to estimate the vehicle accident frequencies of tunnel sections. This article further illustrates an aggregated QRA model for nonhomogeneous urban tunnels by integrating the section-based QRA models. Finally, a case study in Singapore is carried out. © 2010 Society for Risk Analysis.
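A minimal sketch of the accident-frequency step, assuming hypothetical per-section covariates and using statsmodels' Poisson GLM with traffic exposure; the covariates and data are invented:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical homogeneous tunnel sections: accident counts, traffic
# exposure (million vehicle-km), and two illustrative covariates.
counts   = np.array([3, 7, 2, 9, 4, 6])
exposure = np.array([1.2, 2.5, 0.8, 3.1, 1.5, 2.0])
gradient = np.array([0.0, 2.0, 0.0, 3.0, 1.0, 2.0])   # % slope
lanes    = np.array([2, 3, 2, 3, 2, 3])

X = sm.add_constant(np.column_stack([gradient, lanes]))
model = sm.GLM(counts, X, family=sm.families.Poisson(), exposure=exposure)
fit = model.fit()
print(fit.params)                          # log-rate coefficients
print(fit.predict(X, exposure=exposure))   # expected accident frequencies
```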

  9. First Principles Quantitative Modeling of Molecular Devices

    Science.gov (United States)

    Ning, Zhanyu

    In this thesis, we report theoretical investigations of nonlinear and nonequilibrium quantum electronic transport properties of molecular transport junctions from atomistic first principles. The aim is to seek not only qualitative but also quantitative understanding of the corresponding experimental data. At present, the challenges to quantitative theoretical work in molecular electronics include two most important questions: (i) what is the proper atomic model for the experimental devices? (ii) how to accurately determine quantum transport properties without any phenomenological parameters? Our research is centered on these questions. We have systematically calculated atomic structures of the molecular transport junctions by performing total energy structural relaxation using density functional theory (DFT). Our quantum transport calculations were carried out by implementing DFT within the framework of Keldysh non-equilibrium Green's functions (NEGF). The calculated data are directly compared with the corresponding experimental measurements. Our general conclusion is that quantitative comparison with experimental data can be made if the device contacts are correctly determined. We calculated properties of nonequilibrium spin injection from Ni contacts to octane-thiolate films which form a molecular spintronic system. The first principles results allow us to establish a clear physical picture of how spins are injected from the Ni contacts through the Ni-molecule linkage to the molecule, why tunnel magnetoresistance is rapidly reduced by the applied bias in an asymmetric manner, and to what extent ab initio transport theory can make quantitative comparisons to the corresponding experimental data. We found that extremely careful sampling of the two-dimensional Brillouin zone of the Ni surface is crucial for accurate results in such a spintronic system. We investigated the role of contact formation and its resulting structures to quantum transport in several molecular

  10. Automated quantitative drug susceptibility testing of non-tuberculous mycobacteria using MGIT 960/EpiCenter TB eXiST.

    Science.gov (United States)

    Lucke, Katja; Hombach, Michael; Friedel, Ute; Ritter, Claudia; Böttger, Erik C

    2012-01-01

    To assess the predictive value of in vitro drug susceptibility testing (DST) in slow-growing non-tuberculous mycobacteria (NTM), knowledge on quantitative levels of drug susceptibility should be available. The aim of this study was to investigate the suitability of the MGIT 960/TB eXiST system for quantitative DST of NTM. We have assessed quantitative levels of drug susceptibility for clinical isolates of Mycobacterium avium, Mycobacterium intracellulare and Mycobacterium kansasii by comparing radiometric Bactec 460TB-based DST with non-radiometric DST using MGIT 960/TB eXiST. MGIT 960/TB eXiST gives results comparable to those of Bactec 460TB. The MGIT 960/TB eXiST appears suitable for quantitative DST of NTM.

  11. Toward quantitative modeling of silicon phononic thermocrystals

    Energy Technology Data Exchange (ETDEWEB)

Lacatena, V. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France); IEMN UMR CNRS 8520, Institut d'Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d'Ascq (France); Haras, M.; Robillard, J.-F., E-mail: jean-francois.robillard@isen.iemn.univ-lille1.fr; Dubois, E. [IEMN UMR CNRS 8520, Institut d'Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d'Ascq (France); Monfray, S.; Skotnicki, T. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France)

    2015-03-16

    The wealth of technological patterning technologies of deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of 'thermocrystals' or 'nanophononic crystals' that introduce regular nano-scale inclusions using a pitch scale in between the thermal phonons mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced down to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known “electron crystal-phonon glass” dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution from bulk, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After having discussed the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.
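The Green-Kubo estimator referred to above relates the lattice thermal conductivity to equilibrium heat-flux fluctuations; in its standard isotropic form (our transcription),

$$
\kappa = \frac{1}{3 V k_B T^{2}} \int_{0}^{\infty} \big\langle \mathbf{J}(0)\cdot\mathbf{J}(t) \big\rangle \, dt,
$$

with V the simulation volume, T the temperature, J the heat flux and ⟨·⟩ an equilibrium ensemble average.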

  12. comparative analysis of some existing kinetic models with proposed ...

    African Journals Online (AJOL)

    IGNATIUS NWIDI

But based on values of ARE%, the first proposed kinetic model accounted for 93.3% while the pseudo-second-order kinetic model accounted for 6.7% of the results for biosorption of the three heavy metals by the five microbes. Keywords: Heavy metals, Biosorption, Kinetic models, Comparative analysis, Average Relative Error.

  13. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in ...

  14. Global existence result for the generalized Peterlin viscoelastic model

    Czech Academy of Sciences Publication Activity Database

    Lukáčová-Medviďová, M.; Mizerová, H.; Nečasová, Šárka; Renardy, M.

    2017-01-01

Vol. 49, No. 4 (2017), pp. 2950-2964 ISSN 0036-1410 R&D Projects: GA ČR GA13-00522S Institutional support: RVO:67985840 Keywords: Peterlin viscoelastic equations * global existence * weak solutions Subject RIV: BA - General Mathematics Impact factor: 1.648, year: 2016 http://epubs.siam.org/doi/abs/10.1137/16M1068505

  15. Survey and Evaluation of Existing Smoke Movement Models.

    Science.gov (United States)

    1987-11-01

Models surveyed include Waterhouse, BRI, Cal-Tech, Dayton, NBS-I and NBS-II. The Klote model: this "smoke control" model is not really a fire model (9), but rather provides the ... [Recoverable citations from the source: "Heat and Smoke Movement in Enclosure Fires", Fire Safety Journal, 1983, V. 6, pp. 193-21; (9) Klote, J. and J. Fothergill, Jr., "Design of Smoke Control Systems".]

  16. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    Energy Technology Data Exchange (ETDEWEB)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both retrospectively and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempts at modeling conflict as a result of system-level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  17. Nursing in disasters: A review of existing models.

    Science.gov (United States)

    Pourvakhshoori, Negar; Norouzi, Kian; Ahmadi, Fazlollah; Hosseini, Mohammadali; Khankeh, Hamidreza

    2017-03-01

Since nurses play an important role in responding to disasters, evaluating their knowledge of common patterns of disasters is a necessity. This study examined research conducted on disaster nursing as well as the models adopted, and provides a critical analysis of the models available for disaster nursing. International electronic databases including Scopus, PubMed, ISI Web of Science, Cochrane Library, Cumulative Index to Nursing and Allied Health (CINAHL), and Google Scholar were investigated, with no limitation on type of article, between 1st January 1980 and 31st January 2016. The search terms and strategy were as follows: (Disaster∗ OR Emergenc∗) AND (Model OR Theory OR Package OR Pattern) AND (Nursing OR Nurse∗), applied to titles, abstracts and key words. This resulted in the identification of disaster nursing models. Out of the 1983 publications initially identified, the final analysis was conducted on 8 full-text articles. These studies presented seven models, a diverse set with regard to the domains and the target population. Although disaster nursing models will inform disaster risk reduction strategies, attempts to develop them systematically are in preliminary phases. Further investigation is needed to develop a domestic nursing model for disasters. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Managing Tensions Between New and Existing Business Models

    DEFF Research Database (Denmark)

    Sund, Kristian J.; Bogers, Marcel; Villarroel Fernandez, Juan Andrei

    2016-01-01

    Exploring new business models may be a good way to stay competitive, but doing so can create tensions internally, in areas such as organizational structure and competition for resources. Companies exploring business model innovation may not recognize the inevitability of these tensions and thus...

  19. How Can Blockchain Technology Disrupt the Existing Business Models?

    Directory of Open Access Journals (Sweden)

    Witold Nowiński

    2017-09-01

    Contribution & Value Added: This study provides an analysis of the possible impact of blockchain technology on business model innovation. Blockchain technology is gaining momentum with more and more diverse applications, as well as increasing numbers of actors involved in its applications. This paper contributes to our understanding of the possible applications of blockchain technology to businesses, and in particular to its impact on business models.

  20. Quantitative consensus of bioaccumulation models for integrated testing strategies.

    Science.gov (United States)

    Fernández, Alberto; Lombardo, Anna; Rallo, Robert; Roncaglioni, Alessandra; Giralt, Francesc; Benfenati, Emilio

    2012-09-15

    A quantitative consensus model based on bioconcentration factor (BCF) predictions obtained from five quantitative structure-activity relationship models was developed for bioaccumulation assessment as an integrated testing approach for waiving. Three categories were considered: non-bioaccumulative, bioaccumulative and very bioaccumulative. Five in silico BCF models were selected and included into a quantitative consensus model by means of the continuous formulation of Bayes' theorem. The discrete likelihoods commonly used in the qualitative Bayesian model were substituted by probability density functions to reduce the loss of information that occurred when continuous BCF values were distributed across the three bioaccumulation categories. Results showed that the continuous Bayesian model yielded the best classification predictions compared not only to the discrete Bayesian model, but also to the individual BCF models. The proposed quantitative consensus model proved to be a suitable approach for integrated testing strategies for continuous endpoints of environmental interest. Copyright © 2012 Elsevier Ltd. All rights reserved.
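A toy sketch of the continuous-Bayes idea described above: each model's prediction contributes a density value per category, replacing the discrete likelihood table. The Gaussian parameters over log10 BCF below are invented, not the paper's calibrated densities:

```python
import numpy as np
from scipy.stats import norm

categories = ["nB", "B", "vB"]     # non-, bio-, very bioaccumulative
prior = np.array([1/3, 1/3, 1/3])

# Invented per-category densities of log10(BCF) for two models; in
# practice one density per QSAR model and category would be calibrated.
params = {
    "model1": {"nB": (2.0, 0.5), "B": (3.3, 0.3), "vB": (4.0, 0.4)},
    "model2": {"nB": (2.1, 0.6), "B": (3.4, 0.4), "vB": (4.1, 0.3)},
}
predictions = {"model1": 3.5, "model2": 3.2}   # log10 BCF predictions

post = prior.copy()
for model, x in predictions.items():
    like = np.array([norm.pdf(x, *params[model][c]) for c in categories])
    post = post * like          # Bayes' theorem with density likelihoods
post /= post.sum()
print(dict(zip(categories, np.round(post, 3))))
```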

  1. Hyperbolic Plykin attractor can exist in neuron models

    DEFF Research Database (Denmark)

    Belykh, V.; Belykh, I.; Mosekilde, Erik

    2005-01-01

Strange hyperbolic attractors are hard to find in real physical systems. This paper provides the first example of a realistic system, a canonical three-dimensional (3D) model of bursting neurons, that is likely to have a strange hyperbolic attractor. Using a geometrical approach to the study of the neuron model, we derive a flow-defined Poincare map giving an accurate account of the system's dynamics. In a parameter region where the neuron system undergoes bifurcations causing transitions between tonic spiking and bursting, this two-dimensional map becomes a map of a disk with several periodic holes. A particular case is the map of a disk with three holes, matching the Plykin example of a planar hyperbolic attractor. The corresponding attractor of the 3D neuron model appears to be hyperbolic (this property is not verified in the present paper) and arises as a result of a two-loop (secondary...

  2. Quantitative modelling of the biomechanics of the avian syrinx

    DEFF Research Database (Denmark)

    Elemans, Coen P. H.; Larsen, Ole Næsbye; Hoffmann, Marc R.

    2003-01-01

    We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e. the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts...

  3. Determining if Instructional Delivery Model Differences Exist in Remedial English

    Science.gov (United States)

    Carter, LaTanya Woods

    2012-01-01

    The purpose of this causal comparative study is to test the theory of no significant difference that compares pre- and post-test assessment scores, controlling for the instructional delivery model of online and face-to-face students at a Mid-Atlantic university. Online education and virtual distance learning programs have increased in popularity…

  4. Fuzzy Logic as a Computational Tool for Quantitative Modelling of Biological Systems with Uncertain Kinetic Data.

    Science.gov (United States)

    Bordon, Jure; Moskon, Miha; Zimic, Nikolaj; Mraz, Miha

    2015-01-01

Quantitative modelling of biological systems has become an indispensable computational approach in the design of novel, and the analysis of existing, biological systems. However, kinetic data that describe the system's dynamics need to be known in order to obtain relevant results with the conventional modelling techniques. These data are often hard or even impossible to obtain. Here, we present a quantitative fuzzy logic modelling approach that is able to cope with unknown kinetic data and thus produce relevant results even though kinetic data are incomplete or only vaguely defined. Moreover, the approach can be used in combination with existing state-of-the-art quantitative modelling techniques in only certain parts of the system, i.e., where kinetic data are missing. The case study of the approach proposed here is performed on the model of a three-gene repressilator.
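A self-contained toy of the fuzzy approach (not the authors' model): each repression link of a three-gene repressilator is updated with triangular membership functions and the rule base IF repressor LOW THEN production HIGH / IF repressor HIGH THEN production LOW:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def repression_rate(repressor):
    """Fuzzy rules: IF repressor LOW THEN production HIGH (1.0);
    IF repressor HIGH THEN production LOW (0.1). Defuzzify by a
    weighted average of the rule outputs."""
    low  = tri(repressor, -0.5, 0.0, 0.5)
    high = tri(repressor,  0.5, 1.0, 1.5)
    w = low + high
    return (low * 1.0 + high * 0.1) / w if w else 0.5

# Three-gene repressilator: each protein is repressed by the previous one.
x = [0.9, 0.1, 0.5]          # initial protein levels (arbitrary units)
dt, deg = 0.1, 0.5           # time step and degradation rate
for _ in range(200):
    x = [xi + dt * (repression_rate(x[i - 1]) - deg * xi)
         for i, xi in enumerate(x)]
print([round(v, 3) for v in x])
```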

  5. Multi-criteria decision model for retrofitting existing buildings

    Science.gov (United States)

    Bostenaru Dan, M. D.

    2004-08-01

Decision is an element of the risk management process. This paper investigates how science can support decision making and implementation when retrofitting buildings in earthquake-prone urban areas. In such interventions actors from various spheres are involved, with interests ranging from minimising the intervention for maximal preservation to increasing it for seismic safety. Research was conducted to see how to facilitate collaboration between these actors. Particular attention was given to the role of time in actors' preferences. For this reason, on the decision level, both the processual and the personal dimensions of risk management, the latter seen as a task, were considered. A systematic approach was employed to determine the functional structure of a participative decision model. Three layers on which the actors involved in this multi-criteria decision problem interact were identified: town, building and element. So-called 'retrofit elements' are characteristic bearers in the architectural survey, engineering simulations and cost estimation, and define the realms perceived by the inhabitants. This way they represent an interaction basis for the interest groups considered in a deeper study. Such orientation means for actors' interaction were designed on other levels of intervention as well. Finally, an 'experiment' for the implementation of the decision model is presented: a strategic plan for an urban intervention towards reduction of earthquake hazard impact through retrofitting. A systematic approach thus proves to be a very good communication basis among the participants in the seismic risk management process. Nevertheless, it can only be applied in the later phases (decision, implementation, control), since it serves to verify and improve a solution, not to develop the concept. The 'retrofit elements' are a typical example of the detailing degree reached in the retrofit design plans in these phases.

  6. Multi-criteria decision model for retrofitting existing buildings

    Directory of Open Access Journals (Sweden)

    M. D. Bostenaru Dan

    2004-01-01

Decision is an element of the risk management process. This paper investigates how science can support decision making and implementation when retrofitting buildings in earthquake-prone urban areas. In such interventions actors from various spheres are involved, with interests ranging from minimising the intervention for maximal preservation to increasing it for seismic safety. Research was conducted to see how to facilitate collaboration between these actors. Particular attention was given to the role of time in actors' preferences. For this reason, on the decision level, both the processual and the personal dimensions of risk management, the latter seen as a task, were considered. A systematic approach was employed to determine the functional structure of a participative decision model. Three layers on which the actors involved in this multi-criteria decision problem interact were identified: town, building and element. So-called 'retrofit elements' are characteristic bearers in the architectural survey, engineering simulations and cost estimation, and define the realms perceived by the inhabitants. This way they represent an interaction basis for the interest groups considered in a deeper study. Such orientation means for actors' interaction were designed on other levels of intervention as well. Finally, an 'experiment' for the implementation of the decision model is presented: a strategic plan for an urban intervention towards reduction of earthquake hazard impact through retrofitting. A systematic approach thus proves to be a very good communication basis among the participants in the seismic risk management process. Nevertheless, it can only be applied in the later phases (decision, implementation, control), since it serves to verify and improve a solution, not to develop the concept. The 'retrofit elements' are a typical example of the detailing degree reached in the retrofit design plans in these phases.

  7. Developing Quantitative Models for Auditing Journal Entries

    OpenAIRE

    Argyrou, Argyris

    2013-01-01

    The thesis examines how the auditing of journal entries can detect and prevent financial statement fraud. Financial statement fraud occurs when an intentional act causes financial statements to be materially misstated. Although it is not a new phenomenon, financial statement fraud has attracted much publicity in the wake of numerous cases of financial malfeasance (e.g. ENRON, WorldCom). Existing literature has provided limited empirical evidence on the link between auditing journal entrie...

  8. Logic Modeling in Quantitative Systems Pharmacology.

    Science.gov (United States)

    Traynard, Pauline; Tobalina, Luis; Eduati, Federica; Calzone, Laurence; Saez-Rodriguez, Julio

    2017-08-01

    Here we present logic modeling as an approach to understand deregulation of signal transduction in disease and to characterize a drug's mode of action. We discuss how to build a logic model from the literature and experimental data and how to analyze the resulting model to obtain insights of relevance for systems pharmacology. Our workflow uses the free tools OmniPath (network reconstruction from the literature), CellNOpt (model fit to experimental data), MaBoSS (model analysis), and Cytoscape (visualization). © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
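The tools named above each have their own model formats; purely for illustration, a logic model is a set of Boolean update rules iterated to a fixed point or cycle, as in this hypothetical three-node signalling sketch:

```python
# Hypothetical signalling logic: a ligand activates a receptor, the
# receptor activates a kinase unless an inhibitor (drug) is present.
rules = {
    "receptor": lambda s: s["ligand"],
    "kinase":   lambda s: s["receptor"] and not s["drug"],
    "output":   lambda s: s["kinase"],
}

def step(state):
    new = dict(state)
    for node, fn in rules.items():
        new[node] = fn(state)      # synchronous update
    return new

state = {"ligand": True, "drug": True,
         "receptor": False, "kinase": False, "output": False}
for _ in range(4):
    state = step(state)
print(state)   # with the drug on, the output stays off
```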

  9. A quantitative model for designing keyboard layout.

    Science.gov (United States)

    Shieh, K K; Lin, C C

    1999-02-01

    This study analyzed the quantitative relationship between keytapping times and ergonomic principles in typewriting skills. Keytapping times and key-operating characteristics of a female subject typing on the Qwerty and Dvorak keyboards for six weeks each were collected and analyzed. The results showed that characteristics of the typed material and the movements of hands and fingers were significantly related to keytapping times. The most significant factors affecting keytapping times were association frequency between letters, consecutive use of the same hand or finger, and the finger used. A regression equation for relating keytapping times to ergonomic principles was fitted to the data. Finally, a protocol for design of computerized keyboard layout based on the regression equation was proposed.
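A toy version of such a regression, with invented timings and predictors named after the factors the study reports as significant (letter-pair association frequency and same-hand/same-finger use):

```python
import numpy as np

# Invented observations: keytapping time (ms) per digraph, with
# predictors mirroring the reported factors.
assoc_freq  = np.array([0.9, 0.2, 0.5, 0.1, 0.7, 0.3])  # pair frequency
same_finger = np.array([0, 1, 0, 1, 0, 0])              # 1 = same finger
same_hand   = np.array([1, 1, 0, 1, 0, 1])              # 1 = same hand
time_ms     = np.array([110, 205, 140, 230, 120, 160])

X = np.column_stack([np.ones_like(assoc_freq), assoc_freq,
                     same_finger, same_hand])
beta, *_ = np.linalg.lstsq(X, time_ms, rcond=None)
print(dict(zip(["intercept", "assoc", "same_finger", "same_hand"],
               np.round(beta, 1))))
```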

  10. A Transformative Model for Undergraduate Quantitative Biology Education

    Science.gov (United States)

    Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The "BIO2010" report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3)…

  11. Towards a quantitative model of the post-synaptic proteome.

    Science.gov (United States)

    Sorokina, Oksana; Sorokin, Anatoly; Armstrong, J Douglas

    2011-10-01

    The postsynaptic compartment of the excitatory glutamatergic synapse contains hundreds of distinct polypeptides with a wide range of functions (signalling, trafficking, cell-adhesion, etc.). Structural dynamics in the post-synaptic density (PSD) are believed to underpin cognitive processes. Although functionally and morphologically diverse, PSD proteins are generally enriched with specific domains, which precisely define the mode of clustering essential for signal processing. We applied a stochastic calculus of domain binding provided by a rule-based modelling approach to formalise the highly combinatorial signalling pathway in the PSD and perform the numerical analysis of the relative distribution of protein complexes and their sizes. We specified the combinatorics of protein interactions in the PSD by rules, taking into account protein domain structure, specific domain affinity and relative protein availability. With this model we interrogated the critical conditions for the protein aggregation into large complexes and distribution of both size and composition. The presented approach extends existing qualitative protein-protein interaction maps by considering the quantitative information for stoichiometry and binding properties for the elements of the network. This results in a more realistic view of the postsynaptic proteome at the molecular level.
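A toy sketch of the rule-based flavor of such models: complexes carry free binding domains, and a binding rule joins two complexes when compatible free domains remain. The domain names, counts and single rule are invented and far simpler than the PSD model described:

```python
import random
from collections import Counter

random.seed(1)

# Start: 100 scaffolds (3 free 'pdz' sites) and 200 ligands (1 'pdz-bind').
complexes = [{"size": 1, "free": {"pdz": 3}} for _ in range(100)] + \
            [{"size": 1, "free": {"pdz-bind": 1}} for _ in range(200)]

def can_bind(a, b):
    return a["free"].get("pdz", 0) > 0 and b["free"].get("pdz-bind", 0) > 0

for _ in range(5000):
    a, b = random.sample(range(len(complexes)), 2)
    x, y = complexes[a], complexes[b]
    if not can_bind(x, y):
        x, y = y, x            # try the rule in the other orientation
    if can_bind(x, y):
        # Apply the rule: consume one free domain on each side and merge.
        x["free"]["pdz"] -= 1
        y["free"]["pdz-bind"] -= 1
        merged = {"size": x["size"] + y["size"],
                  "free": dict(Counter(x["free"]) + Counter(y["free"]))}
        complexes = [c for i, c in enumerate(complexes) if i not in (a, b)]
        complexes.append(merged)

print(Counter(c["size"] for c in complexes))   # complex-size distribution
```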

  12. Quantitative Models and Analysis for Reactive Systems

    DEFF Research Database (Denmark)

    Thrane, Claus

…allowing verification procedures to quantify judgements on how suitable a model is for a given specification, hence mitigating the usual harsh distinction between satisfactory and non-satisfactory system designs. This information, among other things, allows us to evaluate the robustness of our framework by studying how small changes to our models affect the verification results. A key source of motivation for this work can be found in The Embedded Systems Design Challenge [HS06] posed by Thomas A. Henzinger and Joseph Sifakis. It contains a call for advances in the state-of-the-art of systems verification...

  13. Quantitative models for sustainable supply chain management

    DEFF Research Database (Denmark)

    Brandenburg, M.; Govindan, Kannan; Sarkis, J.

    2014-01-01

    and reverse logistics has been effectively reviewed in previously published research. This situation is in contrast to the understanding and review of mathematical models that focus on environmental or social factors in forward supply chains (SC), which has seen less investigation. To evaluate developments...

  14. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

    OpenAIRE

    Cobbs, Gary

    2012-01-01

Background: Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most pote...
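The constant-efficiency exponential model that the kinetic approach relaxes is, in the usual notation ($F_n$ the fluorescence or amplicon amount at cycle n, $E \in [0,1]$ the amplification efficiency):

$$
F_n = F_0\,(1+E)^{\,n},
$$

fitted to a selected portion of the curve; the kinetic models instead let the effective efficiency decline as the reaction proceeds.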

  15. A quantitative model of cellular elasticity based on tensegrity.

    Science.gov (United States)

    Stamenović, D; Coughlin, M F

    2000-02-01

    A tensegrity structure composed of six struts interconnected with 24 elastic cables is used as a quantitative model of the steady-state elastic response of cells, with the struts and cables representing microtubules and actin filaments, respectively. The model is stretched uniaxially and the Young's modulus (E0) is obtained from the initial slope of the stress versus strain curve of an equivalent continuum. It is found that E0 is directly proportional to the pre-existing tension in the cables (or compression in the struts) and inversely proportional to the square of the cable (or strut) length. This relationship is used to predict the upper and lower bounds of E0 of cells, assuming that the cable tension equals the yield force of actin (approximately 400 pN) for the upper bound, and that the strut compression equals the critical buckling force of microtubules for the lower bound. The cable (or strut) length is determined from the assumption that model dimensions match the diameter of probes used in standard mechanical tests on cells. Predicted values are compared to reported data for the Young's modulus of various cells. If the probe diameter is greater than or equal to 3 microns, these data are closer to the lower bound than to the upper bound. This, in turn, suggests that microtubules of the CSK carry initial compression that exceeds their critical buckling force (order of 10^0-10^1 pN), but is much smaller than the yield force of actin. If the probe diameter is less than or equal to 2 microns, experimental data fall outside the region defined by the upper and lower bounds.
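
    The scaling reported here (E0 proportional to the member force and inversely proportional to length squared) can be turned into order-of-magnitude bounds directly. The sketch below assumes a proportionality constant of 1, which the real model replaces with a geometry-dependent prefactor:

    ```python
    # Order-of-magnitude bounds on E0 from the scaling E0 ~ F / L^2 stated in
    # the abstract. The proportionality constant is set to 1 (an assumption;
    # the model's actual prefactor depends on the tensegrity geometry).
    F_UPPER = 400e-12   # N, yield force of actin (upper bound, from abstract)
    F_LOWER = 10e-12    # N, ~critical buckling force of microtubules (lower bound)

    for probe_diameter_um in (2, 3, 5):
        L = probe_diameter_um * 1e-6          # member length matched to probe size
        lower, upper = F_LOWER / L**2, F_UPPER / L**2
        print(f"probe {probe_diameter_um} um: E0 in ~[{lower:.1f}, {upper:.1f}] Pa")
    ```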

  16. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

    Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  17. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...

  18. Existence of almost periodic solution of a model of phytoplankton allelopathy with delay

    Science.gov (United States)

    Abbas, Syed; Mahto, Lakshman

    2012-09-01

    In this paper we discuss a non-autonomous two-species competitive allelopathic phytoplankton model in which both species produce chemicals that stimulate each other's growth. We have studied the existence and uniqueness of an almost periodic solution for the concerned model system. Sufficient conditions are derived for the existence of a unique almost periodic solution.

  19. Existence of periodic solutions in a model of respiratory syncytial virus RSV

    Science.gov (United States)

    Arenas, Abraham J.; González, Gilberto; Jódar, Lucas

    2008-08-01

    In this paper we study the existence of positive periodic solutions for nested models of respiratory syncytial virus (RSV), by using a continuation theorem based on coincidence degree theory. Conditions for the existence of periodic solutions in the model are given. Numerical simulations related to the transmission of respiratory syncytial virus in Madrid and Rio de Janeiro are included.

  20. Generalized PSF modeling for optimized quantitation in PET imaging

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-ud-Din, Hassan; Karakatsanis, Nicolas A.; Jha, Abhinav K.; Casey, Michael E.; Kadrmas, Dan J.; Rahmim, Arman

    2017-06-01

    Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF
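
    The paper's framework runs full OS-EM reconstructions with PSF-modelled kernels; as a greatly simplified stand-in, the sketch below deconvolves a 1-D "tumour" profile with matched, under-, and over-estimated Gaussian PSFs using Richardson-Lucy (EM-type) iteration and compares contrast recovery. All widths, counts and sizes are hypothetical:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(0)

    # 1-D "tumour in liver background" phantom (arbitrary units; illustrative only).
    truth = np.ones(200); truth[90:110] = 4.0
    TRUE_SIGMA = 3.0                                   # assumed true system PSF
    measured = rng.poisson(gaussian_filter1d(truth, TRUE_SIGMA) * 50) / 50.0

    def richardson_lucy(data, sigma, n_iter=30):
        """EM-type deconvolution with a Gaussian PSF of width `sigma`."""
        est = np.full_like(data, data.mean())
        for _ in range(n_iter):
            blurred = gaussian_filter1d(est, sigma) + 1e-12
            est *= gaussian_filter1d(data / blurred, sigma)
        return est

    for label, sigma in [("under-estimated", 1.5), ("matched", 3.0),
                         ("over-estimated", 4.5)]:
        rec = richardson_lucy(measured, sigma)
        crc = (rec[90:110].mean() - 1.0) / (4.0 - 1.0)  # contrast recovery coeff.
        print(f"{label:>15} PSF (sigma={sigma}): CRC = {crc:.2f}")
    ```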

  2. Refining the quantitative pathway of the Pathways to Mathematics model.

    Science.gov (United States)

    Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda

    2015-03-01

    In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task. Copyright © 2014 Elsevier Inc. All rights reserved.
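
    A sketch of the refinement step described: three correlated measures are collapsed into a single "quantitative pathway" score via PCA, which is then entered alongside the other two pathways as a predictor. The simulated scores and effect sizes below are placeholders, not the study's data:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(42)
    n = 141  # sample size from the abstract

    # Simulated standardized scores (stand-ins for the real measures).
    latent = rng.normal(size=n)
    subitizing = latent + 0.5 * rng.normal(size=n)
    counting   = latent + 0.5 * rng.normal(size=n)
    magnitude  = latent + 0.5 * rng.normal(size=n)

    # First principal component = composite "quantitative pathway" score.
    quant = PCA(n_components=1).fit_transform(
        np.column_stack([subitizing, counting, magnitude])).ravel()

    linguistic = rng.normal(size=n)
    working_memory = rng.normal(size=n)
    arithmetic = (0.6 * quant + 0.3 * linguistic + 0.2 * working_memory
                  + rng.normal(size=n))

    X = np.column_stack([quant, linguistic, working_memory])
    print(LinearRegression().fit(X, arithmetic).coef_)  # unique contributions
    ```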

  3. Quantitative assessment of islet cell products: estimating the accuracy of the existing protocol and accounting for islet size distribution.

    Science.gov (United States)

    Buchwald, Peter; Wang, Xiaojing; Khan, Aisha; Bernal, Andres; Fraker, Chris; Inverardi, Luca; Ricordi, Camillo

    2009-01-01

    The ability to consistently and reliably assess the total number and the size distribution of isolated pancreatic islet cells from a small sample is of crucial relevance for the adequate characterization of islet cell preparations used for research or transplantation purposes. Here, data from a large number of isolations were used to establish a continuous probability density function describing the size distribution of human pancreatic islets. This function was then used to generate a polymeric microsphere mixture with a composition resembling those of isolated islets, which, in turn, was used to quantitatively assess the accuracy, reliability, and operator-dependent variability of the currently utilized manual standard procedure of quantification of islet cell preparation. Furthermore, on the basis of the best fit probability density function, which corresponds to a Weibull distribution, a slightly modified scale of islet equivalent number (IEQ) conversion factors is proposed that incorporates the size distribution of islets and accounts for the decreasing probability of finding larger islets within each size group. Compared to the current calculation method, these factors introduce a 4-8% downward correction of the total IEQ estimate, but they reflect a statistically more accurate contribution of differently sized islets.
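
    The proposed correction can be sketched as follows: within each diameter bin, the IEQ conversion factor becomes the Weibull-weighted expectation of (d/150)^3 rather than a midpoint value. The Weibull parameters below are placeholders, since the fitted values are not given in this record:

    ```python
    import numpy as np
    from scipy.stats import weibull_min
    from scipy.integrate import quad

    # Hypothetical Weibull fit for islet diameter (um); the abstract reports a
    # Weibull distribution but not its parameters, so these are placeholders.
    dist = weibull_min(c=1.7, scale=110, loc=50)

    bins = [(50, 100), (100, 150), (150, 200), (200, 250), (250, 300), (300, 350)]

    for lo, hi in bins:
        mass = dist.cdf(hi) - dist.cdf(lo)
        # Size-distribution-aware IEQ factor: expected (d/150)^3 within the bin.
        ev = quad(lambda d: (d / 150.0) ** 3 * dist.pdf(d), lo, hi)[0] / mass
        print(f"{lo}-{hi} um: IEQ factor ~ {ev:.2f}")
    ```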

  4. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis : Model development and validation of existing models

    NARCIS (Netherlands)

    Gomes, Anna; van der Wijk, Lars; Proost, Johannes H; Sinha, Bhanu; Touw, Daan J

    2017-01-01

    Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for

  5. Quantitative Model for Economic Analyses of Information Security Investment in an Enterprise Information System

    Directory of Open Access Journals (Sweden)

    Bojanc Rok

    2012-11-01

    Full Text Available The paper presents a mathematical model for the optimal security-technology investment evaluation and decision-making processes based on the quantitative analysis of security risks and digital asset assessments in an enterprise. The model makes use of the quantitative analysis of different security measures that counteract individual risks by identifying the information system processes in an enterprise and the potential threats. The model comprises the target security levels for all identified business processes and the probability of a security accident together with the possible loss the enterprise may suffer. The selection of security technology is based on the efficiency of selected security measures. Economic metrics are applied for the efficiency assessment and comparative analysis of different protection technologies. Unlike the existing models for evaluation of the security investment, the proposed model allows direct comparison and quantitative assessment of different security measures. The model allows deep analyses and computations providing quantitative assessments of different options for investments, which translate into recommendations facilitating the selection of the best solution and the decision-making thereof. The model was tested using empirical examples with data from real business environment.
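
    The record does not reproduce the model's equations, so the sketch below uses the standard ALE/ROSI metrics to illustrate the kind of direct, quantitative comparison of security measures it describes; all monetary values and exposure factors are invented:

    ```python
    # Classic risk metrics often used in this kind of analysis (illustrative
    # numbers only; the paper's actual model and data are not reproduced here).
    def ale(asset_value, exposure_factor, annual_rate_of_occurrence):
        """Annualized Loss Expectancy = SLE * ARO."""
        return asset_value * exposure_factor * annual_rate_of_occurrence

    measures = {
        # name: (cost per year, residual exposure factor after the measure)
        "do nothing":     (0,     0.40),
        "firewall":       (15000, 0.15),
        "ids + firewall": (40000, 0.05),
    }

    ASSET, ARO = 1_000_000, 0.5
    baseline = ale(ASSET, 0.40, ARO)
    for name, (cost, ef) in measures.items():
        saving = baseline - ale(ASSET, ef, ARO)
        rosi = (saving - cost) / cost if cost else float("nan")
        print(f"{name:>15}: net benefit {saving - cost:>9.0f} EUR/yr, ROSI {rosi:.2f}")
    ```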

  6. Application of a New Model of Peer Group Influence to Naturally Existing Adolescent Friendship Groups

    Science.gov (United States)

    Siman, Michael L.

    1977-01-01

    A new model of the peer group influence process was proposed and tested using questionnaire responses from 41 naturally existing adolescent cliques representing males and females in grades 6 through 12. (Author/JMB)

  7. Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.

    Science.gov (United States)

    Moray, Neville; Groeger, John; Stanton, Neville

    2017-02-01

    This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement, and provide quantitative guidance for the design of safer railway systems, including speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.
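
    A toy version of the kind of estimate described: the probability of safely responding to a red signal decomposes into a chain of stages (fixate, identify, act), whose product gives the per-approach SPAD probability. The stage probabilities below are invented placeholders, not values from the inquiry:

    ```python
    # Serial-stage model of responding to a signal at danger: the driver must
    # look at the signal, identify its aspect, and act in time. Probabilities
    # below are invented placeholders, not values from the Ladbroke Grove inquiry.
    stages = {
        "fixates signal in available window": 0.90,
        "correctly identifies red aspect":    0.95,
        "initiates braking in time":          0.97,
    }

    p_safe = 1.0
    for stage, p in stages.items():
        p_safe *= p

    print(f"P(signal correctly acted on) = {p_safe:.3f}")
    print(f"P(SPAD on a single approach) = {1 - p_safe:.3f}")
    ```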

  8. Parts of the Whole: Strategies for the Spread of Quantitative Literacy: What Models Can Tell Us

    Directory of Open Access Journals (Sweden)

    Dorothy Wallace

    2014-07-01

    Full Text Available Two conceptual frameworks, one from graph theory and one from dynamical systems, have been offered as explanations for complex phenomena in biology and also as possible models for the spread of ideas. The two models are based on different assumptions and thus predict quite different outcomes for the fate of either biological species or ideas. We argue that, depending on the culture in which they exist, one can identify which model is more likely to reflect the survival of two competing ideas. Based on this argument we suggest how two strategies for embedding and normalizing quantitative literacy in a given institution are likely to succeed or fail.
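
    The dynamical-systems framework mentioned here can be sketched with a competitive Lotka-Volterra system for two ideas competing for the same population of adopters; all rates and competition coefficients below are illustrative:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Competitive Lotka-Volterra sketch for two competing ideas (the dynamical-
    # systems framework mentioned in the abstract); parameters are illustrative.
    r = np.array([0.8, 0.6])          # adoption rates of idea 1 and idea 2
    K = np.array([1.0, 1.0])          # carrying capacities (fraction of adopters)
    a12, a21 = 1.2, 0.7               # competition coefficients

    def rhs(t, x):
        x1, x2 = x
        return [r[0] * x1 * (1 - (x1 + a12 * x2) / K[0]),
                r[1] * x2 * (1 - (x2 + a21 * x1) / K[1])]

    sol = solve_ivp(rhs, (0, 50), [0.05, 0.05])
    x1, x2 = sol.y[:, -1]
    print(f"long-run shares: idea 1 = {x1:.2f}, idea 2 = {x2:.2f}")
    # With a12 > 1 > a21, idea 2 excludes idea 1: which idea survives depends
    # on the competition structure, i.e., on the surrounding culture.
    ```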

  9. Incorporation of Electrical Systems Models Into an Existing Thermodynamic Cycle Code

    Science.gov (United States)

    Freeh, Josh

    2003-01-01

    Integration of the entire system includes: fuel cells, motors, propulsors, thermal/power management, compressors, etc. Use of existing, pre-developed NPSS capabilities includes: 1) Optimization tools; 2) Gas turbine models for hybrid systems; 3) Increased interplay between subsystems; 4) Off-design modeling capabilities; 5) Altitude effects; and 6) Existing transient modeling architecture. Other factors include: 1) Easier transfer between users and groups of users; 2) General aerospace industry acceptance and familiarity; and 3) A flexible analysis tool that can also be used for ground power applications.

  10. Phase-Field Formulation for Quantitative Modeling of Alloy Solidification

    Energy Technology Data Exchange (ETDEWEB)

    Karma, Alain

    2001-09-10

    A phase-field formulation is introduced to simulate quantitatively microstructural pattern formation in alloys. The thin-interface limit of this formulation yields a much less stringent restriction on the choice of interface thickness than previous formulations and permits one to eliminate nonequilibrium effects at the interface. Dendrite growth simulations with vanishing solid diffusivity show that both the interface evolution and the solute profile in the solid are accurately modeled by this approach.

  12. A Review on Quantitative Models for Sustainable Food Logistics Management

    Directory of Open Access Journals (Sweden)

    M. Soysal

    2012-12-01

    Full Text Available Over the last two decades, food logistics systems have seen the transition from a focus on traditional supply chain management to food supply chain management, and successively, to sustainable food supply chain management. The main aim of this study is to identify key logistical aims in these three phases and analyse currently available quantitative models to point out modelling challenges in sustainable food logistics management (SFLM). A literature review on quantitative studies is conducted, and qualitative studies are also consulted to understand the key logistical aims more clearly and to identify relevant system scope issues. Results show that research on SFLM has been progressively developing according to the needs of the food industry. However, the intrinsic characteristics of food products and processes have not yet been handled properly in the identified studies. The majority of the works reviewed have not addressed sustainability problems, apart from a few recent studies. Therefore, the study concludes that new and advanced quantitative models are needed that take specific SFLM requirements from practice into consideration to support business decisions and capture food supply chain dynamics.

  13. Modeling the Earth's radiation belts. A review of quantitative data based electron and proton models

    Science.gov (United States)

    Vette, J. I.; Teague, M. J.; Sawyer, D. M.; Chan, K. W.

    1979-01-01

    The evolution of quantitative models of the trapped radiation belts is traced to show how the knowledge of the various features has developed, or been clarified, by performing the required analysis and synthesis. The Starfish electron injection introduced problems in the time behavior of the inner zone, but this residue decayed away, and a good model of this depletion now exists. The outer zone electrons were handled statistically by a log normal distribution such that above 5 Earth radii there are no long term changes over the solar cycle. The transition region between the two zones presents the most difficulty, therefore the behavior of individual substorms as well as long term changes must be studied. The latest corrections to the electron environment based on new data are outlined. The proton models have evolved to the point where the solar cycle effect at low altitudes is included. Trends for new models are discussed; the feasibility of predicting substorm injections and solar wind high-speed streams makes the modeling of individual events a topical activity.

  14. QuantUM: Quantitative Safety Analysis of UML Models

    Directory of Open Access Journals (Sweden)

    Florian Leitner-Fischer

    2011-07-01

    Full Text Available When developing a safety-critical system it is essential to obtain an assessment of different design alternatives. In particular, an early safety assessment of the architectural design of a system is desirable. In spite of the plethora of available formal quantitative analysis methods it is still difficult for software and system architects to integrate these techniques into their every day work. This is mainly due to the lack of methods that can be directly applied to architecture level models, for instance given as UML diagrams. Also, it is necessary that the description methods used do not require a profound knowledge of formal methods. Our approach bridges this gap and improves the integration of quantitative safety analysis methods into the development process. All inputs of the analysis are specified at the level of a UML model. This model is then automatically translated into the analysis model, and the results of the analysis are consequently represented on the level of the UML model. Thus the analysis model and the formal methods used during the analysis are hidden from the user. We illustrate the usefulness of our approach using an industrial strength case study.

  15. Quantitative magnetospheric models derived from spacecraft magnetometer data

    Science.gov (United States)

    Mead, G. D.; Fairfield, D. H.

    1973-01-01

    Quantitative models of the external magnetospheric field were derived by making least-squares fits to magnetic field measurements from four IMP satellites. The data were fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the models contain the effects of seasonal north-south asymmetries. The expansions are divergence-free, but unlike the usual scalar potential expansions, the models contain a nonzero curl representing currents distributed within the magnetosphere. Characteristics of four models are presented, representing different degrees of magnetic disturbance as determined by the range of Kp values. The latitude at the earth separating open polar cap field lines from field lines closing on the dayside is about 5 deg lower than that determined by previous theoretically-derived models. At times of high Kp, additional high latitude field lines are drawn back into the tail.

  16. The methodology for the existing complex pneumatic systems efficiency increase with the use of mathematical modeling

    Science.gov (United States)

    Danilishin, A. M.; Kartashov, S. V.; Kozhukhov, Y. V.; Kozin, E. G.

    2017-08-01

    A method for increasing the efficiency of existing complex pneumatic systems has been developed, comprising survey steps, mathematical modeling of the technological process, optimization of the pneumatic system configuration and its operating modes, and selection of optimal compressor units and additional equipment. Practical application of the methodology is illustrated by the reconstruction of the pneumatic system of an existing underground depot. The first stage of the methodology is a survey of the operating pneumatic system. The second stage is multivariable mathematical modeling of the pneumatic system operation. The developed methodology is applicable to complex pneumatic systems.

  17. Quantitative modeling of transcription factor binding specificities using DNA shape.

    Science.gov (United States)

    Zhou, Tianyin; Shen, Ning; Yang, Lin; Abe, Namiko; Horton, John; Mann, Richard S; Bussemaker, Harmen J; Gordân, Raluca; Rohs, Remo

    2015-04-14

    DNA binding specificities of transcription factors (TFs) are a key component of gene regulatory processes. Underlying mechanisms that explain the highly specific binding of TFs to their genomic target sites are poorly understood. A better understanding of TF-DNA binding requires the ability to quantitatively model TF binding to accessible DNA as its basic step, before additional in vivo components can be considered. Traditionally, these models were built based on nucleotide sequence. Here, we integrated 3D DNA shape information derived with a high-throughput approach into the modeling of TF binding specificities. Using support vector regression, we trained quantitative models of TF binding specificity based on protein binding microarray (PBM) data for 68 mammalian TFs. The evaluation of our models included cross-validation on specific PBM array designs, testing across different PBM array designs, and using PBM-trained models to predict relative binding affinities derived from in vitro selection combined with deep sequencing (SELEX-seq). Our results showed that shape-augmented models compared favorably to sequence-based models. Although both k-mer and DNA shape features can encode interdependencies between nucleotide positions of the binding site, using DNA shape features reduced the dimensionality of the feature space. In addition, analyzing the feature weights of DNA shape-augmented models uncovered TF family-specific structural readout mechanisms that were not revealed by the DNA sequence. As such, this work combines knowledge from structural biology and genomics, and suggests a new path toward understanding TF binding and genome function.
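
    A schematic of the modelling recipe (support vector regression on sequence features, optionally augmented with shape features). The toy data below substitute random sequences and random "shape" columns for the real PBM measurements and high-throughput shape predictions used in the paper:

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    BASES = "ACGT"

    def onehot(seq):
        """1-mer one-hot encoding of a DNA sequence."""
        return np.array([b == x for b in seq for x in BASES], dtype=float)

    # Toy stand-in for PBM data: random 10-mers with a synthetic binding signal.
    seqs = ["".join(rng.choice(list(BASES), 10)) for _ in range(500)]
    X_seq = np.array([onehot(s) for s in seqs])
    # Placeholder "shape" features (real models use minor groove width, roll,
    # propeller twist, and helix twist derived from sequence).
    X_shape = rng.normal(size=(500, 8))
    y = X_seq @ rng.normal(size=40) + 0.5 * X_shape[:, 0] + 0.1 * rng.normal(size=500)

    for name, X in [("sequence only", X_seq),
                    ("sequence + shape", np.hstack([X_seq, X_shape]))]:
        r2 = cross_val_score(SVR(kernel="rbf", C=10.0), X, y, cv=5, scoring="r2")
        print(f"{name:>17}: mean R^2 = {r2.mean():.2f}")
    ```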

  18. Existence and uniqueness of solutions from the LEAP equilibrium energy-economy model

    Energy Technology Data Exchange (ETDEWEB)

    Oblow, E.M.

    1982-10-01

    A study was made of the existence and uniqueness of solutions to the long-range, energy-economy model LEAP. The code is a large scale, long-range (50 year) equilibrium model of energy supply and demand in the US economy used for government and industrial forecasting. The study focused on the two features which distinguish LEAP from other equilibrium models - the treatment of product allocation and basic conversion of materials into an energy end product. Both allocation and conversion processes are modeled in a behavioral fashion which differs from classical economic paradigms. The results of the study indicate that while LEAP contains desirable behavioral features, these same features can give rise to non-uniqueness in the solution of allocation and conversion process equations. Conditions under which existence and uniqueness of solutions might not occur are developed in detail and their impact in practical applications are discussed.

  19. Identifying best existing practice for characterization modeling in life cycle impact assessment

    DEFF Research Database (Denmark)

    Hauschild, Michael Zwicky; Goedkoop, Mark; Guinée, Jeroen

    2013-01-01

    identification of the best among the existing characterization models. If the identified model was of sufficient quality, it was recommended by the JRC. The analysis and recommendation process involved hearings of both scientific experts and stakeholders. Results and recommendations: Recommendations were developed...... continents and still support aggregation of impact scores over the whole life cycle. For the impact categories human toxicity and ecotoxicity, we are now able to recommend a model, but the number of chemical substances in common use is so high that there is a need to address the substance data shortage...... and impact. The LCA standard ISO 14044 is rather general and unspecific in its requirements and offers little help to the LCA practitioner who needs to make a choice. With the aim to identify the best among existing characterization models and provide recommendations to the LCA practitioner, a study...

  20. Existence of standard models of conic fibrations over non-algebraically-closed fields

    Energy Technology Data Exchange (ETDEWEB)

    Avilov, A A [National Research University "Higher School of Economics", Moscow (Russian Federation)]

    2014-12-31

    We prove an analogue of Sarkisov's theorem on the existence of a standard model of a conic fibration over an algebraically closed field of characteristic different from two for three-dimensional conic fibrations over an arbitrary field of characteristic zero with an action of a finite group. Bibliography: 16 titles.

  1. Leveraging an existing data warehouse to annotate workflow models for operations research and optimization.

    Science.gov (United States)

    Borlawsky, Tara; LaFountain, Jeanne; Petty, Lynda; Saltz, Joel H; Payne, Philip R O

    2008-11-06

    Workflow analysis is frequently performed in the context of operations research and process optimization. In order to develop a data-driven workflow model that can be employed to assess opportunities to improve the efficiency of perioperative care teams at The Ohio State University Medical Center (OSUMC), we have developed a method for integrating standard workflow modeling formalisms, such as UML activity diagrams with data-centric annotations derived from our existing data warehouse.

  2. Clinical prediction in defined populations: a simulation study investigating when and how to aggregate existing models

    Directory of Open Access Journals (Sweden)

    Glen P. Martin

    2017-01-01

    Full Text Available Abstract Background Clinical prediction models (CPMs) are increasingly deployed to support healthcare decisions but they are derived inconsistently, in part due to limited data. An emerging alternative is to aggregate existing CPMs developed for similar settings and outcomes. This simulation study aimed to investigate the impact of between-population-heterogeneity and sample size on aggregating existing CPMs in a defined population, compared with developing a model de novo. Methods Simulations were designed to mimic a scenario in which multiple CPMs for a binary outcome had been derived in distinct, heterogeneous populations, with potentially different predictors available in each. We then generated a new ‘local’ population and compared the performance of CPMs developed for this population by aggregation, using stacked regression, principal component analysis or partial least squares, with redevelopment from scratch using backwards selection and penalised regression. Results While redevelopment approaches resulted in models that were miscalibrated for local datasets of less than 500 observations, model aggregation methods were well calibrated across all simulation scenarios. When the size of local data was less than 1000 observations and between-population-heterogeneity was small, aggregating existing CPMs gave better discrimination and had the lowest mean square error in the predicted risks compared with deriving a new model. Conversely, given greater than 1000 observations and significant between-population-heterogeneity, then redevelopment outperformed the aggregation approaches. In all other scenarios, both aggregation and de novo derivation resulted in similar predictive performance. Conclusion This study demonstrates a pragmatic approach to contextualising CPMs to defined populations. When aiming to develop models in defined populations, modellers should consider existing CPMs, with aggregation approaches being a suitable modelling
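
    One of the aggregation options named here, stacked regression, can be sketched as fitting a recalibrating regression of the local outcome on the existing models' linear predictors. The simulated CPMs and local sample below are placeholders:

    ```python
    import numpy as np
    from scipy.special import expit
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)

    # Simulated "local" sample plus three pre-existing CPMs whose coefficients
    # are perturbed to mimic between-population heterogeneity (all values toy).
    n, p = 400, 5
    X = rng.normal(size=(n, p))
    beta_true = np.array([1.0, -0.8, 0.5, 0.0, 0.3])
    y = rng.binomial(1, expit(X @ beta_true))

    cpms = [beta_true + rng.normal(scale=s, size=p) for s in (0.2, 0.4, 0.6)]
    # Each existing CPM's linear predictor on the local data becomes a feature.
    Z = np.column_stack([X @ b for b in cpms])

    stack = LogisticRegression().fit(Z, y)
    print("stacking coefficients:", np.round(stack.coef_.ravel(), 2))
    print("intercept (recalibration):", round(stack.intercept_[0], 2))
    ```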

  3. Using Mathematical Modeling and Set-Based Design Principles to Recommend an Existing CVL Design

    Science.gov (United States)

    2017-09-01

    [Report front matter and references fragment.] USING MATHEMATICAL MODELING AND SET-BASED DESIGN PRINCIPLES TO RECOMMEND AN EXISTING CVL DESIGN, Master's thesis by William H. Ehlies, September 2017. Cited: "What Is Set Based Design?" ASNE Naval Engineers Journal 121, no. 4 (2009): 31–43.

  4. Model for Quantitative Evaluation of Enzyme Replacement Treatment

    Directory of Open Access Journals (Sweden)

    Radeva B.

    2009-12-01

    Full Text Available Gaucher disease is the most frequent lysosomal disorder. Its enzyme replacement treatment was the new progress of modern biotechnology, successfully used in the last years. The evaluation of optimal dose of each patient is important due to health and economical reasons. The enzyme replacement is the most expensive treatment. It must be held continuously and without interruption. Since 2001, the enzyme replacement therapy with Cerezyme*Genzyme was formally introduced in Bulgaria, but after some time it was interrupted for 1-2 months. The dose of the patients was not optimal. The aim of our work is to find a mathematical model for quantitative evaluation of ERT of Gaucher disease. The model applies a kind of software called "Statistika 6" via the input of the individual data of 5-year-old children having the Gaucher disease treated with Cerezyme. The output results of the model gave possibilities for quantitative evaluation of the individual trends in the development of the disease of each child and its correlation. On the basis of this results, we might recommend suitable changes in ERT.

  5. Uncertainty Model For Quantitative Precipitation Estimation Using Weather Radars

    Directory of Open Access Journals (Sweden)

    Ernesto Gómez Vargas

    2016-06-01

    Full Text Available This paper introduces an uncertainty model for the quantitative estimation of precipitation using weather radars. The model considers various key aspects associated with radar calibration, attenuation, and the tradeoff between accuracy and radar coverage. An S-band-radar case study is presented to illustrate particular fractional-uncertainty calculations obtained to adjust various typical radar-calibration elements such as antenna, transmitter, receiver, and some other general elements included in the radar equation. This paper is based on the “Guide to the Expression of Uncertainty in Measurement”, and the results show that the fractional uncertainty calculated by the model was 40% for the reflectivity and 30% for the precipitation using the Marshall-Palmer Z-R relationship.
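
    The Marshall-Palmer relationship cited in the abstract makes the propagation step easy to illustrate: for Z = aR^b, a fractional reflectivity uncertainty dZ/Z maps to (1/b)·dZ/Z in rain rate. The sketch below uses the abstract's 40% reflectivity figure and is a simplified GUM-style calculation, not the paper's full uncertainty budget:

    ```python
    # Marshall-Palmer relationship Z = a * R^b with a=200, b=1.6. Inverting,
    # R = (Z/a)^(1/b), so the fractional uncertainty propagates as
    #   dR/R = (1/b) * dZ/Z.
    a, b = 200.0, 1.6

    def rain_rate(Z_linear):
        """Rain rate (mm/h) from linear reflectivity (mm^6/m^3)."""
        return (Z_linear / a) ** (1.0 / b)

    dZ_over_Z = 0.40             # reflectivity uncertainty from the abstract
    dR_over_R = dZ_over_Z / b

    Z = 10 ** (30 / 10.0)        # 30 dBZ converted to linear units
    R = rain_rate(Z)
    print(f"R = {R:.1f} mm/h  +/- {100 * dR_over_R:.0f}% (fractional)")
    ```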

  6. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  7. Quantitative modeling of a gene's expression from its intergenic sequence.

    Science.gov (United States)

    Samee, Md Abul Hassan; Sinha, Saurabh

    2014-03-01

    Modeling a gene's expression from its intergenic locus and trans-regulatory context is a fundamental goal in computational biology. Owing to the distributed nature of cis-regulatory information and the poorly understood mechanisms that integrate such information, gene locus modeling is a more challenging task than modeling individual enhancers. Here we report the first quantitative model of a gene's expression pattern as a function of its locus. We model the expression readout of a locus in two tiers: 1) combinatorial regulation by transcription factors bound to each enhancer is predicted by a thermodynamics-based model and 2) independent contributions from multiple enhancers are linearly combined to fit the gene expression pattern. The model does not require any prior knowledge about enhancers contributing toward a gene's expression. We demonstrate that the model captures the complex multi-domain expression patterns of anterior-posterior patterning genes in the early Drosophila embryo. Altogether, we model the expression patterns of 27 genes; these include several gap genes, pair-rule genes, and anterior, posterior, trunk, and terminal genes. We find that the model-selected enhancers for each gene overlap strongly with its experimentally characterized enhancers. Our findings also suggest the presence of sequence-segments in the locus that would contribute ectopic expression patterns and hence were "shut down" by the model. We applied our model to identify the transcription factors responsible for forming the stripe boundaries of the studied genes. The resulting network of regulatory interactions exhibits a high level of agreement with known regulatory influences on the target genes. Finally, we analyzed whether and why our assumption of enhancer independence was necessary for the genes we studied. We found a deterioration of expression when binding sites in one enhancer were allowed to influence the readout of another enhancer. Thus, interference between enhancer
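
    The two tiers can be sketched directly: a thermodynamic occupancy term per binding site feeds a per-enhancer readout, and the gene's pattern is a weighted sum of enhancer readouts. The gradients, binding constants, site counts and weights below are all invented placeholders:

    ```python
    import numpy as np

    def occupancy(conc, K):
        """Equilibrium fractional occupancy of a binding site (thermodynamic tier)."""
        return K * conc / (1.0 + K * conc)

    # Anterior-posterior axis positions and two toy TF gradients (placeholders
    # for, e.g., an anterior activator and a posterior repressor).
    x = np.linspace(0, 1, 100)
    activator = np.exp(-5 * x)            # anterior-high gradient
    repressor = np.exp(-5 * (1 - x))      # posterior-high gradient

    def enhancer(act_sites, rep_sites, K_act=2.0, K_rep=3.0):
        """Tier 1: per-enhancer readout from site occupancies."""
        a = act_sites * occupancy(activator, K_act)
        r = rep_sites * occupancy(repressor, K_rep)
        return 1.0 / (1.0 + np.exp(-(4 * a - 6 * r - 2)))   # logistic readout

    # Tier 2: the gene's pattern is a linear combination of its enhancers.
    weights = [0.7, 0.4]
    pattern = weights[0] * enhancer(3, 1) + weights[1] * enhancer(1, 4)
    print("peak expression at x =", x[np.argmax(pattern)])
    ```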

  8. Existence of Torsional Solitons in a Beam Model of Suspension Bridge

    Science.gov (United States)

    Benci, Vieri; Fortunato, Donato; Gazzola, Filippo

    2017-11-01

    This paper studies the existence of solitons, namely stable solitary waves, in an idealized suspension bridge. The bridge is modeled as an unbounded degenerate plate, that is, a central beam with cross sections, and displays two degrees of freedom: the vertical displacement of the beam and the torsional angles of the cross sections. Under fairly general assumptions, we prove the existence of solitons. Under the additional assumption of large tension in the sustaining cables, we prove that these solitons have a nontrivial torsional component. This appears relevant for security since several suspension bridges collapsed due to torsional oscillations.

  9. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    Science.gov (United States)

    Cobbs, Gary

    2012-08-16

    Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of
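
    A much-simplified stand-in for the stepwise idea: instead of the paper's analytic annealing-equilibrium solutions, the per-cycle efficiency below is a simple primer-limited saturation term, which still reproduces the exponential-then-plateau shape and a threshold cycle that depends on initial target concentration:

    ```python
    import numpy as np

    def qpcr_curve(T0, P0=5e13, n_cycles=40, k=1.0):
        """Stepwise qPCR sketch with per-cycle efficiency from primer availability.

        T0: initial target copies; P0: initial primer copies. The efficiency
        rule (primer-limited saturation) is a simplified stand-in for the
        paper's analytic annealing-equilibrium solutions.
        """
        T, P, curve = float(T0), float(P0), []
        for _ in range(n_cycles):
            eff = P / (P + k * T)      # efficiency drops as primers deplete
            new = eff * T
            T += new
            P = max(P - 2 * new, 0.0)  # two primers consumed per new amplicon
            curve.append(T)
        return np.array(curve)

    for T0 in (1e3, 1e5, 1e7):
        curve = qpcr_curve(T0)
        ct = int(np.argmax(curve > 1e12)) + 1   # crude Ct: threshold crossing
        print(f"T0 = {T0:.0e}: Ct ~ {ct}")
    ```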

  11. Quantitative identification of technological discontinuities using simulation modeling

    CERN Document Server

    Park, Hyunseok

    2016-01-01

    The aim of this paper is to develop and test metrics to quantitatively identify technological discontinuities in a knowledge network. We developed five metrics based on innovation theories and tested the metrics on a simulation-model-based knowledge network with a hypothetically designed discontinuity. The designed discontinuity is modeled as a node which combines two different knowledge streams and whose knowledge is dominantly persistent in the knowledge network. The performances of the proposed metrics were evaluated by how well the metrics can distinguish the designed discontinuity from other nodes on the knowledge network. The simulation results show that persistence × the number of converging main paths provides the best performance in identifying the designed discontinuity: the designed discontinuity was identified as one of the top 3 patents with 96~99% probability by Metric 5 and it is, according to the size of a domain, 12~34% better than the performance of the second best metric. Beyond the simulation ...

  12. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  13. Quantitative modeling of the ionospheric response to geomagnetic activity

    Directory of Open Access Journals (Sweden)

    T. J. Fuller-Rowell

    Full Text Available A physical model of the coupled thermosphere and ionosphere has been used to determine the accuracy of model predictions of the ionospheric response to geomagnetic activity, and assess our understanding of the physical processes. The physical model is driven by empirical descriptions of the high-latitude electric field and auroral precipitation, as measures of the strength of the magnetospheric sources of energy and momentum to the upper atmosphere. Both sources are keyed to the time-dependent TIROS/NOAA auroral power index. The output of the model is the departure of the ionospheric F region from the normal climatological mean. A 50-day interval towards the end of 1997 has been simulated with the model for two cases. The first simulation uses only the electric fields and auroral forcing from the empirical models, and the second has an additional source of random electric field variability. In both cases, output from the physical model is compared with F-region data from ionosonde stations. Quantitative model/data comparisons have been performed to move beyond the conventional "visual" scientific assessment, in order to determine the value of the predictions for operational use. For this study, the ionosphere at two ionosonde stations has been studied in depth, one each from the northern and southern mid-latitudes. The model clearly captures the seasonal dependence in the ionospheric response to geomagnetic activity at mid-latitude, reproducing the tendency for decreased ion density in the summer hemisphere and increased densities in winter. In contrast to the "visual" success of the model, the detailed quantitative comparisons, which are necessary for space weather applications, are less impressive. The accuracy, or value, of the model has been quantified by evaluating the daily standard deviation, the root-mean-square error, and the correlation coefficient between the data and model predictions. The modeled quiet-time variability, or standard
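
    The accuracy metrics named at the end of the abstract are straightforward to compute for any paired model/data series; a sketch with synthetic deviations from the climatological mean:

    ```python
    import numpy as np

    def skill_scores(observed, modeled):
        """RMSE, correlation, and standard deviations: the accuracy metrics
        named in the abstract for model/data comparison."""
        err = modeled - observed
        return {
            "rmse": float(np.sqrt(np.mean(err ** 2))),
            "corr": float(np.corrcoef(observed, modeled)[0, 1]),
            "std_obs": float(np.std(observed, ddof=1)),
            "std_mod": float(np.std(modeled, ddof=1)),
        }

    # Synthetic F-region deviations from the climatological mean (toy values).
    rng = np.random.default_rng(3)
    obs = rng.normal(0, 1, 50 * 24)                  # 50 days of hourly deviations
    mod = 0.6 * obs + rng.normal(0, 0.8, obs.size)   # model captures part of it
    print(skill_scores(obs, mod))
    ```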

  14. Exploiting linkage disequilibrium in statistical modelling in quantitative genomics

    DEFF Research Database (Denmark)

    Wang, Lei

    Alleles at two loci are said to be in linkage disequilibrium (LD) when they are correlated or statistically dependent. Genomic prediction and gene mapping rely on the existence of LD between genetic markers and causal variants of complex traits. In the first part of the thesis, a novel method...... the recently proposed antedependence models, which treat neighbouring marker effects as correlated; another approach involves use of haplotype block information derived using the program Beagle. The overall conclusion is that taking LD information into account in genomic prediction models potentially improves...

  15. A method to create simplified versions of existing habitat suitability index (HSI) models

    Science.gov (United States)

    Wakeley, James S.

    1988-01-01

    The habitat evaluation procedures (HEP), developed by the US Fish and Wildlife Service, are widely used in the United States to determine the impacts of major construction projects on fish and wildlife habitats. HEP relies heavily on habitat suitability index (HSI) models that use measurements of important habitat characteristics to rate habitat quality for a species on a scale of 0 (unsuitable) to 1.0 (optimal). This report describes a method to simplify existing HSI models to reduce the time and expense involved in sampling habitat variables. Simplified models for three species produced HSI values within 0.2 of those predicted by the original models 90% of the time. Simplified models are particularly useful for rapid habitat inventories and evaluations, wildlife management, and impact assessments in extensive areas or with limited time and personnel.

  16. Automated quantitative gait analysis in animal models of movement disorders

    Directory of Open Access Journals (Sweden)

    Vandeputte Caroline

    2010-08-01

    Full Text Available Abstract Background Accurate and reproducible behavioral tests in animal models are of major importance in the development and evaluation of new therapies for central nervous system disease. In this study we investigated for the first time gait parameters of rat models for Parkinson's disease (PD), Huntington's disease (HD) and stroke using the Catwalk method, a novel automated gait analysis test. Static and dynamic gait parameters were measured in all animal models, and these data were compared to readouts of established behavioral tests, such as the cylinder test in the PD and stroke rats and the rotarod tests for the HD group. Results Hemiparkinsonian rats were generated by unilateral injection of the neurotoxin 6-hydroxydopamine in the striatum or in the medial forebrain bundle. For Huntington's disease, a transgenic rat model expressing a truncated huntingtin fragment with multiple CAG repeats was used. Thirdly, a stroke model was generated by a photothrombotic induced infarct in the right sensorimotor cortex. We found that multiple gait parameters were significantly altered in all three disease models compared to their respective controls. Behavioural deficits could be efficiently measured using the cylinder test in the PD and stroke animals, and in the case of the PD model, the deficits in gait essentially confirmed results obtained by the cylinder test. However, in the HD model and the stroke model the Catwalk analysis proved more sensitive than the rotarod test and also added new and more detailed information on specific gait parameters. Conclusion The automated quantitative gait analysis test may be a useful tool to study both motor impairment and recovery associated with various neurological motor disorders.

  17. On the existence of accessibility in a tree-indexed percolation model

    Science.gov (United States)

    Coletti, Cristian F.; Gava, Renato J.; Rodríguez, Pablo M.

    2018-02-01

    We study the accessibility percolation model on infinite trees. The model is defined by associating an absolutely continuous random variable X_v to each vertex v of the tree. The main question to be considered is the existence or not of an infinite path of nearest neighbors v1, v2, v3, … such that X_{v1} < X_{v2} < X_{v3} < ⋯; the event defined by the existence of such a path is called percolation. We consider the case of the accessibility percolation model on a spherically symmetric tree with growth function given by f(i) = ⌈(i + 1)^α⌉, where α > 0 is a given constant. We show that there is a percolation threshold at α_c = 1 such that there is percolation if α > 1 and there is absence of percolation if α ≤ 1. Moreover, we study the event of percolation starting at any vertex, as well as the continuity of the percolation probability function. Finally, we provide a comparison between this model and the well known F_α record model. We also discuss a number of open problems concerning the accessibility percolation model for further consideration in future research.
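
    The threshold at α_c = 1 can be probed numerically: the sketch below tracks, level by level, the accessible vertices (those reachable by an increasing path from the root) on a spherically symmetric tree with f(i) = ⌈(i+1)^α⌉ children at level i. Depth and population caps make it a heuristic check only:

    ```python
    import math
    import random

    def survives(alpha, depth=30, cap=10_000, rng=random):
        """One Monte Carlo run: does an increasing path from the root survive
        `depth` levels on the tree with f(i) = ceil((i+1)**alpha) children at
        level i? Depth- and population-capped, so only suggestive of the truth."""
        frontier = [rng.random()]          # values of currently accessible vertices
        for i in range(depth):
            children = math.ceil((i + 1) ** alpha)
            nxt = []
            for x in frontier:
                for _ in range(children):
                    u = rng.random()
                    if u > x:              # accessible: value increases along path
                        nxt.append(u)
                if len(nxt) > cap:
                    return True            # population exploding: call it survival
            if not nxt:
                return False
            frontier = nxt
        return True

    for alpha in (0.8, 1.0, 1.5):
        runs = 50
        p = sum(survives(alpha) for _ in range(runs)) / runs
        print(f"alpha = {alpha}: survival fraction to depth 30 ~ {p:.2f}")
    ```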

  18. Studies on the Existence of Unstable Oscillatory Patterns Bifurcating from Hopf Bifurcations in a Turing Model

    Directory of Open Access Journals (Sweden)

    Yan Zhang

    2014-01-01

    Full Text Available We revisit a homogeneous reaction-diffusion Turing model subject to the Neumann boundary conditions in the one-dimensional spatial domain. With the help of the Hopf bifurcation theory applicable to the reaction-diffusion equations, we are capable of proving the existence of Hopf bifurcations, which suggests the existence of spatially homogeneous and nonhomogeneous periodic solutions of this particular system. In particular, we also prove that the spatial homogeneous periodic solutions bifurcating from the smallest Hopf bifurcation point of the system are always unstable. This together with the instability results of the spatially nonhomogeneous periodic solutions by Yi et al., 2009, indicates that, in this model, all the oscillatory patterns from Hopf bifurcations are unstable.

  19. Predictive Modeling of Marine Mammal Density from Existing Survey Data and Model Validation Using Upcoming Surveys

    Science.gov (United States)

    2009-05-01

    [Report fragments: variogram model results; range of annual sample sizes; density estimates along with estimates of their uncertainty.] Although our models include most of the species found in the CCE and the ETP, sample sizes were … describing the variance seen in cetacean encounter rates. We developed new methods to estimate the uncertainty in cetacean density estimates based on …

  20. An overview of existing modeling tools making use of model checking in the analysis of biochemical networks.

    Science.gov (United States)

    Carrillo, Miguel; Góngora, Pedro A; Rosenblueth, David A

    2012-01-01

    Model checking is a well-established technique for automatically verifying complex systems. Recently, model checkers have appeared in computer tools for the analysis of biochemical (and gene regulatory) networks. We survey several such tools to assess the potential of model checking in computational biology. Next, our overview focuses on direct applications of existing model checkers, as well as on algorithms for biochemical network analysis influenced by model checking, such as those using binary decision diagrams (BDDs) or Boolean-satisfiability solvers. We conclude with advantages and drawbacks of model checking for the analysis of biochemical networks.

  1. Combining existing numerical models with data assimilation using weighted least-squares finite element methods.

    Science.gov (United States)

    Rajaraman, Prathish K; Manteuffel, T A; Belohlavek, M; Heys, Jeffrey J

    2017-01-01

    A new approach has been developed for combining and enhancing the results from an existing computational fluid dynamics model with experimental data using the weighted least-squares finite element method (WLSFEM). Development of the approach was motivated by the existence of both limited experimental blood velocity data in the left ventricle and inexact numerical models of the same flow. Limitations of the experimental data include measurement noise and having data only along a two-dimensional plane. Most numerical modeling approaches do not provide the flexibility to assimilate noisy experimental data. We previously developed an approach that could assimilate experimental data into the process of numerically solving the Navier-Stokes equations, but the approach was limited because it required the use of specific finite element methods for solving all model equations and did not support alternative numerical approximation methods. The new approach presented here allows virtually any numerical method to be used for approximately solving the Navier-Stokes equations, and then the WLSFEM is used to combine the experimental data with the numerical solution of the model equations in a final step. The approach dynamically adjusts the influence of the experimental data on the numerical solution so that more accurate data are more closely matched by the final solution and less accurate data are not closely matched. The new approach is demonstrated on different test problems and provides significantly reduced computational costs compared with many previous methods for data assimilation. Copyright © 2016 John Wiley & Sons, Ltd.
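
    The weighted least-squares idea at the core of the approach can be illustrated with a deliberately simple sketch: blend a numerical solution with sparse noisy measurements, weighting each term by its estimated accuracy. The 1-D field, noise level, and measurement locations below are invented for illustration; this is not the authors' finite element implementation.

        import numpy as np

        n = 50
        u_num = np.sin(np.linspace(0.0, np.pi, n))        # stand-in for a CFD solution
        idx = np.array([5, 20, 35])                       # measurement locations
        rng = np.random.default_rng(42)
        d = u_num[idx] + rng.normal(0.0, 0.05, idx.size)  # noisy "experimental" data
        w = 1.0 / 0.05**2                                 # weight ~ inverse noise variance

        # Minimize ||u - u_num||^2 + w * ||u[idx] - d||^2; the system is diagonal,
        # so the minimizer is available componentwise.
        u = u_num.copy()
        u[idx] = (u_num[idx] + w * d) / (1.0 + w)

    More accurate data (larger w) pull the blended field closer to the measurements, mirroring the dynamic weighting described above.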

  2. Global existence of solutions to a tear film model with locally elevated evaporation rates

    Science.gov (United States)

    Gao, Yuan; Ji, Hangjie; Liu, Jian-Guo; Witelski, Thomas P.

    2017-07-01

    Motivated by a model proposed by Peng et al. (2014) for break-up of tear films on human eyes, we study the dynamics of a generalized thin film model. The governing equations form a fourth-order coupled system of nonlinear parabolic PDEs for the film thickness and salt concentration subject to non-conservative effects representing evaporation. We analytically prove the global existence of solutions to this model with mobility exponents in several different ranges and present numerical simulations that are in agreement with the analytic results. We also numerically capture other interesting dynamics of the model, including finite-time rupture-shock phenomenon due to the instabilities caused by locally elevated evaporation rates, convergence to equilibrium and infinite-time thinning.

  3. Quantitative Modeling of Human-Environment Interactions in Preindustrial Time

    Science.gov (United States)

    Sommer, Philipp S.; Kaplan, Jed O.

    2017-04-01

    Quantifying human-environment interactions and anthropogenic influences on the environment prior to the Industrial revolution is essential for understanding the current state of the earth system. This is particularly true for the terrestrial biosphere, but marine ecosystems and even climate were likely modified by human activities centuries to millennia ago. Direct observations are however very sparse in space and time, especially as one considers prehistory. Numerical models are therefore essential to produce a continuous picture of human-environment interactions in the past. Agent-based approaches, while widely applied to quantifying human influence on the environment in localized studies, are unsuitable for global spatial domains and Holocene timescales because of computational demands and large parameter uncertainty. Here we outline a new paradigm for the quantitative modeling of human-environment interactions in preindustrial time that is adapted to the global Holocene. Rather than attempting to simulate agency directly, the model is informed by a suite of characteristics describing those things about society that cannot be predicted on the basis of environment, e.g., diet, presence of agriculture, or range of animals exploited. These categorical data are combined with the properties of the physical environment in a coupled human-environment model. The model is, at its core, a dynamic global vegetation model with a module for simulating crop growth that is adapted for preindustrial agriculture. This allows us to simulate yield and calories for feeding both humans and their domesticated animals. We couple this basic caloric availability with a simple demographic model to calculate potential population, and, constrained by labor requirements and land limitations, we create scenarios of land use and land cover on a moderate-resolution grid. We further implement a feedback loop where anthropogenic activities lead to changes in the properties of the physical environment.

  4. Dental students' reflections about long-term care experiences through an existing model of oral health.

    Science.gov (United States)

    Brondani, Mario; Pattanaporn, Komkham

    2017-09-01

    The aim of this study was to explore students' reflective thinking about long-term care experiences from the perspective of a model of oral health. A total of 186 reflections from 193 second-year undergraduate dental students enrolled between 2011/12 and 2014/15 at the University of British Columbia were explored qualitatively. Reflections had a word limit of 300, and students were asked to relate an existing model of oral health to their long-term care experiences. We identified the main ideas via a thematic analysis related to the geriatric dentistry experience in long-term care. The thematic analysis revealed that students attempted to demystify their preconceived ideas about older people and long-term care facilities, to think outside the box (for example, away from a typical dental office), and to consider caring for elderly people through an interprofessional lens. According to some students, not all domains of the existing model of oral health were directly relevant to their geriatric experience, while other domains, including interprofessionalism and cognition, were missing. While some participants had a positive attitude towards caring for this cohort of the population, others did not take this educational activity as a constructive experience. The nature of most students' reflective thinking within a long-term care experience proved to be related to an existing model of oral health. This model can help to give meaning to the dental geriatric experience of an undergraduate curriculum. Such experience has been instrumental in overcoming potential misconceptions about long-term care and geriatric dentistry. © 2017 John Wiley & Sons A/S and The Gerodontology Association. Published by John Wiley & Sons Ltd.

  5. Quantitative genetic models of sexual selection by male choice.

    Science.gov (United States)

    Nakahashi, Wataru

    2008-09-01

    There are many examples of male mate choice for female traits that tend to be associated with high fertility. I develop quantitative genetic models of a female trait and a male preference to show when such a male preference can evolve. I find that a disagreement between the fertility maximum and the viability maximum of the female trait is necessary for directional male preference (preference for extreme female trait values) to evolve. Moreover, when there is a shortage of available male partners or variance in male nongenetic quality, strong male preference can evolve. Furthermore, I also show that males evolve to exhibit a stronger preference for females that are more feminine (less resemblance to males) than the average female when there is a sexual dimorphism caused by fertility selection which acts only on females.

  6. Towards real-time change detection in videos based on existing 3D models

    Science.gov (United States)

    Ruf, Boitumelo; Schuchert, Tobias

    2016-10-01

    Image-based change detection is of great importance for security applications, such as surveillance and reconnaissance, in order to find new, modified or removed objects. Such change detection can generally be performed by co-registration and comparison of two or more images. However, existing 3D objects, such as buildings, may lead to parallax artifacts in case of inaccurate or missing 3D information, which may distort the results in the image comparison process, especially when the images are acquired from aerial platforms like small unmanned aerial vehicles (UAVs). Furthermore, considering only intensity information may lead to failures in detection of changes in the 3D structure of objects. To overcome this problem, we present an approach that uses Structure-from-Motion (SfM) to compute depth information, with which a 3D change detection can be performed against an existing 3D model. Our approach is capable of performing change detection in real time. We use the input frames with the corresponding camera poses to compute dense depth maps by an image-based depth estimation algorithm. Additionally, we synthesize a second set of depth maps by rendering the existing 3D model from the same camera poses as those of the image-based depth maps. The actual change detection is performed by comparing the two sets of depth maps with each other. Our method is evaluated on synthetic test data with corresponding ground truth as well as on real image test data.
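
    The core comparison step can be sketched in a few lines, assuming two co-registered depth maps are already available (one estimated from the images, one rendered from the existing 3D model at the same pose); the tolerance value is made up for illustration.

        import numpy as np

        def detect_changes(depth_sfm, depth_model, tol=0.25):
            """Boolean mask of pixels whose depth differs by more than tol (metres)."""
            # Ignore pixels where either source has no valid depth (NaN/inf).
            valid = np.isfinite(depth_sfm) & np.isfinite(depth_model)
            return valid & (np.abs(depth_sfm - depth_model) > tol)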

  7. Thai student existing understanding about the solar system model and the motion of the stars

    Science.gov (United States)

    Anantasook, Sakanan; Yuenyong, Chokchai

    2018-01-01

    The paper examined Thai students' existing understanding of the solar system model and the motion of the stars. The participants included 141 Grade 9 students in four different schools of the Surin province, Thailand. The methodology followed an interpretive paradigm. The tools of interpretation included the Student Celestial Motion Conception Questionnaire (SCMCQ) and informal interviews. Responses given in the SCMCQ were read through and categorized according to students' understandings. Then, students were further probed in informal interviews. Students' understandings in each category were counted and percentages computed. Finally, students' understandings across the four schools were compared and contrasted using the percentage of student responses in each category. The findings revealed that most students understand the Sun-Moon-Earth (SME) system and the solar system model well, and they can use scientific explanations for the celestial objects in the solar system and how they orbit. Unfortunately, most students (more than 70%) did not know about Polaris, the North Star, and 90.1% of them did not know about the ecliptic, and probably also the 12 zodiac constellations. These existing understandings suggested some ideas for teaching and learning about the solar system model and the motion of the stars. The paper then discusses some learning activities to enhance students' further construction of meaning about the solar system model and the motion of the stars.

  8. Quantitative Modelling of Trace Elements in Hard Coal.

    Science.gov (United States)

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models constructed. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools for coal quality assessment.
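
    A minimal sketch of the modelling step with scikit-learn, using synthetic stand-ins for the 132 samples and 24 parameters; the robust PLS variant and the outlier identification described in the paper are omitted, and the component count is a guess.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(132, 24))                           # samples x parameters
        y = X @ rng.normal(size=24) + rng.normal(0.0, 0.5, 132)  # synthetic trace-element content

        pls = PLSRegression(n_components=5)
        y_cv = cross_val_predict(pls, X, y, cv=10).ravel()       # 10-fold cross-validation
        rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))               # cf. the paper's RMSECV criterion
        print(f"RMSECV: {rmsecv:.3f}")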

  9. Quantitative Genetics Model as the Unifying Model for Defining Genomic Relationship and Inbreeding Coefficient

    Science.gov (United States)

    Wang, Chunkao; Da, Yang

    2014-01-01

    The traditional quantitative genetics model was used as the unifying approach to derive six existing and new definitions of genomic additive and dominance relationships. The theoretical differences of these definitions were in the assumptions of equal SNP effects (equivalent to across-SNP standardization), equal SNP variances (equivalent to within-SNP standardization), and expected or sample SNP additive and dominance variances. The six definitions of genomic additive and dominance relationships on average were consistent with the pedigree relationships, but had individual genomic specificity and large variations not observed from pedigree relationships. These large variations may allow finding least related genomes even within the same family for minimizing genomic relatedness among breeding individuals. The six definitions of genomic relationships generally had similar numerical results in genomic best linear unbiased predictions of additive effects (GBLUP) and similar genomic REML (GREML) estimates of additive heritability. Predicted SNP dominance effects and GREML estimates of dominance heritability were similar within definitions assuming equal SNP effects or within definitions assuming equal SNP variance, but had differences between these two groups of definitions. We proposed a new measure of genomic inbreeding coefficient based on parental genomic co-ancestry coefficient and genomic additive correlation as a genomic approach for predicting offspring inbreeding level. This genomic inbreeding coefficient had the highest correlation with pedigree inbreeding coefficient among the four methods evaluated for calculating genomic inbreeding coefficient in a Holstein sample and a swine sample. PMID:25517971
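
    As a concrete reference point, one widely used definition of the genomic additive relationship matrix (VanRaden's first method, which standardizes across SNPs) can be computed as below; the paper's six definitions vary the standardization and variance assumptions around this kind of template.

        import numpy as np

        def genomic_relationship(M):
            """M: (individuals x SNPs) genotype matrix coded 0/1/2."""
            p = M.mean(axis=0) / 2.0       # observed allele frequencies
            Z = M - 2.0 * p                # centre each SNP by its expected genotype
            return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))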

  10. Measuring and Managing Value Co-Creation Process: Overview of Existing Theoretical Models

    Directory of Open Access Journals (Sweden)

    Monika Skaržauskaitė

    2013-08-01

    Full Text Available Purpose — the article aims to provide a holistic view of the concept of value co-creation and of existing models for measuring and managing it, by conducting a theoretical analysis of scientific literature sources targeting the integration of various approaches. The most important and relevant results of the literature study are presented with a focus on the changed roles of organizations and consumers. This article aims at contributing theoretically to the research stream of measuring co-creation of value in order to gain knowledge for the improvement of organizational performance and for enabling new and innovative means of value creation. Design/methodology/approach. The nature of this research is exploratory – a theoretical analysis and synthesis of scientific literature sources targeting the integration of various approaches was performed. This approach was chosen due to the absence of established theory on models of co-creation, their possible uses in organizations, and a systematic overview of tools measuring (or suggesting how to measure) co-creation. Findings. While the principles of managing and measuring co-creation with regard to consumer motivation and involvement are widely researched, little attempt has been made to identify critical factors and create models dealing with organizational capabilities and the managerial implications of value co-creation. Systematic analysis of the literature revealed a gap not only in empirical research concerning the organization's role in the co-creation process, but at the theoretical and conceptual levels, too. Research limitations/implications. The limitations of this work as a literature review lie in its nature – the complete reliance on previously published research papers and the availability of these studies. For a deeper understanding of co-creation management and for developing models that can be used in real-life organizations, broader theoretical, as well as empirical, research is necessary. Practical implications. Analysis of the

  11. Melanoma screening: Informing public health policy with quantitative modelling.

    Directory of Open Access Journals (Sweden)

    Stephen Gilmore

    Full Text Available Australia and New Zealand share the highest incidence rates of melanoma worldwide. Despite the substantial increase in public and physician awareness of melanoma in Australia over the last 30 years, as a result of the introduction of publicly funded mass media campaigns that began in the early 1980s, mortality has steadily increased during this period. This increased mortality has led investigators to question the relative merits of primary versus secondary prevention; that is, sensible sun exposure practices versus early detection. Increased melanoma vigilance on the part of the public and among physicians has resulted in large increases in public health expenditure, primarily from screening costs and increased rates of office surgery. Has this attempt at secondary prevention been effective? Unfortunately, epidemiologic studies addressing the causal relationship between the level of secondary prevention and mortality are prohibitively difficult to implement: it is currently unknown whether increased melanoma surveillance reduces mortality, and if so, whether such an approach is cost-effective. Here I address the issue of secondary prevention of melanoma with respect to incidence and mortality (and cost per life saved) by developing a Markov model of melanoma epidemiology based on Australian incidence and mortality data. The advantages of developing a methodology that can determine constraint-based surveillance outcomes are twofold: first, it can address the issue of effectiveness; and second, it can quantify the trade-off between cost and utilisation of medical resources on one hand, and reduced morbidity and lives saved on the other. With respect to melanoma, implementing the model facilitates the quantitative determination of the relative effectiveness and trade-offs associated with different levels of secondary and tertiary prevention, both retrospectively and prospectively. For example, I show that the surveillance enhancement that began in
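
    Computationally, a Markov model of this kind reduces to iterating a cohort distribution through a transition matrix. The states and monthly transition probabilities below are illustrative placeholders, not the paper's calibrated Australian values.

        import numpy as np

        states = ["healthy", "in_situ", "invasive", "dead"]
        P = np.array([                        # monthly transition probabilities (invented)
            [0.9990, 0.0008, 0.0000, 0.0002],
            [0.0000, 0.9800, 0.0150, 0.0050],
            [0.0000, 0.0000, 0.9700, 0.0300],
            [0.0000, 0.0000, 0.0000, 1.0000], # absorbing state
        ])
        x = np.array([1.0, 0.0, 0.0, 0.0])    # cohort starts healthy
        for _ in range(12 * 30):              # 30 years of monthly cycles
            x = x @ P
        print(dict(zip(states, np.round(x, 4))))

    Screening effects would enter by modifying the in-situ row (earlier detection shifts probability mass away from the invasive transition), which is where costs per life saved can be attached.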

  12. Quantitative Modeling of the Alternative Pathway of the Complement System.

    Science.gov (United States)

    Zewde, Nehemiah; Gorham, Ronald D; Dorado, Angel; Morikis, Dimitrios

    2016-01-01

    The complement system is an integral part of innate immunity that detects and eliminates invading pathogens through a cascade of reactions. The destructive effects of the complement activation on host cells are inhibited through versatile regulators that are present in plasma and bound to membranes. Impairment in the capacity of these regulators to function in the proper manner results in autoimmune diseases. To better understand the delicate balance between complement activation and regulation, we have developed a comprehensive quantitative model of the alternative pathway. Our model incorporates a system of ordinary differential equations that describes the dynamics of the four steps of the alternative pathway under physiological conditions: (i) initiation (fluid phase), (ii) amplification (surfaces), (iii) termination (pathogen), and (iv) regulation (host cell and fluid phase). We have examined complement activation and regulation on different surfaces, using the cellular dimensions of a characteristic bacterium (E. coli) and host cell (human erythrocyte). In addition, we have incorporated neutrophil-secreted properdin into the model highlighting the cross talk of neutrophils with the alternative pathway in coordinating innate immunity. Our study yields a series of time-dependent response data for all alternative pathway proteins, fragments, and complexes. We demonstrate the robustness of alternative pathway on the surface of pathogens in which complement components were able to saturate the entire region in about 54 minutes, while occupying less than one percent on host cells at the same time period. Our model reveals that tight regulation of complement starts in fluid phase in which propagation of the alternative pathway was inhibited through the dismantlement of fluid phase convertases. Our model also depicts the intricate role that properdin released from neutrophils plays in initiating and propagating the alternative pathway during bacterial infection.
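
    The paper's system of ODEs is far larger, but the computational pattern is the standard one shown in this toy sketch: mass-action rates integrated over time. The species names and rate constants here are hypothetical, chosen only to show an activation/regulation balance.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y, k_act, k_reg):
            c3b, reg = y
            # Amplification saturating at full coverage, opposed by regulation.
            dc3b = k_act * c3b * (1.0 - c3b) - k_reg * reg * c3b
            return [dc3b, 0.0]               # regulator level held constant

        sol = solve_ivp(rhs, (0.0, 60.0), [1e-3, 0.5], args=(0.5, 0.3))
        print(sol.y[0, -1])                  # fractional surface coverage after 60 "minutes"

    Coverage saturates where amplification balances regulation, echoing the contrast between saturation on pathogens and protection of host cells reported above.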

  13. A generalised individual-based algorithm for modelling the evolution of quantitative herbicide resistance in arable weed populations.

    Science.gov (United States)

    Liu, Chun; Bridges, Melissa E; Kaundun, Shiv S; Glasgow, Les; Owen, Micheal Dk; Neve, Paul

    2017-02-01

    Simulation models are useful tools for predicting and comparing the risk of herbicide resistance in weed populations under different management strategies. Most existing models assume a monogenic mechanism governing herbicide resistance evolution. However, growing evidence suggests that herbicide resistance is often inherited in a polygenic or quantitative fashion. Therefore, we constructed a generalised modelling framework to simulate the evolution of quantitative herbicide resistance in summer annual weeds. Real-field management parameters based on Amaranthus tuberculatus (Moq.) Sauer (syn. rudis) control with glyphosate and mesotrione in Midwestern US maize-soybean agroecosystems demonstrated that the model can represent evolved herbicide resistance in realistic timescales. Sensitivity analyses showed that genetic and management parameters had a strong impact on the rate of quantitative herbicide resistance evolution, whilst biological parameters such as emergence and seed bank mortality were less important. The simulation model provides a robust and widely applicable framework for predicting the evolution of quantitative herbicide resistance in summer annual weed populations. The sensitivity analyses identified weed characteristics that would favour herbicide resistance evolution, including high annual fecundity, large resistance phenotypic variance and pre-existing herbicide resistance. Implications for herbicide resistance management and potential uses of the model are discussed. © 2016 Society of Chemical Industry.
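
    A stripped-down individual-based sketch of quantitative resistance evolution, in which survival depends on a polygenic trait plus environmental noise and survivors mate at random. All parameters are invented, and the paper's framework is far richer (seed bank, emergence, management regimes).

        import numpy as np

        rng = np.random.default_rng(1)
        n = 10_000
        trait = rng.normal(0.0, 1.0, n)              # breeding values, generation 0
        for gen in range(10):
            phenotype = trait + rng.normal(0.0, 0.5, n)
            survivors = trait[phenotype > -1.0]      # herbicide kills below a threshold
            mothers = rng.choice(survivors, n)       # random mating among survivors
            fathers = rng.choice(survivors, n)
            trait = 0.5 * (mothers + fathers) + rng.normal(0.0, 0.5, n)
            print(gen, survivors.size / n, round(float(trait.mean()), 3))

    The mean breeding value drifts upward each generation as selection culls susceptible phenotypes; the paper's genetic and management parameters control the rate of exactly this kind of shift.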

  14. Development of an Experimental Model of Diabetes Co-Existing with Metabolic Syndrome in Rats

    Directory of Open Access Journals (Sweden)

    Rajesh Kumar Suman

    2016-01-01

    Full Text Available Background. The incidence of metabolic syndrome co-existing with diabetes mellitus is on the rise globally. Objective. The present study was designed to develop a unique animal model that will mimic the pathological features seen in individuals with diabetes and metabolic syndrome, suitable for pharmacological screening of drugs. Materials and Methods. A combination of a High-Fat Diet (HFD) and a low dose of streptozotocin (STZ) at 30, 35, and 40 mg/kg was used to induce metabolic syndrome in the setting of diabetes mellitus in Wistar rats. Results. The 40 mg/kg STZ dose produced sustained hyperglycemia and was thus selected for the study to induce diabetes mellitus. Various components of metabolic syndrome such as dyslipidemia (increased triglycerides, total cholesterol, LDL cholesterol, and decreased HDL cholesterol), diabetes mellitus (blood glucose, HbA1c, serum insulin, and C-peptide), and hypertension (systolic blood pressure) were mimicked in the developed model of metabolic syndrome co-existing with diabetes mellitus. In addition to significant cardiac injury, an increased atherogenic index, inflammation (hs-CRP), and decline in hepatic and renal function were observed in the HF-DC group when compared to the NC group rats. The histopathological assessment confirmed the presence of edema, necrosis, and inflammation in the heart, pancreas, liver, and kidney of the HF-DC group as compared to NC. Conclusion. The present study has developed a unique rodent model of metabolic syndrome, with diabetes as an essential component.

  15. A Note on the Existence of the Posteriors for One-way Random Effect Probit Models.

    Science.gov (United States)

    Lin, Xiaoyan; Sun, Dongchu

    2010-01-01

    The existence of the posterior distribution for one-way random effect probit models has been investigated when the uniform prior is applied to the overall mean and a class of noninformative priors are applied to the variance parameter. The sufficient conditions to ensure the propriety of the posterior are given for the cases with replicates at some factor levels. It is shown that the posterior distribution is never proper if there is only one observation at each factor level. For this case, however, a class of proper priors for the variance parameter can provide the necessary and sufficient conditions for the propriety of the posterior.

  16. Two phase modeling of nanofluid flow in existence of melting heat transfer by means of HAM

    Science.gov (United States)

    Sheikholeslami, M.; Jafaryar, M.; Bateni, K.; Ganji, D. D.

    2017-08-01

    In this article, the Buongiorno model is applied to investigate nanofluid flow over a stretching plate in the existence of a magnetic field. Radiation and melting heat transfer are taken into account. The homotopy analysis method (HAM) is selected to solve the ODEs which are obtained from a similarity transformation. The roles of Brownian motion, the thermophoretic parameter, the Hartmann number, the porosity parameter, the melting parameter and the Eckert number are presented graphically. Results indicate that nanofluid velocity and concentration increase with a rise in the melting parameter. The Nusselt number reduces with an increase of the porosity and melting parameters.

  17. Existence Results for a Michaud Fractional, Nonlocal, and Randomly Position Structured Fragmentation Model

    Directory of Open Access Journals (Sweden)

    Emile Franc Doungmo Goufo

    2014-01-01

    Full Text Available Until now, classical models of clusters' fission remain unable to fully explain strange phenomena like the phenomenon of shattering (Ziff and McGrady, 1987) and the sudden appearance of infinitely many particles in some systems having an initial finite number of particles. That is why there is a need to extend classical models to models with fractional derivative order and to use new and various techniques to analyze them. In this paper, we prove the existence of strongly continuous solution operators for nonlocal fragmentation models with Michaud time derivative of fractional order (Samko et al., 1993). We focus on the case where the splitting rate is dependent on size and position and where new particles generated from fragmentation are distributed in space randomly according to some probability density. In the analysis, we make use of the substochastic semigroup theory, the subordination principle for differential equations of fractional order (Prüss, 1993; Bazhlekova, 2000), the analogy of the Hille-Yosida theorem for the fractional model (Prüss, 1993), and useful properties of the Mittag-Leffler relaxation function (Berberan-Santos, 2005). We are then able to show that the solution operator to the full model is positive and contractive.

  18. Humanization of pediatric care in the world: focus and review of existing models and measurement tools.

    Science.gov (United States)

    Tripodi, Marina; Siano, Maria Anna; Mandato, Claudia; De Anseris, Anna Giulia Elena; Quitadamo, Paolo; Guercio Nuzio, Salvatore; Viggiano, Claudia; Fasolino, Francesco; Bellopede, Annalisa; Annunziata, Maria; Massa, Grazia; Pepe, Francesco Maria; De Chiara, Maria; Siani, Paolo; Vajro, Pietro

    2017-08-30

    The term "humanization" indicates the process by which people try to make something more human and civilized, more in line with what is believed to be the human nature. The humanization of care is an important and not yet a well-defined issue which includes a wide range of aspects related to the approach to the patient and care modalities. In pediatrics, the humanization concept is even vaguer due to the dual involvement of both the child and his/her family and by the existence of multiple proposed models. The present study aims to analyze the main existing humanization models regarding pediatric care, and the tools for assessing its grade. The main Humanization care programs have been elaborated and developed both in America (Brazil, USA) and Europe. The North American and European models specifically concern pediatric care, while the model developed in Brazil is part of a broader program aimed at all age groups. The first emphasis is on the importance of the family in child care, the second emphasis is on the child's right to be a leader, to be heard and to be able to express its opinion on the program's own care. Several tools have been created and used to evaluate humanization of care programs and related aspects. None, however, had been mutually compared. The major models of humanization care and the related assessment tools here reviewed highlight the urgent need for a more unifying approach, which may help in realizing health care programs closer to the young patient's and his/her family needs.

  19. Canards Existence in FitzHugh-Nagumo and Hodgkin-Huxley Neuronal Models

    Directory of Open Access Journals (Sweden)

    Jean-Marc Ginoux

    2015-01-01

    Full Text Available In a previous paper we proposed a new method for proving the existence of "canard solutions" for three- and four-dimensional singularly perturbed systems with only one fast variable which improves the methods used until now. The aim of this work is to extend this method to the case of four-dimensional singularly perturbed systems with two slow and two fast variables. This method enables stating a unique generic condition for the existence of "canard solutions" for such four-dimensional singularly perturbed systems which is based on the stability of folded singularities (pseudo singular points in this case) of the normalized slow dynamics deduced from a well-known property of linear algebra. This unique generic condition is identical to that provided in previous works. Application of this method to the famous coupled FitzHugh-Nagumo equations and to the Hodgkin-Huxley model enables showing the existence of "canard solutions" in such systems.

  20. Multifocality and recurrence risk: a quantitative model of field cancerization.

    Science.gov (United States)

    Foo, Jasmine; Leder, Kevin; Ryser, Marc D

    2014-08-21

    Primary tumors often emerge within genetically altered fields of premalignant cells that appear histologically normal but have a high chance of progression to malignancy. Clinical observations have suggested that these premalignant fields pose high risks for emergence of recurrent tumors if left behind after surgical removal of the primary tumor. In this work, we develop a spatio-temporal stochastic model of epithelial carcinogenesis, combining cellular dynamics with a general framework for multi-stage genetic progression to cancer. Using the model, we investigate how various properties of the premalignant fields depend on microscopic cellular properties of the tissue. In particular, we provide analytic results for the size-distribution of the histologically undetectable premalignant fields at the time of diagnosis, and investigate how the extent and the geometry of these fields depend upon key groups of parameters associated with the tissue and genetic pathways. We also derive analytical results for the relative risks of local vs. distant secondary tumors for different parameter regimes, a critical aspect for the optimal choice of post-operative therapy in carcinoma patients. This study contributes to a growing literature seeking to obtain a quantitative understanding of the spatial dynamics in cancer initiation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Quantitative phase-field modeling for boiling phenomena.

    Science.gov (United States)

    Badillo, Arnoldo

    2012-10-01

    A phase-field model is developed for quantitative simulation of bubble growth in the diffusion-controlled regime. The model accounts for phase change and surface tension effects at the liquid-vapor interface of pure substances with large property contrast. The derivation of the model follows a two-fluid approach, where the diffuse interface is assumed to have an internal microstructure, defined by a sharp interface. Despite the fact that phases within the diffuse interface are considered to have their own velocities and pressures, an averaging procedure at the atomic scale allows for expressing all the constitutive equations in terms of mixture quantities. From the averaging procedure and asymptotic analysis of the model, nonconventional terms appear in the energy and phase-field equations to compensate for the variation of the properties across the diffuse interface. Without these new terms, no convergence towards the sharp-interface model can be attained. The asymptotic analysis also revealed a very small thermal capillary length for real fluids, such as water, that makes it impossible for conventional phase-field models to capture bubble growth in the millimeter range size. For instance, important phenomena such as bubble growth and detachment from a hot surface could not be simulated due to the large number of grid points required to resolve all the scales. Since the shape of the liquid-vapor interface is primarily controlled by the effects of an isotropic surface energy (surface tension), a solution involving the elimination of the curvature from the phase-field equation is devised. The elimination of the curvature from the phase-field equation changes the length scale dominating the phase change from the thermal capillary length to the thickness of the thermal boundary layer, which is several orders of magnitude larger. A detailed analysis of the phase-field equation revealed that a split of this equation into two independent parts is possible for system sizes

  2. Quantitative property-structural relation modeling on polymeric dielectric materials

    Science.gov (United States)

    Wu, Ke

    Nowadays, polymeric materials have attracted more and more attention in dielectric applications. But searching for a material with desired properties is still largely based on trial and error. To facilitate the development of new polymeric materials, heuristic models built using the Quantitative Structure Property Relationships (QSPR) techniques can provide reliable "working solutions". In this thesis, the application of QSPR on polymeric materials is studied from two angles: descriptors and algorithms. A novel set of descriptors, called infinite chain descriptors (ICD), are developed to encode the chemical features of pure polymers. ICD is designed to eliminate the uncertainty of polymer conformations and inconsistency of molecular representation of polymers. Models for the dielectric constant, band gap, dielectric loss tangent and glass transition temperatures of organic polymers are built with high prediction accuracy. Two new algorithms, the physics-enlightened learning method (PELM) and multi-mechanism detection, are designed to deal with two typical challenges in material QSPR. PELM is a meta-algorithm that utilizes the classic physical theory as guidance to construct the candidate learning function. It shows better out-of-domain prediction accuracy compared to the classic machine learning algorithm (support vector machine). Multi-mechanism detection is built based on a cluster-weighted mixing model similar to a Gaussian mixture model. The idea is to separate the data into subsets where each subset can be modeled by a much simpler model. The case study on glass transition temperature shows that this method can provide better overall prediction accuracy even though less data is available for each subset model. In addition, the techniques developed in this work are also applied to polymer nanocomposites (PNC). PNC are new materials with outstanding dielectric properties. As a key factor in determining the dispersion state of nanoparticles in the polymer matrix

  3. Quantitative phase-field modeling for wetting phenomena.

    Science.gov (United States)

    Badillo, Arnoldo

    2015-03-01

    A new phase-field model is developed for studying partial wetting. The introduction of a third phase representing a solid wall allows for the derivation of a new surface tension force that accounts for energy changes at the contact line. In contrast to other multi-phase-field formulations, the present model does not need the introduction of surface energies for the fluid-wall interactions. Instead, all wetting properties are included in a unique parameter known as the equilibrium contact angle θ_eq. The model requires the solution of a single elliptic phase-field equation, which, coupled to conservation laws for mass and linear momentum, admits the existence of steady and unsteady compact solutions (compactons). The representation of the wall by an additional phase field allows for the study of wetting phenomena on flat, rough, or patterned surfaces in a straightforward manner. The model contains only two free parameters: a measure of interface thickness W, and β, which is used in the definition of the mixture viscosity μ = μ_l φ_l + μ_v φ_v + β μ_l φ_w. The former controls the convergence towards the sharp interface limit and the latter the energy dissipation at the contact line. Simulations on rough surfaces show that by taking values for β higher than 1, the model can reproduce, on average, the effects of pinning events of the contact line during its dynamic motion. The model is able to capture, in good agreement with experimental observations, many physical phenomena fundamental to wetting science, such as the wetting transition on micro-structured surfaces and droplet dynamics on solid substrates.

  4. Endoscopic skull base training using 3D printed models with pre-existing pathology.

    Science.gov (United States)

    Narayanan, Vairavan; Narayanan, Prepageran; Rajagopalan, Raman; Karuppiah, Ravindran; Rahman, Zainal Ariff Abdul; Wormald, Peter-John; Van Hasselt, Charles Andrew; Waran, Vicknes

    2015-03-01

    Endoscopic base of skull surgery has been growing in acceptance in the recent past due to improvements in visualisation and micro-instrumentation as well as the surgical maturing of early endoscopic skull base practitioners. Unfortunately, these demanding procedures have a steep learning curve. A physical simulation that is able to reproduce the complex anatomy of the anterior skull base provides a very useful means of learning the necessary skills in a safe and effective environment. This paper aims to assess the ease of learning endoscopic skull base exposure and drilling techniques using an anatomically accurate physical model with a pre-existing pathology (i.e., basilar invagination) created from actual patient data. Five models of a patient with platybasia and basilar invagination were created from the original MRI and CT imaging data of the patient. The models were used as part of a training workshop for ENT surgeons with varying degrees of experience in endoscopic base of skull surgery, from trainees to experienced consultants. The surgeons were given a list of key steps to achieve in exposing and drilling the skull base using the simulation model. They were then asked to rate the level of difficulty of learning these steps using the model. The participants found the models suitable for learning registration, navigation and skull base drilling techniques. All participants also found the deep structures to be accurately represented spatially, as confirmed by the navigation system. These models allow structured simulation to be conducted in a workshop environment where surgeons and trainees can practise performing complex procedures in a controlled fashion under the supervision of experts.

  5. Quantitative modelling and analysis of a Chinese smart grid: a stochastic model checking case study

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    2014-01-01

    that require novel methods and applications. One of the important issues in this context is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese smart grid implementation as a case study and address the verification problem for performance and energy consumption. We employ a stochastic model checking approach and present our modelling and analysis study using the PRISM model checker.

  6. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee

    2016-01-01

    Full Text Available Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives to achieve operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods to increase supply chain visibility are still ambiguous. Based on the extant research on supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using the Z score from Six Sigma methodology to evaluate and improve the level of supply chain visibility.
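
    A minimal sketch of the Six Sigma style Z score that the proposed model builds on, treating a visibility-related metric as a process with specification limits; the example metric and limits are hypothetical.

        import statistics

        def z_score(samples, lsl, usl):
            """Distance from the mean to the nearest spec limit, in sigmas."""
            mu = statistics.mean(samples)
            sigma = statistics.stdev(samples)
            return min((usl - mu) / sigma, (mu - lsl) / sigma)

        # Example: order-tracking latency in hours against spec limits of 0 and 24.
        print(z_score([6.0, 8.5, 7.2, 9.1, 5.8, 7.7], lsl=0.0, usl=24.0))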

  7. Testing process predictions of models of risky choice: a quantitative model comparison approach.

    Science.gov (United States)

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called "similarity." In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies.
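
    For gains, the priority heuristic as described by Brandstätter et al. (2006) can be sketched compactly; the version below handles only two-outcome gambles and ignores the original's rounding of the aspiration level to prominent numbers.

        def priority_heuristic(a, b):
            """a, b: gambles as (min_gain, p_min, max_gain, p_max); returns the choice."""
            aspiration = 0.1 * max(a[2], b[2])   # 1/10 of the maximum gain
            if abs(a[0] - b[0]) >= aspiration:   # reason 1: minimum gains
                return a if a[0] > b[0] else b
            if abs(a[1] - b[1]) >= 0.1:          # reason 2: probabilities of the minima
                return a if a[1] < b[1] else b
            return a if a[2] > b[2] else b       # reason 3: maximum gains

    Because reasons are inspected in a fixed order with stopping, the heuristic predicts reason-wise search with few acquisitions, which is the kind of process signature the two experiments probe.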

  8. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    Science.gov (United States)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

    This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e.g., metal bearings, a rigid fiberglass torso, flexible cloth limbs and rubber-coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken. The HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry, such as chemical or fire protective clothing. In summary, the approach provides a moderate-fidelity, usable tool which will run on current notebook computers.

  9. Uncertainty in Quantitative Precipitation Estimates and Forecasts in a Hydrologic Modeling Context (Invited)

    Science.gov (United States)

    Gourley, J. J.; Kirstetter, P.; Hong, Y.; Hardy, J.; Flamig, Z.

    2013-12-01

    This study presents a methodology to account for uncertainty in radar-based rainfall rate estimation using NOAA/NSSL's Multi-Radar Multisensor (MRMS) products. The focus of the study is on flood forecasting, including flash floods, in ungauged catchments throughout the conterminous US. An error model is used to derive probability distributions of rainfall rates that explicitly account for rain typology and uncertainty in the reflectivity-to-rainfall relationships. This approach preserves the fine space/time sampling properties (2 min/1 km) of the radar and conditions probabilistic quantitative precipitation estimates (PQPE) on the rain rate and rainfall type. Uncertainty in rainfall amplitude is the primary factor accounted for in the PQPE development. Additional uncertainties due to rainfall structures, locations, and timing must be considered when using quantitative precipitation forecast (QPF) products as forcing to a hydrologic model. A new method will be presented that shows how QPF ensembles are used in a hydrologic modeling context to derive probabilistic flood forecast products. This method considers the forecast rainfall intensity and morphology superimposed on pre-existing hydrologic conditions to identify basin scales that are most at risk.

  10. Frequency domain modeling and dynamic characteristics evaluation of existing wind turbine systems

    Science.gov (United States)

    Chiang, Chih-Hung; Yu, Chih-Peng

    2016-04-01

    It is quite well accepted that frequency domain procedures are suitable for the design and dynamic analysis of wind turbine structures, especially for floating offshore wind turbines, since random wind loads and wave induced motions are most likely simulated in the frequency domain. This paper presents specific applications of an effective frequency domain scheme to the linear analysis of wind turbine structures in which a 1-D spectral element was developed based on the axially-loaded member. The solution schemes are summarized for the spectral analyses of the tower, the blades, and the combined system with selected frequency-dependent coupling effect from foundation-structure interactions. Numerical examples demonstrate that the modal frequencies obtained using spectral-element models are in good agreement with those found in the literature. A 5-element mono-pile model results in less than 0.3% deviation from an existing 160-element model. It is preliminarily concluded that the proposed scheme is relatively efficient in performing quick verification for test data obtained from the on-site vibration measurement using the microwave interferometer.

  11. ON THE SPECIFIC AREA OF INHOMOGENEOUS BOOLEAN MODELS. EXISTENCE RESULTS AND APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Elena Villa

    2011-05-01

    Full Text Available The problem of the evaluation of the so-called specific area of a random closed set, in connection with its mean boundary measure, is mentioned in the classical book by Matheron on random closed sets (Matheron, 1975, p. 50); it is still an open problem, in general. We offer here an overview of some recent results concerning the existence of the specific area of inhomogeneous Boolean models, unifying results from geometric measure theory and from stochastic geometry. A discussion of possible applications to image analysis concerning the estimation of the mean surface density of random closed sets, and, in particular, to material science concerning birth-and-growth processes, is also provided.

  12. Psychological Contract Development: An Integration of Existing Knowledge to Form a Temporal Model

    Directory of Open Access Journals (Sweden)

    Kelly Windle

    2014-07-01

    Full Text Available The psychological contract has received substantial theoretical attention over the past two decades as a popular framework within which to examine contemporary employment relationships. Previous research mostly examines breach and violation of the psychological contract and its impact on employee organization outcomes. Few studies have employed longitudinal, prospective research designs to investigate the psychological contract and as a result, psychological contract content and formation are incompletely understood. It is argued that employment relationships may be better proactively managed with greater understanding of formation and changes in the psychological contract. We examine existing psychological contract literature to identify five key factors proposed to contribute to the formation of psychological contracts. We extend the current research by integrating these factors for the first time into a temporal model of psychological contract development.

  13. Existence of several surface-reconstructed phases in a two-dimensional lattice model

    Science.gov (United States)

    Huckaby, Dale A.; Rys, Franz S.

    1992-03-01

    The zero-temperature phase diagram is rigorously obtained for a two-dimensional lattice model with four energy parameters. It is shown that the parameter space can be divided into regions, together with their boundaries, such that in each region the ground-state configurations are of one of seven different types. These types include one which is nondegenerate, four which are doubly degenerate, one which is infinitely degenerate but with no residual entropy, and one which is infinitely degenerate and has a nonzero residual entropy. The Pirogov-Sinai extension of the Peierls argument is used to establish the existence at low temperatures of four different types of ordered surface-reconstructed phases.

  14. Functional Coverage of the Human Genome by Existing Structures, Structural Genomics Targets, and Homology Models.

    Directory of Open Access Journals (Sweden)

    2005-08-01

    Full Text Available The bias in protein structure and function space resulting from experimental limitations and targeting of particular functional classes of proteins by structural biologists has long been recognized, but never continuously quantified. Using the Enzyme Commission and the Gene Ontology classifications as a reference frame, and integrating structure data from the Protein Data Bank (PDB), target sequences from the structural genomics projects, structure homology derived from the SUPERFAMILY database, and genome annotations from Ensembl and NCBI, we provide a quantified view, both at the domain and whole-protein levels, of the current and projected coverage of protein structure and function space relative to the human genome. Protein structures currently provide at least one domain that covers 37% of the functional classes identified in the genome; whole-structure coverage exists for 25% of the genome. If all the structural genomics targets were solved (twice the current number of structures in the PDB), it is estimated that structures of one domain would cover 69% of the functional classes identified and complete structure coverage would be 44%. Homology models from existing experimental structures extend the 37% coverage to 56% of the genome for single domains and 25% to 31% for complete structures. Coverage from homology models is not evenly distributed by protein family, reflecting differing degrees of sequence and structure divergence within families. While these data provide coverage, conversely, they also systematically highlight functional classes of proteins for which structures should be determined. Current key functional families without structure representation are highlighted here; updated information on the "most wanted list" that should be solved is available on a weekly basis from http://function.rcsb.org:8080/pdb/function_distribution/index.html.

  15. Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    Directory of Open Access Journals (Sweden)

    Maurice H. ter Beek

    2015-04-01

    Full Text Available We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLan with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLan) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLan semantics based on discrete-time Markov chains. The Maude implementation of PFLan is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average cost of products.

  16. Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    DEFF Research Database (Denmark)

    ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLAN) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLAN semantics based on discrete-time Markov chains. The Maude implementation of PFLAN is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average cost of products.

  17. Existence of solutions of a nonlinear system modelling fluid flow in porous media

    Directory of Open Access Journals (Sweden)

    Ádám Besenyei

    2006-12-01

    Full Text Available We investigate the existence of weak solutions for nonlinear differential equations that describe fluid flow through a porous medium. Existence is proved using the theory of monotone operators, and some examples are given.

  18. Some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models.

    Science.gov (United States)

    Knudsen, Thomas; Aasbjerg Nielsen, Allan

    2013-04-01

    through the processor, individually contributing to the nearest grid posts in a memory-mapped grid file. Algorithmically this is very efficient, but it would be even more efficient if we did not have to handle so much data. Another of our recent case studies focuses on this. The basic idea is to ignore data that do not tell us anything new. We do this by looking at anomalies between the current height model and the new point cloud, then computing a correction grid for the current model. Points with insignificant anomalies are simply removed from the point cloud, and the correction grid is computed using the remaining point anomalies only. Hence, we only compute updates in areas of significant change, speeding up the process and gaining new insight into the precision of the current model, which in turn results in improved metadata for both the current and the new model; a minimal sketch of this screening step is given below. Currently we focus on simple approaches for creating a smooth update process for the integration of heterogeneous data sets. On the other hand, as years go by and multiple generations of data become available, more advanced approaches will probably become necessary (e.g. a multi-campaign bundle adjustment, improving the oldest data using cross-over adjustment with newer campaigns). But to prepare for such approaches, it is important already now to organize and evaluate the ancillary (GPS, INS) and engineering-level data for the current data sets. This is essential if future generations of DEM users are to benefit from future conceptions of "some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models".
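
    The screening step described above can be illustrated in a few lines of code. The sketch below is not the authors' implementation; the function name, the nearest-grid-post gridding scheme, and the 0.1 m significance threshold are assumptions chosen for illustration.

```python
# Illustrative sketch of the anomaly-screening update (assumed names and
# threshold; not the authors' code). Points that agree with the current DEM
# are dropped; the remaining anomalies are averaged into a correction grid.
import numpy as np

def correction_grid(dem, cell_size, points, threshold=0.1):
    """dem: 2D array of current heights; points: (N, 3) array of x, y, z in
    grid coordinates; threshold: smallest anomaly [m] considered significant."""
    rows = np.clip((points[:, 1] // cell_size).astype(int), 0, dem.shape[0] - 1)
    cols = np.clip((points[:, 0] // cell_size).astype(int), 0, dem.shape[1] - 1)
    anomaly = points[:, 2] - dem[rows, cols]   # new height minus current model
    keep = np.abs(anomaly) >= threshold        # drop points telling us nothing new
    corr_sum = np.zeros_like(dem, dtype=float)
    corr_cnt = np.zeros_like(dem, dtype=float)
    np.add.at(corr_sum, (rows[keep], cols[keep]), anomaly[keep])
    np.add.at(corr_cnt, (rows[keep], cols[keep]), 1.0)
    # Mean anomaly per cell; zero where no significant change was observed.
    return np.divide(corr_sum, corr_cnt, out=np.zeros_like(corr_sum),
                     where=corr_cnt > 0)
```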

  19. Paroxetine attenuates the development and existing pain in a rat model of neuropathic pain.

    Science.gov (United States)

    Zarei, Malek; Sabetkasaei, Masoumeh; Moini Zanjani, Taraneh

    2014-01-01

    P2X4 receptor (P2X4R), a purinoceptor expressed in activated spinal microglia, plays a key role in the pathogenesis of neuropathic pain. Spinal nerve injury induces up-regulation of P2X4R on activated microglia in the spinal cord, and blockade of this receptor can reduce neuropathic pain. The present study was undertaken to determine whether paroxetine, an inhibitor of P2X4R, could attenuate allodynia and hyperalgesia in the chronic constriction injury (CCI) model of neuropathic pain when used preemptively or after the sciatic nerve injury. Male Wistar rats (150-200 g, n = 6) were divided into three groups: (1) a CCI vehicle-treated group, (2) a sham group, and (3) a CCI paroxetine-treated group. Paroxetine (10 mg/kg, i.p.) was administered 1 h before surgery and continued daily until day 14. In the second part of the study, paroxetine (10 mg/kg, i.p.) was administered on day 7 post injury and continued daily until day 14. von Frey filaments (mechanical allodynia) and an analgesia meter (thermal hyperalgesia) were used to assay pain behavior. In a preventive paradigm, paroxetine significantly attenuated both mechanical allodynia and thermal hyperalgesia; a significant effect of paroxetine on existing allodynia and hyperalgesia was also observed. Overall, paroxetine can attenuate pain behavior when administered before and also after sciatic nerve injury in the CCI model of neuropathic pain.

  20. BioModels Database: An enhanced, curated and annotated resource for published quantitative kinetic models

    Directory of Open Access Journals (Sweden)

    Stefan Melanie I

    2010-06-01

    Full Text Available Abstract. Background: Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description: BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large-scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions: BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to

  1. Modelling and Quantitative Analysis of LTRACK–A Novel Mobility Management Algorithm

    Directory of Open Access Journals (Sweden)

    Benedek Kovács

    2006-01-01

    Full Text Available This paper discusses the improvements and parameter optimization issues of LTRACK, a recently proposed mobility management algorithm. Mathematical modelling of the algorithm and of the behavior of the Mobile Node (MN) is used to optimize the parameters of LTRACK. A numerical method is given to determine the optimal values of the parameters. Markov chains are used to model both the base algorithm and the so-called loop removal effect. An extended qualitative and quantitative analysis is carried out to compare LTRACK to existing handover mechanisms such as MIP, Hierarchical Mobile IP (HMIP), Dynamic Hierarchical Mobility Management Strategy (DHMIP), Telecommunication Enhanced Mobile IP (TeleMIP), Cellular IP (CIP) and HAWAII. LTRACK is sensitive to network topology and MN behavior, so MN movement modelling is also introduced and discussed with different topologies. The techniques presented here can be used to model not only LTRACK but other algorithms as well. Extensive discussion and calculations support the adequacy of the mathematical model in many cases. The model is valid on various network levels, scales vertically in the ISO-OSI layers, and also scales well with the number of network elements.

  2. Daphnia and fish toxicity of (benzo)triazoles: validated QSAR models, and interspecies quantitative activity-activity modelling.

    Science.gov (United States)

    Cassani, Stefano; Kovarich, Simona; Papa, Ester; Roy, Partha Pratim; van der Wal, Leon; Gramatica, Paola

    2013-08-15

    Due to their chemical properties, synthetic triazoles and benzotriazoles ((B)TAZs) are mainly distributed to the water compartments in the environment, and because of their wide use the potential effects on aquatic organisms are a cause of concern. Non-testing approaches like those based on quantitative structure-activity relationships (QSARs) are valuable tools to maximize the information contained in existing experimental data and predict missing information while minimizing animal testing. In the present study, externally validated QSAR models for the prediction of acute (B)TAZ toxicity in Daphnia magna and Oncorhynchus mykiss have been developed according to the principles for the validation of QSARs and their acceptability for regulatory purposes, proposed by the Organization for Economic Co-operation and Development (OECD). These models are based on theoretical molecular descriptors, and are statistically robust, externally predictive and characterized by a verifiable structural applicability domain. They have been applied to predict acute toxicity for over 300 (B)TAZs without experimental data, many of which are in the pre-registration list of the REACH regulation. Additionally, a model based on quantitative activity-activity relationships (QAAR) has been developed, which allows for interspecies extrapolation from daphnids to fish. The importance of QSAR/QAAR, especially when dealing with specific chemical classes like (B)TAZs, for screening and prioritization of pollutants under REACH has been highlighted. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Shallow Water Wave Models with and without Singular Kernel: Existence, Uniqueness, and Similarities

    Directory of Open Access Journals (Sweden)

    Emile Franc Doungmo Goufo

    2017-01-01

    Full Text Available After the recent introduction of the Caputo-Fabrizio derivative by the authors of the same names, the question was raised of an eventual comparison with the old version, namely, the Caputo derivative. Unlike the Caputo derivative, the newly introduced Caputo-Fabrizio derivative has no singular kernel, and the concern was about the real impact of this nonsingularity on real-life nonlinear phenomena like those found in shallow water waves. In this paper, a nonlinear Sawada-Kotera equation, suitable for describing the behavior of shallow water waves, is comprehensively analyzed with both types of derivative. In the investigations, various fixed-point theories are exploited together with the concept of Picard K-stability. We are then able to obtain the existence and uniqueness results for the models with both versions of derivatives. We conclude the analysis by performing some numerical approximations with both derivatives, with graphical simulations being presented for some values of the derivative order γ. Similar behaviors are pointed out, and they concur with the expected multisoliton solutions well known for the Sawada-Kotera equation. This important observation means that either of the two derivatives is suitable to describe the motion of shallow water waves.

  4. Existence of a Consistent Quantum Gravity Model from Minimum Microscopic Information

    Science.gov (United States)

    Mandrin, P. A.

    2014-12-01

    It is shown that a quantum gravity formulation exists on the basis of quantum number conservation, the laws of thermodynamics, unspecific interactions, and locally maximizing the ratio of resulting degrees of freedom per imposed degree of freedom of the theory. The First Law of thermodynamics is evaluated by imposing boundary conditions on the theory. These boundary conditions determine the details of the complex world structure. No explicit microscopic quantum structure is required, and thus no ambiguity arises on how to construct the model. Although no dynamical computations of quantum systems are possible on this basis, all well-established physics may be recovered, and all measurable quantities may be computed. The recovery of physical laws is shown by extremizing the entropy, which means varying the action on the bulk and boundary of small volumes of curved space-time. It is sketched how Quantum Field Theory (QFT) and General Relativity (GR) are recovered with no further assumptions except for imposing the dimension of a second derivative of the metric on the gravitational field equations. The new concepts are: 1. the abstract organization of statistical quantum states, allowing for the possibility of absent quantum microstructure; 2. the optimization of the locally resulting degrees of freedom per imposed degree of freedom of the theory, allowing for the reconstruction of the spacetime dimensions; 3. the reconstruction of physical and geometric quantities by means of stringent mathematical or physical justifications; 4. the fully general recovery of GR by quasi-local variation methods applied on small portions of spacetime.

  5. Combinatorial modeling of chromatin features quantitatively predicts DNA replication timing in Drosophila.

    Directory of Open Access Journals (Sweden)

    Federico Comoglio

    2014-01-01

    Full Text Available In metazoans, each cell type follows a characteristic, spatio-temporally regulated DNA replication program. Histone modifications (HMs) and chromatin binding proteins (CBPs) are fundamental for a faithful progression and completion of this process. However, no individual HM is strictly indispensable for origin function, suggesting that HMs may act combinatorially, in analogy to the histone code hypothesis for transcriptional regulation. In contrast to gene expression, however, the relationship between combinations of chromatin features and DNA replication timing has not yet been demonstrated. Here, by exploiting a comprehensive data collection consisting of 95 CBPs and HMs, we investigated their combinatorial potential for the prediction of DNA replication timing in Drosophila using quantitative statistical models. We found that while combinations of CBPs exhibit moderate predictive power for replication timing, pairwise interactions between HMs lead to accurate predictions genome-wide that can be locally further improved by CBPs. Independent feature importance and model analyses led us to derive a simplified, biologically interpretable model of the relationship between chromatin landscape and replication timing, reaching 80% of the full model accuracy using six model terms. Finally, we show that pairwise combinations of HMs are able to predict differential DNA replication timing across different cell types. All in all, our work provides support for the existence of combinatorial HM patterns for DNA replication and reveals key cell-type-independent elements thereof, whose experimental investigation might contribute to elucidating the regulatory mode of this fundamental cellular process.

  6. Quantitative Structure--Activity Relationship Modeling of Rat Acute Toxicity by Oral Exposure

    Science.gov (United States)

    Background: Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. Objective: In this study, a combinatorial QSAR approach has been employed for the creation of robust and predictive models of acute toxi...

  7. Thermodynamic Modeling of a Solid Oxide Fuel Cell to Couple with an Existing Gas Turbine Engine Model

    Science.gov (United States)

    Brinson, Thomas E.; Kopasakis, George

    2004-01-01

    The Controls and Dynamics Technology Branch at NASA Glenn Research Center is interested in operating a solid oxide fuel cell (SOFC) in conjunction with a gas turbine engine. A detailed engine model currently exists in the Matlab/Simulink environment. The idea is to incorporate an SOFC model within the turbine engine simulation and observe the hybrid system's performance. The fuel cell will be heated to its appropriate operating condition by the engine's combustor. Once the fuel cell is operating at its steady-state temperature, the gas burner will back down slowly until the engine is fully operating on the hot gases exhausted from the SOFC. The SOFC code is based on a steady-state model developed by the U.S. Department of Energy (DOE). In its current form, the DOE SOFC model exists in Microsoft Excel and uses Visual Basic to create an I-V (current-voltage) profile. For the project's application, the main issue with this model is that the gas path flow and fuel flow temperatures are used as input parameters instead of outputs. The objective is to create an SOFC model based on the DOE model that takes the fuel cell's flow rates as inputs and outputs the temperatures of the flow streams, thereby creating a temperature profile as a function of fuel flow rate. This will be done by applying the First Law of Thermodynamics for a flow system to the fuel cell. Validation of this model will be done in two steps. First, for a given flow rate the exit stream temperature will be calculated and compared to the DOE SOFC temperature as a point comparison. Next, an I-V curve and temperature curve will be generated, where the I-V curve will be compared with the DOE SOFC I-V curve. Matching I-V curves will suggest validation of the temperature curve because voltage is a function of temperature. Once the temperature profile is created and validated, the model will then be placed into the turbine engine simulation for system analysis.
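
    To make the First Law balance concrete, the following minimal sketch computes an exit-stream temperature from flow rates under a lumped, constant-specific-heat assumption. All names and numbers are illustrative; this is not the DOE model or the NASA implementation.

```python
# Minimal sketch of a steady-flow First Law balance over the fuel cell stack,
# treated as one control volume with constant mean specific heat.
def sofc_exit_temperature(m_dot, cp, T_in, fuel_power, P_elec, q_loss=0.0):
    """m_dot: total gas mass flow [kg/s]; cp: mean specific heat [J/(kg K)];
    T_in: inlet temperature [K]; fuel_power: chemical power released [W];
    P_elec: electrical power drawn from the cell [W]; q_loss: heat loss [W].
    Energy balance: m_dot*cp*(T_out - T_in) = fuel_power - P_elec - q_loss."""
    return T_in + (fuel_power - P_elec - q_loss) / (m_dot * cp)

# Example: sweep fuel flow to build a temperature profile vs. fuel flow rate.
LHV_H2 = 120e6  # J/kg, approximate lower heating value of hydrogen
for fuel_flow in (1e-4, 2e-4, 3e-4):  # kg/s of fuel, hypothetical values
    T_out = sofc_exit_temperature(m_dot=0.05, cp=1100.0, T_in=1000.0,
                                  fuel_power=fuel_flow * LHV_H2, P_elec=15e3)
    print(f"fuel {fuel_flow:.0e} kg/s -> exit T {T_out:.0f} K")
```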

  8. Acquisition of 3D urban models by analysis of aerial images, digital surface models, and existing 2D building information

    Science.gov (United States)

    Haala, Norbert; Anders, Karl-Heinrich

    1997-08-01

    For a task like 3D building reconstruction, there are three main data sources carrying information which is required for a highly automated data acquisition. These data sources are aerial images, digital surface models (DSM), which can either be derived by stereo matching from aerial images or be directly measured by scanning laser systems, and -- at least for highly developed countries -- existing (2D) GIS information on the ground plan or usage of buildings. The way these different data sources should be utilized by a process of 3D building reconstruction depends on the distinctive characteristics of the different, partly complementary types of information they contain. Image data contain much information, but just this complexity causes enormous problems for the automatic interpretation of this data type. The GIS as a secondary data source provides information on the 2D shape, i.e. the ground plan of a building, which is very reliable, although information on the third dimension is missing and therefore has to be provided by other data sources. As the information of a DSM is restricted to surface geometry, the interpretation of this kind of data is easier compared to the interpretation of image data. Nevertheless, due to insufficient spatial resolution or quality of the DSM, optimal results can only be achieved by the combination of all data sources. Within this paper, two approaches aiming at the combination of aerial images, digital surface models and existing ground plans for the reconstruction of three-dimensional buildings are demonstrated.

  9. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    Science.gov (United States)

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of such quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMMs can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
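
    As one concrete instance of the "known formulas" the authors generalize: for a binary trait analyzed under the threshold (liability) model, the classical Dempster-Lerner transformation relates liability-scale and observed-scale heritability. This is a textbook result reproduced here for orientation, not an expression quoted from the paper.

```latex
% p: trait prevalence; z = \varphi(\Phi^{-1}(1-p)): standard normal density
% at the liability threshold. Observed-scale heritability of the 0/1 trait:
h^2_{\mathrm{obs}} \;=\; h^2_{\mathrm{liab}}\,\frac{z^2}{p\,(1-p)}
```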

  10. Existing and potential infection risk zones of yellow fever worldwide: a modelling analysis.

    Science.gov (United States)

    Shearer, Freya M; Longbottom, Joshua; Browne, Annie J; Pigott, David M; Brady, Oliver J; Kraemer, Moritz U G; Marinho, Fatima; Yactayo, Sergio; de Araújo, Valdelaine E M; da Nóbrega, Aglaêr A; Fullman, Nancy; Ray, Sarah E; Mosser, Jonathan F; Stanaway, Jeffrey D; Lim, Stephen S; Reiner, Robert C; Moyes, Catherine L; Hay, Simon I; Golding, Nick

    2018-03-01

    Yellow fever cases are under-reported and the exact distribution of the disease is unknown. An effective vaccine is available but more information is needed about which populations within risk zones should be targeted to implement interventions. Substantial outbreaks of yellow fever in Angola, Democratic Republic of the Congo, and Brazil, coupled with the global expansion of the range of its main urban vector, Aedes aegypti, suggest that yellow fever has the propensity to spread further internationally. The aim of this study was to estimate the disease's contemporary distribution and potential for spread into new areas to help inform optimal control and prevention strategies. We assembled 1155 geographical records of yellow fever virus infection in people from 1970 to 2016. We used a Poisson point process boosted regression tree model that explicitly incorporated environmental and biological explanatory covariates, vaccination coverage, and spatial variability in disease reporting rates to predict the relative risk of apparent yellow fever virus infection at a 5 × 5 km resolution across all risk zones (47 countries across the Americas and Africa). We also used the fitted model to predict the receptivity of areas outside at-risk zones to the introduction or reintroduction of yellow fever transmission. By use of previously published estimates of annual national case numbers, we used the model to map subnational variation in incidence of yellow fever across at-risk countries and to estimate the number of cases averted by vaccination worldwide. Substantial international and subnational spatial variation exists in relative risk and incidence of yellow fever as well as varied success of vaccination in reducing incidence in several high-risk regions, including Brazil, Cameroon, and Togo. Areas with the highest predicted average annual case numbers include large parts of Nigeria, the Democratic Republic of the Congo, and South Sudan, where vaccination coverage in 2016

  11. The existence of fertile hybrids of closely related model earthworm species, Eisenia andrei and E. fetida

    Science.gov (United States)

    Bigaj, Janusz; Osikowski, Artur; Hofman, Sebastian; Falniowski, Andrzej; Panz, Tomasz; Grzmil, Pawel; Vandenbulcke, Franck

    2018-01-01

    Lumbricid earthworms Eisenia andrei (Ea) and E. fetida (Ef) are simultaneous hermaphrodites with reciprocal insemination that are capable of self-fertilization, while the existence of hybridization between these two species was still debatable. In the present investigation, fertile hybrids of Ea and Ef were detected. Virgin specimens of Ea and Ef were laboratory crossed (Ea+Ef) and their progeny was doubly identified: (1) by species-specific, maternally derived haploid mitochondrial DNA sequences of the COI gene, being either 'a' for worms hatched from Ea ova or 'f' for worms hatched from Ef ova; (2) by the diploid maternal/paternal nuclear DNA sequences of the 28S rRNA gene, being either 'AA' for Ea, 'FF' for Ef, or AF/FA for their hybrids derived from the 'aA' or 'fF' ova, respectively. Among the offspring of Ea+Ef pairs in the F1 generation there were mainly aAA and fFF earthworms resulting from facilitated self-fertilization, and some aAF hybrids from aA ova, but no fFA hybrids from fF ova. In the F2 generation, aAF hybrids mated with aAA produced new generations of aAA and aAF hybrids, while aAF hybrids mated with fFF gave fFF together with both aAF and fFA hybrids. Hybrids intercrossed with each other produced plenty of cocoons but no hatchlings, regardless of whether aAF+aAF or aAF+fFA pairs were mated. These results indicate that the Ea and Ef species, easy to maintain in the laboratory and commonly used as convenient models in biomedicine and ecotoxicology, may also serve in studies on the molecular basis of interspecific barriers and the mechanisms of introgression and speciation. Hypothetically, their asymmetrical hybridization can be modified by some external factors. PMID:29370238

  12. A general mixture model for mapping quantitative trait loci by using molecular markers

    NARCIS (Netherlands)

    Jansen, R.C.

    1992-01-01

    In a segregating population a quantitative trait may be considered to follow a mixture of (normal) distributions, the mixing proportions being based on Mendelian segregation rules. A general and flexible mixture model is proposed for mapping quantitative trait loci (QTLs) by using molecular markers.
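
    For orientation, the simplest instance of such a mixture is a single QTL in a backcross, where the mixing proportions follow from Mendelian segregation and the recombination fraction r between the QTL and an observed marker. The schematic form below is illustrative; the model proposed in the paper is more general.

```latex
% y: trait value; m: marker genotype; the QTL genotype g is unobserved.
% For a backcross, \pi_{Qq}(m) and \pi_{qq}(m) equal r or 1-r depending on m:
f(y \mid m) \;=\; \sum_{g \in \{Qq,\;qq\}} \pi_g(m)\;
  \phi\!\left(y;\,\mu_g,\,\sigma^2\right)
```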

  13. Dynamics of childhood growth and obesity: development and validation of a quantitative mathematical model

    Science.gov (United States)

    Clinicians and policy makers need the ability to predict quantitatively how childhood bodyweight will respond to obesity interventions. We developed and validated a mathematical model of childhood energy balance that accounts for healthy growth and development of obesity, and that makes quantitative...

  14. Current Challenges in the First Principle Quantitative Modelling of the Lower Hybrid Current Drive in Tokamaks

    Directory of Open Access Journals (Sweden)

    Peysson Y.

    2017-01-01

    Full Text Available The Lower Hybrid (LH) wave is widely used in existing tokamaks for tailoring the current density profile or extending pulse duration to steady-state regimes. Its high efficiency makes it particularly attractive for a fusion reactor, leading to its consideration for this purpose in the ITER tokamak. Nevertheless, while the basics of the LH wave in tokamak plasmas are well known, quantitative modeling of experimental observations based on first principles remains a highly challenging exercise, despite considerable numerical efforts achieved so far. In this context, a rigorous methodology must be carried out in the simulations to identify the minimum number of physical mechanisms that must be considered to reproduce experimental shot-to-shot observations and also scalings (density, power spectrum). Based on recent simulations carried out for the EAST, Alcator C-Mod and Tore Supra tokamaks, the state of the art in LH modeling is reviewed. The capability of fast electron bremsstrahlung, internal inductance li and LH-driven current at zero loop voltage to jointly constrain LH simulations is discussed, as well as the need for further improvements (diagnostics, codes, LH model) for robust interpretative and predictive simulations.

  15. Impact assessment of abiotic resources in LCA: quantitative comparison of selected characterization models.

    Science.gov (United States)

    Rørbech, Jakob T; Vadenbo, Carl; Hellweg, Stefanie; Astrup, Thomas F

    2014-10-07

    Resources have received significant attention in recent years resulting in development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247 individual market inventory data sets covering a wide range of societal activities (ecoinvent database v3.0). Log-linear regression analysis was carried out for all pairwise combinations of the 11 methods for identification of correlations in CFs (resources) and total impacts (inventory data sets) between methods. Significant differences in resource coverage were observed (9-73 resources) revealing a trade-off between resource coverage and model complexity. High correlation in CFs between methods did not necessarily manifest in high correlation in total impacts. This indicates that also resource coverage may be critical for impact assessment results. Although no consistent correlations between methods applying similar assessment models could be observed, all methods showed relatively high correlation regarding the assessment of energy resources. Finally, we classify the existing methods into three groups, according to method focus and modeling approach, to aid method selection within LCA.

  16. Numerical modeling of flow focusing: Quantitative characterization of the flow regimes

    Science.gov (United States)

    Mamet, V.; Namy, P.; Dedulle, J.-M.

    2017-09-01

    Among droplet generation technologies, the flow focusing technique is a major process due to its control, stability, and reproducibility. In this process, one fluid (the continuous phase) interacts with another one (the dispersed phase) to create small droplets. Experimental assays in the literature on gas-liquid flow focusing have shown that different jet regimes can be obtained depending on the operating conditions. However, the underlying physical phenomena remain unclear, especially the mechanical interactions between the fluids and the oscillation phenomenon of the liquid. In this paper, based on published studies, a numerical two-phase model has been developed to take into consideration the mechanical interaction between phases, using the Cahn-Hilliard method to track the interface. Depending on the liquid/gas inputs and the geometrical parameters, various regimes can be obtained, from a steady-state regime to an unsteady one with liquid oscillation. In the dispersed phase, the model enables us to compute the evolution of the fluid flow, both in space (size of the recirculation zone) and in time (period of oscillation). The transition between unsteady and stationary regimes is assessed in relation to liquid and gas dimensionless numbers, showing the existence of critical thresholds. This model successfully highlights, qualitatively and quantitatively, the influence of the geometry of the nozzle, in particular its inner diameter.
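
    The abstract does not name the dimensionless numbers used; groups commonly used to organize gas-liquid flow-focusing regimes are, for example, a gas Weber number and a liquid capillary number (illustrative choices, not necessarily those of the paper):

```latex
% d: characteristic nozzle/jet diameter; \sigma: surface tension;
% \rho_g, u_g: gas density and velocity; \mu_l, u_l: liquid viscosity and velocity.
We_g = \frac{\rho_g\, u_g^2\, d}{\sigma}, \qquad
Ca_l = \frac{\mu_l\, u_l}{\sigma}
```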

  17. Local existence for a general model of size-dependent population dynamics

    Directory of Open Access Journals (Sweden)

    Nobuyuki Kato

    1997-01-01

    aging and birth functions having general forms. The growth rate we deal with depends not only on the size but also on time. We show the existence of a local solution and continuous dependence on the initial data, which shows the uniqueness of the solution as well.

  18. Existence and uniqueness of positive solutions for a nonlocal dispersal population model

    Directory of Open Access Journals (Sweden)

    Jian-Wen Sun

    2014-06-01

    Full Text Available In this article, we study the solutions of a nonlocal dispersal equation with a spatial weight representing competitions and aggregation. To overcome the limitations of comparison principles, we introduce new definitions of upper-lower solutions. The proof of existence and uniqueness of positive solutions is based on the method of monotone iteration sequences.

  19. Photon-tissue interaction model for quantitative assessment of biological tissues

    Science.gov (United States)

    Lee, Seung Yup; Lloyd, William R.; Wilson, Robert H.; Chandra, Malavika; McKenna, Barbara; Simeone, Diane; Scheiman, James; Mycek, Mary-Ann

    2014-02-01

    In this study, we describe a direct-fit photon-tissue interaction model to quantitatively analyze reflectance spectra of biological tissue samples. The model rapidly extracts biologically relevant parameters associated with tissue optical scattering and absorption. This model was employed to analyze reflectance spectra acquired from freshly excised human pancreatic pre-cancerous tissues (intraductal papillary mucinous neoplasm (IPMN), a common precursor lesion to pancreatic cancer). Compared to previously reported models, the direct-fit model improved fit accuracy and speed. Thus, these results suggest that such models could serve as real-time, quantitative tools to characterize biological tissues assessed with reflectance spectroscopy.
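
    As a generic illustration of such a direct spectral fit (the functional form, parameter names, and values below are invented for the example and are not the authors' model), one can fit a power-law scattering term modulated by an absorber band:

```python
# Toy "direct fit" of a reflectance spectrum: power-law reduced scattering
# attenuated by a single Gaussian absorber band. Illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def reflectance(wl, a, b, mua0):
    """wl: wavelength [nm]; a, b: scattering amplitude and power-law slope;
    mua0: absorber strength for a band centred near 550 nm."""
    mus_prime = a * (wl / 600.0) ** (-b)
    mua = mua0 * np.exp(-0.5 * ((wl - 550.0) / 30.0) ** 2)
    return mus_prime * np.exp(-mua)

wl = np.linspace(450.0, 700.0, 120)
measured = reflectance(wl, 1.2, 1.1, 0.8) + 0.01 * np.random.randn(wl.size)
popt, _ = curve_fit(reflectance, wl, measured, p0=[1.0, 1.0, 0.5])
print("fitted (a, b, mua0):", popt)   # parameters extracted from the spectrum
```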

  20. Modeling Cancer Metastasis using Global, Quantitative and Integrative Network Biology

    DEFF Research Database (Denmark)

    Schoof, Erwin; Erler, Janine

    , with genomic modifications giving rise to differential protein dynamics, ultimately resulting in disease. The exact molecular signaling networks underlying specific disease phenotypes remain elusive, as the definition thereof requires extensive analysis of not only the genomic and proteomic landscapes within ... of my PhD in an attempt to positively contribute to this fundamental challenge. The thesis is divided into four parts. In Chapter I, we introduce the complexity of cancer, and describe some underlying causes and ways to study the disease from different molecular perspectives. There is a nearly infinite ... understanding of molecular processes which are fundamental to tumorigenesis. In Article 1, we propose a novel framework for how cancer mutations can be studied by taking into account their effect at the protein network level. In Article 2, we demonstrate how global, quantitative data on phosphorylation dynamics ...

  1. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches.

    Science.gov (United States)

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of the thiM operon and transcription and translation of the thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all of them, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that regulation by the A. thaliana riboswitch is governed by the mass-action law, whereas it is of a kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiencies of kinetic and thermodynamic regulation, which show that kinetically regulated riboswitches react more sharply to concentration variations of their ligand than thermodynamically regulated riboswitches. This rationalizes the benefit of kinetic regulation and confirms empirical observations that were obtained by numerical simulations.
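
    The induced-fit mechanism described above corresponds to the standard two-step binding scheme (notation ours, not taken from the paper):

```latex
% R: riboswitch aptamer; R*: high-affinity conformation. Initial binding is
% followed by a conformational change to a tighter complex:
\mathrm{R} + \mathrm{TPP}
  \;\underset{k_{-1}}{\overset{k_{1}}{\rightleftharpoons}}\;
  \mathrm{R{\cdot}TPP}
  \;\underset{k_{-2}}{\overset{k_{2}}{\rightleftharpoons}}\;
  \mathrm{R^{*}{\cdot}TPP}
% Kinetic (rather than thermodynamic) control arises when the polymerase
% pause \tau is comparable to 1/(k_{1}[\mathrm{TPP}]) near the switching point.
```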

  2. Overview Of Coupling Of Data, Models And Information Through The Web Using Existing Standards

    NARCIS (Netherlands)

    de Boer, G.J.; Baart, F.; Jagers, B; Becker, B.P.J.; Piasecki, M.

    2014-01-01

    Assessment of environmental status and integral safety requires combination of information from many sources, coming from either databases or increasingly via live model (scenario) simulations. Many of these models require input from one another, sometimes unidirectional, but more and more

  3. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...

  4. Assessment of the existing models to evaluate the shear strength contribution of externally bonded FRP reinforcements

    OpenAIRE

    Oller Ibars, Eva; KOTYNIA, RENATA; Marí Bernat, Antonio Ricardo; KASZUBSKA, Monika

    2017-01-01

    This paper presents a comparative analysis of the performance of some of the existing formulations to evaluate the FRP contribution to the total shear strength of reinforced concrete beams strengthened in shear by externally bonded FRP sheets. This analysis has been performed through the use of a wide database of 275 experimental tests of rectangular RC beams, distinguishing those cases with and without internal steel transverse reinforcement and the different FRP strengthening configurations.

  5. Existence Theorems for Vortices in the Aharony-Bergman-Jaferis-Maldacena Model

    Science.gov (United States)

    Han, Xiaosen; Yang, Yisong

    2015-01-01

    A series of sharp existence and uniqueness theorems are established for the multiple vortex solutions in the supersymmetric Chern-Simons-Higgs theory formalism of Aharony, Bergman, Jaferis, and Maldacena, for which the Higgs bosons and Dirac fermions lie in the bifundamental representation of the general gauge symmetry group. The governing equations are of the BPS type and derived by Kim, Kim, Kwon, and Nakajima in the mass-deformed framework labeled by a continuous parameter.

  6. Linear regression models for quantitative assessment of left ...

    African Journals Online (AJOL)

    Changes in left ventricular structures and function have been reported in cardiomyopathies. No prediction models have been established in this environment. This study established regression models for prediction of left ventricular structures in normal subjects. A sample of normal subjects was drawn from a large urban ...

  7. A Scoping Review on Models of Integrative Medicine: What Is Known from the Existing Literature?

    Science.gov (United States)

    Lim, Eun Jin; Vardy, Janette L; Oh, Byeong Sang; Dhillon, Haryana M

    2017-01-01

    Integrative medicine (IM) has been recognized and introduced into Western healthcare systems over the past two decades. Limited information on IM models is available to guide development of an optimal healthcare service. A scoping review was carried out to evaluate IM models in the extant literature, including the distinctive features of each model, to gain an understanding of the core requirements needed to develop models of IM that best meet the needs of patients. Directed content analysis was used to classify the IM models into systems based on a coding schema developed from theoretical models and to identify the key concepts of each system. From 1374 articles identified, 45 studies were included. Models were categorized as theoretical and practical and were subdivided into five main models: coexistence, cooptative, cooperative, collaborative, and patient-centered care. They were then divided into three systems (independent, dependent, and integrative) on the basis of the level of involvement of general practitioners and complementary and alternative medicine (CAM) practitioners. The theoretical coexistence and cooptative models have distinct roles for different health care professionals, whereas practical models tend to be ad hoc market-driven services, dependent on patient demand. The cooperative and collaborative models were team-based, with formalized interaction between the two medical paradigms of conventional medicine and CAM, with the practical models focusing on facilitating communication, behaviors, and relationships. The patient-centered care model recognized the philosophy of CAM and required collaboration between disciplines based around patient needs. The focus of IM models has shifted from providers to patients with the independent and integrative systems. This may require a philosophical shift for IM. Further research is required to best understand how to practice patient-centered care in IM services.

  8. A Quantitative Human Spacecraft Design Evaluation Model for Assessing Crew Accommodation and Utilization

    Science.gov (United States)

    Fanchiang, Christine

    Crew performance, including both accommodation and utilization factors, is an integral part of every human spaceflight mission, from commercial space tourism to the demanding journey to Mars and beyond. Spacecraft were historically built by engineers and technologists trying to adapt the vehicle into cutting-edge rocketry with the assumption that the astronauts could be trained and would adapt to the design. By and large, that is still the current state of the art. It is recognized, however, that poor human-machine design integration can lead to catastrophic and deadly mishaps. The premise of this work relies on the idea that if an accurate predictive model exists to forecast crew performance issues as a result of spacecraft design and operations, it can help designers and managers make better decisions throughout the design process, and ensure that the crewmembers are well-integrated with the system from the very start. The result should be a high-quality, user-friendly spacecraft that optimizes the utilization of the crew while keeping them alive, healthy, and happy during the course of the mission. Therefore, the goal of this work was to develop an integrative framework to quantitatively evaluate a spacecraft design from the crew performance perspective. The approach presented here is done at a very fundamental level, starting with identifying and defining basic terminology, and then builds up important axioms of human spaceflight that lay the foundation for how such a framework can be developed. With the framework established, a methodology for characterizing the outcome using a mathematical model was developed by pulling from existing metrics and data collected on human performance in space. Representative test scenarios were run to show what information could be garnered and how it could be applied as a useful, understandable metric for future spacecraft design. While the model is the primary tangible product from this research, the more interesting outcome of

  9. Digital clocks: simple Boolean models can quantitatively describe circadian systems

    Science.gov (United States)

    Akman, Ozgur E.; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J.; Ghazal, Peter

    2012-01-01

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day–night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we
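
    As a toy illustration of the Boolean approach (a two-gene negative-feedback loop under synchronous update with a crude light input; this is not one of the paper's fitted clock models):

```python
# Toy Boolean oscillator: gene A is repressed by gene B and induced by light;
# gene B is activated by A. Synchronous update yields a free-running cycle
# that a periodic "light" forcing can shift, crudely mimicking entrainment.
def step(state, light=False):
    a, b = state
    a_next = (not b) or light   # A: repressed by B, forced on by light
    b_next = a                  # B: activated by A
    return (a_next, b_next)

state = (True, False)
for t in range(12):
    light = (t % 6) < 3         # coarse hypothetical "day/night" cycle
    print(t, state, "light" if light else "dark")
    state = step(state, light)
```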

  10. The landscape of existing models for high-throughput exposure assessment

    DEFF Research Database (Denmark)

    Jolliet, O.; Fantke, Peter; Huang, L.

    2017-01-01

    Models are becoming increasingly available to model near-field fate and exposure, but not all are suited for high throughput. This presentation evaluates the available models for modeling exposure to chemicals in cosmetics, cleaning products, food contact and building materials. It assesses ... in indoor air (Little et al., 2012; Liu et al., 2013), but they do not account well for SVOC sorption into indoor surfaces and absorption into human skin (Huang et al., 2017). Thus a more comprehensive simplified solution is needed for SVOCs. For personal care products, a mass balance model that accounts for skin permeation and volatilization as competing processes and that requires a limited number of readily available physiochemical properties would be suitable for LCA and HTS purposes. Thus, the multi-pathway exposure model for chemicals in cosmetics developed by Ernstoff et al. constitutes a suitable ...

  11. Improved Mental Acuity Forecasting with an Individualized Quantitative Sleep Model

    OpenAIRE

    Winslow, Brent D.; Nam Nguyen; Venta, Kimberly E.

    2017-01-01

    Sleep impairment significantly alters human brain structure and cognitive function, but available evidence suggests that adults in developed nations are sleeping less. A growing body of research has sought to use sleep to forecast cognitive performance by modeling the relationship between the two, but has generally focused on vigilance rather than other cognitive constructs affected by sleep, such as reaction time, executive function, and working memory. Previous modeling efforts have also ut...

  12. Vertical Distribution of Suspended Sediment under Steady Flow: Existing Theories and Fractional Derivative Model

    Directory of Open Access Journals (Sweden)

    Shiqian Nie

    2017-01-01

    Full Text Available The fractional advection-diffusion equation (fADE) model is a new approach to describing the vertical distribution of suspended sediment concentration in steady turbulent flow. However, the advantages and parameter definition of the fADE model in describing the sediment suspension distribution are still unclear. To address this knowledge gap, this study first reviews seven models, including the fADE model, for the vertical distribution of suspended sediment concentration in steady turbulent flow. The fADE model, among others, describes both Fickian and non-Fickian diffusive characteristics of suspended sediment, while the other six models assume that the vertical diffusion of suspended sediment follows Fick's first law. Second, this study explores the sensitivity of the fractional index of the fADE model to the variation of particle sizes and sediment settling velocities, based on experimental data collected from the literature. Finally, empirical formulas are developed to relate the fractional derivative order to particle size and sediment settling velocity. These formulas offer river engineers an alternative way to estimate the fractional derivative order in the fADE model.
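
    Schematically, the Fickian models balance settling against turbulent diffusion, while the fADE approach replaces the integer-order vertical derivative with a fractional one. The forms below are indicative; the exact formulation, boundary conditions, and sign conventions in the paper may differ.

```latex
% Classical Fickian balance for a steady suspension (settling vs. diffusion),
% with settling velocity \omega_s and sediment diffusivity \varepsilon_s:
\omega_s\, c(z) + \varepsilon_s \frac{\mathrm{d} c}{\mathrm{d} z} = 0
% Fractional generalization underlying the fADE model, derivative order
% 0 < \alpha \le 1 (\alpha = 1 recovers the Fickian case):
\omega_s\, c(z) + \varepsilon_s \frac{\mathrm{d}^{\alpha} c}{\mathrm{d} z^{\alpha}} = 0
```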

  13. Had the Planet Mars Not Existed: Kepler's Equant Model and Its Physical Consequences

    Science.gov (United States)

    Bracco, C.; Provost, J.P.

    2009-01-01

    We examine the equant model for the motion of planets, which was the starting point of Kepler's investigations before he modified it because of Mars observations. We show that, up to first order in eccentricity, this model implies for each orbit a velocity, which satisfies Kepler's second law and Hamilton's hodograph, and a centripetal…

  14. A suite of models to support the quantitative assessment of spread in pest risk analysis

    NARCIS (Netherlands)

    Robinet, C.; Kehlenbeck, H.; Werf, van der W.

    2012-01-01

    In the framework of the EU project PRATIQUE (KBBE-2007-212459, Enhancements of Pest Risk Analysis Techniques), a suite of models was developed to support the quantitative assessment of spread in pest risk analysis. This dataset contains the model codes (R language) for the four models in the suite. Three

  15. Quantitative model analysis with diverse biological data: applications in developmental pattern formation.

    Science.gov (United States)

    Pargett, Michael; Umulis, David M

    2013-07-15

    Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data; however, much of the data available, or even acquirable, is not quantitative. Data that are not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended both to inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method to use for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.
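
    For instance, "optimal scaling" of data reported in arbitrary units can be reduced to a closed-form least-squares scale factor applied before computing a fitness value. The sketch below is a minimal illustration, not one of the specific methods reviewed in the paper.

```python
# Optimal scaling: choose the scale factor s minimizing ||s*m - d||^2, which
# has the closed form s = (m . d) / (m . m); then score the scaled residuals.
import numpy as np

def scaled_sse(model_pred, data):
    m = np.asarray(model_pred, dtype=float)
    d = np.asarray(data, dtype=float)
    s = np.dot(m, d) / np.dot(m, m)       # closed-form least-squares scale
    return s, np.sum((s * m - d) ** 2)    # scale and resulting fitness value

s, sse = scaled_sse([0.2, 0.5, 0.9], [1.1, 2.4, 4.6])
print(f"scale = {s:.3f}, SSE = {sse:.4f}")
```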

  16. Statistical analysis of probabilistic models of software product lines with quantitative constraints

    DEFF Research Database (Denmark)

    Beek, M.H. ter; Legay, A.; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra...

  17. Improved Mental Acuity Forecasting with an Individualized Quantitative Sleep Model

    Directory of Open Access Journals (Sweden)

    Brent D. Winslow

    2017-04-01

    Full Text Available Sleep impairment significantly alters human brain structure and cognitive function, but available evidence suggests that adults in developed nations are sleeping less. A growing body of research has sought to use sleep to forecast cognitive performance by modeling the relationship between the two, but has generally focused on vigilance rather than other cognitive constructs affected by sleep, such as reaction time, executive function, and working memory. Previous modeling efforts have also utilized subjective, self-reported sleep durations and were restricted to laboratory environments. In the current effort, we addressed these limitations by employing wearable systems and mobile applications to gather objective sleep information, assess multi-construct cognitive performance, and model/predict changes to mental acuity. Thirty participants were recruited for participation in the study, which lasted 1 week. Using the Fitbit Charge HR and a mobile version of the automated neuropsychological assessment metric called CogGauge, we gathered a series of features and utilized the unified model of performance to predict mental acuity based on sleep records. Our results suggest that individuals poorly rate their sleep duration, supporting the need for objective sleep metrics to model circadian changes to mental acuity. Participant compliance in using the wearable throughout the week and responding to the CogGauge assessments was 80%. Specific biases were identified in temporal metrics across mobile devices and operating systems and were excluded from the mental acuity metric development. Individualized prediction of mental acuity consistently outperformed group modeling. This effort indicates the feasibility of creating an individualized, mobile assessment and prediction of mental acuity, compatible with the majority of current mobile devices.
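
    For orientation, performance models of this family typically combine a homeostatic sleep-pressure process with a circadian oscillation. The following is a deliberately simplified sketch with invented parameter values and combination rule; it is not the unified model of performance itself.

```python
# Simplified two-process sketch: acuity falls with time awake (homeostatic
# decay) and is modulated by a 24 h circadian rhythm. All constants are
# hypothetical placeholders chosen for illustration.
import math

def acuity(hours_awake, time_of_day_h, tau_w=18.2, amp=0.2):
    homeostatic = math.exp(-hours_awake / tau_w)          # decays while awake
    circadian = amp * math.cos(2 * math.pi * (time_of_day_h - 16.0) / 24.0)
    return max(0.0, min(1.0, homeostatic + circadian))    # clamp to [0, 1]

for awake in (2, 10, 18):
    print(awake, "h awake ->", round(acuity(awake, time_of_day_h=14.0), 3))
```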

  18. Embodied Agents, E-SQ and Stickiness: Improving Existing Cognitive and Affective Models

    Science.gov (United States)

    de Diesbach, Pablo Brice

    This paper synthesizes results from two previous studies of embodied virtual agents on commercial websites. We analyze and criticize the proposed models and discuss the limits of the experimental findings. Results from other important research in the literature are integrated. We also integrate concepts from deeper, more business-oriented analyses of the mechanisms of rhetoric in marketing and communication, and of the possible role of E-SQ in man-agent interaction. We finally suggest a refined model for the impacts of these agents on web site users, and comment on the limits of the improved model.

  19. GIS based model interfacing : incorporating existing software and new techniques into a streamlined interface package

    Science.gov (United States)

    2000-01-01

    The ability to visualize data has grown immensely as the speed and functionality of Geographic Information Systems (GIS) have increased. Now, with modeling software and GIS, planners are able to view a prediction of the future traffic demands in thei...

  20. Existence and Uniqueness of Positive and Bounded Solutions of a Discrete Population Model with Fractional Dynamics

    Directory of Open Access Journals (Sweden)

    J. E. Macías-Díaz

    2017-01-01

    Full Text Available We depart from the well-known one-dimensional Fisher’s equation from population dynamics and consider an extension of this model using Riesz fractional derivatives in space. Positive and bounded initial-boundary data are imposed on a closed and bounded domain, and a fully discrete form of this fractional initial-boundary-value problem is provided next using fractional centered differences. The fully discrete population model is implicit and linear, so a convenient vector representation is readily derived. Under suitable conditions, the matrix representing the implicit problem is an inverse-positive matrix. Using this fact, we establish that the discrete population model is capable of preserving the positivity and the boundedness of the discrete initial-boundary conditions. Moreover, the computational solubility of the discrete model is tackled in the closing remarks.
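
    In schematic form, the continuous model being discretized is Fisher's equation with a Riesz space-fractional diffusion term (notation assumed here; the paper gives the precise formulation and the fractional centered-difference scheme):

```latex
% u(x,t): population density; D: diffusivity; r: intrinsic growth rate;
% Riesz fractional derivative of order 1 < \alpha \le 2 in space:
\frac{\partial u}{\partial t}
  \;=\; D\, \frac{\partial^{\alpha} u}{\partial \lvert x \rvert^{\alpha}}
  \;+\; r\, u\,(1 - u)
% \alpha = 2 recovers the classical Fisher–KPP equation.
```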

  1. First principles pharmacokinetic modeling: A quantitative study on Cyclosporin

    DEFF Research Database (Denmark)

    Mošat', Andrej; Lueshen, Eric; Heitzig, Martina

    2013-01-01

    renal and hepatic clearances, elimination half-life, and mass transfer coefficients, to establish drug biodistribution dynamics in all organs and tissues. This multi-scale model satisfies first principles and conservation of mass, species and momentum.Prediction of organ drug bioaccumulation...

  2. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    than random forecast for both the days. The RMSE of the optimized model has also been found smaller than the persistence forecast and standard deviation for both the days. 1. Introduction. The Himalayan region, during winter is prone to severe weather due to large amount of snowfall. The snowfall occurs during ...

  3. Essays on Quantitative Marketing Models and Monte Carlo Integration Methods

    NARCIS (Netherlands)

    R.D. van Oest (Rutger)

    2005-01-01

    textabstractThe last few decades have led to an enormous increase in the availability of large detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for

  4. Quantitative modeling of selective lysosomal targeting for drug design

    DEFF Research Database (Denmark)

    Trapp, Stefan; Rosania, G.; Horobin, R.W.

    2008-01-01

    Lysosomes are acidic organelles and are involved in various diseases, the most prominent of which is malaria. Accumulation of molecules in the cell by diffusion from the external solution into the cytosol, lysosome and mitochondrion was calculated with the Fick–Nernst–Planck equation. The cell model considers...

  5. Quantitative phase-field model of alloy solidification

    Science.gov (United States)

    Echebarria, Blas; Folch, Roger; Karma, Alain; Plapp, Mathis

    2004-12-01

    We present a detailed derivation and thin interface analysis of a phase-field model that can accurately simulate microstructural pattern formation for low-speed directional solidification of a dilute binary alloy. This advance with respect to previous phase-field models is achieved by the addition of a phenomenological “antitrapping” solute current in the mass conservation relation [A. Karma, Phys. Rev. Lett. 87, 115701 (2001)]. This antitrapping current counterbalances the physical, albeit artificially large, solute trapping effect generated when a mesoscopic interface thickness is used to simulate the interface evolution on experimental length and time scales. Furthermore, it provides additional freedom in the model to suppress other spurious effects that scale with this thickness when the diffusivity is unequal in solid and liquid [R. F. Almgren, SIAM J. Appl. Math. 59, 2086 (1999)], which include surface diffusion and a curvature correction to the Stefan condition. This freedom can also be exploited to make the kinetic undercooling of the interface arbitrarily small even for mesoscopic values of both the interface thickness and the phase-field relaxation time, as for the solidification of pure melts [A. Karma and W.-J. Rappel, Phys. Rev. E 53, R3017 (1996)]. The performance of the model is demonstrated by calculating accurately within a phase-field approach the Mullins-Sekerka stability spectrum of a planar interface and nonlinear cellular shapes for realistic alloy parameters and growth conditions.
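
    For orientation, the mass-conservation relation with the antitrapping current has the schematic structure below. This is generic notation only: the precise prefactor and interpolation functions are those derived in the paper and are not reproduced here. In this sketch c is the solute concentration, phi the phase field, W the interface thickness, k the partition coefficient, D the diffusivity, and q(phi) an assumed interpolation function.

```latex
% Schematic structure only; prefactor and interpolation functions omitted.
\partial_t c = \nabla \cdot \left( D\, q(\phi)\, \nabla c \right)
             - \nabla \cdot \vec{j}_{\mathrm{at}},
\qquad
\vec{j}_{\mathrm{at}} \propto W\,(1-k)\,\partial_t \phi\,
\frac{\nabla \phi}{\lvert \nabla \phi \rvert}.
```

    The key design idea is that the antitrapping flux is proportional to the interface thickness W and to the local interface velocity (through the time derivative of phi), so it can be tuned to cancel the artificial solute trapping that scales with the mesoscopic W.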

  6. A quantitative risk model for early lifecycle decision making

    Science.gov (United States)

    Feather, M. S.; Cornford, S. L.; Dunphy, J.; Hicks, K.

    2002-01-01

    Decisions made in the earliest phases of system development have the most leverage to influence the success of the entire development effort, and yet must be made when information is incomplete and uncertain. We have developed a scalable cost-benefit model to support this critical phase of early-lifecycle decision-making.

  7. A quantitative magnetospheric model derived from spacecraft magnetometer data

    Science.gov (United States)

    Mead, G. D.; Fairfield, D. H.

    1975-01-01

    The model is derived by making least squares fits to magnetic field measurements from four Imp satellites. It includes four sets of coefficients, representing different degrees of magnetic disturbance as determined by the range of Kp values. The data are fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the effects of seasonal north-south asymmetries are contained. The expansion is divergence-free, but unlike the usual scalar potential expansion, the model contains a nonzero curl representing currents distributed within the magnetosphere. The latitude at the earth separating open polar cap field lines from field lines closing on the day side is about 5 deg lower than that determined by previous theoretically derived models. At times of high Kp, additional high-latitude field lines extend back into the tail. Near solstice, the separation latitude can be as low as 75 deg in the winter hemisphere. The average northward component of the external field is much smaller than that predicted by theoretical models; this finding indicates the important effects of distributed currents in the magnetosphere.
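
    The fitting procedure described here is, at its core, linear least squares on a power-series basis. The sketch below shows that step on synthetic data; the monomial basis in position and tilt angle is a placeholder assumption, and it does not enforce the divergence-free constraint of the actual Mead-Fairfield expansion.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(1)

# Synthetic stand-ins: positions in solar magnetic coordinates and tilt angles.
n = 2000
pos = rng.uniform(-10.0, 10.0, size=(n, 3))     # x, y, z in Earth radii
psi = rng.uniform(-0.6, 0.6, size=n)            # dipole tilt angle (rad)
Bz = 20.0 - 0.5 * pos[:, 0] + 3.0 * psi + rng.normal(0.0, 1.0, n)  # fake field data

def design_matrix(pos, psi, order=2):
    """Monomials in (x, y, z, psi) up to total degree `order`."""
    V = np.column_stack([pos, psi])
    cols = [np.ones(len(V))]
    for deg in range(1, order + 1):
        for idx in combinations_with_replacement(range(V.shape[1]), deg):
            cols.append(np.prod(V[:, list(idx)], axis=1))
    return np.column_stack(cols)

A = design_matrix(pos, psi)
coef, _, rank, _ = np.linalg.lstsq(A, Bz, rcond=None)
print(f"{len(coef)} coefficients fitted (design-matrix rank {rank})")
```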

  8. Comparison of existing models to simulate anaerobic digestion of lipid-rich waste.

    Science.gov (United States)

    Béline, F; Rodriguez-Mendez, R; Girault, R; Bihan, Y Le; Lessard, P

    2017-02-01

    Models for anaerobic digestion of lipid-rich waste taking inhibition into account were reviewed and, if necessary, adjusted to the ADM1 model framework in order to compare them. Experimental data from anaerobic digestion of slaughterhouse waste at an organic loading rate (OLR) ranging from 0.3 to 1.9 kg VS m⁻³ d⁻¹ were used to compare and evaluate the models. Experimental data obtained at low OLRs were accurately modeled whatever the model, thereby validating the stoichiometric parameters used and the influent fractionation. However, at higher OLRs, although inhibition parameters were optimized to reduce differences between experimental and simulated data, no model was able to accurately simulate the accumulation of substrates and intermediates, mainly due to incorrect simulation of pH. A simulation using pH based on experimental data showed that acetogenesis and methanogenesis were the steps most sensitive to LCFA inhibition and enabled identification of the inhibition parameters of both steps. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Quantitative phase-field model for phase transformations in multi-component alloys

    Energy Technology Data Exchange (ETDEWEB)

    Choudhury, Abhik Narayan

    2013-08-01

    Phase-field modeling has spread to a variety of applications involving phase transformations. While the method has wide applicability, the derivation of quantitative predictions requires a deeper understanding of the coupling between the system and model parameters. The present work highlights a novel phase-field model based on a grand-potential formalism that allows an elegant and efficient solution to problems in phase transformations. In particular, applications involving single- and multi-phase, multi-component solidification have been investigated, together with a thorough study of the quantitative modeling of these problems.

  10. Quantitative properties of clustering within modern microscopic nuclear models

    Energy Technology Data Exchange (ETDEWEB)

    Volya, A. [Florida State University (United States)]; Tchuvil’sky, Yu. M., E-mail: tchuvl@nucl-th.sinp.msu.ru [Moscow State University, Skobeltsyn Institute of Nuclear Physics (Russian Federation)]

    2016-09-15

    A method for studying cluster spectroscopic properties of nuclear fragmentation, such as spectroscopic amplitudes, cluster form factors, and spectroscopic factors, is developed on the basis of modern precision nuclear models that take into account the mixing of large-scale shell-model configurations. Alpha-cluster channels are considered as an example. A mathematical proof of the need for taking into account the channel-wave-function renormalization generated by exchange terms of the antisymmetrization operator (Fliessbach effect) is given. Examples where this effect is confirmed by a high quality of the description of experimental data are presented. By and large, the method in question extends substantially the possibilities for studying clustering phenomena in nuclei and for improving the quality of their description.

  11. 3D Numerical Modeling of the Propagation of Hydraulic Fracture at Its Intersection with Natural (Pre-existing) Fracture

    Science.gov (United States)

    Dehghan, Ali Naghi; Goshtasbi, Kamran; Ahangari, Kaveh; Jin, Yan; Bahmani, Aram

    2017-02-01

    A variety of 3D numerical models were developed based on hydraulic fracture experiments to simulate the propagation of a hydraulic fracture at its intersection with a natural (pre-existing) fracture. Since the interaction between hydraulic and pre-existing fractures is a key condition that causes complex fracture patterns, the extended finite element method was employed in ABAQUS software to simulate the problem. The propagation of hydraulic fracture in a fractured medium was modeled under two horizontal differential stresses (Δσ) of 5 and 10 MPa, considering different strike and dip angles of the pre-existing fracture. The energy release rate was calculated in the directions of the hydraulic and pre-existing fractures (G_frac/G_rock) at their intersection point to determine the fracture behavior. Opening and crossing were the two dominant fracture behaviors during the hydraulic and pre-existing fracture interaction at low and high differential stress conditions, respectively. The results of the numerical studies were compared with those of the experimental models, showing a good agreement between the two and validating the accuracy of the models. Besides the horizontal differential stress and the strike and dip angles of the natural (pre-existing) fracture, the key finding of this research was the significant effect of the energy release rate on the propagation behavior of the hydraulic fracture. This effect was more prominent under the influence of strike and dip angles, as well as differential stress. The obtained results can be used to predict and interpret the generation of complex hydraulic fracture patterns in field conditions.

  12. Quantitative Risk Modeling of Fire on the International Space Station

    Science.gov (United States)

    Castillo, Theresa; Haught, Megan

    2014-01-01

    The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.

  13. Afference copy as a quantitative neurophysiological model for consciousness.

    Science.gov (United States)

    Cornelis, Hugo; Coop, Allan D

    2014-06-01

    Consciousness is a topic of considerable human curiosity with a long history of philosophical analysis and debate. We consider there is nothing particularly complicated about consciousness when viewed as a necessary process of the vertebrate nervous system. Here, we propose a physiological "explanatory gap" is created during each present moment by the temporal requirements of neuronal activity. The gap extends from the time exteroceptive and proprioceptive stimuli activate the nervous system until they emerge into consciousness. During this "moment", it is impossible for an organism to have any conscious knowledge of the ongoing evolution of its environment. In our schematic model, a mechanism of "afference copy" is employed to bridge the explanatory gap with consciously experienced percepts. These percepts are fabricated from the conjunction of the cumulative memory of previous relevant experience and the given stimuli. They are structured to provide the best possible prediction of the expected content of subjective conscious experience likely to occur during the period of the gap. The model is based on the proposition that the neural circuitry necessary to support consciousness is a product of sub/preconscious reflexive learning and recall processes. Based on a review of various psychological and neurophysiological findings, we develop a framework which contextualizes the model and briefly discuss further implications.

  14. Influence of f(R) models on the existence of anisotropic self-gravitating systems

    Energy Technology Data Exchange (ETDEWEB)

    Yousaf, Z.; Sharif, M.; Bhatti, M.Z. [University of the Punjab, Department of Mathematics, Lahore (Pakistan)]; Ilyas, M. [University of the Punjab, Centre for High Energy Physics, Lahore (Pakistan)]

    2017-10-15

    This paper aims to explore some realistic configurations of anisotropic spherical structures in the background of metric f(R) gravity, where R is the Ricci scalar. The solutions obtained by Krori and Barua are used to examine the nature of particular compact stars with three different modified gravity models. The behavior of material variables is analyzed through plots and the physical viability of compact stars is investigated through energy conditions. We also discuss the behavior of different forces, equation of state parameter, measure of anisotropy and Tolman-Oppenheimer-Volkoff equation in the modeling of stellar structures. The comparison from our graphical representations may provide evidence for the realistic and viable f(R) gravity models at both theoretical and the astrophysical scale. (orig.)

  15. Spectral composition of light sources and insect phototaxis, with an evaluation of existing spectral response models

    NARCIS (Netherlands)

    Grunsven, van R.H.A.; Donners, M.; Boekee, K.; Tichelaar, I.; Geffen, van K.G.; Groenendijk, D.; Berendse, F.; Veenendaal, E.M.

    2014-01-01

    Artificial illumination attracts insects, but the extent to which it does so depends on the spectral composition of the light. Response models have been developed to predict the attractiveness of artificial light sources. In this study we compared the attraction of insects by existing light

  16. Existence of global solutions for reaction diffusion systems modeling the electrodeposition of alloys with initial data measures

    Directory of Open Access Journals (Sweden)

    Nour Eddine Alaa

    2017-01-01

    Full Text Available In this work, we are interested in the mathematical model of reaction-diffusion systems. The originality of our study is that we work with concentrations appearing in reactors together with initial data that are measures. To validate this model, we prove the existence of global weak solutions. The "j" technique introduced by Pierre and Martin [18] is suitable for this type of solution. However, its adaptation presents some new technical difficulties that we have to overcome.

  17. Benthic-Pelagic Coupling in Biogeochemical and Climate Models: Existing Approaches, Recent developments and Roadblocks

    Science.gov (United States)

    Arndt, Sandra

    2016-04-01

    Marine sediments are key components in the Earth System. They host the largest carbon reservoir on Earth, provide the only long-term sink for atmospheric CO2, recycle nutrients and represent the most important climate archive. Biogeochemical processes in marine sediments are thus essential for our understanding of the global biogeochemical cycles and climate. They are, first and foremost, donor controlled and thus driven by the rain of particulate material from the euphotic zone and influenced by the overlying bottom water. Geochemical species may undergo several recycling loops (e.g. authigenic mineral precipitation/dissolution) before they are either buried or diffuse back to the water column. The tightly coupled and complex pelagic and benthic process interplay thus delays recycling fluxes, significantly modifies the depositional signal and controls the long-term removal of carbon from the ocean-atmosphere system. Despite the importance of this mutual interaction, coupled regional/global biogeochemical models and (paleo)climate models, which are designed to assess and quantify the transformations and fluxes of carbon and nutrients and evaluate their response to past and future perturbations of the climate system, either completely neglect marine sediments or incorporate a highly simplified representation of benthic processes. On the other end of the spectrum, coupled, multi-component state-of-the-art early diagenetic models have been successfully developed and applied over the past decades to reproduce observations and quantify sediment-water exchange fluxes, but cannot easily be coupled to pelagic models. The primary constraint here is the high computational cost of simulating all of the essential redox and equilibrium reactions within marine sediments that control carbon burial and benthic recycling fluxes: a barrier that is easily exacerbated if a variety of benthic environments are to be spatially resolved. This presentation provides an integrative overview of

  18. The Existence and Stability Analysis of the Equilibria in Dengue Disease Infection Model

    Science.gov (United States)

    Anggriani, N.; Supriatna, A. K.; Soewono, E.

    2015-06-01

    In this paper we formulate an SIR (Susceptible - Infective - Recovered) model of Dengue fever transmission with constant recruitment. We found a threshold parameter K0, known as the Basic Reproduction Number (BRN). This model has two equilibria: a disease-free equilibrium and an endemic equilibrium. By constructing a suitable Lyapunov function, we show that the disease-free equilibrium is globally asymptotically stable whenever the BRN is less than one, and that when it is greater than one, the endemic equilibrium is globally asymptotically stable. Numerical results show the dynamics of each compartment, together with the effect of multiple bio-agent interventions as a control on dengue transmission.
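
    As a numerical companion, the sketch below integrates a toy SIR system with constant recruitment and checks the threshold behavior. The one-population form, parameter names, and values are assumptions for illustration; the paper's K0 and its host-vector structure are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy SIR with constant recruitment Lam and natural mortality mu (all values assumed).
Lam, mu, beta, gam = 100.0, 0.01, 2e-4, 0.1

def sir(t, y):
    S, I, R = y
    return [Lam - beta * S * I - mu * S,
            beta * S * I - (gam + mu) * I,
            gam * I - mu * R]

# Threshold quantity for this toy model (disease-free state has S* = Lam/mu):
R0 = beta * (Lam / mu) / (gam + mu)
print(f"basic reproduction number: {R0:.2f}")   # > 1 here, so the endemic state attracts

sol = solve_ivp(sir, (0.0, 2000.0), [Lam / mu - 1.0, 1.0, 0.0])
S, I, R = sol.y[:, -1]
print(f"late-time state: S = {S:.0f}, I = {I:.0f}, R = {R:.0f}")
```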

  19. Comparison of Existing Responsiveness-to-Intervention Models to Identify and Answer Implementation Questions

    Science.gov (United States)

    Burns, Matthew K.; Ysseldyke, James E.

    2005-01-01

    Responsiveness-to-intervention (RTI) is the front-running candidate to replace current practice in diagnosing learning disabilities, but researchers have identified several questions about implementation. Specific questions include: Are there validated intervention models? Are there adequately trained personnel? What leadership is needed? When…

  20. Fatigue assessment of an existing steel bridge by finite element modelling and field measurements

    Science.gov (United States)

    Kwad, J.; Alencar, G.; Correia, J.; Jesus, A.; Calçada, R.; Kripakaran, P.

    2017-05-01

    The evaluation of the fatigue life of structural details in metallic bridges is a major challenge for bridge engineers. A reliable and cost-effective approach is essential to ensure appropriate maintenance and management of these structures. Typically, local stresses predicted by a finite element model of the bridge are employed to assess the fatigue life of fatigue-prone details. This paper illustrates an approach for fatigue assessment based on measured data for a connection in an old bascule steel bridge located in Exeter (UK). A finite element model is first developed from the design information. The finite element model of the bridge is calibrated using measured responses from an ambient vibration test. The stress time histories are calculated through dynamic analysis of the updated finite element model. Stress cycles are computed through the rainflow counting algorithm, and the fatigue-prone details are evaluated using the standard S-N curve approach and Miner’s rule. Results show that the proposed approach can estimate the fatigue damage of a fatigue-prone detail in a structure using measured strain data.

  1. Three-Dimensional Model Test Study on the Existing Caisson Breakwater at Port of Castellon, Spain

    DEFF Research Database (Denmark)

    Nørgaard, Jørgen Harck; Andersen, Thomas Lykke

    This report presents the results of 3-D physical model tests (length scale 1:60) carried out in a wave basin at the Department of Civil Engineering, Aalborg University (AAU) on behalf of the client, BP OIL ESPAÑA. Associate Prof. Thomas Lykke Andersen and M.Sc. Jørgen Quvang Harck Nørgaard were in cha...

  2. Concentric Coplanar Capacitive Sensor System with Quantitative Model

    Science.gov (United States)

    Bowler, Nicola (Inventor); Chen, Tianming (Inventor)

    2014-01-01

    A concentric coplanar capacitive sensor includes a charged central disc forming a first electrode, an outer annular ring coplanar with and outer to the charged central disc, the outer annular ring forming a second electrode, and a gap between the charged central disc and the outer annular ring. The first electrode and the second electrode may be attached to an insulative film. A method provides for determining transcapacitance between the first electrode and the second electrode and using the transcapacitance in a model that accounts for a dielectric test piece to determine inversely the properties of the dielectric test piece.

  3. Quantitative description of realistic wealth distributions by kinetic trading models

    Science.gov (United States)

    Lammoglia, Nelson; Muñoz, Víctor; Rogan, José; Toledo, Benjamín; Zarama, Roberto; Valdivia, Juan Alejandro

    2008-10-01

    Data on wealth distributions in trading markets show a power-law behavior x^-(1+α) at the high end, where, in general, α is greater than 1 (Pareto’s law). Models based on kinetic theory, where a set of interacting agents trade money, yield power-law tails if agents are assigned a saving propensity. In this paper we solve the inverse problem, that is, we find the saving-propensity distribution that yields a given wealth distribution over all wealth ranges. This is done explicitly for two recently published and comprehensive wealth datasets.
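
    The forward problem behind this work, kinetic exchange among agents with quenched saving propensities, can be simulated in a few lines; the paper then inverts this map. The update rule below follows the standard form used in this literature (agents stake their non-saved wealth and split it randomly); the population size, sweep count, and tail-fit heuristic are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(7)
N, sweeps = 1000, 2000                 # small sizes; runs in a few seconds
w = np.ones(N)                         # initial wealth
lam = rng.uniform(0.0, 1.0, N)         # quenched saving propensity per agent

for _ in range(sweeps):
    ii = rng.integers(0, N, N)
    jj = rng.integers(0, N, N)
    ee = rng.random(N)
    for a, b, e in zip(ii, jj, ee):
        if a == b:
            continue
        pot = (1 - lam[a]) * w[a] + (1 - lam[b]) * w[b]   # wealth at stake
        w[a] = lam[a] * w[a] + e * pot                    # total wealth is conserved
        w[b] = lam[b] * w[b] + (1 - e) * pot

# Crude tail-exponent estimate from the survival function of the top decile.
m = N // 10
tail = np.sort(w)[-m:]
logS = np.log(1.0 - (np.arange(N - m, N) + 0.5) / N)
slope = np.polyfit(np.log(tail), logS, 1)[0]
print(f"estimated Pareto exponent: {-slope:.2f}")   # near 1 expected for uniform lam
```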

  4. A two-locus model of spatially varying stabilizing or directional selection on a quantitative trait.

    Science.gov (United States)

    Geroldinger, Ludwig; Bürger, Reinhard

    2014-06-01

    The consequences of spatially varying, stabilizing or directional selection on a quantitative trait in a subdivided population are studied. A deterministic two-locus two-deme model is employed to explore the effects of migration, the degree of divergent selection, and the genetic architecture, i.e., the recombination rate and ratio of locus effects, on the maintenance of genetic variation. The possible equilibrium configurations are determined as functions of the migration rate. They depend crucially on the strength of divergent selection and the genetic architecture. The maximum migration rates below which a stable fully polymorphic equilibrium or a stable single-locus polymorphism can exist are investigated. Under stabilizing selection, but with different optima in the demes, strong recombination may facilitate the maintenance of polymorphism. Usually, however, and in particular with directional selection in opposite directions, the critical migration rates are maximized by a concentrated genetic architecture, i.e., by a major locus and a tightly linked minor one. Thus, complementing previous work on the evolution of genetic architectures in subdivided populations subject to diversifying selection, it is shown that concentrated architectures may aid the maintenance of polymorphism. Conditions are obtained for when this is the case. Finally, the dependence of the phenotypic variance, linkage disequilibrium, and various measures of local adaptation and differentiation on the parameters is elaborated. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Accounting for genetic interactions improves modeling of individual quantitative trait phenotypes in yeast

    OpenAIRE

    Forsberg, Simon K. G.; Bloom, Joshua S.; Sadhu, Meru J.; Kruglyak, Leonid; Carlborg, Örjan

    2017-01-01

    Experiments in model organisms report abundant genetic interactions underlying biologically important traits, whereas quantitative genetics theory predicts, and data support, that most genetic variance in populations is additive. Here we describe networks of capacitating genetic interactions that contribute to quantitative trait variation in a large yeast intercross population. The additive variance explained by individual loci in a network is highly dependent on the allele frequencies of the...

  6. Non-existence of Steady State Equilibrium in the Neoclassical Growth Model with a Longevity Trend

    DEFF Research Database (Denmark)

    Hermansen, Mikkel Nørlem

    Longevity has been increasing in the developed countries for almost two centuries and further increases are expected in the future. In the neoclassical growth models the case of population growth driven by fertility is well-known, whereas the properties of population growth caused by persistently declining mortality rates have received little attention. Furthermore, the economic literature on the consequences of changing longevity has relied almost entirely on analysis applying a once-and-for-all change in the survival probability. This paper raises concern about such an approach of comparing steady state equilibria when considering the empirically observed trend in longevity. We extend a standard continuous-time overlapping generations model by a longevity trend and are thereby able to study the properties of mortality-driven population growth. This turns out to be exceedingly complicated...

  7. The introspective may achieve more: Enhancing existing Geoscientific models with native-language emulated structural reflection

    Science.gov (United States)

    Ji, Xinye; Shen, Chaopeng

    2018-01-01

    Geoscientific models manage myriad and increasingly complex data structures as trans-disciplinary models are integrated. They often incur significant redundancy with cross-cutting tasks. Reflection, the ability of a program to inspect and modify its structure and behavior at runtime, is known as a powerful tool to improve code reusability, abstraction, and separation of concerns. Reflection is rarely adopted in high-performance Geoscientific models, especially with Fortran, where it was previously deemed implausible. Practical constraints of language and legacy often limit us to feather-weight, native-language solutions. We demonstrate the usefulness of a structural-reflection-emulating, dynamically-linked metaObjects, gd. We show real-world examples including data structure self-assembly, effortless input/output (IO) and upgrade to parallel I/O, recursive actions and batch operations. We share gd and a derived module that reproduces MATLAB-like structure in Fortran and C++. We suggest that both a gd representation and a Fortran-native representation are maintained to access the data, each for separate purposes. Embracing emulated reflection allows generically-written codes that are highly re-usable across projects.
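
    gd emulates reflection in Fortran and C++, where it is not native. For readers unfamiliar with the concept, the Python sketch below shows what reflection-driven, type-agnostic I/O looks like in a language with built-in structural reflection; it is an analogy for the idea only, not a port of gd, and the model-state fields are invented.

```python
import json

# One generic routine serializes any model-state object without per-type code,
# by walking the object's attributes at runtime (structural reflection).
class ModelState:
    def __init__(self):
        self.time_step = 0
        self.soil_moisture = [0.31, 0.28, 0.25]
        self.params = {"porosity": 0.45, "k_sat": 1.2e-5}

def generic_dump(obj):
    """Inspect an object's attribute dictionary at runtime and serialize it."""
    return json.dumps(vars(obj), indent=2, default=str)

state = ModelState()
state.time_step = 42          # structure and state can be inspected and modified
print(generic_dump(state))    # no hand-written I/O code per data structure
```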

  8. Bovine meat versus pork in Toxoplasma gondii transmission in Italy: A quantitative risk assessment model.

    Science.gov (United States)

    Belluco, Simone; Patuzzi, Ilaria; Ricci, Antonia

    2018-01-03

    Toxoplasma gondii is a widespread zoonotic parasite with a high seroprevalence in the human population and the ability to infect almost all warm-blooded animals. Humans can acquire toxoplasmosis from different transmission routes, and food plays a critical role. Within the food category, meat is of utmost importance, as it may contain bradyzoites inside tissue cysts, which can potentially cause infection after ingestion if parasites are not inactivated through freezing or cooking before consumption. In Italy, the most commonly consumed meat-producing animal species are bovines and pigs. However, T. gondii prevalence and consumption habits for meat of these animal species are very different. There is debate within the scientific community concerning which of these animal species is the main source of meat-derived human toxoplasmosis. The aim of this work was to build a quantitative risk assessment model to estimate the yearly probability of acquiring toxoplasmosis infection due to consumption of bovine meat and pork (excluding cured products) in Italy, taking into account the different eating habits. The model was fitted with data obtained from the literature regarding bradyzoite concentrations, portion size, the dose-response relation, the prevalence of T. gondii in bovines and swine, meat consumption, and meat preparation habits. Alternative handling scenarios were considered. The model estimated the risk per year of acquiring T. gondii infection in Italy from bovine and swine meat to be 0.034% and 0.019%, respectively. Results suggest that, due to existing eating habits, bovine meat can be a non-negligible source of toxoplasmosis in Italy. Copyright © 2017. Published by Elsevier B.V.
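
    The chain of reasoning in such a quantitative risk assessment (prevalence, dose per portion, dose-response, annual risk) can be sketched as a small Monte Carlo. Every number below is a placeholder assumption rather than an input of this paper, and the exponential dose-response form is a common QMRA choice assumed here for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim = 100_000
servings_per_year = 50                    # assumed servings of bovine meat
prevalence = 0.05                         # assumed fraction of infected carcasses
conc = rng.lognormal(0.0, 1.0, n_sim)     # bradyzoites per gram (assumed)
portion_g = rng.normal(100.0, 20.0, n_sim).clip(min=10.0)
inactivated = rng.random(n_sim) < 0.8     # assumed share killed by cooking/freezing

dose = np.where(inactivated, 0.0, conc * portion_g)
infected_meat = rng.random(n_sim) < prevalence
r = 0.002                                 # exponential dose-response parameter (assumed)
p_serving = np.where(infected_meat, 1.0 - np.exp(-r * dose), 0.0)

# Annual risk from independent servings, each with the mean per-serving risk.
p_year = 1.0 - (1.0 - p_serving.mean()) ** servings_per_year
print(f"estimated yearly infection risk: {p_year:.4%}")
```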

  9. Quantitative Diagnostics of Mixing in a Shallow Water Model of the Stratosphere.

    Science.gov (United States)

    Sobel, Adam H.; Plumb, R. Alan

    1999-08-01

    Two different approaches are applied to quantify mixing in a shallow water model of the stratosphere. These are modified Lagrangian mean (MLM) theory and a technique referred to as `reverse domain filling with local gradient reversal' (RDF-LGR). The latter is similar to a previously existing technique using contour advection and contour surgery. It is first proved that in an inviscid shallow water atmosphere subject to mass sources and sinks, if the mass enclosed by a potential vorticity (PV) contour is steady in time, then the integral of the mass source over the area enclosed by the contour must be zero. Next, the MLM and RDF-LGR approaches are used to diagnose the time-averaged transport across PV contours in the model simulations. The model includes a sixth-order hyperdiffusion on the vorticity field. Except in a thin outer `entrainment zone,' the hyperdiffusion term has only a very weak effect on the MLM mass budget of the polar vortex. In the entrainment zone, the hyperdiffusion term has a significant effect. The RDF-LGR results capture this behavior, providing good quantitative estimates of the hyperdiffusion term, which is equivalent to the degree of radiative disequilibrium at a PV contour. This agreement shows that the main role of the hyperdiffusion is to `mop up' the filaments that are produced by the essentially inviscid large-scale dynamics. All calculations are repeated for two values of the hyperdiffusion coefficient that differ by a factor of 50, with little difference in the results. This suggests that the amount of material entrained from the vortex edge into the surf zone does not depend on the details of the small-scale dissipation, as long as it is sufficiently weak and has some degree of scale selectivity.

  10. A quantitative and dynamic model for plant stem cell regulation.

    Directory of Open Access Journals (Sweden)

    Florian Geier

    Full Text Available Plants maintain pools of totipotent stem cells throughout their entire life. These stem cells are embedded within specialized tissues called meristems, which form the growing points of the organism. The shoot apical meristem of the reference plant Arabidopsis thaliana is subdivided into several distinct domains, which execute diverse biological functions, such as tissue organization, cell proliferation and differentiation. The number of cells required for growth and organ formation changes over the course of a plant's life, while the structure of the meristem remains remarkably constant. Thus, regulatory systems must be in place, which allow for an adaptation of cell proliferation within the shoot apical meristem, while maintaining the organization at the tissue level. To advance our understanding of this dynamic tissue behavior, we measured domain sizes as well as cell division rates of the shoot apical meristem under various environmental conditions, which cause adaptations in meristem size. Based on our results we developed a mathematical model to explain the observed changes by a cell-pool-size-dependent regulation of cell proliferation and differentiation, which is able to correctly predict CLV3 and WUS over-expression phenotypes. While the model shows stem cell homeostasis under constant growth conditions, it predicts a variation in stem cell number under changing conditions. Consistent with our experimental data, this behavior is correlated with variations in cell proliferation. Therefore, we investigate different signaling mechanisms, which could stabilize stem cell number despite variations in cell proliferation. Our results shed light onto the dynamic constraints of stem cell pool maintenance in the shoot apical meristem of Arabidopsis in different environmental conditions and developmental states.

  11. Analysis of magnetic relaxation with pre-existing nucleation sites based on the Fatuzzo-Labrune model

    DEFF Research Database (Denmark)

    Quach, D.; Handoko, D.; Lee, S.

    2015-01-01

    Time-resolved magnetic domain patterns of (Co/Pt) and (CoFeB/Pd) multilayers with perpendicular magnetic anisotropy are observed by means of magneto-optical microscopy, from which magnetic relaxation curves are determined. Interestingly, it has been observed that the relaxation processes, not only from the saturated state but also with pre-existing domains, are well explained based on the Fatuzzo-Labrune model [1, 2]. Full details of the relaxation behavior and of the subsequent microscopic domain patterns evolving from the pre-existing nucleation sites, which originate from the sub-structured magnetic domains, are discussed.

  12. Rapid energy modeling for existing buildings: Testing the business and environmental potential through an experiment at Autodesk

    Energy Technology Data Exchange (ETDEWEB)

    Deodhar, Aniruddha; Stewart, Emma; Young, Rahul; Khan, Haider

    2010-09-15

    Retrofits of existing buildings represent a huge, growing market and an opportunity to achieve some of the most sizable and cost-effective carbon reductions in any sector of the economy. More 'zero energy' and 'carbon neutral' buildings are being conceived daily by combining energy-efficiency measures with renewable-energy technologies. However, for all the progress, the building industry faces technical and cost challenges in identifying the highest-potential retrofit candidates. This presentation investigates one potential solution, a technology-driven workflow called rapid energy modeling, to accelerate and scale the process of analyzing the performance of existing buildings and prioritizing improvements.

  13. Numerical Acoustic Models Including Viscous and Thermal losses: Review of Existing and New Methods

    DEFF Research Database (Denmark)

    Andersen, Peter Risby; Cutanda Henriquez, Vicente; Aage, Niels

    2017-01-01

    This work presents an updated overview of numerical methods including acoustic viscous and thermal losses. Numerical modelling of viscothermal losses has gradually become more important due to the general trend of making acoustic devices smaller. Not including viscothermal acoustic losses in such numerical computations will therefore lead to inaccurate or even wrong results. Both Finite Element Method (FEM) and Boundary Element Method (BEM) formulations are available that incorporate these loss mechanisms. Including viscothermal losses in FEM computations can be computationally very demanding, due ... FEM and BEM methods including viscothermal dissipation are compared and investigated.

  14. A quantitative model of population pressure and its potential use in development planning.

    Science.gov (United States)

    Soemarwoto, O

    1985-12-01

    An attempt is made to develop a quantitative model of the concept of population pressure, using the example of population pressure on land resources in agricultural societies. "The model shows that environmental quality is tied to population growth and that population pressure does not bear relationship with population density." The implications of the findings for development planning are considered. (summary in IND) excerpt

  15. Quantitative hardware prediction modeling for hardware/software co-design

    NARCIS (Netherlands)

    Meeuws, R.J.

    2012-01-01

    Hardware estimation is an important factor in Hardware/Software Co-design. In this dissertation, we present the Quipu Modeling Approach, a high-level quantitative prediction model for HW/SW Partitioning using statistical methods. Our approach uses linear regression between software complexity

  16. Mapping quantitative trait loci in a selectively genotyped outbred population using a mixture model approach

    NARCIS (Netherlands)

    Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van

    1999-01-01

    A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by
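
    The core computation in such a mixture-model approach is an EM loop of the following shape. This stripped-down sketch fits a two-component normal mixture (e.g., two QTL genotype classes) to phenotypes; the pedigree, marker, and selective-genotyping structure of the actual method is omitted, and all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
y = np.concatenate([rng.normal(0.0, 1.0, 300),    # e.g. genotype class 1
                    rng.normal(2.0, 1.0, 200)])   # e.g. genotype class 2

pi, mu1, mu2, sig = 0.5, -1.0, 1.0, 1.0           # crude starting values
for _ in range(200):
    # E-step: posterior probability that each record belongs to component 2
    # (the shared variance makes the normalizing constants cancel).
    d1 = np.exp(-0.5 * ((y - mu1) / sig) ** 2)
    d2 = np.exp(-0.5 * ((y - mu2) / sig) ** 2)
    w = pi * d2 / ((1 - pi) * d1 + pi * d2)
    # M-step: weighted updates of mixing proportion, means, common variance.
    pi = w.mean()
    mu1 = np.sum((1 - w) * y) / np.sum(1 - w)
    mu2 = np.sum(w * y) / np.sum(w)
    sig = np.sqrt(np.sum((1 - w) * (y - mu1) ** 2 + w * (y - mu2) ** 2) / len(y))

print(f"pi = {pi:.2f}, mu1 = {mu1:.2f}, mu2 = {mu2:.2f}, sigma = {sig:.2f}")
```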

  17. Corequisite Model: An Effective Strategy for Remediation in Freshmen Level Quantitative Reasoning Course

    Science.gov (United States)

    Kashyap, Upasana; Mathew, Santhosh

    2017-01-01

    The purpose of this study was to compare students' performances in a freshmen level quantitative reasoning course (QR) under three different instructional models. A cohort of 155 freshmen students was placed in one of the three models: needing a prerequisite course, corequisite (students enroll simultaneously in QR course and a course that…

  18. On the Existence of a Weak Solution of a Half-Cell Model for PEM Fuel Cells

    Directory of Open Access Journals (Sweden)

    Shuh-Jye Chern

    2010-01-01

    Full Text Available A nonlinear boundary value problem (BVP) from the modelling of the transport phenomena in the cathode catalyst layer of a one-dimensional half-cell single-phase model for proton exchange membrane (PEM) fuel cells, derived from the 3D model of Zhou and Liu (2000, 2001), is studied. It is a BVP for a system of three coupled ordinary differential equations of second order. Schauder's fixed point theorem is applied to show the existence of a solution in the Sobolev space H^1.
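
    While the paper's contribution is the existence proof, a BVP of the same shape (three coupled second-order ODEs with six boundary conditions) is easy to explore numerically. The right-hand sides and boundary data below are toy stand-ins, not the PEM half-cell equations.

```python
import numpy as np
from scipy.integrate import solve_bvp

def rhs(x, y):
    # y = [u, u', v, v', w, w']; toy linear couplings u'' = v, v'' = w, w'' = u
    u, du, v, dv, w, dw = y
    return np.vstack([du, v, dv, w, dw, u])

def bc(ya, yb):
    # Six conditions for a sixth-order first-order system (illustrative mixed data)
    return np.array([ya[0] - 1.0,   # u(0) = 1
                     ya[2],         # v(0) = 0
                     ya[4],         # w(0) = 0
                     yb[1],         # u'(1) = 0
                     yb[2] - 0.5,   # v(1) = 0.5
                     yb[5]])        # w'(1) = 0

x = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(rhs, bc, x, np.zeros((6, x.size)))
print(sol.status, sol.message)      # status 0 means the solver converged
```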

  19. Optimality and some of its discontents: successes and shortcomings of existing models for binary decisions.

    Science.gov (United States)

    Holmes, Philip; Cohen, Jonathan D

    2014-04-01

    We review how leaky competing accumulators (LCAs) can be used to model decision making in two-alternative, forced-choice tasks, and we show how they reduce to drift diffusion (DD) processes in special cases. As continuum limits of the sequential probability ratio test, DD processes are optimal in producing decisions of specified accuracy in the shortest possible time. Furthermore, the DD model can be used to derive a speed-accuracy trade-off that optimizes reward rate for a restricted class of two alternative forced-choice decision tasks. We review findings that compare human performance with this benchmark, and we reveal both approximations to and deviations from optimality. We then discuss three potential sources of deviations from optimality at the psychological level--avoidance of errors, poor time estimation, and minimization of the cost of control--and review recent theoretical and empirical findings that address these possibilities. We also discuss the role of cognitive control in changing environments and in modulating exploitation and exploration. Finally, we consider physiological factors in which nonlinear dynamics may also contribute to deviations from optimality. Copyright © 2014 Cognitive Science Society, Inc.
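
    For readers new to DD models, the sketch below simulates the process with the Euler-Maruyama method and compares the resulting accuracy and mean decision time against the standard closed-form expressions for an unbiased starting point. The parameter values are arbitrary.

```python
import numpy as np

# dx = A dt + c dW with absorbing bounds at +/- z and an unbiased start at 0.
rng = np.random.default_rng(11)
A, c, z, dt, n = 0.5, 1.0, 1.0, 1e-3, 5000

x = np.zeros(n)
t = np.zeros(n)
done = np.zeros(n, dtype=bool)
while not done.all():
    live = ~done
    x[live] += A * dt + c * np.sqrt(dt) * rng.standard_normal(live.sum())
    t[live] += dt
    done |= np.abs(x) >= z

acc = np.mean(x >= z)                                   # fraction of "correct" choices
acc_theory = 1.0 / (1.0 + np.exp(-2.0 * A * z / c**2))  # logistic hitting probability
rt_theory = (z / A) * np.tanh(A * z / c**2)             # mean decision time
print(f"accuracy: {acc:.3f} (theory {acc_theory:.3f})")
print(f"mean decision time: {t.mean():.3f} (theory {rt_theory:.3f})")
```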

  20. Influence of stone content on soil hydraulic properties: experimental investigation and test of existing model concepts

    Science.gov (United States)

    Naseri, Mahyar; Richter, Niels; Iden, Sascha C.; Durner, Wolfgang

    2017-04-01

    Rock fragments in soil, in this contribution referred to as "stones", play an important role for water flow in the subsurface. To successfully model soil hydraulic processes such as evaporation, redistribution and drainage, an understanding of how stones affect soil hydraulic properties (SHP) is crucial. Past investigations on the role of stones in soil have focused on their influence on the water retention curve (WRC) and on saturated hydraulic conductivity Ks, and have led to some simple theoretical models for the influence of stones on effective SHP. However, studies that measure unsaturated SHP directly, i.e., simultaneously the WRC and hydraulic conductivity curve (HCC) are still missing. Also, studies so far were restricted to low or moderate stone contents of less than 40%. We conducted a laboratory study in which we examined the effect of stone content on effective WRC and HCC of stony soils. Mixtures of soil and stones were generated by substituting background soil with stones in weight fractions between 0% (fine material only) to 100% (pure gravel). Stone sizes were 2-5 mm and 7-15 mm, respectively, and background soils were Sand and Sandy Loam. Packed samples were fully saturated under vacuum and subsequently subjected to evaporation in the laboratory. All experiments were done in three replicates. The soil hydraulic properties were determined by the simplified evaporation method using the UMS HYPROP setup. Questions were whether the applied measurement methodology is applicable to derive the SHP of the mixtures and how the gradual increase of stone content will affect the SHP, particularly the HCC. The applied methodology was successful in identifying effective SHP with a high precision over the full moisture range. WRC and HCC were successfully obtained by HYPROP, even for pure gravel with a size of 7-15 mm. WRCs changed qualitatively in the expected manner, i.e., an increase of stone content reduced porosity and soil water content at all suctions

  1. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  2. Post-hoc pattern-oriented testing and tuning of an existing large model: lessons from the field vole.

    Directory of Open Access Journals (Sweden)

    Christopher J Topping

    Full Text Available Pattern-oriented modeling (POM) is a general strategy for modeling complex systems. In POM, multiple patterns observed at different scales and hierarchical levels are used to optimize model structure, to test and select sub-models of key processes, and for calibration. So far, POM has been used for developing new models and for models of low to moderate complexity. It remains unclear, though, whether the basic idea of POM to utilize multiple patterns could also be used to test and possibly develop existing and established models of high complexity. Here, we use POM to test, calibrate, and further develop an existing agent-based model of the field vole (Microtus agrestis), which was developed and tested within the ALMaSS framework. This framework is complex because it includes a high-resolution representation of the landscape and its dynamics, of the individual's behavior, and of the interaction between landscape and individual behavior. Results of fitting to the range of patterns chosen were generally very good, but the procedure required to achieve this was long and complicated. To obtain good correspondence between model and the real world it was often necessary to model the real-world environment closely. We therefore conclude that post-hoc POM is a useful and viable way to test a highly complex simulation model, but also warn against the dangers of over-fitting to real-world patterns that lack details in their explanatory driving factors. To overcome some of these obstacles we suggest the adoption of open-science and open-source approaches to ecological simulation modeling.

  3. The effects of the overline running model of the high-speed trains on the existing lines

    Science.gov (United States)

    Qian, Yong-Sheng; Zeng, Jun-Wei; Zhang, Xiao-Long; Wang, Jia-Yuan; Lv, Ting-Ting

    2016-09-01

    This paper studies the effects on an existing railway of high-speed trains (216 km/h) running over it. The influence of the transportation organization mode of the existing railway on its carrying capacity is also analyzed under different parking modes of the high-speed trains. To further study train departure intervals, average speed, and delays, an automaton model covering these aspects is established. The results of this research could serve as theoretical references for newly built high-speed railways.

  4. Existence of a metallic phase in a 1D Holstein-Hubbard model at half filling

    Energy Technology Data Exchange (ETDEWEB)

    Krishna, Phani Murali [School of Physics, University of Hyderabad, Hyderabad 500046 (India)]; Chatterjee, Ashok [Department of Physics, Bilkent University, 06800 Bilkent, Ankara (Turkey)]. E-mail: ashok@fen.bilkent.edu.tr

    2007-06-15

    The one-dimensional half-filled Holstein-Hubbard model is studied using a series of canonical transformations that include a phonon coherence effect partly dependent on and partly independent of the electron density, incorporating on-site and nearest-neighbour phonon correlations as well as the exact Bethe-ansatz solution of Lieb and Wu. It is shown that choosing a better variational phonon state makes the polarons more mobile and widens the intermediate metallic region at the charge-density-wave-spin-density-wave crossover recently predicted by Takada and Chatterjee. The presence of this metallic phase is indeed a favourable situation from the point of view of high-temperature superconductivity.

  5. Accelerated aging exacerbates a pre-existing pathology in a tau transgenic mouse model.

    Science.gov (United States)

    Bodea, Liviu-Gabriel; Evans, Harrison Tudor; Van der Jeugd, Ann; Ittner, Lars M; Delerue, Fabien; Kril, Jillian; Halliday, Glenda; Hodges, John; Kiernan, Mathew C; Götz, Jürgen

    2017-04-01

    Age is a critical factor in the prevalence of tauopathies, including Alzheimer's disease. To observe how an aging phenotype interacts with and affects the pathological intracellular accumulation of hyperphosphorylated tau, the tauopathy mouse model pR5 (expressing P301L mutant human tau) was back-crossed more than ten times onto a senescence-accelerated SAMP8 background to establish the new strain, SApT. Unlike SAMP8 mice, pR5 mice are characterized by a robust tau pathology particularly in the amygdala and hippocampus. Analysis of age-matched SApT mice revealed that pathological tau phosphorylation was increased in these brain regions compared to those in the parental pR5 strain. Moreover, as revealed by immunohistochemistry, phosphorylation of critical tau phospho-epitopes (P-Ser202/P-Ser205 and P-Ser235) was significantly increased in the amygdala of SApT mice in an age-dependent manner, suggesting an age-associated effect of tau phosphorylation. Anxiety tests revealed that the older cohort of SApT mice (10 months vs. 8 months) exhibited a behavioural pattern similar to that observed for age-matched tau transgenic pR5 mice and not the SAMP8 parental mice. Learning and memory, however, appeared to be governed by the accelerated aging background of the SAMP8 strain, as at both ages investigated, SAMP8 and SApT mice showed a decreased learning capacity compared to pR5 mice. We therefore conclude that accelerated aging exacerbates pathological tau phosphorylation, leading to changes in normal behaviour. These findings further suggest that SApT mice may be a useful novel model in which to study the role of a complex geriatric phenotype in tauopathy. © 2017 The Authors. Aging Cell published by the Anatomical Society and John Wiley & Sons Ltd.

  6. Quantitative plant resistance in cultivar mixtures: wheat yellow rust as a modeling case study.

    Science.gov (United States)

    Sapoukhina, Natalia; Paillard, Sophie; Dedryver, Françoise; de Vallavieille-Pope, Claude

    2013-11-01

    Unlike qualitative plant resistance, which confers immunity to disease, quantitative resistance confers only a reduction in disease severity and this can be nonspecific. Consequently, the outcome of its deployment in cultivar mixtures is not easy to predict, as on the one hand it may reduce the heterogeneity of the mixture, but on the other it may induce competition between nonspecialized strains of the pathogen. To clarify the principles for the successful use of quantitative plant resistance in disease management, we built a parsimonious model describing the dynamics of competing pathogen strains spreading through a mixture of cultivars carrying nonspecific quantitative resistance. Using the parameterized model for a wheat-yellow rust system, we demonstrate that a more effective use of quantitative resistance in mixtures involves reinforcing the effect of the highly resistant cultivars rather than replacing them. We highlight the fact that the judicious deployment of the quantitative resistance in two- or three-component mixtures makes it possible to reduce disease severity using only small proportions of the highly resistant cultivar. Our results provide insights into the effects on pathogen dynamics of deploying quantitative plant resistance, and can provide guidance for choosing appropriate associations of cultivars and optimizing diversification strategies. © 2013 INRA. New Phytologist © 2013 New Phytologist Trust.

  7. Sustainable deployment of QTLs conferring quantitative resistance to crops: first lessons from a stochastic model.

    Science.gov (United States)

    Bourget, Romain; Chaumont, Loïc; Durel, Charles-Eric; Sapoukhina, Natalia

    2015-05-01

    Quantitative plant disease resistance is believed to be more durable than qualitative resistance, since it exerts less selective pressure on the pathogens. However, the process of progressive pathogen adaptation to quantitative resistance is poorly understood, which makes it difficult to predict its durability or to derive principles for its sustainable deployment. Here, we study the dynamics of pathogen adaptation in response to quantitative plant resistance affecting pathogen reproduction rate and its colonizing capacity. We developed a stochastic model for the continuous evolution of a pathogen population within a quantitatively resistant host. We assumed that pathogen can adapt to a host by the progressive restoration of reproduction rate or of colonizing capacity, or of both. Our model suggests that a combination of quantitative trait loci (QTLs) affecting distinct pathogen traits was more durable if the evolution of repressed traits was antagonistic. Otherwise, quantitative resistance that depressed only pathogen reproduction was more durable. In order to decelerate the progressive pathogen adaptation, QTLs that decrease the pathogen's maximum capacity to colonize must be combined with QTLs that decrease the spore production per lesion or the infection efficiency or that increase the latent period. Our theoretical framework can help breeders to develop principles for sustainable deployment of QTLs. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.

  8. Modeling approaches for qualitative and semi-quantitative analysis of cellular signaling networks.

    Science.gov (United States)

    Samaga, Regina; Klamt, Steffen

    2013-06-26

    A central goal of systems biology is the construction of predictive models of bio-molecular networks. Cellular networks of moderate size have been modeled successfully in a quantitative way based on differential equations. However, in large-scale networks, knowledge of mechanistic details and kinetic parameters is often too limited to allow for the set-up of predictive quantitative models. Here, we review methodologies for qualitative and semi-quantitative modeling of cellular signal transduction networks. In particular, we focus on three different but related formalisms facilitating modeling of signaling processes with different levels of detail: interaction graphs, logical/Boolean networks, and logic-based ordinary differential equations (ODEs). Albeit the simplest models possible, interaction graphs allow the identification of important network properties such as signaling paths, feedback loops, or global interdependencies. Logical or Boolean models can be derived from interaction graphs by constraining the logical combination of edges. Logical models can be used to study the basic input-output behavior of the system under investigation and to analyze its qualitative dynamic properties by discrete simulations. They also provide a suitable framework to identify proper intervention strategies enforcing or repressing certain behaviors. Finally, as a third formalism, Boolean networks can be transformed into logic-based ODEs enabling studies on essential quantitative and dynamic features of a signaling network, where time and states are continuous. We describe and illustrate key methods and applications of the different modeling formalisms and discuss their relationships. In particular, as one important aspect for model reuse, we will show how these three modeling approaches can be combined to a modeling pipeline (or model hierarchy) allowing one to start with the simplest representation of a signaling network (interaction graph), which can later be refined to logical
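
    As a minimal instance of the logical formalism discussed here, the sketch below defines a toy three-node Boolean network with an invented negative-feedback rule and iterates it synchronously until its attractor (an oscillation) appears. The rules are illustrative only, not taken from the review.

```python
# Toy Boolean model of a three-node signaling motif: ligand -> receptor ->
# transcription factor, with the factor feeding back to repress the receptor.
def step(state):
    L, R, TF = state["L"], state["R"], state["TF"]
    return {
        "L": L,                 # input held fixed
        "R": L and not TF,      # receptor active unless feedback represses it
        "TF": R,                # transcription factor follows receptor activity
    }

state = {"L": True, "R": False, "TF": False}
seen = []
for _ in range(8):              # synchronous updates
    seen.append(tuple(state.values()))
    state = step(state)
print(seen)                     # settles into a length-4 oscillation (negative feedback)
```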

  9. Chemotaxis-fluid coupled model for swimming bacteria with nonlinear diffusion: Global existence and asymptotic behavior

    KAUST Repository

    Markowich, Peter

    2010-06-01

    We study the system

        c_t + u·∇c = Δc - nf(c),
        n_t + u·∇n = Δn^m - ∇·(nχ(c)∇c),
        u_t + u·∇u + ∇P - ηΔu + n∇φ = 0,
        ∇·u = 0,

    arising in the modelling of the motion of swimming bacteria under the effect of diffusion, oxygen-taxis and transport through an incompressible fluid. The novelty with respect to previous papers in the literature lies in the presence of nonlinear porous-medium-like diffusion in the equation for the density n of the bacteria, motivated by a finite size effect. We prove that, under the constraint m ∈ (3/2, 2] for the adiabatic exponent, such a system features global in time solutions in two space dimensions for large data. Moreover, in the case m = 2 we prove that solutions converge to constant states in the large-time limit. The proofs rely on standard energy methods and on a basic entropy estimate which cannot be achieved in the case m = 1. The case m = 2 is very special, as we can provide a Lyapunov functional. We generalize our results to the three-dimensional case and obtain a smaller range of exponents m ∈ (m*, 2] with m* > 3/2, due to the use of classical Sobolev inequalities.
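
    (A minimal numerical sketch of the nonlinear porous-medium diffusion term alone, n_t = (n^m)_xx in one dimension, which is what distinguishes this model (m > 1) from classical linear diffusion (m = 1). The grid, time step, and initial bump are arbitrary choices, and the chemotaxis and fluid couplings are deliberately left out.)

        import numpy as np

        m = 2.0
        nx, dx, dt, steps = 200, 0.05, 1e-4, 20000
        x = (np.arange(nx) - nx / 2) * dx
        n = np.where(np.abs(x) < 1.0, 1.0 - np.abs(x), 0.0)  # compactly supported bump

        for _ in range(steps):
            w = n ** m
            lap = (np.roll(w, 1) - 2 * w + np.roll(w, -1)) / dx**2
            n = n + dt * lap                 # explicit Euler step for n_t = (n^m)_xx
            n[0] = n[-1] = 0.0               # homogeneous Dirichlet boundary

        # For m > 1 the support spreads with finite speed; mass is ~conserved.
        print("total mass:", n.sum() * dx, "support width:", (n > 1e-6).sum() * dx)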

  10. Improving Education in Medical Statistics: Implementing a Blended Learning Model in the Existing Curriculum.

    Science.gov (United States)

    Milic, Natasa M; Trajkovic, Goran Z; Bukumiric, Zoran M; Cirkovic, Andja; Nikolic, Ivan M; Milin, Jelena S; Milic, Nikola V; Savic, Marko D; Corac, Aleksandar M; Marinkovic, Jelena M; Stanisavljevic, Dejana M

    2016-01-01

    Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. This was a prospective study conducted with third year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed (440 of 545) the final exam of the obligatory introductory statistics course during 2013-14. Student statistics achievements were stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Mean exam scores for the blended learning student group were higher than for the on-site student group for both final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023) with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality (p<0.001). This study provides empirical evidence to support educator decisions to implement different learning environments for teaching medical statistics to undergraduate medical students. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective, and efficient alternative to traditional classroom training in medical statistics.

  11. Modelling the existing Irish energy-system to identify future energy costs and the maximum wind penetration feasible

    DEFF Research Database (Denmark)

    Connolly, D.; Lund, Henrik; Mathiesen, Brian Vad

    2010-01-01

    energy-system to future energy costs by considering future fuel prices, CO2 prices, and different interest rates. The final investigation identifies the maximum wind penetration feasible on the 2007 Irish energy-system from a technical and economic perspective, as wind is the most promising fluctuating...... renewable resource available in Ireland. It is concluded that the reference model simulates the Irish energy-system accurately, the annual fuel costs for Ireland’s energy could increase by approximately 58% from 2007 to 2020 if a business-as-usual scenario is followed, and the optimum wind penetration...... for the existing Irish energy-system is approximately 30% from both a technical and economic perspective based on 2020 energy prices. Future studies will use the model developed in this study to show that higher wind penetrations can be achieved if the existing energy-system is modified correctly. Finally...

  12. Improving Education in Medical Statistics: Implementing a Blended Learning Model in the Existing Curriculum.

    Directory of Open Access Journals (Sweden)

    Natasa M Milic

    Full Text Available Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. This was a prospective study conducted with third year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed (440 of 545) the final exam of the obligatory introductory statistics course during 2013-14. Student statistics achievements were stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Mean exam scores for the blended learning student group were higher than for the on-site student group for both final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023) with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality (p<0.001). This study provides empirical evidence to support educator decisions to implement different learning environments for teaching medical statistics to undergraduate medical students. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective, and efficient alternative to traditional classroom training in medical statistics.

  13. Evaluation of Modeled and Measured Energy Savings in Existing All Electric Public Housing in the Pacific Northwest

    Energy Technology Data Exchange (ETDEWEB)

    Gordon, A.; Lubliner, M.; Howard, L.; Kunkle, R.; Salzberg, E.

    2014-04-01

    This project analyzes the cost effectiveness of energy savings measures installed by a large public housing authority in Salishan, a community in Tacoma, Washington. Research focuses on the modeled and measured energy usage of the first six phases of construction, and compares the energy usage of those phases to phase 7. Market-ready energy solutions were also evaluated to improve the efficiency of new and existing (built since 2001) affordable housing in the marine climate of Washington State.

  14. Quantitative structure-interplanar spacing models based on montmorillonite modified with quaternary alkylammonium salts

    Science.gov (United States)

    Grigorev, V. Yu.; Grigoreva, L. D.; Salimov, I. E.

    2017-08-01

    Models of the quantitative structure-property relationship (QSPR) between the structure of 19 alkylammonium cations and the basal distances (d001) of Na+ montmorillonite modified with these cations are created. Seven descriptors characterizing intermolecular interaction, including new fractal descriptors, are used to describe the structure of the compounds. It is shown that equations obtained via multiple linear regression have good statistical characteristics, and the calculated d001 values agree with the results from experimental studies. The quantitative contribution from hydrogen bonds to the formation of interplanar spacing in Na+ montmorillonite is found by analyzing the QSPR models.
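
    (A minimal sketch of the QSPR workflow the abstract describes: multiple linear regression of d001 on a descriptor matrix. The 19x7 descriptor values and d001 responses below are synthetic placeholders, not the paper's data.)

        import numpy as np

        rng = np.random.default_rng(0)
        n_cations, n_desc = 19, 7
        X = rng.normal(size=(n_cations, n_desc))            # descriptor values
        true_w = rng.normal(size=n_desc)
        d001 = X @ true_w + 14.0 + rng.normal(scale=0.1, size=n_cations)  # angstroms

        A = np.column_stack([np.ones(n_cations), X])        # add intercept column
        coef, *_ = np.linalg.lstsq(A, d001, rcond=None)     # ordinary least squares
        pred = A @ coef
        ss_res = np.sum((d001 - pred) ** 2)
        ss_tot = np.sum((d001 - d001.mean()) ** 2)
        print("R^2 =", 1 - ss_res / ss_tot)                 # goodness of fit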

  15. The Power of a Good Idea: Quantitative Modeling of the Spread of Ideas from Epidemiological Models

    Energy Technology Data Exchange (ETDEWEB)

    Bettencourt, L. M. A. (LANL); Cintron-Arias, A. (Cornell University); Kaiser, D. I. (MIT); Castillo-Chavez, C. (Arizona State University)

    2005-05-05

    The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence, this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to quantitatively estimate population-averaged parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. We estimate the effectiveness of adoption of the idea in the three communities and find values for parameters reflecting both intentional social organization and long lifetimes for the idea. These features are probably general characteristics of the spread of ideas, but not of common epidemics.
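
    (A sketch of the kind of epidemiological model being applied: an SEIR-style compartment model in which "infection" is adoption of an idea after an incubation period. The parameter values and population size are illustrative, not the values fitted to the Feynman-diagram data.)

        import numpy as np
        from scipy.integrate import solve_ivp

        # S: unexposed, E: incubating the idea, I: active adopters spreading it,
        # R: no longer spreading. All rates are per unit time and illustrative.
        beta, sigma, gamma, N = 0.3, 0.1, 0.02, 1000.0

        def rhs(t, y):
            S, E, I, R = y
            return [-beta * S * I / N,               # contact/transmission
                    beta * S * I / N - sigma * E,    # incubation of the idea
                    sigma * E - gamma * I,           # adoption -> active spreading
                    gamma * I]                       # loss of spreading activity

        sol = solve_ivp(rhs, (0, 400), [N - 1, 0, 1, 0], dense_output=True)
        t = np.linspace(0, 400, 5)
        print(np.round(sol.sol(t)[2], 1))            # active adopters over time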

  16. A Quantitative Geochemical Target for Modeling the Formation of the Earth and Moon

    Science.gov (United States)

    Boyce, Jeremy W.; Barnes, Jessica J.; McCubbin, Francis M.

    2017-01-01

    The past decade has been one of geochemical, isotopic, and computational advances that are bringing the laboratory measurements and computational modeling neighborhoods of the Earth-Moon community into ever closer proximity. We are now, however, in a position to become even better neighbors: modelers can generate testable hypotheses for geochemists, and geochemists can provide quantitative targets for modelers. Here we present a robust example of the latter based on Cl isotope measurements of mare basalts.

  17. Existence and regularity of solutions of a phase field model for solidification with convection of pure materials in two dimensions

    Directory of Open Access Journals (Sweden)

    Jose Luiz Boldrini

    2003-11-01

    Full Text Available We study the existence and regularity of weak solutions of a phase field type model for pure material solidification in the presence of natural convection. We assume that the non-stationary solidification process occurs in a two-dimensional bounded domain. The governing equations of the model are the phase field equation coupled with a nonlinear heat equation and a modified Navier-Stokes equation. These equations include buoyancy forces modelled by the Boussinesq approximation and a Carman-Kozeny term to model the flow in mushy regions. Since these modified Navier-Stokes equations only hold in the non-solid regions, which are not known a priori, we have a free boundary-value problem.

  18. A quantitative analysis to objectively appraise drought indicators and model drought impacts

    Science.gov (United States)

    Bachmair, S.; Svensson, C.; Hannaford, J.; Barker, L. J.; Stahl, K.

    2016-07-01

    coverage. The predictions also provided insights into the EDII, in particular highlighting drought events where missing impact reports may reflect a lack of recording rather than true absence of impacts. Overall, the presented quantitative framework proved to be a useful tool for evaluating drought indicators, and to model impact occurrence. In summary, this study demonstrates the information gain for drought monitoring and early warning through impact data collection and analysis. It highlights the important role that quantitative analysis with impact data can have in providing "ground truth" for drought indicators, alongside more traditional stakeholder-led approaches.

  19. Testing the influence of vertical, pre-existing joints on normal faulting using analogue and 3D discrete element models (DEM)

    Science.gov (United States)

    Kettermann, Michael; von Hagke, Christoph; Virgo, Simon; Urai, Janos L.

    2015-04-01

    Brittle rocks are often affected by different generations of fractures that influence each other. We study pre-existing vertical joints followed by a faulting event. Understanding the effect of these interactions on fracture/fault geometries, as well as the development of dilatancy and the formation of cavities as potential fluid pathways, is crucial for reservoir quality prediction and production. Our approach combines scaled analogue and numerical modeling. Using cohesive hemihydrate powder allows us to create open fractures prior to faulting. The physical models are reproduced using the ESyS-Particle discrete element modeling (DEM) software, and different parameters are investigated. Analogue models were carried out in a manually driven deformation box (30x28x20 cm) with a 60° dipping pre-defined basement fault and 4.5 cm of displacement. To produce open joints prior to faulting, sheets of paper were mounted in the box to a depth of 5 cm at a spacing of 2.5 cm. Powder was then sieved into the box, embedding the paper almost entirely (column height of 19 cm), and the paper was removed. We tested the influence of different angles between the strike of the basement fault and the joint set (0°, 4°, 8°, 12°, 16°, 20°, and 25°). During deformation we captured structural information by time-lapse photography that allows particle image velocimetry (PIV) analyses to detect localized deformation at every increment of displacement. Post-mortem photogrammetry preserves the final 3-dimensional structure of the fault zone. We observe that no faults or fractures occur parallel to basement-fault strike. Secondary fractures are mostly oriented normal to primary joints. At the final stage of the experiments we analyzed semi-quantitatively the number of connected joints, the number of secondary fractures, the degree of segmentation (i.e. number of joints accommodating strain), the damage zone width, and the map-view area fraction of open gaps. Whereas the area fraction does not change

  20. Quantitative modeling of Escherichia coli chemotactic motion in environments varying in space and time.

    Directory of Open Access Journals (Sweden)

    Lili Jiang

    2010-04-01

    Full Text Available Escherichia coli chemotactic motion in spatiotemporally varying environments is studied by using a computational model based on a coarse-grained description of the intracellular signaling pathway dynamics. We find that the cell's chemotaxis drift velocity v_d is a constant in an exponential attractant concentration gradient [L] ∝ exp(Gx). v_d depends linearly on the exponential gradient G before it saturates when G is larger than a critical value G_C. We find that G_C is determined by the intracellular adaptation rate k_R with a simple scaling law: G_C ∝ k_R^(1/2). The linear dependence of v_d on G = d(ln[L])/dx directly demonstrates E. coli's ability to sense the derivative of the logarithmic attractant concentration. The existence of the limiting gradient G_C and its scaling with k_R are explained by the underlying intracellular adaptation dynamics and the flagellar motor response characteristics. For individual cells, we find that the overall average run length in an exponential gradient is longer than that in a homogeneous environment, which is caused by the constant kinase activity shift (decrease). The forward runs (up the gradient) are longer than the backward runs, as expected; and depending on the exact gradient, the (shorter) backward runs can be comparable to runs in a spatially homogeneous environment, consistent with previous experiments. In spatial ligand gradients that also vary in time, the chemotaxis motion is damped as the frequency ω of the time-varying spatial gradient becomes faster than a critical value ω_c, which is controlled by the cell's chemotaxis adaptation rate k_R. Finally, our model, with no adjustable parameters, agrees quantitatively with the classical capillary assay experiments where the attractant concentration changes both in space and time. Our model can thus be used to study E. coli chemotaxis behavior in arbitrary spatiotemporally varying environments. Further experiments are
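
    (A caricature in code of the run-and-tumble logic behind the drift result: a 1D population whose tumble rate is biased by the perceived rate of change of ln[L], so that the drift depends on G = d(ln[L])/dx. This is not the paper's pathway-based model; all parameters are illustrative.)

        import numpy as np

        rng = np.random.default_rng(1)
        v, lam0, chi, G = 20.0, 1.0, 0.3, 0.02   # um/s, 1/s, bias strength, 1/um
        dt, steps, cells = 0.01, 50000, 200

        x = np.zeros(cells)
        direction = rng.choice([-1.0, 1.0], size=cells)
        for _ in range(steps):
            # tumble more rarely when moving up the gradient (direction*v*G > 0)
            lam = lam0 * (1.0 - chi * direction * v * G)
            tumble = rng.random(cells) < np.clip(lam, 0.0, None) * dt
            direction = np.where(tumble, rng.choice([-1.0, 1.0], size=cells), direction)
            x += direction * v * dt

        print("drift velocity v_d ~", x.mean() / (steps * dt), "um/s")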

  1. Flood protection effect of the existing and projected reservoirs in the Amur River basin: evaluation by the hydrological modeling system

    Directory of Open Access Journals (Sweden)

    Y. Motovilov

    2015-06-01

    Full Text Available A hydrological modeling system was developed as a tool to support flood risk management with the existing and projected reservoirs in the Amur River basin. The system includes the physically-based semi-distributed runoff generation model ECOMAG coupled with the hydrodynamic model MIKE-11 to simulate channel flow in the main river. The case study was carried out for the middle part of the Amur River, where large reservoirs are located on the Zeya and Bureya Rivers. The models were calibrated and validated using streamflow measurements at different gauges of the main river and its tributaries. Numerical experiments were carried out to assess the effect of regulation by the existing Zeya and Bureya reservoirs on an 850-km stretch of the middle Amur River stage. It was shown that in the absence of the reservoirs, the water levels downstream of the Zeya and Bureya Rivers would be 0.5–1.5 m higher than the levels measured during the disastrous flood of 2013. Similar experiments were carried out to assess the possible flood protection effect of new projected reservoirs on the Zeya and Bureya Rivers.

  2. Flood protection effect of the existing and projected reservoirs in the Amur River basin: evaluation by the hydrological modeling system

    Science.gov (United States)

    Motovilov, Y.; Danilov-Danilyan, V.; Dod, E.; Kalugin, A.

    2015-06-01

    A hydrological modeling system was developed as a tool to support flood risk management with the existing and projected reservoirs in the Amur River basin. The system includes the physically-based semi-distributed runoff generation model ECOMAG coupled with the hydrodynamic model MIKE-11 to simulate channel flow in the main river. The case study was carried out for the middle part of the Amur River, where large reservoirs are located on the Zeya and Bureya Rivers. The models were calibrated and validated using streamflow measurements at different gauges of the main river and its tributaries. Numerical experiments were carried out to assess the effect of regulation by the existing Zeya and Bureya reservoirs on an 850-km stretch of the middle Amur River stage. It was shown that in the absence of the reservoirs, the water levels downstream of the Zeya and Bureya Rivers would be 0.5-1.5 m higher than the levels measured during the disastrous flood of 2013. Similar experiments were carried out to assess the possible flood protection effect of new projected reservoirs on the Zeya and Bureya Rivers.

  3. The role of pre-existing disturbances in the effect of marine reserves on coastal ecosystems: a modelling approach.

    Directory of Open Access Journals (Sweden)

    Marie Savina

    Full Text Available We have used an end-to-end ecosystem model to explore responses over 30 years to coastal no-take reserves covering up to 6% of the fifty thousand square kilometres of continental shelf and slope off the coast of New South Wales (Australia). The model is based on the Atlantis framework, which includes a deterministic, spatially resolved three-dimensional biophysical model that tracks nutrient flows through key biological groups, as well as extraction by a range of fisheries. The model results support previous empirical studies in finding clear benefits of reserves to top predators such as sharks and rays throughout the region, while also showing how many of their major prey groups (including commercial species) experienced significant declines. It was found that the net impact of marine reserves was dependent on the pre-existing levels of disturbance (i.e. fishing pressure), and to a lesser extent on the size of the marine reserves. The high fishing scenario resulted in a strongly perturbed system, where the introduction of marine reserves had clear and mostly direct effects on biomass and functional biodiversity. However, under the lower fishing pressure scenario, the introduction of marine reserves caused both direct positive effects, mainly on shark groups, and indirect negative effects through trophic cascades. Our study illustrates the need to carefully align the design and implementation of marine reserves with policy and management objectives. Trade-offs may exist not only between fisheries and conservation objectives, but also among conservation objectives.

  4. Supporting the Constructive Use of Existing Hydrological Models in Participatory Settings: a Set of "Rules of the Game"

    Directory of Open Access Journals (Sweden)

    Pieter W. G. Bots

    2011-06-01

    Full Text Available When hydrological models are used in support of water management decisions, stakeholders often contest these models because they perceive certain aspects to be inadequately addressed. A strongly contested model may be abandoned completely, even when stakeholders could potentially agree on the validity of part of the information it can produce. The development of a new model is costly, and the results may be contested again. We consider how existing hydrological models can be used in a policy process so as to benefit from both hydrological knowledge and the perspectives and local knowledge of stakeholders. We define a code of conduct as a set of "rules of the game" that we base on a case study of developing a water management plan for a Natura 2000 site in the Netherlands. We propose general rules for agenda management and information sharing, and more specific rules for model use and option development. These rules structure the interactions among actors, help them to explicitly acknowledge uncertainties, and prevent expertise from being neglected or overlooked. We designed the rules to favor openness, protection of core stakeholder values, the use of relevant substantive knowledge, and the momentum of the process. We expect that these rules, although developed on the basis of a water-management issue, can also be applied to support the use of existing computer models in other policy domains. As rules will shape actions only when they are constantly affirmed by actors, we expect that the rules will become less useful in an "unruly" social environment where stakeholders constantly challenge the proceedings.

  5. Influence of weathering and pre-existing large scale fractures on gravitational slope failure: insights from 3-D physical modelling

    Directory of Open Access Journals (Sweden)

    D. Bachmann

    2004-01-01

    Full Text Available Using a new 3-D physical modelling technique we investigated the initiation and evolution of large scale landslides in the presence of pre-existing large scale fractures, taking into account the weakening of the slope material due to alteration/weathering. The modelling technique is based on specially developed, properly scaled analogue materials, as well as on an original vertical accelerator device enabling increases in the 'gravity acceleration' up to a factor of 50. The weathering primarily affects the uppermost layers through water circulation. We simulated the effect of this process by making models of two parts. The shallower part represents the zone subject to homogeneous weathering and is made of a low strength material of compressive strength σ_l. The deeper (core) part of the model is stronger and simulates intact rocks. Deformation of such a model subjected to the gravity force occurred only in its upper (low strength) layer. In another set of experiments, narrow planar zones of low strength σ_w, sub-parallel to the slope surface (σ_w < σ_l), were introduced into the model's superficial low strength layer to simulate localized highly weathered zones. In this configuration landslides were initiated much more easily (at lower 'gravity force'), were shallower, and had a smaller horizontal size largely defined by the weak zone size. Pre-existing fractures were introduced into the model by cutting it along a given plane. They proved to have little influence on slope stability, except when they were associated with highly weathered zones. In this latter case the fractures laterally limited the slides. The initiation of deep seated rockslides is thus directly defined by the mechanical structure of the hillslope's uppermost levels, and especially by the presence of weak zones due to weathering. The large scale fractures play a more passive role and can only influence the shape and the volume of the sliding units.

  6. From Tls Point Clouds to 3d Models of Trees: a Comparison of Existing Algorithms for 3d Tree Reconstruction

    Science.gov (United States)

    Bournez, E.; Landes, T.; Saudreau, M.; Kastendeuch, P.; Najjar, G.

    2017-02-01

    3D models of tree geometry are important for numerous studies, such as for urban planning or agricultural studies. In climatology, tree models can be necessary for simulating the cooling effect of trees by estimating their evapotranspiration. The literature shows that the more accurate the 3D structure of a tree is, the more accurate microclimate models are. This is the reason why, since 2013, we have been developing an algorithm for the reconstruction of trees from terrestrial laser scanner (TLS) data, which we call TreeArchitecture. Meanwhile, new promising algorithms dedicated to tree reconstruction have emerged in the literature. In this paper, we assess the capacity of our algorithm and of two others, PlantScan3D and SimpleTree, to reconstruct the 3D structure of trees. The aim of this reconstruction is to be able to characterize the geometric complexity of trees, with different heights, sizes and shapes of branches. Based on a specific surveying workflow with a TLS, we have acquired dense point clouds of six different urban trees, with specific architectures, before reconstructing them with each algorithm. Finally, qualitative and quantitative assessments of the models are performed using reference tree reconstructions and field measurements. Based on this assessment, the advantages and the limits of every reconstruction algorithm are highlighted. Nevertheless, very satisfactory results can be achieved for 3D reconstructions of tree topology as well as of tree volume.

  7. Quantitative non-monotonic modeling of economic uncertainty by probability and possibility distributions

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2012-01-01

    Two basic approaches to quantitative non-monotonic modeling of economic uncertainty are available today and have been applied to a number of real world uncertainty problems, such as investment analyses and budgeting of large infra structure projects. This paper further contributes to the understa...

  8. Quantitative analyses and modelling to support achievement of the 2020 goals for nine neglected tropical diseases

    NARCIS (Netherlands)

    T.D. Hollingsworth (T. Déirdre); E.R. Adams (Emily R.); R.M. Anderson (Roy); K. Atkins (Katherine); S. Bartsch (Sarah); M-G. Basáñez (María-Gloria); M. Behrend (Matthew); D.J. Blok (David); L.A.C. Chapman (Lloyd A. C.); L.E. Coffeng (Luc); O. Courtenay (Orin); R.E. Crump (Ron E.); S.J. de Vlas (Sake); A.P. Dobson (Andrew); L. Dyson (Louise); H. Farkas (Hajnal); A.P. Galvani (Alison P.); M. Gambhir (Manoj); D. Gurarie (David); M.A. Irvine (Michael A.); S. Jervis (Sarah); M.J. Keeling (Matt J.); L. Kelly-Hope (Louise); C. King (Charles); B.Y. Lee (Bruce Y.); E.A. le Rutte (Epke); T.M. Lietman (Thomas M.); M. Ndeffo-Mbah (Martial); G.F. Medley (Graham F.); E. Michael (Edwin); A. Pandey (Abhishek); J.K. Peterson (Jennifer K.); A. Pinsent (Amy); T.C. Porco (Travis C.); J.H. Richardus (Jan Hendrik); L. Reimer (Lisa); K.S. Rock (Kat S.); B.K. Singh (Brajendra K.); W.A. Stolk (Wilma); S. Swaminathan (Subramanian); S.J. Torr (Steve J.); J. Townsend (Jeffrey); J. Truscott (James); M. Walker (Martin); A. Zoueva (Alexandra)

    2015-01-01

    Quantitative analysis and mathematical models are useful tools in informing strategies to control or eliminate disease. Currently, there is an urgent need to develop these tools to inform policy to achieve the 2020 goals for neglected tropical diseases (NTDs). In this paper we give an

  9. Experiment selection for the discrimination of semi-quantitative models of dynamical systems

    NARCIS (Netherlands)

    Vatcheva; de Jong, H; Bernard, O; Mars, NJI

    Modeling an experimental system often results in a number of alternative models that are all justified by the available experimental data. To discriminate among these models, additional experiments are needed. Existing methods for the selection of discriminatory experiments in statistics and in

  10. Growth and inactivation models to be used in quantitative risk assessments.

    Science.gov (United States)

    van Gerwen, S J; Zwietering, M H

    1998-11-01

    In past years many models describing growth and inactivation of microorganisms have been developed. This study is a discussion of the growth and inactivation models that can be used in a stepwise procedure for quantitative risk assessment. First, rough risk assessments are performed in which orders of magnitude for microbial processes are estimated by the use of simple models. This method provides an efficient way to find the main determinants of risk. Second, the main determinants of risk are studied more accurately and quantitatively. It is best to compare several models at this level, as no model is expected to be able accurately to predict microbial responses under all circumstances. By comparing various models the main determinants of risk are studied from several points of view, and risks can be assessed on a broad basis. If, however, process variations have a more profound effect on risk than the differences between models, it is most efficient to use the simplest model available. If relevant, the process variations can be stochastically described in the third level of detail. Stochastic description of the process parameters will however not change the conclusion on the usefulness of simple models in quantitative risk assessments. The proposed stepwise procedure that starts simply before going into detail provides a structured method of risk assessment and prevents the researcher from getting caught in too much complexity. This simplicity is necessary because of the complex nature of food safety. The principal aspects are highlighted during the procedure and many factors can be omitted since their quantitative effect is negligible.
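
    (A sketch of the "rough assessment" level described above: order-of-magnitude log10 bookkeeping with the simplest models, exponential growth plus first-order, log-linear thermal inactivation. All numbers are illustrative, not recommended process values.)

        import numpy as np

        log10_N0 = 2.0                 # initial contamination, log10 CFU/g
        t_growth_h, t_double_h = 12.0, 1.5
        log10_growth = (t_growth_h / t_double_h) * np.log10(2)  # growth in log10 units

        t_heat_min, D_min = 3.0, 0.5   # heating time and decimal reduction time D
        log10_kill = t_heat_min / D_min                          # log reductions

        log10_N = log10_N0 + log10_growth - log10_kill
        print(f"rough estimate: {log10_N:.1f} log10 CFU/g after growth + heat step")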

  11. Spatially quantitative models for vulnerability analyses and resilience measures in flood risk management: Case study Rafina, Greece

    Science.gov (United States)

    Karagiorgos, Konstantinos; Chiari, Michael; Hübl, Johannes; Maris, Fotis; Thaler, Thomas; Fuchs, Sven

    2013-04-01

    We will address spatially quantitative models for vulnerability analyses in flood risk management in the catchment of Rafina, 25 km east of Athens, Greece, and potential measures to reduce damage costs. The evaluation of flood damage losses is relatively advanced. Nevertheless, major problems arise since no market prices are available for the evaluation process. Moreover, there is a particular gap in quantifying the damages and necessary expenditures for the implementation of mitigation measures with respect to flash floods. The key issue is to develop prototypes for assessing flood losses and the impact of mitigation measures on flood resilience by adjusting a vulnerability model, and to further develop the method in a Mediterranean region influenced by both mountain and coastal characteristics of land development. The objective of this study is to create a spatial and temporal analysis of the vulnerability factors based on a method combining spatially explicit loss data, data on the value of exposed elements at risk, and data on flood intensities. In this contribution, a methodology for the development of a flood damage assessment as a function of the process intensity and the degree of loss is presented. It is shown that (1) such relationships for defined object categories are dependent on site-specific and process-specific characteristics, but there is a correlation between process types that have similar characteristics; (2) existing semi-quantitative approaches of vulnerability assessment for elements at risk can be improved based on the proposed quantitative method; and (3) the concept of risk can be enhanced with respect to a standardised and comprehensive implementation by applying the vulnerability functions to be developed within the proposed research. Therefore, loss data were collected from responsible administrative bodies and analysed on an object level. The used model is based on a basin-scale approach as well as data on elements at risk exposed

  12. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott; Morris, Richard V.; Ehlmann, Bethany; Dyar, M. Darby

    2017-03-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the laser-induced breakdown spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares (PLS) regression, is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
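
    (An illustrative sketch of the sub-model idea on synthetic data: PLS models trained on overlapping low- and high-concentration subsets are blended using a full-range model's first-pass estimate. The linear-ramp blending rule below is a simplified stand-in, not the ChemCam team's exact procedure.)

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 50))                 # stand-in "spectra"
        y = 50 + 10 * X[:, 0] + 5 * X[:, 1] * X[:, 2] + rng.normal(scale=1.0, size=300)

        full = PLSRegression(n_components=5).fit(X, y)
        lo_mask, hi_mask = y < 50, y >= 45             # overlapping training ranges
        lo = PLSRegression(n_components=5).fit(X[lo_mask], y[lo_mask])
        hi = PLSRegression(n_components=5).fit(X[hi_mask], y[hi_mask])

        def blended_predict(Xnew):
            guess = full.predict(Xnew).ravel()         # first-pass estimate
            w_hi = np.clip((guess - 45.0) / 10.0, 0.0, 1.0)  # ramp over the overlap
            return (1 - w_hi) * lo.predict(Xnew).ravel() + w_hi * hi.predict(Xnew).ravel()

        print(blended_predict(X[:5]), y[:5])

    Blending across an overlapping range, rather than hard switching between sub-models, avoids discontinuous jumps in the predicted composition at sub-model boundaries.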

  13. Comparison and Extension of Existing 3D Propagation Models with Real-World Effects Based on Ray-tracing

    DEFF Research Database (Denmark)

    Kifle, Dereje W.; Gimenez, Lucas Chavarria; Wegmann, Bernhard

    2014-01-01

    , such kind of automated and flexible network operation requires a Self Organizing Network algorithm based on network performance parameters being partly derived from the radio measurements. Appropriate radio propagation models are not only needed for network planning tools but also for simulative lab tests...... of the developed Self Organizing Network algorithm controlling the flexible deployment changes enabled by Active Antenna Systems. In this paper, an extension of the existing 3D propagation model is proposed in order to incorporate the propagation condition variation effects, not considered so far, by changing...... antenna beam orientation like antenna tilting or when users are distributed in the third dimension (height) in multi-floor scenarios. Propagation maps generated by ray tracing, which show the realistic propagation effects, are used as a 3D real-world reference for investigation and model approval....

  14. Model-driven engineering of RNA devices to quantitatively program gene expression.

    Science.gov (United States)

    Carothers, James M; Goler, Jonathan A; Juminaga, Darmawi; Keasling, Jay D

    2011-12-23

    The models and simulation tools available to design functionally complex synthetic biological devices are very limited. We formulated a design-driven approach that used mechanistic modeling and kinetic RNA folding simulations to engineer RNA-regulated genetic devices that control gene expression. Ribozyme and metabolite-controlled, aptazyme-regulated expression devices with quantitatively predictable functions were assembled from components characterized in vitro, in vivo, and in silico. The models and design strategy were verified by constructing 28 Escherichia coli expression devices that gave excellent quantitative agreement between the predicted and measured gene expression levels (r = 0.94). These technologies were applied to engineer RNA-regulated controls in metabolic pathways. More broadly, we provide a framework for studying RNA functions and illustrate the potential for the use of biochemical and biophysical modeling to develop biological design methods.
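
    (A toy steady-state calculation of the core design idea: a self-cleaving ribozyme adds a cleavage rate to transcript turnover, so predicted expression falls as cleavage activity rises. The rate constants are illustrative assumptions, not the characterized device parameters from the paper.)

        import numpy as np

        # dm/dt = alpha - (delta + k_cleave) m ;  dp/dt = kappa m - gamma p
        alpha, delta, kappa, gamma = 1.0, 0.1, 5.0, 0.05   # transcription, mRNA decay,
                                                           # translation, protein decay
        for k_cleave in [0.0, 0.1, 1.0, 10.0]:
            m_ss = alpha / (delta + k_cleave)              # steady-state mRNA
            p_ss = kappa * m_ss / gamma                    # steady-state protein
            print(f"k_cleave={k_cleave:5.1f}  protein={p_ss:8.2f}")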

  15. Quantitative research on the EFQM excellence model: A systematic literature review (1991–2015)

    Directory of Open Access Journals (Sweden)

    Eva Suárez

    2017-09-01

    Full Text Available The purpose of the paper is to present the state of the art in quantitative research on the EFQM model that will guide future research lines in this field. For this, a systematic literature review from the period 1991–2015 is carried out in impact journals belonging to the Journal Citation Reports (JCR) and SCImago Journal & Country Rank (SJR). Finally, 53 papers were selected and aspects related to the purpose, nature and instruments of data collection, type of quantitative analysis employed, sector under study and main conclusions and contributions are analysed. As a result, the study presents more than a dozen lines of future research.

  16. Models and methods for quantitative analysis of surface-enhanced Raman spectra.

    Science.gov (United States)

    Li, Shuo; Nyagilo, James O; Dave, Digant P; Gao, Jean

    2014-03-01

    The quantitative analysis of surface-enhanced Raman spectra using scattering nanoparticles has shown potential and promising applications in in vivo molecular imaging. Diverse approaches have been used for the quantitative analysis of Raman spectra information, which can be categorized as direct classical least squares models, full spectrum multivariate calibration models, selected multivariate calibration models, and latent variable regression (LVR) models. However, the working principle of these methods in the Raman spectra application remains poorly understood, and a clear picture of the overall performance of each model is missing. Based on the characteristics of the Raman spectra, in this paper, we first provide the theoretical foundation of the aforementioned commonly used models and show why the LVR models are more suitable for quantitative analysis of the Raman spectra. Then, we demonstrate the fundamental connections and differences between different LVR methods, such as principal component regression, reduced-rank regression, partial least squares regression (PLSR), canonical correlation regression, and robust canonical analysis, by comparing their objective functions and constraints. We further prove that PLSR is literally a blend of multivariate calibration and feature extraction models that relates concentrations of nanotags to spectrum intensity. These features (a.k.a. latent variables) satisfy two purposes: the best representation of the predictor matrix and correlation with the response matrix. These illustrations give a new understanding of traditional PLSR and explain why PLSR exceeds other methods in the quantitative analysis of Raman spectra. In the end, all the methods are tested on Raman spectra datasets with different evaluation criteria to evaluate their performance.

  17. Cost-Effectiveness of HBV and HCV Screening Strategies – A Systematic Review of Existing Modelling Techniques

    Science.gov (United States)

    Geue, Claudia; Wu, Olivia; Xin, Yiqiao; Heggie, Robert; Hutchinson, Sharon; Martin, Natasha K.; Fenwick, Elisabeth; Goldberg, David

    2015-01-01

    Introduction Studies evaluating the cost-effectiveness of screening for Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) are generally heterogeneous in terms of risk groups, settings, screening intervention, outcomes and the economic modelling framework. It is therefore difficult to compare cost-effectiveness results between studies. This systematic review aims to summarise and critically assess existing economic models for HBV and HCV in order to identify the main methodological differences in modelling approaches. Methods A structured search strategy was developed and a systematic review carried out. A critical assessment of the decision-analytic models was carried out according to the guidelines and framework developed for assessment of decision-analytic models in Health Technology Assessment of health care interventions. Results The overall approach to analysing the cost-effectiveness of screening strategies was found to be broadly consistent for HBV and HCV. However, modelling parameters and related structure differed between models, producing different results. More recent publications performed better against a performance matrix evaluating model components and methodology. Conclusion When assessing screening strategies for HBV and HCV infection, the focus should be on more recent studies, which applied the latest treatment regimes, test methods and had better and more complete data on which to base their models. In addition to parameter selection and associated assumptions, careful consideration of dynamic versus static modelling is recommended. Future research may want to focus on these methodological issues. In addition, the ability to evaluate screening strategies for multiple infectious diseases (HCV and HIV at the same time) might prove important for decision makers. PMID:26689908

  18. Bias Reduction in Estimating Variance Components of Phytoplankton Existence at Na Thap River Based on Logistics Linear Mixed Models

    Science.gov (United States)

    Arisanti, R.; Notodiputro, K. A.; Sadik, K.; Lim, A.

    2017-03-01

    There are two approaches to estimating variance components, i.e. the linearity and integral approaches. However, the estimates of variance components produced by both methods are known to be biased. Firth (1993) introduced a parameter estimation method for correcting the bias of maximum likelihood estimates. This method lies within the class of linear models, especially the Restricted Maximum Likelihood (REML) method, and the resulting estimator is known as the Firth estimator. In this paper we discuss the bias correction method applied to a logistic linear mixed model in analyzing the existence of Synedra phytoplankton along the Na Thap river in Thailand. The Firth-adjusted Maximum Likelihood Estimation (MLE) is similar to REML but shows the characteristics of a generalized linear mixed model. We evaluated the Firth adjustment method by means of simulations, and the results showed that the unadjusted MLE produced 95% confidence intervals which were narrower when compared to the Firth method. However, the probability coverage of the interval for the unadjusted MLE was lower than 95%, whereas for the Firth method the probability coverage was approximately 95%. These results were also consistent with the variance estimation of Synedra phytoplankton existence. It was shown that the variance estimates of the Firth-adjusted MLE were lower than those of the unadjusted MLE.
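
    (A sketch of Firth's bias-reduction idea in its simplest setting, a fixed-effects logistic regression fitted by Newton iterations on the Jeffreys-prior-adjusted score; the mixed-model machinery of the paper is omitted, and the data are synthetic.)

        import numpy as np

        def firth_logistic(X, y, n_iter=100, tol=1e-8):
            """Firth-penalized logistic regression (Jeffreys-prior bias reduction)."""
            n, p = X.shape
            beta = np.zeros(p)
            for _ in range(n_iter):
                pi = 1.0 / (1.0 + np.exp(-(X @ beta)))
                W = pi * (1.0 - pi)
                info = X.T @ (W[:, None] * X)            # Fisher information X'WX
                info_inv = np.linalg.inv(info)
                # diagonal of hat matrix H = W^(1/2) X (X'WX)^(-1) X' W^(1/2)
                h = W * np.einsum("ij,jk,ik->i", X, info_inv, X)
                # Firth-adjusted score U* = X'(y - pi + h*(1/2 - pi))
                step = info_inv @ (X.T @ (y - pi + h * (0.5 - pi)))
                beta += step
                if np.max(np.abs(step)) < tol:
                    return beta
            return beta

        rng = np.random.default_rng(0)
        x = rng.normal(size=40)
        y = (x > 0).astype(float)       # perfectly separated: plain MLE diverges
        X = np.column_stack([np.ones_like(x), x])
        print("Firth estimate (finite despite separation):", firth_logistic(X, y))

    On separated data like this, ordinary maximum likelihood drifts to infinite coefficients; the Jeffreys penalty keeps the estimate finite and reduces small-sample bias.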

  19. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  20. Existence and Uniqueness of Positive Periodic Solutions for a Delayed Predator-Prey Model with Dispersion and Impulses

    Directory of Open Access Journals (Sweden)

    Zhenguo Luo

    2014-01-01

    Full Text Available An impulsive Lotka-Volterra type predator-prey model with prey dispersal in two-patch environments and time delays is investigated, where we assume that a barrier between the patches exists only as far as the prey population is concerned, whereas the predator population has no barriers between patches. By applying the continuation theorem of coincidence degree theory and by means of a suitable Lyapunov functional, a set of easily verifiable sufficient conditions is obtained to guarantee the existence, uniqueness, and global stability of positive periodic solutions of the system. Some known results for the underlying systems without impulses are improved and generalized. As an application, we also give two examples to illustrate the feasibility of our main results.

  1. Evaluation of the existing triple point path models with new experimental data: proposal of an original empirical formulation

    Science.gov (United States)

    Boutillier, J.; Ehrhardt, L.; De Mezzo, S.; Deck, C.; Magnan, P.; Naz, P.; Willinger, R.

    2017-08-01

    With the increasing use of improvised explosive devices (IEDs), the need for better mitigation, either for building integrity or for personal security, increases in importance. Before focusing on the interaction of the shock wave with a target and the potential associated damage, knowledge must be acquired regarding the nature of the blast threat, i.e., the pressure-time history. This requirement motivates gaining further insight into the triple point (TP) path, in order to know precisely which regime the target will encounter (simple reflection or Mach reflection). Within this context, the purpose of this study is to evaluate three existing TP path empirical models, which in turn are used in other empirical models for the determination of the pressure profile. These three TP models are the empirical function of Kinney, the Unified Facilities Criteria (UFC) curves, and the model of the Natural Resources Defense Council (NRDC). As discrepancies are observed between these models, new experimental data were obtained to test their reliability, and a new promising formulation is proposed for scaled heights of burst ranging from 24.6 to 172.9 cm/kg^{1/3}.

  2. From Point Clouds to Building Information Models: 3D Semi-Automatic Reconstruction of Indoors of Existing Buildings

    Directory of Open Access Journals (Sweden)

    Hélène Macher

    2017-10-01

    Full Text Available The creation of as-built Building Information Models requires the acquisition of the as-is state of existing buildings. Laser scanners are widely used to achieve this goal, since they make it possible to collect information about object geometry in the form of point clouds and provide a large amount of accurate data in a very fast way and with a high level of detail. Unfortunately, the scan-to-BIM (Building Information Model) process currently remains largely a manual process, which is time consuming and error-prone. In this paper, a semi-automatic approach is presented for the 3D reconstruction of indoors of existing buildings from point clouds. Several segmentations are performed so that point clouds corresponding to grounds, ceilings and walls are extracted. Based on these point clouds, walls and slabs of buildings are reconstructed and described in the IFC format in order to be integrated into BIM software. The assessment of the approach is carried out using two datasets. The evaluation items are the degree of automation, the transferability of the approach and the geometric quality of the results of the 3D reconstruction. Additionally, quality indexes are introduced to inspect the results in order to be able to detect potential errors of reconstruction.

  3. Global existence and asymptotic behavior of a model for biological control of invasive species via supermale introduction

    KAUST Repository

    Parshad, Rana

    2013-01-01

    The purpose of this manuscript is to propose a model for the biological control of invasive species, via introduction of phenotypically modified organisms into a target population. We are inspired by the earlier Trojan Y Chromosome model [J.B. Gutierrez, J.L. Teem, J. Theo. Bio., 241(22), 333-341, 2006]. However, in the current work, we remove the assumption of a logistic growth rate, and do not consider the addition of sex-reversed supermales. Also, the constant birth and death coefficients considered earlier are replaced by functionally dependent ones. In this case the nonlinearities present serious difficulties, since they change sign, and the components of the solution are not a priori bounded in some Lp-space for p large, to permit the application of the well known regularizing effect principle. Thus functional methods to deduce the global existence in time for the system in question are not applicable. Our techniques are based on the Lyapunov functional method. We prove global existence of solutions, as well as the existence of a finite dimensional global attractor that supports states of extinction. Our analytical findings are in accordance with numerical simulations, which we also present. © 2013 International Press.

  4. A quantitative model of optimal data selection in Wason's selection task.

    Science.gov (United States)

    Hattori, Masasi

    2002-10-01

    The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.
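
    (A sketch of the optimal-data-selection computation underlying this line of modeling: expected information gain about "if p then q" from turning each card, comparing a dependence hypothesis MD, with P(q|p) = 1, against an independence hypothesis MI. The marginals P(p) and P(q), chosen small under the rarity assumption, are illustrative, and no selection tendency function is fitted here.)

        import numpy as np

        P_p, P_q = 0.1, 0.2
        prior = {"MD": 0.5, "MI": 0.5}

        def joint(model):
            """Joint distribution over (antecedent, consequent) under a model."""
            pq_p = 1.0 if model == "MD" else P_q           # P(q | p)
            # choose P(q | not-p) so the marginal P(q) matches in both models
            pq_np = (P_q - P_p * pq_p) / (1 - P_p)
            return {("p", "q"): P_p * pq_p, ("p", "nq"): P_p * (1 - pq_p),
                    ("np", "q"): (1 - P_p) * pq_np, ("np", "nq"): (1 - P_p) * (1 - pq_np)}

        def entropy(ps):
            ps = np.array([p for p in ps if p > 0])
            return float(-(ps * np.log2(ps)).sum())

        def expected_gain(side, value):
            """side='ante' shows p/np (hidden face is q/nq) and vice versa."""
            hidden = ("q", "nq") if side == "ante" else ("p", "np")
            gain = 0.0
            for outcome in hidden:
                lik = {}
                for m in prior:
                    J = joint(m)
                    pair = (value, outcome) if side == "ante" else (outcome, value)
                    marg = sum(v for k, v in J.items()
                               if (k[0] if side == "ante" else k[1]) == value)
                    lik[m] = J[pair] / marg
                pred = sum(prior[m] * lik[m] for m in prior)
                if pred > 0:
                    post = [prior[m] * lik[m] / pred for m in prior]
                    gain += pred * (entropy(prior.values()) - entropy(post))
            return gain

        for side, value in [("ante", "p"), ("ante", "np"), ("cons", "q"), ("cons", "nq")]:
            print(f"{value:3s} card: E[information gain] = {expected_gain(side, value):.4f}")

    Under rarity, this computation ranks the p and q cards as most informative, which is the model's explanation of the cards participants typically select.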

  5. Quantitative 3D investigation of Neuronal network in mouse spinal cord model.

    Science.gov (United States)

    Bukreeva, I; Campi, G; Fratini, M; Spanò, R; Bucci, D; Battaglia, G; Giove, F; Bravin, A; Uccelli, A; Venturi, C; Mastrogiacomo, M; Cedola, A

    2017-01-23

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a "database" for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.

  6. Quantitative 3D investigation of Neuronal network in mouse spinal cord model

    Science.gov (United States)

    Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.

    2017-01-01

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.

  7. Polymorphic ethyl alcohol as a model system for the quantitative study of glassy behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, H.E.; Schober, H.; Gonzalez, M.A. [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France); Bermejo, F.J.; Fayos, R.; Dawidowski, J. [Consejo Superior de Investigaciones Cientificas, Madrid (Spain); Ramos, M.A.; Vieira, S. [Universidad Autonoma de Madrid (Spain)

    1997-04-01

    The nearly universal transport and dynamical properties of amorphous materials, or glasses, are investigated. Reasonably successful phenomenological models have been developed to account for these properties as well as the behaviour near the glass transition, but quantitative microscopic models have had limited success. One hindrance to these investigations has been the lack of a material which exhibits glass-like properties in more than one phase at a given temperature. This report presents results of neutron-scattering experiments for one such material, ordinary ethyl alcohol, which promises to be a model system for future investigations of glassy behaviour. (author). 8 refs.

  8. Modelling Activities In Kinematics: Understanding quantitative relations with the contribution of qualitative reasoning

    Science.gov (United States)

    Orfanos, Stelios

    2010-01-01

    In Greek traditional teaching, many significant concepts are introduced in a sequence that does not provide the students with all the information necessary to comprehend them. We consider that understanding concepts and the relations among them is greatly facilitated by the use of modelling tools, taking into account that the modelling process forces students to change their vague, imprecise ideas into explicit causal relationships. It is not uncommon to find students who are able to solve problems by using complicated relations without getting a qualitative and in-depth grip on them. Researchers have already shown that students often have formal mathematical and physical knowledge without a qualitative understanding of basic concepts and relations. The aim of this communication is to present some of the results of our investigation into modelling activities related to kinematical concepts. For this purpose, we have used ModellingSpace, an environment that was especially designed to allow students from eleven to seventeen years old to express their ideas and gradually develop them. ModellingSpace enables students to build their own models and offers the choice of directly observing simulations of real objects and/or all the other alternative forms of representation (tables of values, graphic representations and bar charts). In order to answer the questions, the students formulate hypotheses, create models, compare their hypotheses with the representations of their models, and modify or create other models when their hypotheses do not agree with the representations. In traditional ways of teaching, students are educated to utilize formulas as the most important strategy. Students often recall formulas in order to utilize them without gaining an in-depth understanding of them. Students commonly use the quantitative type of reasoning, since it is primarily used in teaching, although it may not be fully understood by them.

  9. Operational Efficiency Forecasting Model of an Existing Underground Mine Using Grey System Theory and Stochastic Diffusion Processes

    Directory of Open Access Journals (Sweden)

    Svetlana Strbac Savic

    2015-01-01

    Forecasting the operational efficiency of an existing underground mine plays an important role in strategic planning of production. The Degree of Operating Leverage (DOL) is used to express the operational efficiency of production. The forecasting model should be able to cover a common time horizon while capturing the characteristics of the input variables that directly affect the value of DOL. Changes in the magnitude of any input variable change the value of DOL. To establish the relationship describing this way of changing, we applied multivariable grey modeling. The established time-sequence multivariable response formula is also used to forecast the future values of operating leverage. Operational efficiency of production is often associated with diverse sources of uncertainty. Incorporating these uncertainties into the multivariable forecasting model enables a mining company to survive in today's competitive environment. Simulation of a mean-reversion process and of geometric Brownian motion is used to describe the stochastic diffusion nature of the metal price, as a key element of revenues, and of production costs, respectively. By simulating a forecasting model, we imitate its action in order to measure its response to different inputs. The final result of the simulation process is the expected value of DOL for every year of the defined time horizon.
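
    A minimal Monte Carlo sketch of the combination described in this record, assuming a mean-reverting metal price, a geometric-Brownian-motion unit cost, and the textbook ratio DOL = contribution margin / operating income; every parameter value below is hypothetical, not the paper's data.

        import numpy as np

        rng = np.random.default_rng(42)

        def expected_dol(years=10, n_paths=5000, q=1e6, fixed=3e8,
                         p_bar=2000.0, kappa=0.3, sigma_p=0.2,   # metal price (mean reversion)
                         mu_c=0.02, sigma_c=0.1):                # unit cost (GBM)
            p = np.full(n_paths, p_bar)
            c = np.full(n_paths, 900.0)
            out = np.empty(years)
            for t in range(years):
                z_p, z_c = rng.standard_normal((2, n_paths))
                p += kappa * (p_bar - p) + sigma_p * p * z_p          # mean-reverting price
                c *= np.exp(mu_c - 0.5 * sigma_c**2 + sigma_c * z_c)  # GBM unit cost
                contribution = q * (p - c)
                out[t] = np.mean(contribution / (contribution - fixed))  # DOL per year
            return out

        print(expected_dol())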

  10. QSAR DataBank repository: open and linked qualitative and quantitative structure-activity relationship models.

    Science.gov (United States)

    Ruusmann, V; Sild, S; Maran, U

    2015-01-01

    Structure-activity relationship models have been used to gain insight into chemical and physical processes in biomedicine, toxicology, biotechnology, etc. for almost a century. They have been recognized as valuable tools in decision support workflows for qualitative and quantitative predictions. The main obstacle preventing broader adoption of quantitative structure-activity relationships [(Q)SARs] is that published models are still relatively difficult to discover, retrieve and redeploy in a modern computer-oriented environment. This publication describes a digital repository that makes in silico (Q)SAR-type descriptive and predictive models archivable, citable and usable in a novel way for most common research and applied science purposes. The QSAR DataBank (QsarDB) repository aims to make the processes and outcomes of in silico modelling work transparent, reproducible and accessible. Briefly, the models are represented in the QsarDB data format and stored in a content-aware repository (a.k.a. smart repository). Content awareness has two dimensions. First, models are organized into collections and then into collection hierarchies based on their metadata. Second, the repository is not only an environment for browsing and downloading models (the QDB archive) but also offers integrated services, such as model analysis and visualization and prediction making. The QsarDB repository unlocks the potential of descriptive and predictive in silico (Q)SAR-type models by allowing new and different types of collaboration between model developers and model users. The key enabling factor is the representation of (Q)SAR models in the QsarDB data format, which makes it easy to preserve and share all relevant data, information and knowledge. Model developers can become more productive by effectively reusing prior art. Model users can make more confident decisions by relying on supporting information that is larger and more diverse than before. Furthermore, the smart repository

  11. Accounting for genetic interactions improves modeling of individual quantitative trait phenotypes in yeast.

    Science.gov (United States)

    Forsberg, Simon K G; Bloom, Joshua S; Sadhu, Meru J; Kruglyak, Leonid; Carlborg, Örjan

    2017-04-01

    Experiments in model organisms report abundant genetic interactions underlying biologically important traits, whereas quantitative genetics theory predicts, and data support, the notion that most genetic variance in populations is additive. Here we describe networks of capacitating genetic interactions that contribute to quantitative trait variation in a large yeast intercross population. The additive variance explained by individual loci in a network is highly dependent on the allele frequencies of the interacting loci. Modeling of phenotypes for multilocus genotype classes in the epistatic networks is often improved by accounting for the interactions. We discuss the implications of these results for attempts to dissect genetic architectures and to predict individual phenotypes and long-term responses to selection.

  12. From classical genetics to quantitative genetics to systems biology: modeling epistasis.

    Directory of Open Access Journals (Sweden)

    David L Aylor

    2008-03-01

    Gene expression data have been used in lieu of phenotype in both classical and quantitative genetic settings. These two disciplines have separate approaches to measuring and interpreting epistasis, which is the interaction between alleles at different loci. We propose a framework for estimating and interpreting epistasis from a classical experiment that combines the strengths of each approach. A regression analysis step accommodates the quantitative nature of expression measurements by estimating the effect of gene deletions plus any interaction. Effects are selected by significance such that a reduced model describes each expression trait. We show how the resulting models correspond to specific hierarchical relationships between two regulator genes and a target gene. These relationships are the basic units of genetic pathways and genomic system diagrams. Our approach can be extended to analyze data from a variety of experiments, multiple loci, and multiple environments.
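
    The regression step described here can be sketched as an ordinary least-squares fit with main-effect and interaction terms; the two binary deletion indicators and the synthetic expression trait below are illustrative, not the study's data.

        import numpy as np

        rng = np.random.default_rng(2)
        g1 = rng.integers(0, 2, 200)              # deletion of regulator 1 (0/1)
        g2 = rng.integers(0, 2, 200)              # deletion of regulator 2 (0/1)
        y = 1.0 - 0.6 * g1 - 0.4 * g2 + 0.8 * g1 * g2 + 0.1 * rng.standard_normal(200)

        # expression ~ intercept + deletion effects + epistatic interaction
        X = np.column_stack([np.ones(200), g1, g2, g1 * g2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        print("main effects:", beta[1:3], "epistatic term:", beta[3])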

  13. Tannin structural elucidation and quantitative ³¹P NMR analysis. 1. Model compounds.

    Science.gov (United States)

    Melone, Federica; Saladino, Raffaele; Lange, Heiko; Crestini, Claudia

    2013-10-02

    Tannins and flavonoids are secondary metabolites of plants that display a wide array of biological activities. This peculiarity is related to the inhibition of extracellular enzymes that occurs through the complexation of peptides by tannins. Not only the nature of these interactions, but more fundamentally also the structure of these heterogeneous polyphenolic molecules are not completely clear. This first paper describes the development of a new analytical method for the structural characterization of tannins on the basis of tannin model compounds employing an in situ labeling of all labile H groups (aliphatic OH, phenolic OH, and carboxylic acids) with a phosphorus reagent. The ³¹P NMR analysis of ³¹P-labeled samples allowed the unprecedented quantitative and qualitative structural characterization of hydrolyzable tannins, proanthocyanidins, and catechin tannin model compounds, forming the foundations for the quantitative structural elucidation of a variety of actual tannin samples described in part 2 of this series.

  14. The optimal hyperspectral quantitative models for chlorophyll-a of Chlorella vulgaris

    Science.gov (United States)

    Cheng, Qian; Wu, Xiuju

    2009-09-01

    Chlorophyll-a (Chl-a) of Chlorella vulgaris has been related to its reflectance spectrum. Based on hyperspectral measurements of Chlorella vulgaris, the hyperspectral characteristics of the alga and the optimal hyperspectral quantitative models for Chl-a estimation were investigated in an in situ experiment. The results showed that the optimal hyperspectral quantitative model for Chlorella vulgaris was Chla = 180.5 + 1125787(R700)' + 2.4 x 10^9 [(R700)']^2 (P < 0.05). For Chlorella vulgaris, two reflectance crests lay around 540 nm and 700 nm, and their locations moved towards longer wavelengths as the Chl-a concentration increased. The reflectance of Chlorella vulgaris decreases with increasing Chl-a concentration at 540 nm, but increases at 700 nm.

  15. Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model

    Science.gov (United States)

    Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi

    2017-09-01

    Residues of harmful dyes such as Auramine O (AO) in herb and food products threaten the health of consumers, so fast and sensitive techniques for detecting these residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used in this paper for the quantitative determination of AO, in combination with an improved partial least-squares regression (PLSR) model. Absorbance of herbal samples with different concentrations was obtained by THz-TDS in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences of different concentrations, provided a clear criterion for the selection of input intervals, and improved the accuracy of the detection results. The experimental results indicated that the combination of THz spectroscopy and 2DCOS-PLSR is an excellent quantitative analysis method.
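
    One plausible reading of the 2DCOS-PLSR combination, sketched on synthetic spectra standing in for the THz absorbance data: the diagonal of the synchronous 2D correlation spectrum guides the selection of the most concentration-sensitive bands before PLS regression. The band-selection rule and all data below are assumptions for illustration only.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        freqs = np.linspace(0.2, 1.6, 200)                    # THz axis
        y = rng.uniform(0, 5, 40)                             # synthetic AO levels
        X = (np.outer(y, np.exp(-(freqs - 0.9) ** 2 / 0.02))
             + 0.05 * rng.standard_normal((40, 200)))         # synthetic absorbance

        # synchronous 2D correlation spectrum (Noda): Phi = Xc^T Xc / (n - 1)
        Xc = X - X.mean(axis=0)
        phi = Xc.T @ Xc / (len(y) - 1)
        bands = np.argsort(np.diag(phi))[-50:]                # most variable bands

        y_cv = cross_val_predict(PLSRegression(n_components=3), X[:, bands], y, cv=5).ravel()
        print("RMSECV:", np.sqrt(np.mean((y - y_cv) ** 2)))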

  16. A Quantitative Model of Early Atherosclerotic Plaques Parameterized Using In Vitro Experiments.

    Science.gov (United States)

    Thon, Moritz P; Ford, Hugh Z; Gee, Michael W; Myerscough, Mary R

    2018-01-01

    There are a growing number of studies that model immunological processes in the artery wall that lead to the development of atherosclerotic plaques. However, few of these models use parameters that are obtained from experimental data even though data-driven models are vital if mathematical models are to become clinically relevant. We present the development and analysis of a quantitative mathematical model for the coupled inflammatory, lipid and macrophage dynamics in early atherosclerotic plaques. Our modeling approach is similar to the biologists' experimental approach where the bigger picture of atherosclerosis is put together from many smaller observations and findings from in vitro experiments. We first develop a series of three simpler submodels which are least-squares fitted to various in vitro experimental results from the literature. Subsequently, we use these three submodels to construct a quantitative model of the development of early atherosclerotic plaques. We perform a local sensitivity analysis of the model with respect to its parameters that identifies critical parameters and processes. Further, we present a systematic analysis of the long-term outcome of the model which produces a characterization of the stability of model plaques based on the rates of recruitment of low-density lipoproteins, high-density lipoproteins and macrophages. The analysis of the model suggests that further experimental work quantifying the different fates of macrophages as a function of cholesterol load and the balance between free cholesterol and cholesterol ester inside macrophages may give valuable insight into long-term atherosclerotic plaque outcomes. This model is an important step toward models applicable in a clinical setting.
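
    A toy version of such coupled lipid-macrophage dynamics (not the authors' equations; every rate constant here is hypothetical) can be integrated to inspect long-term plaque outcomes:

        import numpy as np
        from scipy.integrate import solve_ivp

        def plaque(t, state, ldl_in=1.0, hdl_out=0.4, k_up=0.3, m_rec=0.5, d_m=0.1):
            l, m = state
            dl = ldl_in - hdl_out * l - k_up * l * m   # lipid: LDL influx - HDL efflux - uptake
            dm = m_rec * l / (1 + l) - d_m * m         # recruitment saturating in lipid load
            return [dl, dm]

        sol = solve_ivp(plaque, (0.0, 200.0), [0.0, 0.1])
        print("long-term lipid and macrophage levels:", sol.y[:, -1])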

  17. Integration of CFD codes and advanced combustion models for quantitative burnout determination

    Energy Technology Data Exchange (ETDEWEB)

    Javier Pallares; Inmaculada Arauzo; Alan Williams [University of Zaragoza, Zaragoza (Spain). Centre of Research for Energy Resources and Consumption (CIRCE)

    2007-10-15

    CFD codes and advanced kinetics combustion models are extensively used to predict coal burnout in large utility boilers. Modelling approaches based on CFD codes can accurately solve the fluid dynamics equations involved in the problem, but this is usually achieved by including simple combustion models. On the other hand, advanced kinetics combustion models can give a detailed description of the coal combustion behaviour by using a simplified description of the flow field, usually obtained from a zone-method approach. Both approximations describe general trends in coal burnout correctly, but fail to predict quantitative values. In this paper a new methodology which takes advantage of both approximations is described. First, CFD solutions were obtained for the combustion conditions in the furnace of the Lamarmora power plant (ASM Brescia, Italy) for a number of different conditions and for three coals. These furnace conditions were then used as inputs for a more detailed chemical combustion model to predict coal burnout. In this, devolatilization was modelled using a commercial macromolecular network pyrolysis model (FG-DVC). For char oxidation, an intrinsic reactivity approach including thermal annealing, ash inhibition and maceral effects was used. Results from the simulations were compared against plant experimental values, showing a reasonable agreement in trends and quantitative values. 28 refs., 4 figs., 4 tabs.

  18. Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models

    OpenAIRE

    Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong

    2015-01-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution te...

  19. Quantitative mitral valve modeling using real-time three-dimensional echocardiography: technique and repeatability.

    Science.gov (United States)

    Jassar, Arminder Singh; Brinster, Clayton J; Vergnat, Mathieu; Robb, J Daniel; Eperjesi, Thomas J; Pouch, Alison M; Cheung, Albert T; Weiss, Stuart J; Acker, Michael A; Gorman, Joseph H; Gorman, Robert C; Jackson, Benjamin M

    2011-01-01

    Real-time three-dimensional (3D) echocardiography has the ability to construct quantitative models of the mitral valve (MV). Imaging and modeling algorithms rely on operator interpretation of raw images and may be subject to observer-dependent variability. We describe a comprehensive analysis technique to generate high-resolution 3D MV models and examine interoperator and intraoperator repeatability in humans. Patients with normal MVs were imaged using intraoperative transesophageal real-time 3D echocardiography. The annulus and leaflets were manually segmented using a TomTec Echo-View workstation. The resultant annular and leaflet point cloud was used to generate fully quantitative 3D MV models using custom Matlab algorithms. Eight images were subjected to analysis by two independent observers. Two sequential images were acquired for 6 patients and analyzed by the same observer. Each pair of annular tracings was compared with respect to conventional variables and by calculating the mean absolute distance between paired renderings. To compare leaflets, MV models were aligned so as to minimize their sum of squares difference, and their mean absolute difference was measured. Mean absolute annular and leaflet distance was 2.4±0.8 and 0.6±0.2 mm for the interobserver and 1.5±0.6 and 0.5±0.2 mm for the intraobserver comparisons, respectively. There was less than 10% variation in annular variables between comparisons. These techniques generate high-resolution, quantitative 3D models of the MV and can be used consistently to image the human MV with very small interoperator and intraoperator variability. These data lay the framework for reliable and comprehensive noninvasive modeling of the normal and diseased MV. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
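
    The mean-absolute-distance comparison of paired annular and leaflet renderings can be sketched as a symmetric closest-point distance between two 3D point clouds (a generic formulation, not the authors' exact code):

        import numpy as np
        from scipy.spatial import cKDTree

        def mean_absolute_distance(a, b):
            """Symmetric mean closest-point distance between 3D point clouds a and b."""
            d_ab = cKDTree(b).query(a)[0]   # each point of a to its nearest point of b
            d_ba = cKDTree(a).query(b)[0]
            return 0.5 * (d_ab.mean() + d_ba.mean())

        a = np.random.default_rng(1).normal(size=(500, 3))
        print(mean_absolute_distance(a, a + 0.1))   # two slightly offset tracings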

  20. Qualitative and quantitative guidelines for the comparison of environmental model predictions

    Energy Technology Data Exchange (ETDEWEB)

    Scott, M. [Univ. of Glasgow (United Kingdom). Dept. of Statistics] [and others]

    1995-03-01

    The question of how to assess or compare predictions from a number of models is one of concern in the validation of models, in understanding the effects of different models and model parameterizations on model output, and ultimately in assessing model reliability. Comparison of model predictions with observed data is the basic tool of model validation while comparison of predictions amongst different models provides one measure of model credibility. The guidance provided here is intended to provide qualitative and quantitative approaches (including graphical and statistical techniques) to such comparisons for use within the BIOMOVS II project. It is hoped that others may find it useful. It contains little technical information on the actual methods but several references are provided for the interested reader. The guidelines are illustrated on data from the VAMP CB scenario. Unfortunately, these data do not permit all of the possible approaches to be demonstrated since predicted uncertainties were not provided. The questions considered are concerned with (a) intercomparison of model predictions and (b) comparison of model predictions with the observed data. A series of examples illustrating some of the different types of data structure and some possible analyses have been constructed. A bibliography of references on model validation is provided. It is important to note that the results of the various techniques discussed here, whether qualitative or quantitative, should not be considered in isolation. Overall model performance must also include an evaluation of model structure and formulation, i.e. conceptual model uncertainties, and results for performance measures must be interpreted in this context. Consider a number of models which are used to provide predictions of a number of quantities at a number of time points. In the case of the VAMP CB scenario, the results include predictions of total deposition of Cs-137 and time dependent concentrations in various
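
    A few of the standard quantitative measures used in such model-to-data and model-to-model comparisons can be collected in one helper (a generic sketch; the guidelines themselves cover many more graphical and statistical checks):

        import numpy as np

        def comparison_stats(pred, obs):
            pred, obs = np.asarray(pred, float), np.asarray(obs, float)
            bias = np.mean(pred - obs)                          # additive bias
            rmse = np.sqrt(np.mean((pred - obs) ** 2))
            gmb = np.exp(np.mean(np.log(pred / obs)))           # geometric-mean bias
            r = np.corrcoef(pred, obs)[0, 1]
            return {"bias": bias, "rmse": rmse, "geom_mean_bias": gmb, "r": r}

        print(comparison_stats([1.2, 3.4, 0.8], [1.0, 3.0, 1.1]))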

  1. Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome.

    Science.gov (United States)

    Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina

    2018-01-01

    The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents the cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer; and (iii) risk assessment for breast cancer.

  2. Bifurcation analysis of an existing mathematical model reveals novel treatment strategies and suggests potential cure for type 1 diabetes.

    Science.gov (United States)

    Nielsen, Kenneth H M; Pociot, Flemming M; Ottesen, Johnny T

    2014-09-01

    Type 1 diabetes is a disease with serious personal and socioeconomic consequences that has recently attracted the attention of modellers. But as models of this disease tend to be complicated, there has been only limited mathematical analysis to date. Here we address this problem by providing a bifurcation analysis of a previously published mathematical model for the early stages of type 1 diabetes in diabetes-prone NOD mice, which is based on the data available in the literature. We also show positivity and the existence of a family of attracting trapping regions in the positive 5D cone, converging towards a smaller trapping region, which is the intersection over the family. All these trapping regions are compact sets, and thus practical weak persistence is guaranteed. We conclude our analysis by proposing four novel treatment strategies: increasing the phagocytic ability of resting macrophages, increasing that of activated macrophages, increasing both simultaneously, and adding additional macrophages to the site of inflammation. The latter seems counter-intuitive at first glance, but it nevertheless appears to be the most promising, as evidenced by recent results. © The Authors 2013. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  3. Quantitative spatial analysis of rockfalls from road inventories: a combined statistical and physical susceptibility model

    Science.gov (United States)

    Böhme, M.; Derron, M.-H.; Jaboyedoff, M.

    2014-01-01

    Quantitative spatial analyses and statistical susceptibility assessments based on road inventories are often complicated by the registration of impacts instead of source areas. A rockfall inventory from the Norwegian Directorate of Public Roads is analysed spatially in order to investigate potential controlling parameters in the Norwegian county of Sogn and Fjordane. Quantitative spatial relationships are then used to model rockfall susceptibility with the help of the Weights-of-Evidence method. The controlling parameters tectono-stratigraphic position, Quaternary geology, geological lineament density, relative relief and slope aspect resulted in the best-performing model and thus yielded the basis for the statistical susceptibility map for the entire county of Sogn and Fjordane. Because impacts rather than sources were registered, the important parameter slope angle could not be included in the statistical models. Combining the statistical susceptibility model with a physically based model restricts the susceptibility map to areas that are steep enough to represent a potential rockfall source. This combination makes it possible to use road inventories, with registered impacts instead of sources, for susceptibility modelling.
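
    The Weights-of-Evidence step combines, for each binary predictor pattern B and the event D (here, a rockfall source cell), the positive and negative weights W+ = ln[P(B|D)/P(B|not D)] and W- = ln[P(not B|D)/P(not B|not D)]. A count-based sketch with hypothetical cell counts:

        import numpy as np

        def weights_of_evidence(n_bd, n_b, n_d, n_total):
            """n_bd: cells with pattern and event; n_b: cells with pattern;
            n_d: event cells; n_total: all cells."""
            p_b_d = n_bd / n_d                          # P(B|D)
            p_b_nd = (n_b - n_bd) / (n_total - n_d)     # P(B|not D)
            w_plus = np.log(p_b_d / p_b_nd)
            w_minus = np.log((1 - p_b_d) / (1 - p_b_nd))
            return w_plus, w_minus

        print(weights_of_evidence(n_bd=80, n_b=300, n_d=120, n_total=10_000))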

  4. Comparison of perfusion models for quantitative T1 weighted DCE-MRI of rectal cancer.

    Science.gov (United States)

    Gaa, Tanja; Neumann, Wiebke; Sudarski, Sonja; Attenberger, Ulrike I; Schönberg, Stefan O; Schad, Lothar R; Zöllner, Frank G

    2017-09-20

    In this work, the two-compartment exchange model (2CX) and the two-compartment uptake model (2CU) were applied to obtain quantitative perfusion parameters in rectal carcinoma, and the results were compared to those obtained by the deconvolution algorithm. Eighteen patients with newly diagnosed rectal carcinoma underwent 3 T MRI of the pelvis, including a T1-weighted dynamic contrast-enhanced (DCE) protocol, before treatment. Mean values for Plasma Flow (PF), Plasma Volume (PV) and Mean Transit Time (MTT) were obtained for all three approaches and visualized in parameter cards. For the two compartment models, the Akaike Information Criterion (AIC) and [Formula: see text] were calculated. Perfusion parameters determined with the compartment models show results in accordance with previous studies focusing on rectal cancer DCE-CT (PF2CX = 68 ± 44 ml/100 ml/min, PF2CU = 55 ± 36 ml/100 ml/min) with similar fit quality (AIC: 169 ± 81/179 ± 77, [Formula: see text]: 10 ± 12/9 ± 10). Values for PF are overestimated, whereas PV and MTT are underestimated, compared to the results of the deconvolution algorithm. Significant differences were found among all models for perfusion parameters as well as between the AIC and [Formula: see text] values. Quantitative perfusion parameters depend on the chosen tracer kinetic model. According to the obtained parameters, all approaches seem capable of providing quantitative perfusion values in DCE-MRI of rectal cancer.
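
    For least-squares fits such as these, the AIC comparison between models reduces to AIC = n ln(RSS/n) + 2k, with n data points, residual sum of squares RSS and k free parameters; a generic helper:

        import numpy as np

        def aic_least_squares(residuals, n_params):
            """AIC = n*ln(RSS/n) + 2k, additive constants dropped; lower is better."""
            residuals = np.asarray(residuals, float)
            n = residuals.size
            return n * np.log(np.sum(residuals ** 2) / n) + 2 * n_params

        # e.g. penalising the exchange model's extra parameter against its better fit
        print(aic_least_squares([0.1, -0.2, 0.05, 0.12], n_params=4))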

  5. Comparison of semi-quantitative and quantitative dynamic contrast-enhanced MRI evaluations of vertebral marrow perfusion in a rat osteoporosis model.

    Science.gov (United States)

    Zhu, Jingqi; Xiong, Zuogang; Zhang, Jiulong; Qiu, Yuyou; Hua, Ting; Tang, Guangyu

    2017-11-14

    This study aims to investigate the technical feasibility of semi-quantitative and quantitative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in the assessment of longitudinal changes of marrow perfusion in a rat osteoporosis model, using bone mineral density (BMD) measured by micro-computed tomography (micro-CT) and histopathology as the gold standards. Fifty rats were randomly assigned to the control group (n=25) and the ovariectomy (OVX) group, whose bilateral ovaries were excised (n=25). Semi-quantitative and quantitative DCE-MRI, micro-CT, and histopathological examinations were performed on lumbar vertebrae at baseline and 3, 6, 9, and 12 weeks after operation. The differences between the two groups in terms of the semi-quantitative DCE-MRI parameter (maximum enhancement, Emax), the quantitative DCE-MRI parameters (volume transfer constant, Ktrans; interstitial volume, Ve; and efflux rate constant, Kep), the micro-CT parameter (BMD), and the histopathological parameter (microvessel density, MVD) were compared at each of the time points using an independent-sample t test. The differences in these parameters between baseline and other time points in each group were assessed via Bonferroni's multiple comparison test. A Pearson correlation analysis was applied to assess the relationships between DCE-MRI, micro-CT, and histopathological parameters. In the OVX group, the Emax values decreased significantly compared with those of the control group at weeks 6 and 9 (p=0.003 and 0.004, respectively). The Ktrans values decreased significantly compared with those of the control group from week 3 onwards (p < 0.05). Compared with semi-quantitative DCE-MRI, the quantitative DCE-MRI parameter Ktrans is a more sensitive and accurate index for detecting early reduced perfusion in osteoporotic bone.

  6. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as cyber-physical systems, which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.

  7. Curating and Preparing High-Throughput Screening Data for Quantitative Structure-Activity Relationship Modeling.

    Science.gov (United States)

    Kim, Marlene T; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2016-01-01

    Publicly available bioassay data often contain errors. Curating massive bioassay data, especially high-throughput screening (HTS) data, for Quantitative Structure-Activity Relationship (QSAR) modeling requires the assistance of automated data curation tools. Using automated data curation tools is beneficial to users, especially ones without prior computer skills, because many platforms have been developed and optimized based on standardized requirements. As a result, the users do not need to extensively configure the curation tool prior to the application procedure. In this chapter, a freely available automatic tool to curate and prepare HTS data for QSAR modeling purposes will be described.
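
    A minimal flavour of such curation in pandas, on a toy table; the column names and curation rules are illustrative, not those of the tool described in the chapter:

        import pandas as pd

        df = pd.DataFrame({
            "smiles": ["CCO", "CCO", "c1ccccc1", "CC(=O)O", None],
            "outcome": ["active", "inactive", "inactive", "active", "active"],
        })
        df = df.dropna(subset=["smiles"])                 # drop records missing a structure
        conflicting = df.groupby("smiles")["outcome"].nunique() > 1
        df = df[~df["smiles"].isin(conflicting[conflicting].index)]  # drop contradictory replicates
        df = df.drop_duplicates("smiles")                 # keep one record per structure
        print(df)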

  8. Modeling and mapping of cadmium in soils based on qualitative and quantitative auxiliary variables in a cadmium contaminated area.

    Science.gov (United States)

    Cao, Shanshan; Lu, Anxiang; Wang, Jihua; Huo, Lili

    2017-02-15

    The aim of this study was to measure the improvement in mapping accuracy of spatial distribution of Cd in soils by using geostatistical methods combined with auxiliary factors, especially qualitative variables. Significant correlations between Cd content and correlation environment variables that are easy to obtain (such as topographic factors, distance to residential area, land use types and soil types) were analyzed systematically and quantitatively. Based on 398 samples collected from a Cd contaminated area (Hunan Province, China), we estimated the spatial distribution of Cd in soils by using spatial interpolation models, including ordinary kriging (OK), and regression kriging (RK) with each auxiliary variable, all quantitative variables (RKWQ) and all auxiliary variables (RKWA). Results showed that mapping with RK was more consistent with the sampling data of the spatial distribution of Cd in the study area than mapping with OK. The performance indicators (smaller mean error, mean absolute error, root mean squared error values and higher relative improvement of RK than OK) indicated that the introduction of auxiliary variables can improve the prediction accuracy of Cd in soils for which the spatial structure could not be well captured by point-based observation (nugget to sill ratio=0.76) and strong relationships existed between variables to be predicted and auxiliary variables. The comparison of RKWA with RKWQ further indicated that the introduction of qualitative variables improved the prediction accuracy, and even weakened the effects of quantitative factors. Furthermore, the significantly different relative improvement with similar R2 and varying spatial dependence showed that a reasonable choice of auxiliary variables and analysis of spatial structure of regression residuals are equally important to ensure accurate predictions. Copyright © 2016. Published by Elsevier B.V.
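
    Regression kriging as used here is a trend fitted on auxiliary covariates plus spatial interpolation of the residuals; the sketch below substitutes a Gaussian-process regressor for the kriging step (a close relative, not the paper's method) and uses entirely synthetic coordinates, covariates and Cd values.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(4)
        xy = rng.uniform(0, 1, (200, 2))                 # sampling coordinates
        aux = rng.normal(size=(200, 3))                  # e.g. elevation, distance, land-use score
        cd = (1.5 + aux @ np.array([0.5, -0.3, 0.2])
              + 0.3 * np.sin(6 * xy[:, 0])
              + 0.1 * rng.standard_normal(200))          # synthetic soil Cd

        trend = LinearRegression().fit(aux, cd)          # regression part
        resid = cd - trend.predict(aux)
        gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(0.01)).fit(xy, resid)

        # prediction = trend(covariates) + interpolated residual(coordinates)
        print(trend.predict([[0.1, 0.0, -0.2]]) + gp.predict([[0.5, 0.5]]))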

  9. Multiplicative effects model with internal standard in mobile phase for quantitative liquid chromatography-mass spectrometry.

    Science.gov (United States)

    Song, Mi; Chen, Zeng-Ping; Chen, Yao; Jin, Jing-Wen

    2014-07-01

    Liquid chromatography-mass spectrometry assays suffer from signal instability caused by the gradual fouling of the ion source, vacuum instability, aging of the ion multiplier, etc. To address this issue, in this contribution, an internal standard was added into the mobile phase. The internal standard was therefore ionized and detected together with the analytes of interest by the mass spectrometer to ensure that variations in measurement conditions and/or instrument have similar effects on the signal contributions of both the analytes of interest and the internal standard. Subsequently, based on the unique strategy of adding internal standard in mobile phase, a multiplicative effects model was developed for quantitative LC-MS assays and tested on a proof of concept model system: the determination of amino acids in water by LC-MS. The experimental results demonstrated that the proposed method could efficiently mitigate the detrimental effects of continuous signal variation, and achieved quantitative results with average relative predictive error values in the range of 8.0-15.0%, which were much more accurate than the corresponding results of conventional internal standard method based on the peak height ratio and partial least squares method (their average relative predictive error values were as high as 66.3% and 64.8%, respectively). Therefore, it is expected that the proposed method can be developed and extended in quantitative LC-MS analysis of more complex systems. Copyright © 2014 Elsevier B.V. All rights reserved.
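
    The core of the multiplicative-effects idea is that an internal standard dissolved in the mobile phase experiences the same run-to-run signal drift as the analyte, so the analyte/IS ratio is drift-free. A toy demonstration with synthetic responses (not the paper's data):

        import numpy as np

        rng = np.random.default_rng(1)
        conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])      # analyte standards
        drift = rng.uniform(0.6, 1.4, conc.size)         # multiplicative instrument drift per run
        s_analyte = drift * 50.0 * conc                  # drift-corrupted analyte signal
        s_is = drift * 300.0                             # internal standard sees the same drift

        ratio = s_analyte / s_is                         # drift cancels exactly in the ratio
        slope, intercept = np.polyfit(conc, ratio, 1)
        print("calibration slope:", slope, "intercept:", intercept)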

  10. Multicomponent quantitative spectroscopic analysis without reference substances based on ICA modelling.

    Science.gov (United States)

    Monakhova, Yulia B; Mushtakova, Svetlana P

    2017-05-01

    A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries generally between 95% and 105% were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on analysis of vitamins and caffeine in energy drinks and aromatic hydrocarbons in motor fuel with 10% error. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence/inaccessibility of reference materials.
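
    The ICA-based calibration can be sketched with scikit-learn's FastICA on synthetic Beer-Lambert mixtures; the pure spectra, concentrations, and the sign/scale resolution step are illustrative assumptions, not the published procedure.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(3)
        wl = np.linspace(220, 400, 300)                     # wavelength axis, nm
        s1 = np.exp(-(wl - 260) ** 2 / 200)                 # pure-component spectrum 1
        s2 = np.exp(-(wl - 330) ** 2 / 400)                 # pure-component spectrum 2
        C = rng.uniform(0.1, 1.0, (10, 2))                  # calibration concentrations
        X = C @ np.vstack([s1, s2]) + 1e-3 * rng.standard_normal((10, 300))

        scores = FastICA(n_components=2, random_state=0).fit_transform(X)
        # scores are proportional to concentrations up to sign and scale, which the
        # calibration set is then used to resolve before quantifying unknowns
        print(np.abs(np.corrcoef(scores.T, C.T)[:2, 2:]).round(2))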

  11. Geological Modelling and Validation of Geological Interpretations via Simulation and Classification of Quantitative Covariates

    Directory of Open Access Journals (Sweden)

    Amir Adeli

    2017-12-01

    This paper proposes a geostatistical approach for geological modelling and for validating an interpreted geological model, by identifying the areas of an ore deposit with a high probability of being misinterpreted, based on quantitative coregionalised covariates correlated with the geological categories. This proposal is presented through a case study of an iron ore deposit at a stage where the only available data are from exploration drill holes. This study consists of jointly simulating the quantitative covariates with no previous geological domaining. A change of variables is used to account for stoichiometric closure, followed by projection pursuit multivariate transformation, multivariate Gaussian simulation, and conditioning to the drill hole data. Subsequently, a decision tree classification algorithm is used to convert the simulated values into a geological category for each target block and realisation. The determination of the prior (ignoring drill hole data) and posterior (conditioned to drill hole data) probabilities of categories provides a means of identifying the blocks for which the interpreted category disagrees with the simulated quantitative covariates.
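
    The classification step can be sketched as follows: a tree is trained on drill-hole covariates with logged categories, then applied to each simulated realisation to yield per-block category probabilities. The arrays and the stand-in labelling rule below are synthetic.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(7)
        X_drill = rng.normal(size=(500, 3))            # e.g. transformed Fe, SiO2, Al2O3
        y_drill = (X_drill[:, 0] > 0).astype(int)      # stand-in for the logged category

        clf = DecisionTreeClassifier(max_depth=5).fit(X_drill, y_drill)

        X_sim = rng.normal(size=(100, 20, 3))          # blocks x realisations x covariates
        votes = np.stack([clf.predict(X_sim[:, r]) for r in range(20)], axis=1)
        posterior = votes.mean(axis=1)                 # per-block probability of category 1
        print(posterior[:5])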

  12. A quantitative model for cyclin-dependent kinase control of the cell cycle: revisited.

    Science.gov (United States)

    Uhlmann, Frank; Bouchoux, Céline; López-Avilés, Sandra

    2011-12-27

    The eukaryotic cell division cycle encompasses an ordered series of events. Chromosomal DNA is replicated during S phase of the cell cycle before being distributed to daughter cells in mitosis. Both S phase and mitosis in turn consist of an intricately ordered sequence of molecular events. How cell cycle ordering is achieved, to promote healthy cell proliferation and avert insults on genomic integrity, has been a theme of Paul Nurse's research. To explain a key aspect of cell cycle ordering, sequential S phase and mitosis, Stern & Nurse proposed 'A quantitative model for cdc2 control of S phase and mitosis in fission yeast'. In this model, S phase and mitosis are ordered by their dependence on increasing levels of cyclin-dependent kinase (Cdk) activity. Alternative mechanisms for ordering have been proposed that rely on checkpoint controls or on sequential waves of cyclins with distinct substrate specificities. Here, we review these ideas in the light of experimental evidence that has meanwhile accumulated. Quantitative Cdk control emerges as the basis for cell cycle ordering, fine-tuned by cyclin specificity and checkpoints. We propose a molecular explanation for quantitative Cdk control, based on thresholds imposed by Cdk-counteracting phosphatases, and discuss its implications.

  13. Existence of new nonlocal field theory on noncommutative space and spiral flow in renormalization group analysis of matrix models

    Energy Technology Data Exchange (ETDEWEB)

    Kawamoto, Shoichi [Department of Physics, Chung-Yuan Christian University, Chung-Li 320, Taiwan, R.O.C. (China); Kuroki, Tsunehide [Kobayashi-Maskawa Institute for the Origin of Particles and the Universe,Nagoya University, Nagoya 464-8602 (Japan)

    2015-06-10

    In previous studies (http://dx.doi.org/10.1007/JHEP08(2012)168; S. Kawamoto, D. Tomino and T. Kuroki, Large-N renormalization group on fuzzy sphere, Int. J. Mod. Phys. Conf. Ser. 21 (2013) 151; http://dx.doi.org/10.1002/prop.201400032), we formulated a matrix model renormalization group based on the fuzzy spherical harmonics, with which a notion of high/low energy can be attributed to matrix elements, and showed that it exhibits locality and various similarities to the usual Wilsonian renormalization group of quantum field theory. In this work, we continue the renormalization group analysis of a matrix model, with emphasis on nonlocal interactions in which the fields on antipodal points are coupled. Such interactions are indeed generated in the renormalization group procedure and are tightly related to the noncommutative nature of the geometry. We aim at formulating renormalization group equations including these nonlocal interactions and at establishing the existence of nontrivial field theory with antipodal interactions on the fuzzy sphere. We find several nontrivial fixed points and calculate the scaling dimensions associated with them. We also consider the noncommutative plane limit, in which no consistent fixed point is found. This contrast between the fuzzy sphere limit and the noncommutative plane limit would be a manifestation, in our formalism, of the claim given by Chu, Madore and Steinacker that the former does not have UV/IR mixing, while the latter does.

  14. Life Cycle Assessment Modelling of Greenhouse Gas Emissions from Existing and Proposed Municipal Solid Waste Management System of Lahore, Pakistan

    Directory of Open Access Journals (Sweden)

    Adila Batool Syeda

    2017-12-01

    Open dumping of indiscriminate municipal solid waste (MSW) contributes remarkably to global warming (GW). Life Cycle Assessment modelling may be a useful tool for assessing the best waste management option with regard to GW potential. The current study evaluates the contribution of the existing MSW management (MSWM) system to greenhouse gases in Gulberg Town, Lahore, Pakistan. This research also presents a comparison of scenarios with different waste management options. Life Cycle Assessment methodology has been used to conduct the study, with EASETECH used for modelling. Short-term scenarios (STSs) have been developed to promote integration of treatment technologies into the current waste management system within a few months. The results show that the major contribution to the total emissions comes from the anaerobic digestion of organic material in open waste dumps. Recycling is currently the best treatment option for reducing the CO2-eq values in the study area, with biogasification second in terms of savings and reduction. The integration of recycling and biogasification techniques would therefore be a good solution.

  15. A Quantitative Risk Evaluation Model for Network Security Based on Body Temperature

    Directory of Open Access Journals (Sweden)

    Y. P. Jiang

    2016-01-01

    Traditional network security risk evaluation models have certain limitations in real-time performance, accuracy and characterization. This paper proposes a quantitative risk evaluation model for network security based on body temperature (QREM-BT), which draws on the mechanism of the biological immune system, where an imbalance of the immune system results in body temperature changes. First, the r-contiguous bits non-constant matching rate algorithm is used to improve the detection quality of the detector and to reduce the missing rate and the false detection rate. Then the dynamic evolution process of the detector is described in detail. The mechanism of increased antibody concentration, made up of activating mature detectors and cloning memory detectors, is mainly used to assess network risk caused by various species of attacks. On this basis, this paper not only establishes the equation of the antibody concentration increase factor but also puts forward a quantitative calculation model of antibody concentration. Finally, because the mechanism of antibody concentration change is reasonable and effective and can reliably reflect network risk, a body temperature evaluation model is established. The simulation results showed that, according to the body temperature value, the proposed model assesses network security risk more effectively and in a more timely manner.

  16. Extraction, separation and quantitative structure-retention relationship modeling of essential oils in three herbs.

    Science.gov (United States)

    Wei, Yuhui; Xi, Lili; Chen, Dongxia; Wu, Xin'an; Liu, Huanxiang; Yao, Xiaojun

    2010-07-01

    The essential oils extracted from three kinds of herbs were separated on a 5% phenylmethyl silicone (DB-5MS) bonded-phase fused-silica capillary column and identified by MS. Seventy-four of the identified compounds were selected as the original data set, and their chemical structures and gas chromatographic retention times (RT) were used to build a quantitative structure-retention relationship model by genetic algorithm and multiple linear regression analysis. The predictive ability of the model was verified by internal validation (leave-one-out and fivefold cross-validation, and Y-scrambling). For external validation, the model was also applied to predict the gas chromatographic RT of the 14 volatile compounds not used for model development from the essential oil of Radix angelicae sinensis. The applicability domain was checked by the leverage approach to verify prediction reliability. The results obtained using several validations indicated that the best quantitative structure-retention relationship model was robust and satisfactory, could provide a feasible and effective tool for predicting the gas chromatographic RT of volatile compounds, and could also be applied to help identify compounds with the same gas chromatographic RT.

  17. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    Science.gov (United States)

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to environmental research centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article. © 2013 Elsevier B.V. All rights reserved.

  18. Quantitative immunohistochemical method for detection of wheat protein in model sausage

    Directory of Open Access Journals (Sweden)

    Zuzana Řezáčová Lukášková

    2014-01-01

    Since gluten can induce coeliac symptoms in hypersensitive consumers with coeliac disease, it is necessary to label foodstuffs containing it. To label foodstuffs correctly, it is essential to find reliable methods that accurately determine the amount of wheat protein in food. The objective of this study was to compare the quantitative detection of wheat protein in model sausages by ELISA and immunohistochemical methods. Immunohistochemistry was combined with stereology to achieve quantitative results. A high correlation between the amount of added wheat protein and the compared methods was confirmed. For the ELISA method, the determined values were r = 0.98, P < 0.01. Although ELISA is an accredited method, it was not reliable, unlike the immunohistochemical methods (stereology, SD = 3.1).

  19. A semi-quantitative model for risk appreciation and risk weighing

    DEFF Research Database (Denmark)

    Bos, Peter M.J.; Boon, Polly E.; van der Voet, Hilko

    2009-01-01

    Risk managers need detailed information on (1) the type of effect, (2) the size (severity) of the expected effect(s) and (3) the fraction of the population at risk to decide on well-balanced risk reduction measures. A previously developed integrated probabilistic risk assessment (IPRA) model...... provides quantitative information on these three parameters. A semi-quantitative tool is presented that combines information on these parameters into easy-readable charts that will facilitate risk evaluations of exposure situations and decisions on risk reduction measures. This tool is based on a concept...... of health impact categorization that has been successfully in force for several years within several emergency planning programs. Four health impact categories are distinguished: No-Health Impact, Low-Health Impact, Moderate-Health Impact and Severe-Health Impact. Two different charts are presented...

  20. Using the ACT-R architecture to specify 39 quantitative process models of decision making

    Directory of Open Access Journals (Sweden)

    Julian N. Marewski

    2011-08-01

    Hypotheses about decision processes are often formulated qualitatively and remain silent about the interplay of decision, memorial, and other cognitive processes. At the same time, existing decision models are specified at varying levels of detail, making it difficult to compare them. We provide a methodological primer on how detailed cognitive architectures such as ACT-R allow remedying these problems. To make our point, we address a controversy, namely, whether noncompensatory or compensatory processes better describe how people make decisions from the accessibility of memories. We specify 39 models of accessibility-based decision processes in ACT-R, including the noncompensatory recognition heuristic and various other popular noncompensatory and compensatory decision models. Additionally, to illustrate how such models can be tested, we conduct a model comparison, fitting the models to one experiment and letting them generalize to another. Behavioral data are best accounted for by race models. These race models embody the noncompensatory recognition heuristic and compensatory models as a race between competing processes, dissolving the dichotomy between existing decision models.

  1. Using the ACT-R architecture to specify 39 quantitative process models of decision making

    NARCIS (Netherlands)

    Marewski, Julian N.; Mehlhorn, Katja

    Hypotheses about decision processes are often formulated qualitatively and remain silent about the interplay of decision, memorial, and other cognitive processes. At the same time, existing decision models are specified at varying levels of detail, making it difficult to compare them. We provide a

  2. Quantitative retention-activity relationship models for quinolones using biopartitioning micellar chromatography.

    Science.gov (United States)

    Wu, Li-Ping; Chen, Yu; Wang, Shu-Rong; Chen, Cong; Ye, Li-Ming

    2008-01-01

    A simple and reproducible quantitative retention-activity relationship (QRAR) model utilizing biopartitioning micellar chromatography was developed for the biological parameter estimation of drugs. The correlation between retention factors of quinolones obtained in physiological conditions (pH, ionic strength) and biological activities was investigated using different second-order polynomial models. The predictive and interpretative ability of the chromatographic models was evaluated in terms of cross-validated data (RMSEC, RMSECV and RMSECVi). The aim was to obtain adequate QRAR models of half-life, clearance, volume of distribution, plasma protein combination rate, area under concentration-time curve and toxicity (LD50) of quinolones, and to elucidate the advantages and limitations of using a single parameter as independent variable for describing and estimating the activities. Copyright (c) 2007 John Wiley & Sons, Ltd.
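
    The second-order polynomial QRAR models referred to here have the form activity = c0 + c1*k + c2*k^2, with k the retention factor measured under physiological conditions; a fit on hypothetical retention/half-life pairs:

        import numpy as np

        k = np.array([1.2, 1.8, 2.4, 3.1, 3.9, 4.6])          # retention factors (hypothetical)
        t_half = np.array([3.5, 4.2, 5.6, 7.9, 10.8, 14.1])   # half-lives, h (hypothetical)

        coeffs = np.polyfit(k, t_half, 2)                     # returns c2, c1, c0
        rmsec = np.sqrt(np.mean((t_half - np.polyval(coeffs, k)) ** 2))
        print("model coefficients:", coeffs, "RMSEC:", round(rmsec, 3))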

  3. Quantitative Modeling of Membrane Transport and Anisogamy by Small Groups Within a Large-Enrollment Organismal Biology Course?

    OpenAIRE

    Haag, Eric S.; Marbach-Ad, Gili

    2016-01-01

    Quantitative modeling is not a standard part of undergraduate biology education, yet is routine in the physical sciences. Because of the obvious biophysical aspects, classes in anatomy and physiology offer an opportunity to introduce modeling approaches to the introductory curriculum. Here, we describe two in-class exercises for small groups working within a large-enrollment introductory course in organismal biology. Both build and derive biological insights from quantitative models, implemen...

  4. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    The SDN-based controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on security mechanisms, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. From an analysis of the controller threat model, we give the formal model results for the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same kind of controller but also different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN-based controllers: POX, OpenDaylight, Floodlight, and Ryu. The tests, whose outcomes are similar to those of traditional qualitative analysis, demonstrate that with our approach we are able to obtain specific security values for different controllers and to present more accurate results.

  5. The evolution and extinction of the ichthyosaurs from the perspective of quantitative ecospace modelling.

    Science.gov (United States)

    Dick, Daniel G; Maxwell, Erin E

    2015-07-01

    The role of niche specialization and narrowing in the evolution and extinction of the ichthyosaurs has been widely discussed in the literature. However, previous studies have concentrated on a qualitative discussion of these variables only. Here, we use the recently developed approach of quantitative ecospace modelling to provide a high-resolution quantitative examination of the changes in dietary and ecological niche experienced by the ichthyosaurs throughout their evolution in the Mesozoic. In particular, we demonstrate that despite recent discoveries increasing our understanding of taxonomic diversity among the ichthyosaurs in the Cretaceous, when viewed from the perspective of ecospace modelling, a clear trend of ecological contraction is visible as early as the Middle Jurassic. We suggest that this ecospace redundancy, if carried through to the Late Cretaceous, could have contributed to the extinction of the ichthyosaurs. Additionally, our results suggest a novel model to explain ecospace change, termed the 'migration model'. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  6. [A quantitative risk assessment model of salmonella on carcass in poultry slaughterhouse].

    Science.gov (United States)

    Zhang, Yu; Chen, Yuzhen; Hu, Chunguang; Zhang, Huaning; Bi, Zhenwang; Bi, Zhenqiang

    2015-05-01

    To construct a quantitative risk assessment model of salmonella on carcasses in a poultry slaughterhouse and to identify effective interventions to reduce salmonella contamination, we constructed a modular process risk model (MPRM) from evisceration to chilling in an Excel spreadsheet, using the process parameters for poultry and the salmonella concentration surveillance data of Jinan in 2012. The MPRM was simulated with @Risk software. The calculated concentration of salmonella on the carcass after chilling was 1.96 MPN/g. Sensitivity analysis indicated that the correlation coefficients for the salmonella concentration after defeathering and in the chilling pool were 0.84 and 0.34, respectively, making these the primary factors determining the concentration of salmonella on the carcass after chilling. The study provides a quantitative assessment model structure for salmonella on carcasses in poultry slaughterhouses. Risk managers could control the contamination of salmonella on the carcass after chilling by reducing its concentration after defeathering and in the chilling pool.
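
    A modular process risk model of this kind chains stage-by-stage changes in contamination. The sketch below propagates log10 concentration through hypothetical stages by Monte Carlo with NumPy, standing in for the @Risk spreadsheet simulation; the stage order and all distribution parameters are assumptions, not the surveillance-fitted values.

    ```python
    # Monte Carlo sketch of a modular process risk model: each stage shifts
    # the log10 salmonella concentration by an assumed normal increment.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    log_c = rng.normal(1.5, 0.5, n)      # log10 MPN/g entering the line (assumed)
    log_c += rng.normal(0.4, 0.3, n)     # defeathering cross-contamination
    log_c += rng.normal(-0.3, 0.2, n)    # evisceration/washing reduction
    log_c += rng.normal(-0.8, 0.4, n)    # chilling pool reduction

    conc = 10.0 ** log_c
    print(f"mean concentration after chilling: {conc.mean():.2f} MPN/g")
    print(f"95th percentile: {np.percentile(conc, 95):.2f} MPN/g")
    ```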

  7. Research on Quantitative Models of Electric Vehicle Charging Stations Based on Principle of Energy Equivalence

    Directory of Open Access Journals (Sweden)

    Zhenpo Wang

    2013-01-01

    Full Text Available In order to meet the matching and planning requirements for charging stations in the marketization of electric vehicles (EVs), and drawing on layout theories for gas stations, a location model for charging stations is established based on electricity consumption along the roads among cities, and a quantitative model of charging stations is presented based on the conversion of oil sales in a given area. Both models build on the principle of energy-consumption equivalence when replacing traditional vehicles with EVs. Defined data are adopted in the analysis of two numerical cases, and the influence of factors such as the proportion of vehicle types and the EV energy consumption on charging station layout and quantity is analyzed. The results show that the quantitative model of charging stations is reasonable and feasible, and that the number of EVs and their energy consumption have a more significant impact on the number of charging stations than the vehicle type proportion, which provides a basis for decision making in the layout and construction of charging stations.
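
    The oil-sales conversion reduces to a back-of-envelope energy-equivalence calculation: convert gasoline sold in an area into kilometers driven, then into EV electricity demand, then into stations. All numbers below are illustrative assumptions, not the paper's case data.

    ```python
    # Energy-equivalence estimate of charging station count (all values assumed).
    gasoline_sales_l_per_year = 5.0e7   # litres/year sold in the area
    ev_substitution_share = 0.10        # share of traffic shifting to EVs
    km_per_litre = 12.0                 # fleet-average fuel economy
    ev_kwh_per_km = 0.15                # EV energy consumption
    station_kwh_per_day = 4000.0        # deliverable energy per station per day

    km_driven = gasoline_sales_l_per_year * ev_substitution_share * km_per_litre
    ev_demand_kwh_per_day = km_driven * ev_kwh_per_km / 365.0
    n_stations = ev_demand_kwh_per_day / station_kwh_per_day
    print(f"charging stations required: {n_stations:.1f}")
    ```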

  8. Mechanism of variable structural colour in the neon tetra: quantitative evaluation of the Venetian blind model

    Science.gov (United States)

    Yoshioka, S.; Matsuhana, B.; Tanaka, S.; Inouye, Y.; Oshima, N.; Kinoshita, S.

    2011-01-01

    The structural colour of the neon tetra is distinguishable from those of, e.g., butterfly wings and bird feathers, because it can change in response to the light intensity of the surrounding environment. This fact clearly indicates the variability of the colour-producing microstructures. It has been known that an iridophore of the neon tetra contains a few stacks of periodically arranged light-reflecting platelets, which can cause multilayer optical interference phenomena. As a mechanism of the colour variability, the Venetian blind model has been proposed, in which the light-reflecting platelets are assumed to be tilted during colour change, resulting in a variation in the spacing between the platelets. In order to quantitatively evaluate the validity of this model, we have performed a detailed optical study of a single stack of platelets inside an iridophore. In particular, we have prepared a new optical system that can simultaneously measure both the spectrum and direction of the reflected light, which are expected to be closely related to each other in the Venetian blind model. The experimental results and detailed analysis are found to quantitatively verify the model. PMID:20554565

  9. Flow assignment model for quantitative analysis of diverting bulk freight from road to railway.

    Directory of Open Access Journals (Sweden)

    Chang Liu

    Full Text Available Since railway transport offers high volume and low carbon emissions, diverting some freight from road to railway can help reduce the negative environmental impacts of transport. This paper develops a flow assignment model for the quantitative analysis of diverting truck freight to railway. First, a general network covering road transportation, railway transportation, handling and transferring is established according to all the steps in the whole transportation process. Then, generalized cost functions are formulated that embody the factors shippers consider when choosing mode and path, including the congestion cost on roads and the capacity constraints of railways and freight stations. Based on the general network and the generalized cost functions, a user equilibrium flow assignment model is developed to simulate the flow distribution over the network under the condition that all shippers choose transportation mode and path independently. Since the model is nonlinear and challenging to solve, we linearize it using tangent lines that constitute an envelope curve. Finally, a numerical example is presented to test the model and illustrate the quantitative analysis of bulk freight modal shift between road and railway.
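
    On a toy two-mode network, the user-equilibrium condition reduces to finding the split at which road and rail generalized costs are equal, which a bisection search solves directly (the paper's tangent-line envelope linearization addresses the same fixed point on a full network). The cost functions and parameters below are assumptions for illustration.

    ```python
    # User-equilibrium split of a fixed bulk-freight demand between two modes.

    def road_cost(q):
        """BPR-style congested road cost (generic time/cost units)."""
        t0, cap = 10.0, 500.0
        return t0 * (1.0 + 0.15 * (q / cap) ** 4)

    def rail_cost(q):
        """Rail: higher fixed handling/transfer cost, mild capacity effect."""
        return 14.0 + 0.002 * q

    def equilibrium_split(demand, tol=1e-3):
        """Bisection for the road flow at which both modes cost the same."""
        lo, hi = 0.0, demand
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if road_cost(mid) < rail_cost(demand - mid):
                lo = mid   # road still cheaper at this split: shift to road
            else:
                hi = mid
        return 0.5 * (lo + hi)

    demand = 1000.0  # tonnes of bulk freight
    q_road = equilibrium_split(demand)
    print(f"road: {q_road:.0f} t, rail: {demand - q_road:.0f} t, "
          f"equalized cost: {road_cost(q_road):.2f}")
    ```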

  10. 77 FR 41985 - Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A...

    Science.gov (United States)

    2012-07-17

    ... hypothetical influenza vaccine, and to seek from a range of experts, feedback on the current version of the... influenza vaccine benefit/risk; and (3) discuss possible applications of quantitative benefit/risk... HUMAN SERVICES Food and Drug Administration Use of Influenza Disease Models To Quantitatively Evaluate...

  11. Synthetic cannabinoids: In silico prediction of the cannabinoid receptor 1 affinity by a quantitative structure-activity relationship model.

    Science.gov (United States)

    Paulke, Alexander; Proschak, Ewgenij; Sommer, Kai; Achenbach, Janosch; Wunder, Cora; Toennes, Stefan W

    2016-03-14

    The number of new synthetic psychoactive compounds increases steadily. Among these psychoactive compounds, the synthetic cannabinoids (SCBs) are the most popular and serve as a substitute for herbal cannabis. More than 600 of these substances already exist. For some SCBs the in vitro cannabinoid receptor 1 (CB1) affinity is known, but for the majority it is unknown. A quantitative structure-activity relationship (QSAR) model was developed which allows determination of an SCB's affinity for CB1 (expressed as the binding constant, Ki) without reference substances. The chemically advanced template search descriptor was used for vector representation of the compound structures. The similarity between two molecules was calculated using the feature-pair distribution similarity. The Ki values were calculated using the inverse distance weighting method. The prediction model was validated using a cross-validation procedure. The predicted Ki values of some new SCBs ranged from 20 (considerably higher affinity for CB1 than THC) to 468 (considerably lower affinity for CB1 than THC). The present QSAR model can serve as a simple, fast and cheap tool to get a first hint of the biological activity of new synthetic cannabinoids or of other new psychoactive compounds. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
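
    The prediction step itself is compact: distances are derived from pairwise similarities, and the query Ki is a distance-weighted average of training values. A minimal inverse-distance-weighting sketch, with made-up similarities and Ki values rather than real descriptor output, might look like this:

    ```python
    # Inverse-distance-weighted Ki prediction from pairwise similarities.
    import numpy as np

    def idw_predict(similarities, known_ki, power=2.0, eps=1e-9):
        """Predict Ki as a similarity-weighted average of training values.
        `similarities` lie in (0, 1]; distance = 1 - similarity."""
        d = 1.0 - np.asarray(similarities)
        w = 1.0 / (d + eps) ** power
        return float(np.dot(w, known_ki) / w.sum())

    train_ki = np.array([20.0, 150.0, 468.0])     # illustrative Ki values
    sims_to_query = np.array([0.92, 0.55, 0.30])  # query vs. each training SCB
    print(f"predicted Ki: {idw_predict(sims_to_query, train_ki):.1f}")
    ```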

  12. A Thermo-Hydro-Mechanical coupled Numerical modeling of Injection-induced seismicity on a pre-existing fault

    Science.gov (United States)

    Kim, Jongchan; Archer, Rosalind

    2017-04-01

    In terms of energy development (oil, gas and geothermal fields) and environmental improvement (carbon dioxide sequestration), fluid injection into the subsurface has increased dramatically. As a side effect of these operations, the number of injection-induced seismic events has also risen significantly. It is known that the main causes of induced seismicity are changes in local shear and normal stresses as well as in pore pressure; this mechanism predominantly increases the probability of earthquake occurrence on permeable pre-existing fault zones. In this 2D fully coupled THM geothermal reservoir numerical simulation of injection-induced seismicity, we investigate the thermal, hydraulic and mechanical behavior of the fracture zone, considering a variety of 1) fault permeabilities, 2) injection rates and 3) injection temperatures to identify the parameters contributing most to induced seismic activity. We also calculate the spatiotemporal variation of the Coulomb stress, which combines shear stress, normal stress and pore pressure, and finally forecast the seismicity rate on the fault zone using the seismicity model of Dieterich (1994).
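
    The Coulomb stress combination mentioned above follows the standard relation ΔCFS = Δτ + μ(Δσn + ΔP), with tension-positive normal stress and friction coefficient μ; a positive change brings the fault closer to failure. The stress values below are illustrative inputs, not simulation outputs.

    ```python
    # Coulomb failure stress change on a fault (all stresses in MPa).

    def coulomb_stress_change(d_shear, d_normal, d_pore, mu=0.6):
        """dCFS = d_tau + mu * (d_sigma_n + dP); positive promotes failure."""
        return d_shear + mu * (d_normal + d_pore)

    # Illustrative injection scenario: small shear change, slight unclamping,
    # pore pressure up by 2 MPa near the injector.
    dcfs = coulomb_stress_change(d_shear=0.1, d_normal=-0.5, d_pore=2.0)
    print(f"dCFS = {dcfs:.2f} MPa")
    ```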

  13. Methylene blue does not reverse existing neurofibrillary tangle pathology in the rTg4510 mouse model of tauopathy.

    Science.gov (United States)

    Spires-Jones, Tara L; Friedman, Taylor; Pitstick, Rose; Polydoro, Manuela; Roe, Allyson; Carlson, George A; Hyman, Bradley T

    2014-03-06

    Alzheimer's disease is characterized pathologically by aggregation of amyloid beta into senile plaques and aggregation of pathologically modified tau into neurofibrillary tangles. While changes in amyloid processing are strongly implicated in disease initiation, the recent failure of amyloid-based therapies has highlighted the importance of tau as a therapeutic target. "Tangle busting" compounds including methylene blue and analogous molecules are currently being evaluated as therapeutics in Alzheimer's disease. Previous studies indicated that methylene blue can reverse tau aggregation in vitro after 10 min, and subsequent studies suggested that high levels of drug reduce tau protein levels (assessed biochemically) in vivo. Here, we tested whether methylene blue could remove established neurofibrillary tangles in the rTg4510 model of tauopathy, which develops robust tangle pathology. We find that 6 weeks of methylene blue dosing in the water from 16 months to 17.5 months of age decreases soluble tau but does not remove sarkosyl insoluble tau, or histologically defined PHF1 or Gallyas positive tangle pathology. These data indicate that methylene blue treatment will likely not rapidly reverse existing tangle pathology. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution.

    Science.gov (United States)

    Rohlfs, Rori V; Nielsen, Rasmus

    2015-09-01

    A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within- and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson Kreitman Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity for a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence

  15. Quantitative model for the generic 3D shape of ICMEs at 1 AU

    Science.gov (United States)

    Démoulin, P.; Janvier, M.; Masías-Meza, J. J.; Dasso, S.

    2016-10-01

    Context. Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs), while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, that is, along a 1D cut. The data therefore only give a partial view of the 3D structures of ICMEs. Aims: By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. Methods: In a first approach we theoretically obtained the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compared their compatibility with observed distributions. In a second approach we used the shock normal and the flux rope axis orientations together with the impact parameter to provide statistical information across the spacecraft trajectory. Results: The study of different 3D shock models shows that the observations are compatible with a shock that is symmetric around the Sun-apex line as well as with an asymmetry up to an aspect ratio of around 3. Moreover, flat or dipped shock surfaces near their apex can only be rare cases. Next, the sheath thickness and the ICME velocity have no global trend along the ICME front. Finally, regrouping all these new results and those of our previous articles, we provide a quantitative ICME generic 3D shape, including the global shape of the shock, the sheath, and the flux rope. Conclusions: The obtained quantitative generic ICME shape will have implications for several aims. For example, it constrains the output of typical ICME numerical simulations. It is also a base for studying the transport of high-energy solar and cosmic particles during an ICME propagation as well as for modeling and forecasting space weather conditions near Earth.

  16. Spatiotemporal microbiota dynamics from quantitative in vitro and in silico models of the gut

    Science.gov (United States)

    Hwa, Terence

    The human gut harbors a dynamic microbial community whose composition bears great importance for the health of the host. Here, we investigate how colonic physiology impacts bacterial growth behaviors, which ultimately dictate the gut microbiota composition. Combining measurements of bacterial growth physiology with analysis of published data on human physiology into a quantitative modeling framework, we show how hydrodynamic forces in the colon, in concert with other physiological factors, determine the abundances of the major bacterial phyla in the gut. Our model quantitatively explains the observed variation of microbiota composition among healthy adults, and predicts colonic water absorption (manifested as stool consistency) and nutrient intake to be two key factors determining this composition. The model further reveals that both factors, which have been identified in recent correlative studies, exert their effects through the same mechanism: changes in colonic pH that differentially affect the growth of different bacteria. Our findings show that a predictive and mechanistic understanding of microbial ecology in the human gut is possible, and offer the hope for the rational design of intervention strategies to actively control the microbiota. This work is supported by the Bill and Melinda Gates Foundation.

  17. Spectral Quantitative Analysis Model with Combining Wavelength Selection and Topology Structure Optimization

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2016-01-01

    Full Text Available Spectroscopy is an efficient and widely used quantitative analysis method. In this paper, a spectral quantitative analysis model combining wavelength selection and topology structure optimization is proposed. In the proposed method, a backpropagation neural network is adopted for building the component prediction model, and the simultaneous optimization of the wavelength selection and the topology structure of the neural network is realized by nonlinear adaptive evolutionary programming (NAEP). The hybrid chromosome in the binary scheme of NAEP has three parts: the first represents the topology structure of the neural network, the second represents the selection of wavelengths in the spectral data, and the third represents the mutation parameters of NAEP. Two real flue gas datasets are used in the experiments. To demonstrate the effectiveness of the method, models were built using partial least squares with the full spectrum, partial least squares combined with a genetic algorithm, the uninformative variable elimination method, a backpropagation neural network with the full spectrum, a backpropagation neural network combined with a genetic algorithm, and the proposed method. Experimental results verify that the proposed method predicts more accurately and robustly, making it a practical spectral analysis tool.
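
    A stripped-down version of the evolutionary wavelength-selection loop is sketched below; to stay self-contained it scores binary wavelength masks with a ridge regression rather than a backpropagation network, the data are synthetic, and the population size, mutation rate and generation count are arbitrary choices.

    ```python
    # Evolutionary search over binary wavelength masks (simplified sketch).
    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_wavelengths = 60, 40
    X = rng.normal(size=(n_samples, n_wavelengths))
    y = X[:, [3, 7, 20]].sum(axis=1) + 0.1 * rng.normal(size=n_samples)

    def fitness(mask):
        """RMSE of a ridge model on the selected wavelengths (lower is better)."""
        if not mask.any():
            return np.inf
        Xs = X[:, mask]
        w = np.linalg.solve(Xs.T @ Xs + 1e-3 * np.eye(mask.sum()), Xs.T @ y)
        return float(np.sqrt(np.mean((Xs @ w - y) ** 2)))

    pop = rng.random((30, n_wavelengths)) < 0.3      # initial population of masks
    for _ in range(50):                              # mutate, then select the best
        children = pop ^ (rng.random(pop.shape) < 0.02)
        both = np.vstack([pop, children])
        scores = np.array([fitness(m) for m in both])
        pop = both[np.argsort(scores)[:30]]

    best = pop[0]
    print("selected wavelengths:", np.flatnonzero(best),
          "RMSE:", round(fitness(best), 3))
    ```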

  18. Model development for quantitative evaluation of proliferation resistance of nuclear fuel cycles

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Won Il; Kim, Ho Dong; Yang, Myung Seung

    2000-07-01

    This study addresses the quantitative evaluation of proliferation resistance, an important factor in alternative nuclear fuel cycle systems. A model was developed to quantitatively evaluate the proliferation resistance of nuclear fuel cycles, and the proposed model was then applied to the Korean environment as a sample study to provide better references for the determination of a future nuclear fuel cycle system in Korea. To quantify the proliferation resistance of a nuclear fuel cycle, a proliferation resistance index was defined in imitation of an electrical circuit with an electromotive force and various electrical resistance components. The analysis of the proliferation resistance of nuclear fuel cycles has shown that the resistance index as defined herein can be used as an international measure of the relative risk of nuclear proliferation if the motivation index is appropriately defined. It has also shown that the proposed model can include political issues as well as technical ones relevant to proliferation resistance, and can consider all facilities and activities in a specific nuclear fuel cycle (from mining to disposal). In addition, sensitivity analyses in the sample study indicate that the direct disposal option in a country with high nuclear propensity may give rise to a higher risk of nuclear proliferation than the reprocessing option in a country with low nuclear propensity.
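
    The circuit analogy can be made concrete in a few lines: motivation plays the role of the electromotive force, each barrier adds a series resistance, and the proliferation "current" is their quotient. The barrier names and values below are invented for illustration and are not the paper's calibration.

    ```python
    # Ohm's-law analogy: risk ~ motivation (EMF) / total series resistance.

    def proliferation_risk(motivation, resistances):
        return motivation / sum(resistances)

    barriers = {
        "international safeguards/inspection": 4.0,
        "radiological barrier of spent fuel": 5.0,
        "isotopic barrier (Pu composition)": 6.0,
        "technical difficulty of diversion": 8.0,
    }

    for motivation, label in [(9.0, "high nuclear propensity"),
                              (2.0, "low nuclear propensity")]:
        risk = proliferation_risk(motivation, barriers.values())
        print(f"{label}: relative proliferation risk = {risk:.3f}")
    ```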

  19. Multi-factor models and signal processing techniques application to quantitative finance

    CERN Document Server

    Darolles, Serges; Jay, Emmanuelle

    2013-01-01

    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices.This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere

  20. Modelling Framework and the Quantitative Analysis of Distributed Energy Resources in Future Distribution Networks

    DEFF Research Database (Denmark)

    Han, Xue; Sandels, Claes; Zhu, Kun

    2013-01-01

    There has been a large body of statements claiming that the large-scale deployment of Distributed Energy Resources (DERs) could eventually reshape future distribution grid operation in numerous ways. Thus, it is necessary to introduce a framework to measure to what extent the power system is affected by DERs, comprising distributed generation, active demand and electric vehicles. Subsequently, quantitative analysis was made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation...

  1. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    Science.gov (United States)

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.
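
    The inflation effect is easy to reproduce: because a two-component normal mixture contains the single normal as a special case, it never fits worse, so a LOD-like score can be positive even when no QTL is present. The sketch below fits both models to heavy-tailed phenotype data with no genetic effect, with the mixing proportion fixed at 0.5 as for a locus midway between uninformative markers; all data are synthetic.

    ```python
    # Single normal vs. equal-weight two-component normal mixture on
    # phenotype data with no QTL effect; compare fits via a LOD score.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    y = rng.standard_t(df=3, size=200)       # heavy-tailed phenotype, no QTL

    # single-normal log-likelihood at its MLE
    l0 = norm.logpdf(y, y.mean(), y.std()).sum()

    # EM for the mixture: shared sigma, mixing proportion fixed at 0.5
    mu = np.array([y.mean() - 0.5, y.mean() + 0.5])
    sigma = y.std()
    for _ in range(200):
        p1, p2 = norm.pdf(y, mu[0], sigma), norm.pdf(y, mu[1], sigma)
        r = p1 / (p1 + p2 + 1e-300)          # E-step: responsibilities
        mu = np.array([(r * y).sum() / r.sum(),
                       ((1 - r) * y).sum() / (1 - r).sum()])
        sigma = np.sqrt((r * (y - mu[0]) ** 2
                         + (1 - r) * (y - mu[1]) ** 2).mean())

    l1 = np.log(0.5 * norm.pdf(y, mu[0], sigma)
                + 0.5 * norm.pdf(y, mu[1], sigma) + 1e-300).sum()
    print(f"LOD = {(l1 - l0) / np.log(10):.2f} with no QTL in the data")
    ```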

  2. Modeling of microfluidic microbial fuel cells using quantitative bacterial transport parameters

    Science.gov (United States)

    Mardanpour, Mohammad Mahdi; Yaghmaei, Soheila; Kalantar, Mohammad

    2017-02-01

    The objective of the present study is to analyze the dynamic modeling of bioelectrochemical processes and to improve the performance of previous models using quantitative data on bacterial transport parameters. The main deficiency of previous MFC models concerning the spatial distribution of biocatalysts is the assumption of an initial distribution of attached/suspended bacteria on the electrode or in the anolyte bulk, which is the foundation for biofilm formation. To correct this imperfection, chemotactic motility is quantified numerically to capture the mechanisms by which suspended microorganisms distribute themselves in the anolyte and/or attach to the anode surface to extend the biofilm. The spatial and temporal distributions of the bacteria, as well as the dynamic behavior of the anolyte and biofilm, are simulated, and the performance of the microfluidic MFC as a chemotaxis assay is assessed by analyzing bacterial activity, substrate variation, the bioelectricity production rate and the influence of external resistance on the biofilm and the anolyte's features.

  3. Web Applications Vulnerability Management using a Quantitative Stochastic Risk Modeling Method

    Directory of Open Access Journals (Sweden)

    Sergiu SECHEL

    2017-01-01

    Full Text Available The aim of this research is to propose a quantitative risk modeling method that reduces the guesswork and uncertainty in the vulnerability and risk assessment of web-based applications, while giving users the flexibility to assess risk according to their risk appetite and tolerance with a high degree of assurance. The research builds on work done by the OWASP Foundation on this subject, but their risk rating methodology needed debugging and updates in key areas that are presented in this paper. The modified risk modeling method uses Monte Carlo simulations to model risk characteristics that cannot be determined without guesswork. It was tested in vulnerability assessment activities on real production systems and, in theory, by assigning discrete uniform assumptions to all risk characteristics (risk attributes) and evaluating the results after 1.5 million rounds of Monte Carlo simulations.
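
    The core of the approach can be sketched by replacing each OWASP rating factor's point estimate with a distribution and simulating. The factor choices and score ranges below are illustrative assumptions, and the round count is reduced for speed (the paper reports 1.5 million rounds).

    ```python
    # Monte Carlo version of an OWASP-style risk rating: each 0-9 factor
    # is a discrete uniform range instead of a single guessed score.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000

    likelihood_factors = [rng.integers(3, 9, n),   # ease of discovery
                          rng.integers(2, 8, n),   # ease of exploit
                          rng.integers(1, 6, n)]   # required skill (inverted)
    impact_factors     = [rng.integers(5, 10, n),  # loss of confidentiality
                          rng.integers(3, 8, n)]   # reputation damage

    likelihood = np.mean(likelihood_factors, axis=0)
    impact = np.mean(impact_factors, axis=0)
    severity = likelihood * impact / 9.0           # keep the 0-9 OWASP scale

    print(f"median severity: {np.median(severity):.2f}")
    print(f"95th percentile: {np.percentile(severity, 95):.2f}")
    ```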

  4. Quantitative evaluation of lake eutrophication responses under alternative water diversion scenarios: a water quality modeling based statistical analysis approach.

    Science.gov (United States)

    Liu, Yong; Wang, Yilin; Sheng, Hu; Dong, Feifei; Zou, Rui; Zhao, Lei; Guo, Huaicheng; Zhu, Xiang; He, Bin

    2014-01-15

    China is confronting the challenge of accelerated lake eutrophication, of which Lake Dianchi is considered the most serious case. Eutrophication control for Lake Dianchi began in the mid-1980s. However, decision makers have been puzzled by the lack of visible water quality response to past efforts, given the tremendous investment, and therefore need a scientifically sound way to quantitatively evaluate the response of lake water quality to proposed management measures and engineering works. We used a water quality modeling based scenario analysis approach to quantitatively evaluate the eutrophication responses of Lake Dianchi to an under-construction water diversion project. The primary analytic framework was built on a three-dimensional hydrodynamic, nutrient fate and transport, and algae dynamics model, which had previously been calibrated and validated using historical data. We designed 16 scenarios to analyze the water quality effects of three driving forces: watershed nutrient loading, variations in diverted inflow water, and lake water level. A two-step statistical analysis consisting of an orthogonal test analysis and linear regression was then conducted to distinguish the contributions of the driving forces to lake water quality. The analysis shows that (a) different ways of managing the diversion project would result in different water quality responses in Lake Dianchi, though the differences do not appear to be significant; (b) the maximum reductions in annual average and peak Chl-a concentration achievable through the various modes of diversion project operation are 11% and 5%, respectively; (c) a combined 66% watershed load reduction and water diversion can reduce the lake hypoxia volume percentage from the existing 6.82% to 3.00%; and (d) the water diversion will decrease the occurrence of algal blooms, and the effect of algae reduction can be enhanced if diverted water are seasonally allocated such that wet

  5. Toxicity challenges in environmental chemicals: Prediction of human plasma protein binding through quantitative structure-activity relationship (QSAR) models

    Science.gov (United States)

    The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...

  6. Quantitative computational models of molecular self-assembly in systems biology.

    Science.gov (United States)

    Thomas, Marcus; Schwartz, Russell

    2017-05-23

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.

  8. Correlations of 3T DCE-MRI Quantitative Parameters with Microvessel Density in a Human-Colorectal-Cancer Xenograft Mouse Model

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Sung Jun; An, Chan Sik; Koom, Woong Sub; Song, Ho Taek; Suh, Jin Suck [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    2011-11-15

    To investigate the correlation between quantitative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) parameters and microvascular density (MVD) in a human-colon-cancer xenograft mouse model using 3 Tesla MRI. A human-colon-cancer xenograft model was produced by subcutaneously inoculating 1 × 10^6 DLD-1 human-colon-cancer cells into the right hind limbs of 10 mice. The tumors were allowed to grow for two weeks and then assessed using MRI. DCE-MRI was performed by tail vein injection of 0.3 mmol/kg of gadolinium. A region of interest (ROI) was drawn at the midpoint along the z-axis of each tumor, and a Tofts model analysis was performed. The quantitative parameters (Ktrans, Kep and Ve) from the whole transverse ROI and the hotspot ROI of the tumor were calculated. Immunohistochemical microvessel staining was performed and analyzed according to Weidner's criteria at the corresponding MRI sections, and additional hematoxylin and eosin staining was performed to evaluate tumor necrosis. The Mann-Whitney test and Spearman's rho correlation analysis were performed to test for correlations between the quantitative parameters, necrosis, and MVD. The whole transverse ROI of the tumor showed no significant relationship between the MVD values and the quantitative DCE-MRI parameters. In the hotspot ROI, the difference in MVD between the low and high groups of Ktrans and Kep was of marginal statistical significance (p = 0.06 and 0.07, respectively), and Ktrans and Kep were found to have an inverse relationship with MVD (r = -0.61, p = 0.06 for Ktrans; r = -0.60, p = 0.07 for Kep). Quantitative analysis of T1-weighted DCE-MRI using a hotspot ROI may provide a better histologic match than a whole transverse section ROI. Within the hotspots, Ktrans and Kep tend to correlate inversely with MVD in this colon cancer mouse model.
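
    The Tofts analysis referenced above convolves an arterial input function (AIF) with an exponential impulse response: Ct(t) = Ktrans ∫ Cp(τ) e^{-kep(t-τ)} dτ with kep = Ktrans/Ve. A minimal sketch using a Weinmann-type biexponential population AIF, with the kinetic parameter values assumed for illustration:

    ```python
    # Tofts model tissue curve via discrete convolution of a population AIF
    # with Ktrans * exp(-kep * t); Ktrans and Ve values are assumed.
    import numpy as np

    t = np.linspace(0.0, 5.0, 500)                                # minutes
    cp = 3.99 * np.exp(-0.144 * t) + 4.78 * np.exp(-0.0111 * t)   # AIF, mM

    def tofts(t, cp, ktrans, ve):
        kep = ktrans / ve
        dt = t[1] - t[0]
        return ktrans * np.convolve(cp, np.exp(-kep * t))[: len(t)] * dt

    ct = tofts(t, cp, ktrans=0.25, ve=0.3)   # min^-1 and dimensionless
    imax = ct.argmax()
    print(f"peak Ct = {ct[imax]:.2f} mM at t = {t[imax]:.2f} min")
    ```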

  9. A quantitative comparison of the behavior of human ventricular cardiac electrophysiology models in tissue.

    Directory of Open Access Journals (Sweden)

    Mohamed M Elshrif

    ...Furthermore, by indicating areas where existing models disagree, our findings suggest avenues for further experimental work.

  10. Satellite Contributions to the Quantitative Characterization of Biomass Burning for Climate Modeling

    Science.gov (United States)

    Ichoku, Charles; Kahn, Ralph; Chin, Mian

    2012-01-01

    Characterization of biomass burning from space has been the subject of an extensive body of literature published over the last few decades. Given the importance of this topic, we review how satellite observations contribute toward improving the representation of biomass burning quantitatively in climate and air-quality modeling and assessment. Satellite observations related to biomass burning may be classified into five broad categories: (i) active fire location and energy release, (ii) burned areas and burn severity, (iii) smoke plume physical disposition, (iv) aerosol distribution and particle properties, and (v) trace gas concentrations. Each of these categories involves multiple parameters used in characterizing specific aspects of the biomass-burning phenomenon. Some of the parameters are merely qualitative, whereas others are quantitative, although all are essential for improving the scientific understanding of the overall distribution (both spatial and temporal) and impacts of biomass burning. Some of the qualitative satellite datasets, such as fire locations, aerosol index, and gas estimates have fairly long-term records. They date back as far as the 1970s, following the launches of the DMSP, Landsat, NOAA, and Nimbus series of earth observation satellites. Although there were additional satellite launches in the 1980s and 1990s, space-based retrieval of quantitative biomass burning data products began in earnest following the launch of Terra in December 1999. Starting in 2000, fire radiative power, aerosol optical thickness and particle properties over land, smoke plume injection height and profile, and essential trace gas concentrations at improved resolutions became available. The 2000s also saw a large list of other new satellite launches, including Aqua, Aura, Envisat, Parasol, and CALIPSO, carrying a host of sophisticated instruments providing high quality measurements of parameters related to biomass burning and other phenomena. These improved data

  11. Quantitative vertebral fracture detection on DXA images using shape and appearance models.

    Science.gov (United States)

    Roberts, Martin; Cootes, Tim; Pacheco, Elisa; Adams, Judith

    2007-10-01

    Current quantitative morphometric methods of vertebral fracture detection lack specificity, particularly with mild fractures. We use more detailed shape and texture information to develop quantitative classifiers. The detailed shape and appearance of vertebrae on 360 lateral dual energy x-ray absorptiometry scans were statistically modeled, thus producing a set of shape and appearance parameters for each vertebra. The vertebrae were given a "gold standard" classification using a consensus reading by two radiologists. Linear discriminants were trained on the vertebral shape and appearance parameters. The appearance-based classifiers gave significantly better specificity than shape-based methods in all regions of the spine (overall specificity 92% at a sensitivity of 95%), while using the full shape parameters slightly improved specificity in the thoracic spine compared with using three standard height ratios. The main improvement was in the detection of mild fractures. Performance varied over different regions of the spine. False-positive rates at 95% sensitivity for the lumbar, mid-thoracic (T12-T10) and upper thoracic (T9-T7) regions were 2.9%, 14.6%, and 5.5%, respectively, compared with 6.4%, 32.6%, and 21.1% for three-height morphometry. The appearance and shape parameters of statistical models could provide more powerful quantitative classifiers of osteoporotic vertebral fracture, particularly mild fractures. False positive rates can be substantially reduced at high sensitivity by using an appearance-based classifier, because this can better distinguish between mild fractures and some kinds of non-fracture shape deformities.

  12. Quantitative Modeling of Acid Wormholing in Carbonates- What Are the Gaps to Bridge

    KAUST Repository

    Qiu, Xiangdong

    2013-01-01

    Carbonate matrix acidization extends a well's effective drainage radius by dissolving rock and forming conductive channels (wormholes) from the wellbore. Wormholing is a dynamic process that involves a balance between the acid injection rate and the reaction rate. Generally, the injection rate is well defined where injection profiles can be controlled, whereas the reaction rate can be difficult to obtain due to its complex dependency on interstitial velocity, fluid composition, rock surface properties, etc. Conventional wormhole propagation models largely ignore the impact of reaction products; when implemented in a job design, this can result in significant errors in the treatment fluid schedule, rate, and volume. A more accurate method of simulating carbonate matrix acid treatments would accommodate the effect of reaction products on reaction kinetics, and it is the purpose of this work to properly account for these effects. This is an important step toward achieving quantitative predictability of wormhole penetration during an acidizing treatment. This paper describes the laboratory procedures taken to obtain the reaction-product-impacted kinetics at downhole conditions using a rotating disk apparatus, and how this new set of kinetics data was implemented in a 3D wormholing model to predict wormhole morphology and penetration velocity. The model explains some of the differences in wormhole morphology observed in limestone core flow experiments where injection pressure impacts the mass transfer of hydrogen ions to the rock surface. The model uses a CT-scan-rendered porosity field to capture the finer details of the rock fabric and then simulates fluid flow through the rock coupled with reactions. Such a validated model can serve as a base for scaling up to the near-wellbore reservoir and 3D radial flow geometry, allowing a more quantitative acid treatment design.

  13. Quantitative prediction of integrase inhibitor resistance from genotype through consensus linear regression modeling

    Directory of Open Access Journals (Sweden)

    Van der Borght Koen

    2013-01-01

    Full Text Available Abstract Background Integrase inhibitors (INIs) form a new drug class in the treatment of HIV-1 patients. We developed a linear regression modeling approach to make a quantitative raltegravir (RAL) resistance phenotype prediction, as fold change in IC50 against a wild-type virus, from mutations in the integrase genotype. Methods We developed a clonal genotype-phenotype database with 991 clones from 153 clinical isolates of INI-naïve and RAL-treated patients, and 28 site-directed mutants. We developed the RAL linear regression model in two stages, employing a genetic algorithm (GA) to select integrase mutations by consensus. First, we ran multiple GAs to generate first-order linear regression models (GA models) that were stochastically optimized to reach a goal R2 accuracy and consisted of a fixed-length subset of integrase mutations for estimating INI resistance. Second, we derived a consensus linear regression model in a forward stepwise regression procedure, considering integrase mutations or mutation pairs by descending prevalence in the GA models. Results The most frequently occurring mutations in the GA models were 92Q, 97A, 143R and 155H (all 100%), 143G (90%), 148H/R (89%), 148K (88%), 151I (81%), 121Y (75%), 143C (72%), and 74M (69%). The RAL second-order model contained 30 single mutations and five mutation pairs (p < 0.05). The R2 performance of this model on the clonal training data was 0.97, and 0.78 on an unseen population genotype-phenotype dataset of 171 clinical isolates from RAL-treated and INI-naïve patients. Conclusions We describe a systematic approach to deriving a model for predicting INI resistance from a limited number of clonal samples. Our RAL second-order model is made available as an Additional file for calculating a resistance phenotype as the sum of integrase mutations and mutation pairs.

  14. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    Energy Technology Data Exchange (ETDEWEB)

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study was comprised of four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.

  15. Computationally efficient vascular input function models for quantitative kinetic modelling using DCE-MRI

    Energy Technology Data Exchange (ETDEWEB)

    Orton, Matthew R; D'Arcy, James A; Walker-Samuel, Simon; Collins, David J; Leach, Martin O [Cancer Research UK Clinical MR Research Group, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton, Surrey SM2 5PT (United Kingdom); Hawkes, David J; Atkinson, David [Centre for Medical Image Computing, University College London, WC1E 6BT (United Kingdom)

    2008-03-07

    A description of the vascular input function is needed to obtain tissue kinetic parameter estimates from dynamic contrast-enhanced MRI (DCE-MRI) data. This paper describes a general modelling framework for defining compact functional forms to describe vascular input functions. By appropriately specifying the components of this model it is possible to generate models that are realistic, and that ensure that the tissue concentration curves can be calculated analytically. This means that the computations necessary to estimate parameters from measured data are relatively efficient, which is important if such methods are to become of use in clinical practice. Three models defined by four parameters, using exponential, gamma-variate and cosine descriptions of the bolus, are described and their properties investigated using simulations. The results indicate that if there is no plasma fraction, the proposed models are indistinguishable. When a small plasma fraction is present, the exponential model gives parameter estimates that are biased by up to 50%, while the other two models give very little bias: up to 10%, but less than 5% in most cases. With a larger plasma fraction the exponential model is again biased, the gamma-variate model has a small bias, and the cosine model has very little bias and is indistinguishable from the model used to generate the data. The computational speed of the analytic approaches is compared with a fast-Fourier-transform-based numerical convolution approach. The analytic methods are nearly 10 times faster than the numerical methods for the isolated computation of the convolution, and around 4-5 times faster when used in an optimization routine to obtain parameter estimates. These results were obtained from five example data sets, one of which was examined in more detail to compare the estimates obtained using the different models and with literature values.
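
    The speed argument can be shown in miniature: with a single-exponential bolus the convolution has a closed form, so no numerical convolution is needed. This sketch compares the analytic curve with a direct numerical convolution; the AIF and kinetic parameters are assumed, and the paper's actual models use richer exponential, gamma-variate and cosine forms.

    ```python
    # Closed-form tissue curve for a single-exponential bolus AIF versus a
    # direct numerical convolution; all parameter values are assumed.
    import numpy as np

    A, m = 5.0, 0.8            # bolus amplitude (mM) and washout rate (1/min)
    ktrans, kep = 0.25, 1.2    # kinetic parameters (1/min)

    t = np.linspace(0.0, 6.0, 1200)
    cp = A * np.exp(-m * t)

    # analytic: Ktrans * integral of cp(tau) * exp(-kep (t - tau)) dtau
    ct_analytic = ktrans * A * (np.exp(-m * t) - np.exp(-kep * t)) / (kep - m)

    # numerical check by direct discrete convolution
    dt = t[1] - t[0]
    ct_numeric = ktrans * np.convolve(cp, np.exp(-kep * t))[: len(t)] * dt

    print(f"max |analytic - numeric| = "
          f"{np.abs(ct_analytic - ct_numeric).max():.2e} mM")
    ```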

  16. Quantitative Outline-based Shape Analysis and Classification of Planetary Craterforms using Supervised Learning Models

    Science.gov (United States)

    Slezak, Thomas Joseph; Radebaugh, Jani; Christiansen, Eric

    2017-10-01

    The shapes of craterforms on planetary surfaces provide rich information about their origins and evolution. While morphologic information provides strong visual clues to geologic processes and properties, communicating this information quantitatively is less easily accomplished. This study examines the morphology of craterforms using the quantitative outline-based shape methods of geometric morphometrics, commonly used in biology and paleontology. We examine and compare landforms on planetary surfaces using shape, a property of morphology that is invariant to translation, rotation, and size. We quantify the shapes of paterae on Io, martian calderas, terrestrial basaltic shield calderas, terrestrial ash-flow calderas, and lunar impact craters using elliptic Fourier analysis (EFA) and the Zahn and Roskies (Z-R) shape function, or tangent angle approach, to produce multivariate shape descriptors. These shape descriptors are subjected to multivariate statistical analysis, including canonical variate analysis (CVA), a multiple-comparison variant of discriminant analysis, to investigate the link between craterform shape and classification. Paterae on Io are most similar in shape to terrestrial ash-flow calderas, and the shapes of terrestrial basaltic shield volcanoes are most similar to martian calderas. The shapes of lunar impact craters, including simple, transitional, and complex morphologies, are classified with a 100% rate of success in all models. Multiple CVA models effectively predict and classify different craterforms using shape-based identification and demonstrate significant potential for use in the analysis of planetary surfaces.
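
    The first step of such a pipeline, computing elliptic Fourier coefficients for a closed outline (Kuhl-Giardina formulation), can be sketched as follows; the outline here is a synthetic scalloped rim rather than a real crater trace, and the harmonic count is arbitrary. The resulting coefficients are the multivariate shape descriptors that would be fed into CVA.

    ```python
    # Elliptic Fourier coefficients (Kuhl & Giardina 1982) for a closed outline.
    import numpy as np

    theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
    r = 1.0 + 0.15 * np.cos(3.0 * theta)       # scalloped rim, for illustration
    x, y = r * np.cos(theta), r * np.sin(theta)

    def efa_coefficients(x, y, n_harmonics=8):
        """Return (a_n, b_n, c_n, d_n) per harmonic for the closed outline."""
        dx = np.diff(x, append=x[0])
        dy = np.diff(y, append=y[0])
        dt = np.hypot(dx, dy)                  # arc-length steps
        t = np.concatenate([[0.0], np.cumsum(dt)])
        T = t[-1]
        coeffs = []
        for n in range(1, n_harmonics + 1):
            c = np.cos(2.0 * np.pi * n * t / T)
            s = np.sin(2.0 * np.pi * n * t / T)
            k = T / (2.0 * np.pi ** 2 * n ** 2)
            coeffs.append((k * np.sum(dx / dt * np.diff(c)),
                           k * np.sum(dx / dt * np.diff(s)),
                           k * np.sum(dy / dt * np.diff(c)),
                           k * np.sum(dy / dt * np.diff(s))))
        return np.array(coeffs)                # rows are the shape descriptors

    print(efa_coefficients(x, y)[:3].round(4))
    ```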

  17. A Penalized Mixture Model Approach in Genotype/Phenotype Association Analysis for Quantitative Phenotypes

    Science.gov (United States)

    Li, Lang; Borges, Silvana; Robarge, Jason D.; Shen, Changyu; Desta, Zeruesenay; Flockhart, David

    2010-01-01

    A mixture normal model has been developed to partition genotypes in predicting quantitative phenotypes. Its estimation and inference are performed through an EM algorithm. This approach can conduct simultaneous genotype clustering and hypothesis testing. It is a valuable method for predicting the distribution of quantitative phenotypes among multi-locus genotypes across genes or within a gene. This mixture model’s performance is evaluated in data analyses for two pharmacogenetics studies. In one example, thirty five CYP2D6 genotypes were partitioned into three groups to predict pharmacokinetics of a breast cancer drug, Tamoxifen, a CYP2D6 substrate (p-value = 0.04). In a second example, seventeen CYP2B6 genotypes were categorized into three clusters to predict CYP2B6 protein expression (p-value = 0.002). The biological validities of both partitions are examined using established function of CYP2D6 and CYP2B6 alleles. In both examples, we observed genotypes clustered in the same group to have high functional similarities. The power and recovery rate of the true partition for the mixture model approach are investigated in statistical simulation studies, where it outperforms another published method. PMID:20467479

  18. A Penalized Mixture Model Approach in Genotype/Phenotype Association Analysis for Quantitative Phenotypes

    Directory of Open Access Journals (Sweden)

    Lang Li

    2010-04-01

    Full Text Available A mixture normal model has been developed to partition genotypes in predicting quantitative phenotypes. Its estimation and inference are performed through an EM algorithm. This approach can conduct simultaneous genotype clustering and hypothesis testing. It is a valuable method for predicting the distribution of quantitative phenotypes among multi-locus genotypes across genes or within a gene. This mixture model’s performance is evaluated in data analyses for two pharmacogenetics studies. In one example, thirty five CYP2D6 genotypes were partitioned into three groups to predict pharmacokinetics of a breast cancer drug, Tamoxifen, a CYP2D6 substrate (p-value = 0.04). In a second example, seventeen CYP2B6 genotypes were categorized into three clusters to predict CYP2B6 protein expression (p-value = 0.002). The biological validities of both partitions are examined using established function of CYP2D6 and CYP2B6 alleles. In both examples, we observed genotypes clustered in the same group to have high functional similarities. The power and recovery rate of the true partition for the mixture model approach are investigated in statistical simulation studies, where it outperforms another published method.

  19. Quantitative evaluation of mucosal vascular contrast in narrow band imaging using Monte Carlo modeling

    Science.gov (United States)

    Le, Du; Wang, Quanzeng; Ramella-Roman, Jessica; Pfefer, Joshua

    2012-06-01

    Narrow-band imaging (NBI) is a spectrally-selective reflectance imaging technique for enhanced visualization of superficial vasculature. Prior clinical studies have indicated NBI's potential for detection of vasculature abnormalities associated with gastrointestinal mucosal neoplasia. While the basic mechanisms behind the increased vessel contrast - hemoglobin absorption and tissue scattering - are known, a quantitative understanding of the effect of tissue and device parameters has not been achieved. In this investigation, we developed and implemented a numerical model of light propagation that simulates NBI reflectance distributions. This was accomplished by incorporating mucosal tissue layers and vessel-like structures in a voxel-based Monte Carlo algorithm. Epithelial and mucosal layers as well as blood vessels were defined using wavelength-specific optical properties. The model was implemented to calculate reflectance distributions and vessel contrast values as a function of vessel depth (0.05 to 0.50 mm) and diameter (0.01 to 0.10 mm). These relationships were determined for NBI wavelengths of 410 nm and 540 nm, as well as broadband illumination common to standard endoscopic imaging. The effects of illumination bandwidth on vessel contrast were also simulated. Our results provide a quantitative analysis of the effect of absorption and scattering on vessel contrast. Additional insights and potential approaches for improving NBI system contrast are discussed.

  20. Understanding responder neurobiology in schizophrenia using a quantitative systems pharmacology model: application to iloperidone.

    Science.gov (United States)

    Geerts, Hugo; Roberts, Patrick; Spiros, Athan; Potkin, Steven

    2015-04-01

    The concept of targeted therapies remains a holy grail for the pharmaceutical industry, whether for identifying responder populations or new drug targets. Here we present quantitative systems pharmacology as an alternative to the more traditional approach of retrospective responder pharmacogenomics analysis and apply it to the case of iloperidone in schizophrenia. This approach implements the actual neurophysiological effect of genotypes in a computer-based, biophysically realistic model of human neuronal circuits, is parameterized with human imaging and pathology data, and is calibrated by clinical data. We kept the drug pharmacology constant but allowed the biological coupling values of the model to fluctuate in a restricted range around their calibrated values, thereby simulating random genetic mutations and representing variability in patient response. Using hypothesis-free Design of Experiments methods, the coupling between the dopamine D4 receptor and the AMPA (alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) receptor in cortical neurons was found to drive the beneficial effect of iloperidone, likely corresponding to the rs2513265 polymorphism upstream of the GRIA4 gene identified in a traditional pharmacogenomics analysis. The serotonin 5-HT3 receptor-mediated effect on interneuron gamma-aminobutyric acid conductance was identified as the process that moderately drove the differentiation of iloperidone versus ziprasidone. This paper suggests that reverse-engineered quantitative systems pharmacology is a powerful alternative tool for characterizing the underlying neurobiology of a responder population and possibly identifying new targets. © The Author(s) 2015.

  1. Report on Integration of Existing Grid Models for N-R HES Interaction Focused on Balancing Authorities for Sub-hour Penalties and Opportunities

    Energy Technology Data Exchange (ETDEWEB)

    McJunkin, Timothy [Idaho National Lab. (INL), Idaho Falls, ID (United States); Epiney, Aaron [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-06-01

    This report summarizes the effort in the Nuclear-Renewable Hybrid Energy System (N-R HES) project on the level 4 milestone, which considers the integration of existing grid models as factors in optimization on shorter time intervals than those grid models currently resolve, alongside the Risk Analysis Virtual Environment (RAVEN) and Modelica [1] optimizations and economic analysis that have been the focus of the project to date.

  2. Quantitative assessment of changes in landslide risk using a regional scale run-out model

    Science.gov (United States)

    Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone

    2015-04-01

    The risk posed by landslide hazards changes continuously in time and space and is rarely a static or constant phenomenon in an affected area. However, one of the main challenges of quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly quantitative landslide risk analysis requires modeling of the landslide intensity (e.g. flow depth, velocities or impact pressures) affecting the elements at risk; such a quantitative approach is often lacking in medium- to regional-scale studies in the scientific literature, or is left out altogether. In this research we modelled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the north-eastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional scale model. To determine debris flow intensities, we used a linear relationship found between back-calibrated, physically based Flo-2D simulations (local scale models of five debris flows from 2003) and the probability values of the Flow-R software. This made it possible to assign flow depths to a total of 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature, and one curve derived specifically for our case study area, were used to determine the damage for the different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/msq) with building area and number of floors

  3. Variational formulation of a quantitative phase-field model for nonisothermal solidification in a multicomponent alloy

    Science.gov (United States)

    Ohno, Munekazu; Takaki, Tomohiro; Shibuta, Yasushi

    2017-09-01

    A variational formulation of a quantitative phase-field model is presented for nonisothermal solidification in a multicomponent alloy with two-sided asymmetric diffusion. The essential ingredient of this formulation is that the diffusion fluxes for conserved variables in both the liquid and solid are separately derived from functional derivatives of the total entropy and then these fluxes are related to each other on the basis of the local equilibrium conditions. In the present formulation, the cross-coupling terms between the phase-field and conserved variables naturally arise in the phase-field equation and diffusion equations, one of which corresponds to the antitrapping current, the phenomenological correction term in early nonvariational models. In addition, this formulation results in diffusivities of tensor form inside the interface. Asymptotic analysis demonstrates that this model can exactly reproduce the free-boundary problem in the thin-interface limit. The present model is widely applicable because approximations and simplifications are not formally introduced into the bulk's free energy densities and because off-diagonal elements of the diffusivity matrix are explicitly taken into account. Furthermore, we propose a nonvariational form of the present model to achieve high numerical performance. A numerical test of the nonvariational model is carried out for nonisothermal solidification in a binary alloy. It shows fast convergence of the results with decreasing interface thickness.

  4. Quantitative Circulatory Physiology: an integrative mathematical model of human physiology for medical education.

    Science.gov (United States)

    Abram, Sean R; Hodnett, Benjamin L; Summers, Richard L; Coleman, Thomas G; Hester, Robert L

    2007-06-01

    We have developed Quantitative Circulatory Physiology (QCP), a mathematical model of integrative human physiology containing over 4,000 variables of biological interactions. This model provides a teaching environment that mimics clinical problems encountered in the practice of medicine. The model structure is based on documented physiological responses within peer-reviewed literature and serves as a dynamic compendium of physiological knowledge. The model is solved using a desktop, Windows-based program, allowing students to calculate time-dependent solutions and interactively alter over 750 parameters that modify physiological function. The model can be used to understand proposed mechanisms of physiological function and the interactions among physiological variables that may not be otherwise intuitively evident. In addition to open-ended or unstructured simulations, we have developed 30 physiological simulations, including heart failure, anemia, diabetes, and hemorrhage. Additional simulations include 29 patients in which students are challenged to diagnose the pathophysiology based on their understanding of integrative physiology. In summary, QCP allows students to examine, integrate, and understand a host of physiological factors without causing harm to patients. This model is available as a free download for Windows computers at http://physiology.umc.edu/themodelingworkshop.

  5. How fast is fisheries-induced evolution? Quantitative analysis of modelling and empirical studies

    Science.gov (United States)

    Audzijonyte, Asta; Kuparinen, Anna; Fulton, Elizabeth A

    2013-01-01

    A number of theoretical models, experimental studies and time-series studies of wild fish have explored the presence and magnitude of fisheries-induced evolution (FIE). While most studies agree that FIE is likely to be happening in many fished stocks, there are disagreements about its rates and implications for stock viability. To address these disagreements in a quantitative manner, we conducted a meta-analysis of FIE rates reported in theoretical and empirical studies. We discovered that rates of phenotypic change observed in wild fish are about four times higher than the evolutionary rates reported in modelling studies, but the correlation between the rate of change and instantaneous fishing mortality (F) was very similar in the two types of studies. Mixed-model analyses showed that in the modelling studies traits associated with reproductive investment and growth evolved more slowly than traits related to maturation. In empirical observations age-at-maturation was changing faster than other life-history traits. We also found that, despite different assumptions and modelling approaches, rates of evolution for a given F value reported in 10 of 13 modelling studies were not significantly different. PMID:23789026

  6. Enzyme-incorporated erythrocyte ghosts: a new model system for quantitative enzyme cytochemistry.

    Science.gov (United States)

    Raap, A K; Van Duijn, P

    1981-12-01

    The preparation and properties of a new microscopic model system for quantitative enzyme cytochemistry are described. The enzyme to be studied is entrapped in human erythrocyte ghosts by a simple hypotonic procedure. After fixation in suspension the ghosts can be analyzed both biochemically and cytochemically. The system has been tested with alkaline phosphatase. It is demonstrated that an azo method that uses naphthol AS-MX phosphate as substrate and 4-aminodiphenylamine diazonium salt as coupling agent can detect very low levels of enzymic activity. The biochemical activity determinations of alkaline phosphatase loaded erythrocyte ghosts were found to correlate linearly with cytophotometric activity determinations. The possible use of the erythrocyte ghost model system for other cytochemical applications is briefly discussed.

  7. Whole-brain ex-vivo quantitative MRI of the cuprizone mouse model

    Directory of Open Access Journals (Sweden)

    Tobias C. Wood

    2016-11-01

    Full Text Available Myelin is a critical component of the nervous system and a major contributor to contrast in Magnetic Resonance (MR) images. However, the precise contribution of myelination to multiple MR modalities is still under debate. The cuprizone mouse is a well-established model of demyelination that has been used in several MR studies, but these have often imaged only a single slice and analysed a small region of interest in the corpus callosum. We imaged and analyzed the whole brain of the cuprizone mouse ex-vivo using high-resolution quantitative MR methods (multi-component relaxometry, Diffusion Tensor Imaging (DTI) and morphometry) and found changes in multiple regions, including the corpus callosum, cerebellum, thalamus and hippocampus. The presence of inflammation, confirmed with histology, presents difficulties in isolating the sensitivity and specificity of these MR methods to demyelination using this model.

  8. Mixed micellar liquid chromatography methods: modelling quantitative retention-activity relationships of angiotensin converting enzyme inhibitors.

    Science.gov (United States)

    Wu, Li-Ping; Cui, Yan; Xiong, Mei-Jin; Wang, Shu-Rong; Chen, Cong; Ye, Li-Ming

    2008-11-01

    The capability of biopartitioning micellar chromatography (BMC), using pure Brij35 solution and a mixed micellar system of Brij35-SDS (85:15) as mobile phases, to describe and estimate the bioactivities of angiotensin converting enzyme inhibitors at different pH has been studied. Quantitative retention-activity relationships (QRAR) in BMC were investigated for these compounds. The obtained BMC(Brij35-SDS)-QRAR models were compared with the traditional BMC(Brij35)-QRAR, and statistically better models were obtained using Brij35-SDS retention data. The superiority of BMC(Brij35-SDS)-QRAR is due to the fact that the mixed micellar mobile phase can simulate the resting membrane potential and the conformation of the long hydrophilic polyoxyethylene chains remains unchanged.

  9. Biopartitioning micellar chromatography separation methods: modelling quantitative retention-activity relationships of cephalosporins.

    Science.gov (United States)

    Wu, Li-Ping; Ye, Li-Ming; Chen, Cong; Wu, Jia-Qi; Chen, Yu

    2008-06-01

    In this article, recent applications of chromatographic systems, particularly biopartitioning micellar chromatography (BMC) systems based on amphiphilic structures, have been reported. The aim is to take a look at the capability of quantitative retention-activity relationship (QRAR) models with BMC to describe and/or estimate the bioactivity of cephalosporins. Better qualification of BMC systems was obtained according to the octanol-water partition coefficient (log P); the bioactivity parameters (Lag-T, T(1/2beta), F%, T(1/a), P%, AUC and C(max)) were correlated with the retention factors of cephalosporins processed by Alltech-chromstation software, and the classical data were compared with the predictive values based on QRAR models. The results indicate that using only one descriptor (the retention factor, k) to explain the pharmacokinetic and pharmacodynamic properties of cephalosporins is adequate, and this in vitro approach is an advanced tool for pharmacodynamics research. Copyright (c) 2008 John Wiley & Sons, Ltd.

  10. Quantitative Structure Activity Relationship Models for the Antioxidant Activity of Polysaccharides.

    Directory of Open Access Journals (Sweden)

    Zhiming Li

    Full Text Available In this study, quantitative structure activity relationship (QSAR) models for the antioxidant activity of polysaccharides were developed with 50% effective concentration (EC50) as the dependent variable. To establish optimum QSAR models, multiple linear regressions (MLR), support vector machines (SVM) and artificial neural networks (ANN) were used, and 11 molecular descriptors were selected. The optimum QSAR model for predicting EC50 of DPPH-scavenging activity consisted of four major descriptors. MLR model gave EC50 = 0.033Ara-0.041GalA-0.03GlcA-0.025PC+0.484, and MLR fitted the training set with R = 0.807. ANN model gave an improvement for the training set (R = 0.96, RMSE = 0.018) and the test set (R = 0.933, RMSE = 0.055), which indicated that it was more accurate than the SVM and MLR models for predicting the DPPH-scavenging activity of polysaccharides. 67 compounds were used for predicting EC50 of the hydroxyl radicals scavenging activity of polysaccharides. MLR model gave EC50 = 0.12PC+0.083Fuc+0.013Rha-0.02UA+0.372. A comparison of results from the models indicated that the ANN model (R = 0.944, RMSE = 0.119) was also the best one for predicting the hydroxyl radicals scavenging activity of polysaccharides. MLR and ANN models showed that Ara and GalA appeared critical in determining EC50 of DPPH-scavenging activity, and Fuc, Rha, uronic acid and protein content had a great effect on the hydroxyl radicals scavenging activity of polysaccharides. The antioxidant activity of polysaccharides was usually high in the MW range of 4000-100000, and the antioxidant activity could be affected simultaneously by other polysaccharide properties, such as uronic acid and Ara.

  11. Quantitative Structure Activity Relationship Models for the Antioxidant Activity of Polysaccharides.

    Science.gov (United States)

    Li, Zhiming; Nie, Kaiying; Wang, Zhaojing; Luo, Dianhui

    In this study, quantitative structure activity relationship (QSAR) models for the antioxidant activity of polysaccharides were developed with 50% effective concentration (EC50) as the dependent variable. To establish optimum QSAR models, multiple linear regressions (MLR), support vector machines (SVM) and artificial neural networks (ANN) were used, and 11 molecular descriptors were selected. The optimum QSAR model for predicting EC50 of DPPH-scavenging activity consisted of four major descriptors. MLR model gave EC50 = 0.033Ara-0.041GalA-0.03GlcA-0.025PC+0.484, and MLR fitted the training set with R = 0.807. ANN model gave an improvement for the training set (R = 0.96, RMSE = 0.018) and the test set (R = 0.933, RMSE = 0.055), which indicated that it was more accurate than the SVM and MLR models for predicting the DPPH-scavenging activity of polysaccharides. 67 compounds were used for predicting EC50 of the hydroxyl radicals scavenging activity of polysaccharides. MLR model gave EC50 = 0.12PC+0.083Fuc+0.013Rha-0.02UA+0.372. A comparison of results from the models indicated that the ANN model (R = 0.944, RMSE = 0.119) was also the best one for predicting the hydroxyl radicals scavenging activity of polysaccharides. MLR and ANN models showed that Ara and GalA appeared critical in determining EC50 of DPPH-scavenging activity, and Fuc, Rha, uronic acid and protein content had a great effect on the hydroxyl radicals scavenging activity of polysaccharides. The antioxidant activity of polysaccharides was usually high in the MW range of 4000-100000, and the antioxidant activity could be affected simultaneously by other polysaccharide properties, such as uronic acid and Ara.
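
    The two reported MLR equations are simple enough to evaluate directly. The sketch below encodes them as plain Python functions; the descriptor values passed in are invented, and units follow whatever convention the study used for monosaccharide and protein contents.

```python
def ec50_dpph(ara, gala, glca, pc):
    """MLR model for DPPH-scavenging EC50 as reported in the abstract."""
    return 0.033 * ara - 0.041 * gala - 0.030 * glca - 0.025 * pc + 0.484

def ec50_hydroxyl(pc, fuc, rha, ua):
    """MLR model for hydroxyl-radical-scavenging EC50 as reported in the abstract."""
    return 0.12 * pc + 0.083 * fuc + 0.013 * rha - 0.02 * ua + 0.372

# Hypothetical composition of one polysaccharide sample
print(ec50_dpph(ara=2.0, gala=5.0, glca=1.0, pc=3.0))
print(ec50_hydroxyl(pc=3.0, fuc=0.5, rha=1.2, ua=6.0))
```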

  12. A second-generation device for automated training and quantitative behavior analyses of molecularly-tractable model organisms.

    Directory of Open Access Journals (Sweden)

    Douglas Blackiston

    2010-12-01

    Full Text Available A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time), which is necessary for operant conditioning assays. The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and aid other laboratories that do not have the facilities to undergo complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science.

  13. Evaluation of New Zealand's high-seas bottom trawl closures using predictive habitat models and quantitative risk assessment.

    Directory of Open Access Journals (Sweden)

    Andrew J Penney

    Full Text Available United Nations General Assembly Resolution 61/105 on sustainable fisheries (UNGA 2007) establishes three difficult questions for participants in high-seas bottom fisheries to answer: 1) Where are vulnerable marine ecosystems (VMEs) likely to occur?; 2) What is the likelihood of fisheries interaction with these VMEs?; and 3) What might qualify as adequate conservation and management measures to prevent significant adverse impacts? This paper develops an approach to answering these questions for bottom trawling activities in the Convention Area of the South Pacific Regional Fisheries Management Organisation (SPRFMO) within a quantitative risk assessment and cost : benefit analysis framework. The predicted distribution of deep-sea corals from habitat suitability models is used to answer the first question. Distribution of historical bottom trawl effort is used to answer the second, with estimates of seabed areas swept by bottom trawlers being used to develop discounting factors for reduced biodiversity in previously fished areas. These are used in a quantitative ecological risk assessment approach to guide spatial protection planning to address the third question. The coral VME likelihood (average, discounted, predicted coral habitat suitability) of existing spatial closures implemented by New Zealand within the SPRFMO area is evaluated. Historical catch is used as a measure of cost to industry in a cost : benefit analysis of alternative spatial closure scenarios. Results indicate that current closures within the New Zealand SPRFMO area bottom trawl footprint are suboptimal for protection of VMEs. Examples of alternative trawl closure scenarios are provided to illustrate how the approach could be used to optimise protection of VMEs under chosen management objectives, balancing protection of VMEs against economic loss to commercial fishers from closure of historically fished areas.

  14. A generalized quantitative antibody homeostasis model: maintenance of global antibody equilibrium by effector functions.

    Science.gov (United States)

    Prechl, József

    2017-11-01

    The homeostasis of antibodies can be characterized as a balanced production, target-binding and receptor-mediated elimination regulated by an interaction network, which controls B-cell development and selection. Recently, we proposed a quantitative model to describe how the concentration and affinity of interacting partners generates a network. Here we argue that this physical, quantitative approach can be extended for the interpretation of effector functions of antibodies. We define global antibody equilibrium as the zone of molar equivalence of free antibody, free antigen and immune complex concentrations and of dissociation constant of apparent affinity: [Ab] = [Ag] = [AbAg] = K_D. This zone corresponds to the biologically relevant K_D range of reversible interactions. We show that thermodynamic and kinetic properties of antibody-antigen interactions correlate with immunological functions. The formation of stable, long-lived immune complexes corresponds to a decrease of entropy and is a prerequisite for the generation of higher-order complexes. As the energy of formation of complexes increases, we observe a gradual shift from silent clearance to inflammatory reactions. These rules can also be applied to complement activation-related immune effector processes, linking the physicochemical principles of innate and adaptive humoral responses. The receptors mediating effector functions show a wide range of affinities, allowing the continuous sampling of antibody-bound antigen over the complete range of concentrations. The generation of multivalent, multicomponent complexes triggers effector functions by crosslinking these receptors on effector cells with increasing enzymatic degradation potential. Thus, antibody homeostasis is a thermodynamic system with complex network properties, nested into the host organism by proper immunoregulatory and effector pathways. Maintenance of global antibody equilibrium is achieved by innate qualitative signals modulating a
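
    The equivalence zone [Ab] = [Ag] = [AbAg] = K_D follows from ordinary mass-action equilibrium. A minimal sketch, assuming a single reversible binding step and illustrative totals of 2·K_D each, shows the quadratic solution landing exactly on that zone:

```python
import numpy as np

def complex_conc(ab_tot, ag_tot, kd):
    """Equilibrium [AbAg] for Ab + Ag <-> AbAg with Kd = [Ab][Ag]/[AbAg],
    using conservation of totals (standard quadratic solution)."""
    b = ab_tot + ag_tot + kd
    return (b - np.sqrt(b**2 - 4.0 * ab_tot * ag_tot)) / 2.0

kd = 1e-8                     # 10 nM, an illustrative reversible-interaction Kd
ab_tot = ag_tot = 2.0 * kd    # totals chosen to hit the equivalence zone
c = complex_conc(ab_tot, ag_tot, kd)
print(c, ab_tot - c)          # both equal Kd: [AbAg] = [Ab] = [Ag] = Kd
```

    With both totals set to 2·K_D, the quadratic root is exactly K_D, reproducing the zone the abstract defines.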

  15. Antiproliferative Pt(IV) complexes: synthesis, biological activity, and quantitative structure-activity relationship modeling.

    Science.gov (United States)

    Gramatica, Paola; Papa, Ester; Luini, Mara; Monti, Elena; Gariboldi, Marzia B; Ravera, Mauro; Gabano, Elisabetta; Gaviglio, Luca; Osella, Domenico

    2010-09-01

    Several Pt(IV) complexes of the general formula [Pt(L)2(L')2(L'')2] [axial ligands L are Cl-, RCOO-, or OH-; equatorial ligands L' are two am(m)ine or one diamine; and equatorial ligands L'' are Cl- or glycolato] were rationally designed and synthesized in the attempt to develop a predictive quantitative structure-activity relationship (QSAR) model. Numerous theoretical molecular descriptors were used alongside physicochemical data (i.e., reduction peak potential, Ep, and partition coefficient, log Po/w) to obtain a validated QSAR between in vitro cytotoxicity (half maximal inhibitory concentrations, IC50, on A2780 ovarian and HCT116 colon carcinoma cell lines) and some features of Pt(IV) complexes. In the resulting best models, a lipophilic descriptor (log Po/w or the number of secondary sp3 carbon atoms) plus an electronic descriptor (Ep, the number of oxygen atoms, or the topological polar surface area expressed as the N,O polar contribution) is necessary for modeling, supporting the general finding that the biological behavior of Pt(IV) complexes can be rationalized on the basis of their cellular uptake, the Pt(IV)-->Pt(II) reduction, and the structure of the corresponding Pt(II) metabolites. Novel compounds were synthesized on the basis of their predicted cytotoxicity in the preliminary QSAR model, and were experimentally tested. A final QSAR model, based solely on theoretical molecular descriptors to ensure its general applicability, is proposed.

  16. Tree Root System Characterization and Volume Estimation by Terrestrial Laser Scanning and Quantitative Structure Modeling

    Directory of Open Access Journals (Sweden)

    Aaron Smith

    2014-12-01

    Full Text Available The accurate characterization of three-dimensional (3D) root architecture, volume, and biomass is important for a wide variety of applications in forest ecology and to better understand tree and soil stability. Technological advancements have led to increasingly more digitized and automated procedures, which have been used to more accurately and quickly describe the 3D structure of root systems. Terrestrial laser scanners (TLS) have successfully been used to describe aboveground structures of individual trees and stand structure, but have only recently been applied to the 3D characterization of whole root systems. In this study, 13 recently harvested Norway spruce root systems were mechanically pulled from the soil, cleaned, and their volumes were measured by displacement. The root systems were suspended, scanned with TLS from three different angles, and the root surfaces from the co-registered point clouds were modeled with the 3D Quantitative Structure Model to determine root architecture and volume. The modeling procedure facilitated the rapid derivation of root volume, diameters, break point diameters, linear root length, cumulative percentages, and root fraction counts. The modeled root systems underestimated root system volume by 4.4%. The modeling procedure is widely applicable and easily adapted to derive other important topological and volumetric root variables.
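
    The volumetric part of such a cylinder-based quantitative structure model reduces to summing cylinder volumes. A minimal sketch with invented cylinder dimensions:

```python
import math

def qsm_volume(cylinders):
    """Total root volume from a cylinder-based quantitative structure model.
    Each cylinder is (radius_m, length_m); V = sum(pi * r^2 * L)."""
    return sum(math.pi * r**2 * L for r, L in cylinders)

# Hypothetical fitted cylinders for one coarse root
cylinders = [(0.05, 0.40), (0.04, 0.35), (0.025, 0.50)]
print(f"root volume = {qsm_volume(cylinders) * 1000:.2f} litres")
```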

  17. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    Science.gov (United States)

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cells response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms.
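
    As a minimal sketch of the kind of logic-based building block such optimizations tune, the code below uses a normalized Hill transfer function with min/max fuzzy gates; the parameters and the toy two-step topology are invented, and this is not the authors' NLP code.

```python
import numpy as np

def normalized_hill(x, n=3.0, k=0.5):
    """Normalized Hill transfer function commonly used in constrained fuzzy
    logic pathway models, rescaled so that f(1) = 1:
        f(x) = x^n / (k^n + x^n) * (k^n + 1)
    """
    return x**n / (k**n + x**n) * (k**n + 1.0)

def and_gate(*inputs):
    return np.minimum.reduce(inputs)   # fuzzy AND = min

def or_gate(*inputs):
    return np.maximum.reduce(inputs)   # fuzzy OR = max

# Toy pathway: ligand -> kinase A -> (A AND cofactor) -> transcription factor B
ligand = 0.8
a = normalized_hill(ligand, n=3, k=0.5)
b = normalized_hill(and_gate(a, 0.9), n=2, k=0.4)   # cofactor level assumed
print(a, b)
```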

  18. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    Directory of Open Access Journals (Sweden)

    Alexander Mitsos

    Full Text Available Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cells response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms.

  19. Quantitative retention-activity relationship models of angiotensin converting enzyme inhibitors using biopartitioning micellar chromatography.

    Science.gov (United States)

    Wang, Shu-Rong; Chen, Cong; Xiong, Mei-Jin; Wu, Li-Ping; Ye, Li-Ming

    2010-02-01

    Biopartitioning micellar chromatography (BMC) is a mode of micellar liquid chromatography that uses micellar mobile phases of Brij35 under adequate experimental conditions and can simulate the biopartitioning process of many kinds of drugs and describe their biological behavior. The capability of BMC to describe and estimate pharmacokinetic and pharmacodynamic parameters of angiotensin-converting enzyme inhibitors (ACEIs) has been studied in this paper. The correlation between retention factors of ACEIs obtained using BMC and bioactivity parameters (half-life, volume of distribution, clearance, and IC(50)) was investigated utilizing a second-order polynomial model. The P-values obtained for the half-life, volume of distribution, clearance, and IC(50) models were less than 0.05, and the r(2) of those four models were 0.89, 0.98, 0.94, and 0.97, with r(2)(adj) (adjusted for degrees of freedom) being 0.85, 0.98, 0.91, and 0.95, respectively. The predictive and interpretative ability of the chromatographic models was evaluated in terms of cross-validated data [root mean squared error of calibration (RMSEC), root mean squared error of cross-validation (leave-one-out) (RMSECV), and root mean squared error of cross-validation (leave-one-out) for interpolated data (RMSECVi)]. The quantitative retention-activity relationship (QRAR) models of ACEIs developed in this paper may be a useful approach to screening new chemicals in the early stage of development.
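
    A second-order polynomial QRAR model of the kind described can be fit in a few lines; the retention factors and half-life values below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical BMC retention factors (k) and a bioactivity parameter for ACEIs
k = np.array([1.2, 1.8, 2.5, 3.1, 4.0, 5.2])
half_life = np.array([1.9, 2.8, 4.1, 5.0, 6.8, 9.6])   # hours, invented

coeffs = np.polyfit(k, half_life, deg=2)   # second-order polynomial QRAR model
model = np.poly1d(coeffs)

pred = model(k)
ss_res = np.sum((half_life - pred)**2)
ss_tot = np.sum((half_life - half_life.mean())**2)
print("r^2 =", 1 - ss_res / ss_tot)
print("predicted half-life at k = 3.5:", model(3.5))
```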

  20. Quantitative Limits on Small Molecule Transport via the Electropermeome - Measuring and Modeling Single Nanosecond Perturbations.

    Science.gov (United States)

    Sözer, Esin B; Levine, Zachary A; Vernier, P Thomas

    2017-03-03

    The detailed molecular mechanisms underlying the permeabilization of cell membranes by pulsed electric fields (electroporation) remain obscure despite decades of investigative effort. To advance beyond descriptive schematics to the development of robust, predictive models, empirical parameters in existing models must be replaced with physics- and biology-based terms anchored in experimental observations. We report here absolute values for the uptake of YO-PRO-1, a small-molecule fluorescent indicator of membrane integrity, into cells after a single electric pulse lasting only 6 ns. We correlate these measured values, based on fluorescence microphotometry of hundreds of individual cells, with a diffusion-based geometric analysis of pore-mediated transport and with molecular simulations of transport across electropores in a phospholipid bilayer. The results challenge the "drift and diffusion through a pore" model that dominates conventional explanatory schemes for the electroporative transfer of small molecules into cells and point to the necessity for a more complex model.
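
    The "drift and diffusion through a pore" picture that these measurements challenge can be made concrete with a back-of-the-envelope diffusive flux through one cylindrical pore. All numbers below (pore size, diffusivity, dye concentration, pore lifetime) are assumptions for illustration only:

```python
import math

AVOGADRO = 6.022e23

def molecules_per_pore(d_coef, radius, length, c_out, duration):
    """Diffusion-limited transport through one cylindrical membrane pore,
    J = D * pi * r^2 * dC / L (access resistance neglected)."""
    flux = d_coef * math.pi * radius**2 * c_out / length   # mol/s
    return flux * duration * AVOGADRO

# Illustrative numbers: YO-PRO-1-like solute, ~1 nm pore, ~5 nm bilayer
n = molecules_per_pore(d_coef=4e-10,   # m^2/s, small-molecule diffusivity
                       radius=1e-9,    # m
                       length=5e-9,    # m
                       c_out=2e-3,     # mol/m^3 (2 uM external dye)
                       duration=1.0)   # s of assumed pore lifetime
print(f"~{n:.0f} molecules per pore per second")
```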

  1. Linear approaches to intramolecular Förster resonance energy transfer probe measurements for quantitative modeling.

    Directory of Open Access Journals (Sweden)

    Marc R Birtwistle

    Full Text Available Numerous unimolecular, genetically-encoded Förster Resonance Energy Transfer (FRET) probes for monitoring biochemical activities in live cells have been developed over the past decade. As these probes allow for collection of high frequency, spatially resolved data on signaling events in live cells and tissues, they are an attractive technology for obtaining data to develop quantitative, mathematical models of spatiotemporal signaling dynamics. However, to be useful for such purposes the observed FRET from such probes should be related to a biological quantity of interest through a defined mathematical relationship, which is straightforward when this relationship is linear, and can be difficult otherwise. First, we show that only in rare circumstances is the observed FRET linearly proportional to a biochemical activity. Therefore in most cases FRET measurements should only be compared either to explicitly modeled probes or to concentrations of products of the biochemical activity, but not to activities themselves. Importantly, we find that FRET measured by standard intensity-based, ratiometric methods is inherently non-linear with respect to the fraction of probes undergoing FRET. Alternatively, we find that quantifying FRET either via (1) fluorescence lifetime imaging (FLIM) or (2) ratiometric methods where the donor emission intensity is divided by the directly-excited acceptor emission intensity (denoted R_alt) is linear with respect to the fraction of probes undergoing FRET. This linearity property allows one to calculate the fraction of active probes based on the FRET measurement. Thus, our results suggest that either FLIM or ratiometric methods based on R_alt are the preferred techniques for obtaining quantitative data from FRET probe experiments for mathematical modeling purposes.
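
    For the FLIM route, linearity means the measured (amplitude-weighted) lifetime is a simple mixture of the FRET and non-FRET lifetimes, so the active fraction can be inverted directly. A minimal sketch with assumed, CFP/YFP-class lifetimes:

```python
def fret_fraction_from_flim(tau_measured, tau_donor, tau_fret):
    """Fraction of probes undergoing FRET from an amplitude-weighted FLIM
    lifetime, assuming a two-state mixture:
        tau_measured = f * tau_fret + (1 - f) * tau_donor
    which is linear in f (the property highlighted in the record above)."""
    return (tau_donor - tau_measured) / (tau_donor - tau_fret)

# Illustrative lifetimes in ns (assumed values, not from the paper)
print(fret_fraction_from_flim(tau_measured=2.1, tau_donor=2.6, tau_fret=1.2))
```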

  2. Quantitative fit assessment of tibial nail designs using 3D computer modelling.

    Science.gov (United States)

    Schmutz, B; Rathnayaka, K; Wullschleger, M E; Meek, J; Schuetz, M A

    2010-02-01

    Intramedullary nailing is the standard fixation method for displaced diaphyseal fractures of the tibia in adults. The bends in modern tibial nails allow for an easier insertion, enhance the 'bone-nail construct' stability, and reduce axial malalignments of the main fragments. Anecdotal clinical evidence indicates that current nail designs do not fit optimally for patients of Asian origin. The aim of this study was to develop a method to quantitatively assess the anatomical fitting of two different nail designs for Asian tibiae by utilising 3D computer modelling. We used 3D models of two different tibial nail designs (ETN (Expert Tibia Nail) and ETN-Proximal-Bend, Synthes), and 20 CT-based 3D cortex models of Japanese cadaver tibiae. With the aid of computer graphical methods, the 3D nail models were positioned inside the medullary cavity of the intact 3D tibia models. The anatomical fitting between nail and bone was assessed by the extent of the nail protrusion from the medullary cavity into the cortical bone; in a real bone this might lead to axial malalignments of the main fragments. The fitting was quantified in terms of the total surface area, and the maximum distance by which the nail was protruding into the cortex of the virtual bone model. In all 20 bone models, the total area of the nail protruding from the medullary cavity was smaller for the ETN-Proximal-Bend (average 540 mm(2)) compared to the ETN (average 1044 mm(2)). Also, the maximum distance of the nail protruding from the medullary cavity was smaller for the ETN-Proximal-Bend (average 1.2 mm) compared to the ETN (average 2.7 mm). The differences were statistically significant (p<0.05) for both the total surface area and the maximum distance measurements. By utilising computer graphical methods it was possible to conduct a quantitative fit assessment of different nail designs. The ETN-Proximal-Bend shows a statistically significantly better intramedullary fit with less cortical protrusion than the

  3. A Quantitative, Time-Dependent Model of Oxygen Isotopes in the Solar Nebula: Step one

    Science.gov (United States)

    Nuth, J. A.; Paquette, J. A.; Farquhar, A.; Johnson, N. M.

    2011-01-01

    The remarkable discovery that oxygen isotopes in primitive meteorites were fractionated along a line of slope 1 rather than along the typical slope 0.52 terrestrial fractionation line occurred almost 40 years ago. However, a satisfactory, quantitative explanation for this observation has yet to be found, though many different explanations have been proposed. The first of these explanations proposed that the observed line represented the final product produced by mixing molecular cloud dust with a nucleosynthetic component, rich in O-16, possibly resulting from a nearby supernova explosion. Donald Clayton suggested that Galactic Chemical Evolution would gradually change the oxygen isotopic composition of the interstellar grain population by steadily producing O-16 in supernovae, then producing the heavier isotopes as secondary products in lower mass stars. Thiemens and collaborators proposed a chemical mechanism that relied on the availability of additional active rotational and vibrational states in otherwise-symmetric molecules, such as CO2, O3 or SiO2, containing two different oxygen isotopes, and a second, photochemical process that suggested that differential photochemical dissociation processes could fractionate oxygen. This second line of research has been pursued by several groups, though none of the current models is quantitative.
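
    The slope-1 versus slope-0.52 distinction is commonly quantified as the deviation from the terrestrial fractionation line. A minimal sketch using the linear Δ17O approximation, with illustrative per-mil values:

```python
def cap_delta_17o(delta17, delta18, slope=0.52):
    """Deviation from the terrestrial fractionation line,
    Delta17O = delta17O - slope * delta18O (linear approximation)."""
    return delta17 - slope * delta18

# A slope-1 (CAI-like) composition versus a terrestrial-like sample
print(cap_delta_17o(-40.0, -40.0))   # on the slope-1 line -> large anomaly
print(cap_delta_17o(2.6, 5.0))       # near the terrestrial line -> ~0
```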

  4. Quantitative Proteomic and Phosphoproteomic Comparison of 2D and 3D Colon Cancer Cell Culture Models.

    Science.gov (United States)

    Yue, Xiaoshan; Lukowski, Jessica K; Weaver, Eric M; Skube, Susan B; Hummon, Amanda B

    2016-12-02

    Cell cultures are widely used model systems. Some immortalized cell lines can be grown in either two-dimensional (2D) adherent monolayers or in three-dimensional (3D) multicellular aggregates, or spheroids. Here, the quantitative proteome and phosphoproteome of colon carcinoma HT29 cells cultured in 2D monolayers and 3D spheroids were compared with a stable isotope labeling by amino acids in cell culture (SILAC) strategy. Two biological replicates from each sample were examined, and notable differences in both the proteome and the phosphoproteome were determined by nanoliquid chromatography tandem mass spectrometry (LC-MS/MS) to assess how growth configuration affects molecular expression. A total of 5867 protein groups, including 2523 phosphoprotein groups, and 8733 phosphopeptides were identified in the samples. The Gene Ontology analysis revealed enriched GO terms in the 3D samples for RNA binding, nucleic acid binding, enzyme binding, cytoskeletal protein binding, and histone binding for their molecular functions (MF) and in the process of cell cycle, cytoskeleton organization, and DNA metabolic process for the biological process (BP). The KEGG pathway analysis indicated that 3D cultures are enriched for oxidative phosphorylation pathways, metabolic pathways, peroxisome pathways, and biosynthesis of amino acids. In contrast, analysis of the phosphoproteomes indicated that 3D cultures have decreased phosphorylation correlating with slower growth rates and lower cell-to-extracellular matrix interactions. In sum, these results provide quantitative assessments of the effects on the proteome and phosphoproteome of culturing cells in 2D versus 3D cell culture configurations.

  5. Qualitative and quantitative accuracy of CAOS in a standardized in vitro spine model.

    Science.gov (United States)

    Arand, Markus; Schempf, Michael; Fleiter, Thorsten; Kinzl, Lothar; Gebhard, Florian

    2006-09-01

    Pedicle breach with screw implantation is relatively common. For clinical application of computer-assisted orthopaedic surgery, it is important to quantitatively know the accuracy and localization of any guidance modality. We ascertained the accuracy of computed tomography and C-arm-based navigated drilling versus conventional fluoroscopy using an artificial thoracic and lumbar spine model. The 3.2-mm diameter transpedicle drilling target was the center of a 4-mm steel ball fixed in the anterior left pedicle axis. After drilling, we used computed tomography to verify the position of the steel ball and the canal and visually explored for cortex perforation. Quantitative vector calculation showed computed tomography-based navigation had the greatest accuracy (median, d(thoracic) = 1.4 mm; median, d(lumbar) = 1.8 mm) followed by C-arm navigation (median, d(thoracic) = 2.6 mm; median, d(lumbar) = 2 mm) and the conventional procedure (median, d(thoracic) = 2.2 mm; median, d(lumbar) = 2.7 mm). Visual examination showed a decreased perforation rate in navigated drillings. We found no correlation between pedicle breaches and inaccurate drilling. The data suggest computer-assisted orthopaedic surgery cannot provide sub-millimeter accuracy, and complete prevention of pedicle perforation is not realistic.

  6. A Suite of Models to Support the Quantitative Assessment of Spread in Pest Risk Analysis

    Science.gov (United States)

    Robinet, Christelle; Kehlenbeck, Hella; Kriticos, Darren J.; Baker, Richard H. A.; Battisti, Andrea; Brunel, Sarah; Dupin, Maxime; Eyre, Dominic; Faccoli, Massimo; Ilieva, Zhenya; Kenis, Marc; Knight, Jon; Reynaud, Philippe; Yart, Annie; van der Werf, Wopke

    2012-01-01

    Pest Risk Analyses (PRAs) are conducted worldwide to decide whether and how exotic plant pests should be regulated to prevent invasion. There is an increasing demand for science-based risk mapping in PRA. Spread plays a key role in determining the potential distribution of pests, but there is no suitable spread modelling tool available for pest risk analysts. Existing models are species specific, biologically and technically complex, and data hungry. Here we present a set of four simple and generic spread models that can be parameterised with limited data. Simulations with these models generate maps of the potential expansion of an invasive species at continental scale. The models have one to three biological parameters. They differ in whether they treat spatial processes implicitly or explicitly, and in whether they consider pest density or pest presence/absence only. The four models represent four complementary perspectives on the process of invasion and, because they have different initial conditions, they can be considered as alternative scenarios. All models take into account habitat distribution and climate. We present an application of each of the four models to the western corn rootworm, Diabrotica virgifera virgifera, using historic data on its spread in Europe. Further tests as proof of concept were conducted with a broad range of taxa (insects, nematodes, plants, and plant pathogens). Pest risk analysts, the intended model users, found the model outputs to be generally credible and useful. The estimation of parameters from data requires insights into population dynamics theory, and this requires guidance. If used appropriately, these generic spread models provide a transparent and objective tool for evaluating the potential spread of pests in PRAs. Further work is needed to validate models, build familiarity in the user community and create a database of species parameters to help realize their potential in PRA practice. PMID:23056174
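
    One of the four perspectives described (spatially explicit, presence/absence) can be caricatured as a lattice colonisation model modulated by habitat and climate suitability. The sketch below is a simplified stand-in with invented parameters, not the published models:

```python
import numpy as np

def spread_step(presence, suitability, p_est=0.3, rng=None):
    """One annual step of a presence/absence lattice spread model: occupied
    cells try to colonise their four neighbours, and establishment succeeds
    with probability p_est scaled by local suitability in [0, 1]."""
    if rng is None:
        rng = np.random.default_rng()
    neighbours = np.zeros_like(presence)
    neighbours[1:, :] |= presence[:-1, :]
    neighbours[:-1, :] |= presence[1:, :]
    neighbours[:, 1:] |= presence[:, :-1]
    neighbours[:, :-1] |= presence[:, 1:]
    colonisable = neighbours & ~presence
    success = rng.random(presence.shape) < p_est * suitability
    return presence | (colonisable & success)

rng = np.random.default_rng(1)
suitability = rng.random((50, 50))      # habitat x climate index, invented
occupied = np.zeros((50, 50), dtype=bool)
occupied[25, 25] = True                 # introduction point
for year in range(10):
    occupied = spread_step(occupied, suitability, rng=rng)
print("cells occupied after 10 years:", int(occupied.sum()))
```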

  7. A suite of models to support the quantitative assessment of spread in pest risk analysis.

    Directory of Open Access Journals (Sweden)

    Christelle Robinet

    Full Text Available Pest Risk Analyses (PRAs) are conducted worldwide to decide whether and how exotic plant pests should be regulated to prevent invasion. There is an increasing demand for science-based risk mapping in PRA. Spread plays a key role in determining the potential distribution of pests, but there is no suitable spread modelling tool available for pest risk analysts. Existing models are species specific, biologically and technically complex, and data hungry. Here we present a set of four simple and generic spread models that can be parameterised with limited data. Simulations with these models generate maps of the potential expansion of an invasive species at continental scale. The models have one to three biological parameters. They differ in whether they treat spatial processes implicitly or explicitly, and in whether they consider pest density or pest presence/absence only. The four models represent four complementary perspectives on the process of invasion and, because they have different initial conditions, they can be considered as alternative scenarios. All models take into account habitat distribution and climate. We present an application of each of the four models to the western corn rootworm, Diabrotica virgifera virgifera, using historic data on its spread in Europe. Further tests as proof of concept were conducted with a broad range of taxa (insects, nematodes, plants, and plant pathogens). Pest risk analysts, the intended model users, found the model outputs to be generally credible and useful. The estimation of parameters from data requires insights into population dynamics theory, and this requires guidance. If used appropriately, these generic spread models provide a transparent and objective tool for evaluating the potential spread of pests in PRAs. Further work is needed to validate models, build familiarity in the user community and create a database of species parameters to help realize their potential in PRA practice.

  8. The JBEI quantitative metabolic modeling library (jQMM): a python library for modeling microbial metabolism

    DEFF Research Database (Denmark)

    Birkel, Garrett W.; Ghosh, Amit; Kumar, Vinay S.

    2017-01-01

    analysis, new methods for the effective use of the ever more readily available and abundant -omics data (i.e. transcriptomics, proteomics and metabolomics) are urgently needed. Results: The jQMM library presented here provides an open-source, Python-based framework for modeling internal metabolic fluxes

  9. Quantitative Comparison of a New Ab Initio Micrometeor Ablation Model with an Observationally Verifiable Standard Model

    Science.gov (United States)

    Meisel, David D.; Szasz, Csilla; Kero, Johan

    2008-06-01

    The Arecibo UHF radar is able to detect the head echoes of micron-sized meteoroids up to velocities of 75 km/s over a height range of 80-140 km. Because of their small size there are many uncertainties involved in calculating their above-atmosphere properties as needed for orbit determination. An ab initio model of meteor ablation has been devised that should work over the mass range 10^-16 kg to 10^-7 kg, but the faint end of this range cannot be observed by any other method and so direct verification is not possible. On the other hand, the EISCAT UHF radar system detects micrometeors in the high mass part of this range and its observations can be fit to a “standard” ablation model and calibrated to optical observations (Szasz et al. 2007). In this paper, we present a preliminary comparison of the two models, one observationally confirmable. Among the features of the ab initio model that are different from the “standard” model are: (1) it uses the experimentally based low-pressure vaporization theory of O’Hanlon (A user’s guide to vacuum technology, 2003) for ablation, (2) it uses velocity-dependent functions fit from experimental data on heat transfer, luminosity and ionization efficiencies measured by Friichtenicht and Becker (NASA Special Publication 319: 53, 1973) for micron-sized particles, (3) it assumes a density and temperature dependence of the micrometeoroid and ablation product specific heats, (4) it assumes a density- and size-dependent value for the thermal emissivity and (5) it uses a unified synthesis of experimental data for the most important meteoroid elements and their oxides through least-squares fits (as functions of temperature, density, and/or melting point) of the tables of thermodynamic parameters given in Weast (CRC Handbook of Physics and Chemistry, 1984), Gray (American Institute of Physics Handbook, 1972), and Cox (Allen’s Astrophysical Quantities, 2000). This utilization of mostly experimentally determined data is the main reason for

  10. USAGE OF INTERVAL CAUSE-EFFECT RELATIONSHIP COEFFICIENTS IN THE QUANTITATIVE MODEL OF STRATEGIC PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Dmitry M. Yershov

    2012-12-01

    Full Text Available This paper proposes a method to obtain values of the coefficients of cause-effect relationships between strategic objectives in the form of intervals and to use them in solving the problem of the optimal allocation of an organization's resources. We suggest taking advantage of the interval analytic hierarchy process for obtaining the intervals. The quantitative model of strategic performance developed by M. Hell, S. Vidučić and Ž. Garača is employed for finding the optimal resource allocation. The uncertainty that originates in the optimization problem as a result of the interval character of the cause-effect relationship coefficients is eliminated through the application of maximax and maximin criteria. It is shown that the problem of finding the optimal maximin, maximax, and compromise resource allocation can be represented as a mixed 0-1 linear programming problem. Finally, a numerical example and directions for further research are given.
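
    The maximin/maximax treatment of interval coefficients can be illustrated by brute-forcing a tiny allocation problem (the real formulation, per the abstract, is a mixed 0-1 linear program). The intervals, costs, and budget below are invented:

```python
from itertools import product

# Two initiatives; each initiative's effect on the top strategic objective is
# known only as an interval cause-effect coefficient [lower, upper].
intervals = [(0.2, 0.5), (0.3, 0.4)]
cost = [1, 2]           # resource units required per initiative
budget = 3

best_maximin, best_maximax = None, None
for x in product([0, 1], repeat=len(intervals)):       # fund (1) or not (0)
    if sum(c * xi for c, xi in zip(cost, x)) > budget:
        continue
    worst = sum(lo * xi for (lo, _), xi in zip(intervals, x))   # maximin value
    best = sum(hi * xi for (_, hi), xi in zip(intervals, x))    # maximax value
    if best_maximin is None or worst > best_maximin[0]:
        best_maximin = (worst, x)
    if best_maximax is None or best > best_maximax[0]:
        best_maximax = (best, x)

print("maximin plan:", best_maximin)   # hedges against pessimistic coefficients
print("maximax plan:", best_maximax)   # bets on optimistic coefficients
```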

  11. Examination of Modeling Languages to Allow Quantitative Analysis for Model-Based Systems Engineering

    Science.gov (United States)

    2014-06-01

    used or translated use by a simulation tool for analysis. This appears to be linked to the software engineering tradition, where, in principle, if...to recognize these mistakes is limited. This is a point for engagement with subject matter experts. They can review the logic that the model is...Experiment (DOE) techniques such as Nearly Orthogonal Latin Hypercubes (NOLH) that have space filling properties as well as genetic algorithms that

  12. Construction and Experimental Validation of a Quantitative Kinetic Model of Nitric Oxide Stress in Enterohemorrhagic Escherichia coli O157:H7

    Directory of Open Access Journals (Sweden)

    Jonathan L. Robinson

    2016-02-01

    Full Text Available Enterohemorrhagic Escherichia coli (EHEC) are responsible for large outbreaks of hemorrhagic colitis, which can progress to life-threatening hemolytic uremic syndrome (HUS) due to the release of Shiga-like toxins (Stx). The presence of a functional nitric oxide (NO·) reductase (NorV), which protects EHEC from NO· produced by immune cells, was previously found to correlate with high HUS incidence, and it was shown that NorV activity enabled prolonged EHEC survival and increased Stx production within macrophages. To enable quantitative study of EHEC NO· defenses and facilitate the development of NO·-potentiating therapeutics, we translated an existing kinetic model of the E. coli K-12 NO· response to an EHEC O157:H7 strain. To do this, we trained uncertain model parameters on measurements of [NO·] and [O2] in EHEC cultures, assessed parametric and prediction uncertainty with the use of a Markov chain Monte Carlo approach, and confirmed the predictive accuracy of the model with experimental data from genetic mutants lacking NorV or Hmp (NO· dioxygenase). Collectively, these results establish a methodology for the translation of quantitative models of NO· stress in model organisms to pathogenic sub-species, which is a critical step toward the application of these models for the study of infectious disease.

  13. Quantitative assessments of mantle flow models against seismic observations: Influence of uncertainties in mineralogical parameters

    Science.gov (United States)

    Schuberth, Bernhard S. A.

    2017-04-01

    One of the major challenges in studies of Earth's deep mantle is to bridge the gap between geophysical hypotheses and observations. The biggest dataset available to investigate the nature of mantle flow are recordings of seismic waveforms. On the other hand, numerical models of mantle convection can be simulated on a routine basis nowadays for earth-like parameters, and modern thermodynamic mineralogical models allow us to translate the predicted temperature field to seismic structures. The great benefit of the mineralogical models is that they provide the full non-linear relation between temperature and seismic velocities and thus ensure a consistent conversion in terms of magnitudes. This opens the possibility for quantitative assessments of the theoretical predictions. The often-adopted comparison between geodynamic and seismic models is unsuitable in this respect owing to the effects of damping, limited resolving power and non-uniqueness inherent to tomographic inversions. The most relevant issue, however, is related to wavefield effects that reduce the magnitude of seismic signals (e.g., traveltimes of waves), a phenomenon called wavefront healing. Over the past couple of years, we have developed an approach that takes the next step towards a quantitative assessment of geodynamic models and that enables us to test the underlying geophysical hypotheses directly against seismic observations. It is based solely on forward modelling and warrants a physically correct treatment of the seismic wave equation without theoretical approximations. Fully synthetic 3-D seismic wavefields are computed using a spectral element method for 3-D seismic structures derived from mantle flow models. This way, synthetic seismograms are generated independent of any seismic observations. Furthermore, through the wavefield simulations, it is possible to relate the magnitude of lateral temperature variations in the dynamic flow simulations directly to body-wave traveltime residuals. The

  14. [Near infrared spectroscopy quantitative analysis model based on incremental neural network with partial least squares].

    Science.gov (United States)

    Cao, Hui; Li, Da-Hang; Liu, Ling; Zhou, Yan

    2014-10-01

    This paper proposes a near infrared spectroscopy quantitative analysis model based on an incremental neural network with partial least squares. The proposed model adopts the typical three-layer back-propagation neural network (BPNN), and the absorbance of different wavelengths and the component concentration are the inputs and the outputs, respectively. Partial least squares (PLS) regression is performed on the history training samples first, and the obtained history loading matrices of the independent variables and the dependent variables are used for determining the initial weights of the input layer and the output layer, respectively. The number of the hidden layer nodes is set as the number of the principal components of the independent variables. After a set of new training samples is collected, PLS regression is performed on the combination dataset consisting of the new samples and the history loading matrices to calculate the new loading matrices. The history loading matrices and the new loading matrices are fused to obtain the new initial weights of the input layer and the output layer of the proposed model. Then the new samples are used for training the proposed model to realize the incremental update. The proposed model is compared with PLS, BPNN, the BPNN based on PLS (PLS-BPNN) and the recursive PLS (RPLS) by using the spectra data of flue gas of natural gas combustion. For the concentration prediction of the carbon dioxide in the flue gas, the root mean square error of prediction (RMSEP) of the proposed model is reduced by 27.27%, 58.12%, 19.24% and 14.26% compared with those of PLS, BPNN, PLS-BPNN and RPLS, respectively. For the concentration prediction of the carbon monoxide in the flue gas, the RMSEP of the proposed model is reduced by 20.65%, 24.69%, 18.54% and 19.42% compared with those of PLS, BPNN, PLS-BPNN and RPLS, respectively. For the concentration prediction of the methane in the flue gas, the RMSEP of the proposed model is reduced by 27
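
    A simplified sketch of the initialization-plus-fusion idea, using scikit-learn's PLS loadings to seed the network weights; the blending step, the data, and the alpha parameter are stand-ins for the paper's exact update rule, not a reimplementation of it.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pls_init_weights(X, Y, n_components):
    """Initialise a one-hidden-layer network from PLS loadings, as described:
    input weights from the X-loadings, output weights from the Y-loadings,
    hidden size = number of PLS components."""
    pls = PLSRegression(n_components=n_components).fit(X, Y)
    W1 = pls.x_loadings_        # (n_wavelengths, n_components)
    W2 = pls.y_loadings_.T      # (n_components, n_targets)
    return W1, W2

def incremental_update(W1_old, W2_old, X_new, Y_new, n_components, alpha=0.5):
    """Stand-in for the fusion step: recompute loadings on the new batch and
    blend them with the history loadings (weight alpha, assumed here)."""
    W1_new, W2_new = pls_init_weights(X_new, Y_new, n_components)
    return (alpha * W1_old + (1 - alpha) * W1_new,
            alpha * W2_old + (1 - alpha) * W2_new)

# Synthetic NIR-like data: 50 spectra x 20 wavelengths -> 1 concentration
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(50, 20))
y_hist = X_hist[:, :3].sum(axis=1, keepdims=True)
W1, W2 = pls_init_weights(X_hist, y_hist, n_components=3)

X_new = rng.normal(size=(10, 20))
y_new = X_new[:, :3].sum(axis=1, keepdims=True)
W1, W2 = incremental_update(W1, W2, X_new, y_new, n_components=3)
print(W1.shape, W2.shape)   # weights ready to seed BPNN training
```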

  15. Validation of Quantitative Structure-Activity Relationship (QSAR) Model for Photosensitizer Activity Prediction

    Directory of Open Access Journals (Sweden)

    Sharifuddin M. Zain

    2011-11-01

    Full Text Available Photodynamic therapy is a relatively new treatment method for cancer which utilizes a combination of oxygen, a photosensitizer and light to generate reactive singlet oxygen that eradicates tumors via direct cell-killing, vasculature damage and engagement of the immune system. Most of the photosensitizers that are in clinical and pre-clinical assessments, or those that are already approved for clinical use, are mainly based on cyclic tetrapyrroles. In an attempt to discover new effective photosensitizers, we report the use of the quantitative structure-activity relationship (QSAR) method to develop a model that could correlate the structural features of cyclic tetrapyrrole-based compounds with their photodynamic therapy (PDT) activity. In this study, a set of 36 porphyrin derivatives was used in the model development where 24 of these compounds were in the training set and the remaining 12 compounds were in the test set. The development of the QSAR model involved the use of the multiple linear regression analysis (MLRA) method. Based on the method, r2, r2(CV) and r2 prediction values of 0.87, 0.71 and 0.70 were obtained. The QSAR model was also employed to predict the experimental compounds in an external test set. This external test set comprises 20 porphyrin-based compounds with experimental IC50 values ranging from 0.39 µM to 7.04 µM. Thus the model showed good correlative and predictive ability, with a predictive correlation coefficient (r2 prediction) for the external test set of 0.52. The developed QSAR model was used to discover some compounds as new lead photosensitizers from this external test set.

  16. Constraining regional energy and hydrologic modeling of the cryosphere with quantitative retrievals from imaging spectroscopy

    Science.gov (United States)

    Skiles, M.; Painter, T. H.

    2016-12-01

    The timing and magnitude of snowmelt is determined by net solar radiation in most snow covered environments. Despite this well-established understanding of snow energy balance, measurements of snow reflectance and albedo are sparse or nonexistent. This is particularly relevant in mountainous regions, where snow accumulation and melt patterns influence both climate and hydrology. The Airborne Snow Observatory, a coupled lidar and imaging spectrometer platform, has been monitoring time series of snow water equivalent and snow reflectance over entire mountain basins since 2013. The ASO imaging spectrometer products build upon a legacy of algorithms for retrieving snow properties from the unique spectral signature of snow. Here, we present the full time series (2013-2016) of snow properties, including snow albedo, grain size, and impurity radiative forcing, across the Tuolumne River Basin, Sierra Nevada Mountains, CA. Additionally, we show that incorporating snow albedo into a snow energy balance model improves both the prediction of snow water equivalent and snowmelt timing. These results demonstrate the hydroclimatic modeling that is enabled by the quantitative retrievals uniquely available from imaging spectroscopy. As such, they have important implications for monitoring global snow and ice physical properties and regional and global climate modeling with spaceborne imaging spectroscopy, for example, NASA's planned HYSPIRI mission.

  17. Quantitative structure-retention relationship modeling of gas chromatographic retention times based on thermodynamic data.

    Science.gov (United States)

    Ebrahimi-Najafabadi, Heshmatollah; McGinitie, Teague M; Harynuk, James J

    2014-09-05

    Thermodynamic parameters of ΔH(T0), ΔS(T0), and ΔCP for 156 compounds comprising alkanes, alkyl halides and alcohols were determined for a 5% phenyl 95% methyl stationary phase. The determination of thermodynamic parameters relies on a Nelder-Mead simplex optimization to rapidly obtain the parameters. Two methodologies of external and leave-one-out cross validations were applied to assess the robustness of the estimations of the thermodynamic parameters. The largest absolute errors in predicted retention time across all temperature ramps and all compounds were 1.5 and 0.3 s for the external and internal sets, respectively. The possibility of an in silico extension of the thermodynamic library was tested using a quantitative structure-retention relationship (QSRR) methodology. The estimated thermodynamic parameters were utilized to develop QSRR models. Individual partial least squares (PLS) models were developed for each of the three classes of molecules. R(2) values for the test sets of all models across all temperature ramps were larger than 0.99 and the average of relative errors in retention time predictions of the test sets for alkanes, alcohols, and alkyl halides were 1.8%, 2.4%, and 2.5%, respectively. Copyright © 2014 Elsevier B.V. All rights reserved.
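
    Retention prediction from (ΔH, ΔS, ΔCp) follows the standard thermodynamic retention model. The sketch below marches a solute through a linear temperature ramp with invented parameter values and phase ratio; it is an illustration of the approach, not the authors' library or code.

```python
import numpy as np

R = 8.314  # J/(mol K)

def k_factor(T, dH0, dS0, dCp, T0=373.15, beta=250.0):
    """Retention factor from thermodynamic parameters referenced at T0,
    with a constant-heat-capacity correction (standard formulation;
    the numbers used below are invented, not library values)."""
    dH = dH0 + dCp * (T - T0)
    dS = dS0 + dCp * np.log(T / T0)
    return np.exp(-(dH - T * dS) / (R * T)) / beta

def ramp_retention_time(dH0, dS0, dCp, T_start=323.15, rate=10/60, t_m=60.0):
    """March the solute down the column under a linear ramp (rate in K/s):
    it advances a fraction dt / (t_m * (1 + k)) of the column per step."""
    t, x, dt = 0.0, 0.0, 0.01
    while x < 1.0:
        T = T_start + rate * t
        x += dt / (t_m * (1.0 + k_factor(T, dH0, dS0, dCp)))
        t += dt
    return t

print(ramp_retention_time(dH0=-60e3, dS0=-110.0, dCp=50.0), "s")
```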

  18. NetLand: quantitative modeling and visualization of Waddington's epigenetic landscape using probabilistic potential.

    Science.gov (United States)

    Guo, Jing; Lin, Feng; Zhang, Xiaomeng; Tanavde, Vivek; Zheng, Jie

    2017-05-15

    Waddington's epigenetic landscape is a powerful metaphor for cellular dynamics driven by gene regulatory networks (GRNs). Its quantitative modeling and visualization, however, remain a challenge, especially when there are more than two genes in the network, and a software tool for Waddington's landscape has not previously been available. We present NetLand, an open-source software tool for modeling and simulating the kinetic dynamics of GRNs and visualizing the corresponding Waddington's epigenetic landscape in three dimensions, without restriction on the number of genes in a GRN. With an interactive graphical user interface, NetLand can facilitate knowledge discovery and experimental design in the study of cell fate regulation (e.g. stem cell differentiation and reprogramming). NetLand runs under operating systems including Windows, Linux and OS X. The executable files and source code of NetLand, as well as a user manual and example models, can be downloaded from http://netland-ntu.github.io/NetLand/ . zhengjie@ntu.edu.sg. Supplementary data are available at Bioinformatics online.
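
    The probabilistic-potential idea behind landscape tools of this kind can be sketched as follows: simulate the stochastic kinetics of a small GRN and define the landscape as U = -ln P over the estimated stationary distribution. The toggle-switch rates and noise level below are illustrative choices, not NetLand defaults.

```python
# Sketch: quasi-potential U = -ln P from a stochastic two-gene toggle switch.
import numpy as np

rng = np.random.default_rng(1)

def drift(x, y, a=1.0, b=1.0, n=4, k=0.5):
    # mutual repression: each gene inhibits the other (Hill kinetics)
    dx = a / (1 + (y / k) ** n) - x
    dy = b / (1 + (x / k) ** n) - y
    return dx, dy

# Euler-Maruyama simulation of one long trajectory
dt, sigma, steps = 0.01, 0.15, 500_000
noise = rng.normal(size=(steps, 2)) * sigma * np.sqrt(dt)
x, y = 0.5, 0.5
samples = np.empty((steps, 2))
for i in range(steps):
    dx, dy = drift(x, y)
    x += dx * dt + noise[i, 0]
    y += dy * dt + noise[i, 1]
    samples[i] = x, y

# histogram the stationary density and convert it to a potential surface
H, xe, ye = np.histogram2d(samples[:, 0], samples[:, 1], bins=60, density=True)
U = -np.log(H + 1e-12)      # small offset avoids log(0) in empty bins
print("potential at the deepest attractor:", U.min())
```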

  19. Effect of arterial deprivation on growing femoral epiphysis: Quantitative magnetic resonance imaging using a piglet model

    Energy Technology Data Exchange (ETDEWEB)

    Cheon, Jung Eun; Yoo, Won Joon; Kim, In One; Kim, Woo Sun; Choi, Young Hun [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2015-06-15

    To investigate the usefulness of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and diffusion MRI for the evaluation of femoral head ischemia. Unilateral femoral head ischemia was induced by selective embolization of the medial circumflex femoral artery in 10 piglets. MRI was performed immediately (1 hour) after embolization and at 1, 2, and 4 weeks. Apparent diffusion coefficients (ADCs) were calculated for the femoral head. The estimated pharmacokinetic parameters (Kep and Ve from a two-compartment model) and semi-quantitative parameters including peak enhancement, time-to-peak (TTP), and contrast washout were evaluated. The epiphyseal ADC values of the ischemic hip decreased immediately (1 hour) after embolization, but increased rapidly at 1 week after embolization and remained elevated until 4 weeks after embolization. Perfusion MRI of ischemic hips showed decreased epiphyseal perfusion with decreased Kep immediately after embolization. Signal intensity-time curves showed delayed TTP with limited contrast washout immediately post-embolization. At 1-2 weeks after embolization, spontaneous reperfusion was observed in ischemic epiphyses. The changes in ADC (p = 0.043) and Kep (p = 0.043) between immediately (1 hour) after embolization and 1 week post-embolization were significant. Diffusion MRI and the pharmacokinetic model obtained from DCE-MRI are useful for depicting early changes in perfusion and tissue damage in this model of femoral head ischemia in skeletally immature piglets.
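
    A two-compartment (Tofts-type) pharmacokinetic model of the kind used to estimate Kep and Ve from DCE-MRI can be sketched as the convolution of a plasma input function with an exponential kernel. The arterial input function and parameter values below are generic assumptions, not the piglet data.

```python
# Sketch: two-compartment DCE-MRI model, Ct(t) = Ktrans * conv(Cp, exp(-kep*t)).
import numpy as np

t = np.linspace(0, 300, 601)                    # time (s)
Cp = 5.0 * (np.exp(-t / 60) - np.exp(-t / 8))   # toy arterial input function

def tissue_conc(Ktrans, ve, t, Cp):
    kep = Ktrans / ve                           # Kep is Ktrans divided by Ve
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return Ktrans * np.convolve(Cp, kernel)[: len(t)] * dt

Ct = tissue_conc(Ktrans=0.02, ve=0.3, t=t, Cp=Cp)

# semi-quantitative parameters, as in the study: peak enhancement and TTP
print("peak enhancement:", Ct.max())
print("time to peak (s):", t[np.argmax(Ct)])
```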

  20. Modelling Framework and the Quantitative Analysis of Distributed Energy Resources in Future Distribution Networks

    Science.gov (United States)

    Han, Xue; Sandels, Claes; Zhu, Kun; Nordström, Lars

    2013-08-01

    It has been widely claimed that the large-scale deployment of Distributed Energy Resources (DERs) could eventually reshape future distribution grid operation in numerous ways. Thus, it is necessary to introduce a framework to measure to what extent power system operation will be changed by various parameters of DERs. This article proposes a modelling framework for an overview analysis of the correlation between DER deployment and distribution network operation. Furthermore, to validate the framework, the authors describe reference models for different categories of DERs with their unique characteristics, comprising distributed generation, active demand and electric vehicles. Subsequently, a quantitative analysis was performed on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations were performed in two typical distribution network models for four seasons. The simulation results show that, in general, DER deployment offers opportunities to reduce power losses and voltage drops by compensating power from local generation and optimizing local load profiles.

  1. Background Studies for EXIST

    Science.gov (United States)

    Wilson, Colleen A.; Pendleton, G. N.; Fishman, G. J.

    2004-01-01

    We present results from a study of the trapped proton and electron background for several orbital inclinations and altitudes. This study includes time-dependent effects. In addition, we describe a three-component cosmic background model developed at the University of Southampton, UK. The three components are cosmic diffuse gamma rays, atmospheric albedo gamma rays, and cosmic ray protons. We present examples of how this model was applied to BATSE and discuss its application to EXIST.

  2. Quantitative structure-activity relationship modeling of the toxicity of organothiophosphate pesticides to Daphnia magna and Cyprinus carpio

    NARCIS (Netherlands)

    Zvinavashe, E.; Du, T.; Griff, T.; Berg, van den J.H.J.; Soffers, A.E.M.F.; Vervoort, J.J.M.; Murk, A.J.; Rietjens, I.

    2009-01-01

    Within the REACH regulatory framework in the EU, quantitative structure-activity relationships (QSAR) models are expected to help reduce the number of animals used for experimental testing. The objective of this study was to develop QSAR models to describe the acute toxicity of organothiophosphate

  3. A quantitative evaluation of multiple biokinetic models using an assembled water phantom: A feasibility study.

    Directory of Open Access Journals (Sweden)

    Da-Ming Yeh

    Full Text Available This study examined the feasibility of quantitatively evaluating multiple biokinetic models and established the validity of the different compartment models using an assembled water phantom. Most commercialized phantoms are made to survey the imaging system, since this is essential to increase diagnostic accuracy for quality assurance. In contrast, few customized phantoms are specifically made to represent multi-compartment biokinetic models, because the complicated calculations required to solve the biokinetic models and the time-consuming verification of the obtained solutions have greatly impeded progress over the past decade. Nevertheless, in this work, five biokinetic models were separately defined by five groups of simultaneous differential equations to obtain the time-dependent radioactive concentration changes inside the water phantom. The water phantom was assembled from seven acrylic boxes of four different sizes, and the boxes were linked by varying combinations of hoses to represent the multiple biokinetic models from a biomedical perspective. The hose-connected boxes were then regarded as a closed water loop with only one infusion and one drain. 129.1±24.2 MBq of Tc-99m labeled methylene diphosphonate (MDP) solution was thoroughly infused into the water boxes before gamma scanning; the water was then replaced with de-ionized water to simulate the biological removal rate among the boxes. The water was driven by an automatic infusion pump at 6.7 c.c./min, while the biological half-lives of the four different-sized boxes (64, 144, 252, and 612 c.c.) were 4.8, 10.7, 18.8, and 45.5 min, respectively. The time-dependent concentrations in the boxes derived for the five models were estimated both by a self-developed program run in MATLAB and by scanning via a gamma camera facility. Agreement and disagreement between the practical scanning and the theoretical prediction were thoroughly discussed for all five models.
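
    One possible serial-washout biokinetic model of the kind the phantom emulates can be sketched as a linear system of ODEs: each box loses activity by biological washout plus Tc-99m physical decay, with each box feeding the next. The biological half-lives are the ones quoted in the abstract; the serial chain topology and initial condition are assumptions for illustration.

```python
# Sketch: four-compartment washout chain with Tc-99m physical decay.
import numpy as np
from scipy.integrate import solve_ivp

lam_phys = np.log(2) / (6.01 * 60)                        # Tc-99m decay, 1/min
lam_bio = np.log(2) / np.array([4.8, 10.7, 18.8, 45.5])   # washout rates, 1/min

def dA(t, A):
    # box i loses activity by washout + decay; box i+1 receives the washout
    out = -(lam_bio + lam_phys) * A
    out[1:] += lam_bio[:-1] * A[:-1]
    return out

A0 = [129.1, 0.0, 0.0, 0.0]       # MBq infused into the first box (assumed)
sol = solve_ivp(dA, (0, 120), A0, t_eval=np.linspace(0, 120, 121))
print("activity in each box after 60 min:", sol.y[:, 60].round(2))
```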

  4. Analysis of Water Conflicts across Natural and Societal Boundaries: Integration of Quantitative Modeling and Qualitative Reasoning

    Science.gov (United States)

    Gao, Y.; Balaram, P.; Islam, S.

    2009-12-01

    , the knowledge generated from these studies cannot be easily generalized or transferred to other basins. Here, we present an approach to integrate the quantitative and qualitative methods to study water issues and capture the contextual knowledge of water management- by combining the NSSs framework and an area of artificial intelligence called qualitative reasoning. Using the Apalachicola-Chattahoochee-Flint (ACF) River Basin dispute as an example, we demonstrate how quantitative modeling and qualitative reasoning can be integrated to examine the impact of over abstraction of water from the river on the ecosystem and the role of governance in shaping the evolution of the ACF water dispute.

  5. Invasive growth of Saccharomyces cerevisiae depends on environmental triggers: a quantitative model.

    Science.gov (United States)

    Zupan, Jure; Raspor, Peter

    2010-04-01

    In this contribution, the influence of various physicochemical factors on Saccharomyces cerevisiae invasive growth is examined quantitatively. Agar-invasion assays are generally applied for in vitro studies of S. cerevisiae invasiveness, a phenomenon regarded as a putative virulence trait in this yeast of growing clinical concern. However, the qualitative agar-invasion assays used until now strongly limit the feasibility and interpretation of analyses and therefore needed to be improved. Moreover, knowledge of the physiology of invasive growth under stress conditions related to the human alimentary tract and food is poor and should be expanded. For this purpose, a quantitative agar-invasion assay, presented in our previous work, was applied in this contribution to clarify in greater detail the significance of the stress factors controlling the adhesion and invasion of the yeast. Ten virulent and non-virulent S. cerevisiae strains were assayed at various temperatures, pH values, nutrient starvation levels, modified atmospheres, and different concentrations of NaCl, CaCl2 and preservatives. Using specific parameters such as relative invasion, eight invasive growth models were hypothesized, enabling intelligible interpretation of the results. A strong preference for invasive growth (i.e., high relative invasion) was observed when the strains were grown on nitrogen- and glucose-depleted media. A significant increase in invasion was also determined at temperatures typical of human fever (37-39 degrees C). On the other hand, a strong repressive effect on invasion was found in the presence of salts, anoxia and some preservatives. Copyright 2010 John Wiley & Sons, Ltd.

  6. Lunar Rover Model - Reengineering of an Existing Mobile Platform towards the realization of a Rover Autonomy Testbed

    NARCIS (Netherlands)

    Gounaris, Alexandros Frantzis; Poulakis, Pantelis; Chautems, Christophe; Raffaela, Carloni; Stramigioli, Stefano

    2011-01-01

    The Automation & Robotics Section of the European Space Agency (ESA) is developing a platform for investigation of different levels of autonomy of planetary rovers. Within this scope a physical flight model is required and the Lunar Rover Model (LRM) is chosen. The LRM is a 4 wheel, medium-scale

  7. Bifurcation Analysis of an Existing Mathematical Model Reveals Novel Treatment Strategies and Suggests Potential Cure for Type 1 Diabetes

    DEFF Research Database (Denmark)

    Nielsen, Kenneth Hagde Mandrup; Ottesen, Johnny T.; Pociot, Flemming

    2014-01-01

    Type 1 diabetes is a disease with serious personal and socioeconomic consequences that has attracted the attention of modellers recently. But as models of this disease tend to be complicated, there has been only limited mathematical analysis to date. Here we address this problem by providing a bi...

  8. Generating quantitative models describing the sequence specificity of biological processes with the stabilized matrix method

    Directory of Open Access Journals (Sweden)

    Sette Alessandro

    2005-05-01

    Full Text Available Abstract Background Many processes in molecular biology involve the recognition of short sequences of nucleic or amino acids, such as the binding of immunogenic peptides to major histocompatibility complex (MHC) molecules. From experimental data, a model of the sequence specificity of these processes can be constructed, such as a sequence motif, a scoring matrix or an artificial neural network. The purpose of these models is two-fold. First, they can provide a summary of experimental results, allowing for a deeper understanding of the mechanisms involved in sequence recognition. Second, such models can be used to predict the experimental outcome for yet untested sequences. In the past we reported the development of a method to generate such models called the Stabilized Matrix Method (SMM). This method has been successfully applied to predicting peptide binding to MHC molecules, peptide transport by the transporter associated with antigen presentation (TAP) and proteasomal cleavage of protein sequences. Results Herein we report the implementation of the SMM algorithm as a publicly available software package. Specific features determining the type of problems the method is most appropriate for are discussed. Advantageous features of the package are: (1) the output generated is easy to interpret, (2) input and output are both quantitative, (3) specific computational strategies to handle experimental noise are built in, (4) the algorithm is designed to effectively handle bounded experimental data, (5) experimental data from randomized peptide libraries and conventional peptides can easily be combined, and (6) it is possible to incorporate pair interactions between positions of a sequence. Conclusion Making the SMM method publicly available enables bioinformaticians and experimental biologists to easily access it, to compare its performance to other prediction methods, and to extend it to other applications.
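
    The core idea of a scoring-matrix method of this kind, fitting position-specific residue contributions with a regularized ("stabilized") linear model, can be sketched as follows. Peptides and affinities are randomly generated, and the ridge penalty stands in for SMM's stabilization term; this is not the SMM package itself.

```python
# Sketch: fit a position x residue scoring matrix from quantitative peptide data.
import numpy as np
from sklearn.linear_model import Ridge

AA = "ACDEFGHIKLMNPQRSTVWY"
L = 9                                    # 9-mer peptides, as for MHC class I
rng = np.random.default_rng(2)

def one_hot(peptide):
    x = np.zeros(L * len(AA))
    for pos, aa in enumerate(peptide):
        x[pos * len(AA) + AA.index(aa)] = 1.0
    return x

peptides = ["".join(rng.choice(list(AA), L)) for _ in range(200)]
X = np.array([one_hot(p) for p in peptides])
true_w = rng.normal(size=X.shape[1])
y = X @ true_w + rng.normal(scale=0.5, size=len(X))   # quantitative affinities

# the ridge penalty plays the role of the stabilization against noisy data
matrix = Ridge(alpha=10.0).fit(X, y)
scores = matrix.coef_.reshape(L, len(AA))             # position x residue matrix
print("predicted score of first peptide:", matrix.predict(X[:1])[0])
```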

  9. A probabilistic quantitative risk assessment model for the long-term work zone crashes.

    Science.gov (United States)

    Meng, Qiang; Weng, Jinxian; Qu, Xiaobo

    2010-11-01

    Work zones, especially long-term work zones, increase traffic conflicts and cause safety problems. Proper casualty risk assessment for a work zone is of importance for both traffic safety engineers and travelers. This paper develops a novel probabilistic quantitative risk assessment (QRA) model to evaluate the casualty risk combining the frequency and consequence of all accident scenarios triggered by long-term work zone crashes. The casualty risk is measured by the individual risk and the societal risk. The individual risk can be interpreted as the frequency of a driver/passenger being killed or injured, and the societal risk describes the relation between frequency and the number of casualties. The proposed probabilistic QRA model consists of the estimation of work zone crash frequency, an event tree, and consequence estimation models. The event tree contains seven intermediate events: age (A), crash unit (CU), vehicle type (VT), alcohol (AL), light condition (LC), crash type (CT) and severity (S). Since the estimated probability of some intermediate events may carry large uncertainty, such probabilities are characterized as random variables. The consequence estimation model takes into account the combined effects of speed and emergency medical service response time (ERT) on the consequence of a work zone crash. Finally, a numerical example based on the Southeast Michigan work zone crash data is carried out. The numerical results show that there would be a 62% decrease in individual fatality risk and a 44% reduction in individual injury risk if the mean travel speed were slowed by 20%, and a 5% reduction in individual fatality risk and a 0.05% reduction in individual injury risk if ERT were reduced by 20%. In other words, reducing speed is more effective than reducing ERT in mitigating casualty risk. 2010 Elsevier Ltd. All rights reserved.
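
    Event-tree propagation of crash frequency into scenario frequencies can be sketched in a few lines. The branch structure and probabilities below are invented for illustration; the paper uses seven intermediate events and treats uncertain probabilities as random variables.

```python
# Sketch: propagate a crash frequency through event-tree branch probabilities.
import itertools

crash_freq = 12.0   # long-term work zone crashes per year (assumed)
branches = {
    "vehicle type": {"car": 0.7, "truck": 0.3},
    "light condition": {"day": 0.6, "night": 0.4},
    "severity": {"injury": 0.25, "fatal": 0.02, "none": 0.73},
}

scenarios = {}
for combo in itertools.product(*(b.items() for b in branches.values())):
    labels = tuple(label for label, _ in combo)
    p = 1.0
    for _, prob in combo:
        p *= prob                        # multiply probabilities along the path
    scenarios[labels] = crash_freq * p   # scenario frequency per year

# societal-risk style summary: total frequency of fatal scenarios
fatal = sum(f for labels, f in scenarios.items() if "fatal" in labels)
print(f"expected fatal-crash frequency: {fatal:.3f} per year")
```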

  10. Deficiencies in quantitative precipitation forecasts. Sensitivity studies using the COSMO model

    Energy Technology Data Exchange (ETDEWEB)

    Dierer, Silke [Federal Office of Meteorology and Climatology, MeteoSwiss, Zurich (Switzerland); Meteotest, Bern (Switzerland); Arpagaus, Marco [Federal Office of Meteorology and Climatology, MeteoSwiss, Zurich (Switzerland); Seifert, Axel [Deutscher Wetterdienst, Offenbach (Germany); Avgoustoglou, Euripides [Hellenic National Meteorological Service, Hellinikon (Greece); Dumitrache, Rodica [National Meteorological Administration, Bucharest (Romania); Grazzini, Federico [Agenzia Regionale per la Protezione Ambientale Emilia Romagna, Bologna (Italy); Mercogliano, Paola [Italian Aerospace Research Center, Capua (Italy); Milelli, Massimo [Agenzia Regionale per la Protezione Ambientale Piemonte, Torino (Italy); Starosta, Katarzyna [Inst. of Meteorology and Water Management, Warsaw (Poland)

    2009-12-15

    The quantitative precipitation forecast (QPF) of the COSMO model, like that of other models, reveals some deficiencies. The aim of this study is to investigate which physical and numerical schemes have the strongest impact on QPF and, thus, the highest potential for improving QPF. Test cases are selected that are meant to reflect typical forecast errors in different countries. The 13 test cases fall into two main groups: overestimation of stratiform precipitation (6 cases) and underestimation of convective precipitation (5 cases). Twenty-two sensitivity experiments, predominantly regarding numerical and physical schemes, are performed, and the area-averaged 24 h precipitation sums are evaluated. The results show that the strongest impact on QPF is caused by changes of the initial atmospheric humidity and by using the Kain-Fritsch/Bechtold convection scheme instead of the Tiedtke scheme; both sensitivity experiments change the area-averaged precipitation in the range of 30-35%. This clearly shows that improved simulation of atmospheric water vapour is of utmost importance for achieving better precipitation forecasts. Significant changes are also caused by using the Runge-Kutta time integration scheme instead of the Leapfrog scheme, and by applying a modified warm rain and snow physics scheme or a modified Tiedtke convection scheme; these changes result in differences in area-averaged precipitation of roughly 20%. Only for the Greek test cases, which all have a strong maritime influence, is the heat and moisture exchange between surface and atmosphere of great importance; it can cause changes of up to 20%. (orig.)

  11. Establishment of Quantitative Severity Evaluation Model for Spinal Cord Injury by Metabolomic Fingerprinting

    Science.gov (United States)

    Yang, Hao; Cohen, Mitchell Jay; Chen, Wei; Sun, Ming-Wei; Lu, Charles Damien

    2014-01-01

    Spinal cord injury (SCI) is a devastating event with a limited hope for recovery and represents an enormous public health issue. It is crucial to understand the disturbances in the metabolic network after SCI to identify injury mechanisms and opportunities for treatment intervention. Through plasma 1H-nuclear magnetic resonance (NMR) screening, we identified 15 metabolites that made up an “Eigen-metabolome” capable of distinguishing rats with severe SCI from healthy control rats. Forty enzymes regulated these 15 metabolites in the metabolic network. We also found that 16 metabolites regulated by 130 enzymes in the metabolic network impacted neurobehavioral recovery. Using the Eigen-metabolome, we established a linear discrimination model to cluster rats with severe and mild SCI and control rats into separate groups and identify the interactive relationships between metabolic biomarkers in the global metabolic network. We identified 10 clusters in the global metabolic network and defined them as distinct metabolic disturbance domains of SCI. Metabolic paths such as retinal, glycerophospholipid, arachidonic acid metabolism; NAD–NADPH conversion process, tyrosine metabolism, and cadaverine and putrescine metabolism were included. In summary, we presented a novel interdisciplinary method that integrates metabolomics and global metabolic network analysis to visualize metabolic network disturbances after SCI. Our study demonstrated the systems biological study paradigm that integration of 1H-NMR, metabolomics, and global metabolic network analysis is useful to visualize complex metabolic disturbances after severe SCI. Furthermore, our findings may provide a new quantitative injury severity evaluation model for clinical use. PMID:24727691

  12. Establishment of quantitative severity evaluation model for spinal cord injury by metabolomic fingerprinting.

    Science.gov (United States)

    Peng, Jin; Zeng, Jun; Cai, Bin; Yang, Hao; Cohen, Mitchell Jay; Chen, Wei; Sun, Ming-Wei; Lu, Charles Damien; Jiang, Hua

    2014-01-01

    Spinal cord injury (SCI) is a devastating event with a limited hope for recovery and represents an enormous public health issue. It is crucial to understand the disturbances in the metabolic network after SCI to identify injury mechanisms and opportunities for treatment intervention. Through plasma 1H-nuclear magnetic resonance (NMR) screening, we identified 15 metabolites that made up an "Eigen-metabolome" capable of distinguishing rats with severe SCI from healthy control rats. Forty enzymes regulated these 15 metabolites in the metabolic network. We also found that 16 metabolites regulated by 130 enzymes in the metabolic network impacted neurobehavioral recovery. Using the Eigen-metabolome, we established a linear discrimination model to cluster rats with severe and mild SCI and control rats into separate groups and identify the interactive relationships between metabolic biomarkers in the global metabolic network. We identified 10 clusters in the global metabolic network and defined them as distinct metabolic disturbance domains of SCI. Metabolic paths such as retinal, glycerophospholipid, arachidonic acid metabolism; NAD-NADPH conversion process, tyrosine metabolism, and cadaverine and putrescine metabolism were included. In summary, we presented a novel interdisciplinary method that integrates metabolomics and global metabolic network analysis to visualize metabolic network disturbances after SCI. Our study demonstrated the systems biological study paradigm that integration of 1H-NMR, metabolomics, and global metabolic network analysis is useful to visualize complex metabolic disturbances after severe SCI. Furthermore, our findings may provide a new quantitative injury severity evaluation model for clinical use.
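
    The linear discrimination step described above can be sketched with scikit-learn: rats described by an Eigen-metabolome feature matrix are projected onto discriminant axes that separate severe, mild and control groups. The data matrix below is simulated, not the 1H-NMR measurements, and group sizes are arbitrary.

```python
# Sketch: linear discriminant analysis on an Eigen-metabolome feature matrix.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
groups = {"control": 0.0, "mild": 1.5, "severe": 3.0}   # mean shifts (toy)
X = np.vstack([rng.normal(loc=shift, size=(10, 15))      # 10 rats x 15 metabolites
               for shift in groups.values()])
y = np.repeat(list(groups), 10)

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
Z = lda.transform(X)          # rats projected onto two discriminant axes
print("training accuracy:", lda.score(X, y))
```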

  13. Establishment of quantitative severity evaluation model for spinal cord injury by metabolomic fingerprinting.

    Directory of Open Access Journals (Sweden)

    Jin Peng

    Full Text Available Spinal cord injury (SCI is a devastating event with a limited hope for recovery and represents an enormous public health issue. It is crucial to understand the disturbances in the metabolic network after SCI to identify injury mechanisms and opportunities for treatment intervention. Through plasma 1H-nuclear magnetic resonance (NMR screening, we identified 15 metabolites that made up an "Eigen-metabolome" capable of distinguishing rats with severe SCI from healthy control rats. Forty enzymes regulated these 15 metabolites in the metabolic network. We also found that 16 metabolites regulated by 130 enzymes in the metabolic network impacted neurobehavioral recovery. Using the Eigen-metabolome, we established a linear discrimination model to cluster rats with severe and mild SCI and control rats into separate groups and identify the interactive relationships between metabolic biomarkers in the global metabolic network. We identified 10 clusters in the global metabolic network and defined them as distinct metabolic disturbance domains of SCI. Metabolic paths such as retinal, glycerophospholipid, arachidonic acid metabolism; NAD-NADPH conversion process, tyrosine metabolism, and cadaverine and putrescine metabolism were included. In summary, we presented a novel interdisciplinary method that integrates metabolomics and global metabolic network analysis to visualize metabolic network disturbances after SCI. Our study demonstrated the systems biological study paradigm that integration of 1H-NMR, metabolomics, and global metabolic network analysis is useful to visualize complex metabolic disturbances after severe SCI. Furthermore, our findings may provide a new quantitative injury severity evaluation model for clinical use.

  14. Gas chromatographic quantitative analysis of methanol in wine: operative conditions, optimization and calibration model choice.

    Science.gov (United States)

    Caruso, Rosario; Gambino, Grazia Laura; Scordino, Monica; Sabatino, Leonardo; Traulo, Pasqualino; Gagliano, Giacomo

    2011-12-01

    The influence of the wine distillation process on methanol content has been determined by quantitative analysis using gas chromatographic flame ionization (GC-FID) detection. A comparative study between direct injection of diluted wine and injection of distilled wine was performed. The distillation process does not affect methanol quantification in wines in proportions higher than 10%. While quantification performed on distilled samples gives more reliable results, a screening method for wine injection after a 1:5 water dilution could be employed. The proposed technique was found to be a compromise between the time consuming distillation process and direct wine injection. In the studied calibration range, the stability of the volatile compounds in the reference solution is concentration-dependent. The stability is higher in the less concentrated reference solution. To shorten the operation time, a stronger temperature ramp and carrier flow rate was employed. With these conditions, helium consumption and column thermal stress were increased. However, detection limits, calibration limits, and analytical method performances are not affected substantially by changing from normal to forced GC conditions. Statistical data evaluation were made using both ordinary (OLS) and bivariate least squares (BLS) calibration models. Further confirmation was obtained that limit of detection (LOD) values, calculated according to the 3sigma approach, are lower than the respective Hubaux-Vos (H-V) calculation method. H-V LOD depends upon background noise, calibration parameters and the number of reference standard solutions employed in producing the calibration curve. These remarks are confirmed by both calibration models used.
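
    The OLS calibration and the 3-sigma detection limit the abstract compares with the Hubaux-Vos approach can be sketched numerically; the concentrations and peak areas below are synthetic, and the Hubaux-Vos computation is omitted.

```python
# Sketch: OLS calibration line and 3-sigma limit of detection (LOD).
import numpy as np

conc = np.array([50.0, 100.0, 200.0, 400.0, 800.0])        # methanol, mg/L (assumed)
area = np.array([1.02e4, 2.05e4, 4.00e4, 8.10e4, 1.62e5])  # GC-FID peak areas (toy)

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
s_y = np.sqrt(np.sum(resid**2) / (len(conc) - 2))   # residual standard deviation

lod_3sigma = 3 * s_y / slope
print(f"calibration: area = {slope:.1f} * conc + {intercept:.1f}")
print(f"LOD (3-sigma): {lod_3sigma:.1f} mg/L")
```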

  15. Fechner's law in metacognition: A quantitative model of visual working memory confidence.

    Science.gov (United States)

    van den Berg, Ronald; Yoo, Aspen H; Ma, Wei Ji

    2017-03-01

    Although visual working memory (VWM) has been studied extensively, it is unknown how people form confidence judgments about their memories. Peirce (1878) speculated that Fechner's law-which states that sensation is proportional to the logarithm of stimulus intensity-might apply to confidence reports. Based on this idea, we hypothesize that humans map the precision of their VWM contents to a confidence rating through Fechner's law. We incorporate this hypothesis into the best available model of VWM encoding and fit it to data from a delayed-estimation experiment. The model provides an excellent account of human confidence rating distributions as well as the relation between performance and confidence. Moreover, the best-fitting mapping in a model with a highly flexible mapping closely resembles the logarithmic mapping, suggesting that no alternative mapping exists that accounts better for the data than Fechner's law. We propose a neural implementation of the model and find that this model also fits the behavioral data well. Furthermore, we find that jointly fitting memory errors and confidence ratings boosts the power to distinguish previously proposed VWM encoding models by a factor of 5.99 compared to fitting only memory errors. Finally, we show that Fechner's law also accounts for metacognitive judgments in a word recognition memory task, which is a first indication that it may be a general law in metacognition. Our work presents the first model to jointly account for errors and confidence ratings in VWM and could lay the groundwork for understanding the computational mechanisms of metacognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
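
    The hypothesized Fechner-law mapping from memory precision to a discrete confidence report can be sketched directly; the scaling parameters and rating range below are illustrative, not fitted values from the paper.

```python
# Sketch: confidence as a logarithmic function of memory precision (Fechner's law).
import numpy as np

def confidence_rating(precision, a=1.0, b=2.0, n_ratings=6):
    """Map precision J to a 1..n_ratings confidence report via c = a*log(J) + b."""
    c = a * np.log(precision) + b
    return int(np.clip(np.round(c), 1, n_ratings))

for J in [0.5, 2.0, 8.0, 32.0]:      # low to high memory precision
    print(f"precision {J:5.1f} -> confidence {confidence_rating(J)}")
```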

  16. DEVELOPMENT OF MODEL FOR QUANTITATIVE EVALUATION OF DYNAMICALLY STABLE FORMS OF RIVER CHANNELS

    Directory of Open Access Journals (Sweden)

    O. V. Zenkin

    2017-01-01

    systems. The determination of regularities in the development of bed forms and of the quantitative relations between their parameters is based on modeling the "right" forms of the riverbed. The research has resulted in the establishment and testing of a simulation modeling methodology that allows one to identify dynamically stable forms of the riverbed.

  17. Is it primary neuropsychiatric systemic lupus erythematosus? Performance of existing attribution models using physician judgment as the gold standard.

    Science.gov (United States)

    Fanouriakis, Antonis; Pamfil, Cristina; Rednic, Simona; Sidiropoulos, Prodromos; Bertsias, George; Boumpas, Dimitrios T

    2016-01-01

    Models for the attribution of neuropsychiatric manifestations to systemic lupus erythematosus (NPSLE) that incorporate the timing and type of manifestation and exclusion/confounding or favouring factors have been proposed. We tested their diagnostic performance against expert physician judgment. SLE patients with neuropsychiatric manifestations were identified through retrospective chart review. Manifestations were classified according to physician judgment as attributed to SLE, not attributed, or uncertain. Results were compared against the Systemic Lupus International Collaborating Clinics (SLICC) attribution models A and B, and a model introduced by the Italian Study Group on NPSLE. 191 patients experienced a total of 242 neuropsychiatric manifestations, 136 of which were attributed to SLE according to physician judgment. Both SLICC models showed high specificity (96.2% and 79.2% for models A and B, respectively) but low sensitivity (22.8% and 34.6%, respectively) against physician judgment. Exclusion of cases of headache, anxiety disorders, mild mood and cognitive disorders, and polyneuropathy without electrophysiologic confirmation led to modest increases in sensitivity (27.7% and 42.0% for SLICC models A and B, respectively) and reductions in specificity (94.8% and 65.5%, respectively). The Italian Group model showed good accuracy in NPSLE attribution, with an area under the curve of 0.862 in receiver operating characteristic analysis; values ≥7 showed the best combination of sensitivity and specificity (82.4% and 82.9%, respectively). Attribution models can be useful for NPSLE diagnosis in routine clinical practice, and their performance is superior for major neuropsychiatric manifestations. The Italian Study Group model is accurate, with values ≥7 showing the best combination of sensitivity and specificity.

  18. Assessment of existing and new modeling strategies for the simulation of OH* radiation in high-temperature flames

    Science.gov (United States)

    Fiala, Thomas; Sattelmayer, Thomas

    2016-03-01

    Four methods to calculate OH* radiation from numerical simulations of flames above 2700 K are presented: (1) A state-of-the-art chemiluminescence model: OH* emission is assumed to be proportional to the concentration of an excited sub-species OH*. OH* is implemented in the detailed chemical reaction mechanism. (2) A spectral model: emission and absorption are computed and integrated on a line-by-line basis from the HITRAN database. (3) An equilibrium filtered radiation model: it provides a very simple way to compute OH* emissivity in a post-processing step. This is a simplification of the chemiluminescence model suitable for high-temperature flames. (4) An extension of the latter model to approximate the influence of self-absorption. The advantages and limitations of all approaches are discussed from a physics-based perspective. Their performances are assessed in a laminar hydrogen-oxygen jet flame at varying pressure. The importance of self-absorption for OH* radiation is analyzed and emphasized. Recommendations for the model selection are given.

  19. Automated model-based quantitative analysis of phantoms with spherical inserts in FDG PET scans.

    Science.gov (United States)

    Ulrich, Ethan J; Sunderland, John J; Smith, Brian J; Mohiuddin, Imran; Parkhurst, Jessica; Plichta, Kristin A; Buatti, John M; Beichel, Reinhard R

    2018-01-01

    Quality control plays an increasingly important role in quantitative PET imaging and is typically performed using phantoms. The purpose of this work was to develop and validate a fully automated analysis method for two common PET/CT quality assurance phantoms: the NEMA NU-2 IQ and SNMMI/CTN oncology phantom. The algorithm was designed to only utilize the PET scan to enable the analysis of phantoms with thin-walled inserts. We introduce a model-based method for automated analysis of phantoms with spherical inserts. Models are first constructed for each type of phantom to be analyzed. A robust insert detection algorithm uses the model to locate all inserts inside the phantom. First, candidates for inserts are detected using a scale-space detection approach. Second, candidates are given an initial label using a score-based optimization algorithm. Third, a robust model fitting step aligns the phantom model to the initial labeling and fixes incorrect labels. Finally, the detected insert locations are refined and measurements are taken for each insert and several background regions. In addition, an approach for automated selection of NEMA and CTN phantom models is presented. The method was evaluated on a diverse set of 15 NEMA and 20 CTN phantom PET/CT scans. NEMA phantoms were filled with radioactive tracer solution at 9.7:1 activity ratio over background, and CTN phantoms were filled with 4:1 and 2:1 activity ratio over background. For quantitative evaluation, an independent reference standard was generated by two experts using PET/CT scans of the phantoms. In addition, the automated approach was compared against manual analysis, which represents the current clinical standard approach, of the PET phantom scans by four experts. The automated analysis method successfully detected and measured all inserts in all test phantom scans. It is a deterministic algorithm (zero variability), and the insert detection RMS error (i.e., bias) was 0.97, 1.12, and 1.48 mm for phantom
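
    The scale-space candidate-detection step can be sketched with Laplacian-of-Gaussian blob detection on a synthetic 2-D image. The published method operates on 3-D PET volumes and follows detection with score-based labeling and robust model fitting, which are omitted here; all parameters below are illustrative.

```python
# Sketch: scale-space (Laplacian-of-Gaussian) detection of sphere-like inserts.
import numpy as np
from skimage.feature import blob_log

img = np.zeros((128, 128))
for (r, c, rad) in [(40, 40, 6), (60, 90, 10), (95, 50, 14)]:  # synthetic inserts
    rr, cc = np.ogrid[:128, :128]
    img[(rr - r) ** 2 + (cc - c) ** 2 <= rad**2] = 1.0

# detect bright blobs across scales; blob radius is roughly sigma * sqrt(2)
blobs = blob_log(img, min_sigma=3, max_sigma=15, threshold=0.1)
for r, c, sigma in blobs:
    print(f"insert candidate at ({r:.0f},{c:.0f}), radius ~ {sigma * np.sqrt(2):.1f}")
```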

  20. Estimation of Low Quantity Genes: A Hierarchical Model for Analyzing Censored Quantitative Real-Time PCR Data

    OpenAIRE

    Boyer, Tim C.; Tim Hanson; Singer, Randall S.

    2013-01-01

    Analysis of gene quantities measured by quantitative real-time PCR (qPCR) can be complicated by observations that are below the limit of quantification (LOQ) of the assay. A hierarchical model estimated using MCMC methods was developed to analyze qPCR data of genes with observations that fall below the LOQ (censored observations). Simulated datasets with moderate to very high levels of censoring were used to assess the performance of the model; model results were compared to approaches that r...
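
    The censoring idea can be sketched with a Tobit-style likelihood in which below-LOQ observations contribute the cumulative probability mass below the limit rather than a point density. The paper's model is hierarchical and fitted by MCMC; this maximum-likelihood toy shows only the censored-likelihood component, with invented parameter values.

```python
# Sketch: maximum likelihood for normally distributed data censored below an LOQ.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
loq = 1.0                                  # log10 copies, limit of quantification
obs = rng.normal(1.2, 0.8, size=200)       # simulated log gene quantities
censored = obs < loq                       # below-LOQ observations are censored

def neg_log_lik(params):
    mu, log_sd = params
    sd = np.exp(log_sd)                    # log-parameterized to keep sd positive
    ll = norm.logpdf(obs[~censored], mu, sd).sum()
    ll += censored.sum() * norm.logcdf(loq, mu, sd)   # mass below the LOQ
    return -ll

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
print("estimated mean, sd:", fit.x[0], np.exp(fit.x[1]))
```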

  1. Interdiffusion of the aluminum magnesium system. Quantitative analysis and numerical model; Interdiffusion des Aluminium-Magnesium-Systems. Quantitative Analyse und numerische Modellierung

    Energy Technology Data Exchange (ETDEWEB)

    Seperant, Florian

    2012-03-21

    Aluminum coatings are a promising approach to protect magnesium alloys against corrosion and thereby making them accessible to a variety of technical applications. Thermal treatment enhances the adhesion of the aluminium coating on magnesium by interdiffusion. For a deeper understanding of the diffusion process at the interface, a quantitative description of the Al-Mg system is necessary. On the basis of diffusion experiments with infinite reservoirs of aluminum and magnesium, the interdiffusion coefficients of the intermetallic phases of the Al-Mg-system are calculated with the Sauer-Freise method for the first time. To solve contradictions in the literature concerning the intrinsic diffusion coefficients, the possibility of a bifurcation of the Kirkendall plane is considered. Furthermore, a physico-chemical description of interdiffusion is provided to interpret the observed phase transitions. The developed numerical model is based on a temporally varied discretization of the space coordinate. It exhibits excellent quantitative agreement with the experimentally measured concentration profile. This confirms the validity of the obtained diffusion coefficients. Moreover, the Kirkendall shift in the Al-Mg system is simulated for the first time. Systems with thin aluminum coatings on magnesium also exhibit a good correlation between simulated and experimental concentration profiles. Thus, the diffusion coefficients are also valid for Al-coated systems. Hence, it is possible to derive parameters for a thermal treatment by simulation, resulting in an optimized modification of the magnesium surface for technical applications.
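
    The numerical core, solving Fick's second law across a diffusion couple, can be sketched with an explicit finite-difference scheme. A constant interdiffusion coefficient is assumed here for simplicity, whereas the thesis uses concentration-dependent coefficients and a temporally varied discretization; all values are illustrative.

```python
# Sketch: explicit finite-difference solution of Fick's 2nd law, dc/dt = D d2c/dx2.
import numpy as np

D = 1e-14            # interdiffusion coefficient, m^2/s (illustrative)
L, nx = 20e-6, 200   # 20 micrometre domain, 200 grid points
dx = L / nx
dt = 0.4 * dx**2 / D # respect the explicit stability limit dt <= dx^2 / (2D)

c = np.zeros(nx)
c[: nx // 2] = 1.0   # Al-rich half against Mg-rich half (step initial profile)

for _ in range(20000):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, 0.0    # fixed ends emulate the infinite reservoirs

print("concentrations around the interface:", c[nx // 2 - 2 : nx // 2 + 2].round(3))
```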

  2. Quantitative Evaluation of Models for Solvent-based, On-column Focusing in Liquid Chromatography

    Science.gov (United States)

    Groskreutz, Stephen R.; Weber, Stephen G.

    2015-01-01

    On-column focusing or preconcentration is a well-known approach to increasing concentration sensitivity by generating transient conditions during the injection that result in high solute retention. Preconcentration results from two phenomena: (1) solutes are retained as they enter the column, their velocities being k′-dependent and lower than the mobile phase velocity, and (2) zones are compressed by the step gradient that results from the higher-elution-strength mobile phase passing through the solute zones. Several workers have derived the result that the ratio of the eluted zone width (in time) to the injected time width is k2/k1, where k1 is the retention factor of a solute in the sample solvent and k2 is the retention factor in the mobile phase (isocratic). Mills et al. proposed a different factor. To date, neither of the models has been adequately tested. The goal of this work was to quantitatively evaluate these two models. We used n-alkyl esters of p-hydroxybenzoic acid (parabens) as solutes. By making large injections to create obvious volume overload, we could accurately measure the ratio of widths (eluted/injected) over a range of values of k1 and k2. The Mills et al. model does not fit the data. The data are in general agreement with the factor k2/k1, but focusing is about 10% better than the prediction. We attribute the extra focusing to the fact that the second, compression, phenomenon produces a narrower zone than expected for the passage of a step gradient through the zone. PMID:26210110
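
    The width-ratio prediction is simple enough to state directly in code; the retention factors below are illustrative values, not the paraben data.

```python
# Sketch: predicted eluted/injected zone-width ratio for on-column focusing.
def focusing_factor(k1, k2):
    """Ratio of eluted to injected temporal zone width, predicted as k2/k1."""
    return k2 / k1

# strong solvent mismatch: high retention in the weak sample solvent (k1 = 20),
# moderate retention in the mobile phase (k2 = 2) -> zones ~10x narrower
print(focusing_factor(k1=20.0, k2=2.0))   # 0.1
```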

  3. A semi-quantitative approach for modelling crop response to soil fertility: Evaluation of the AquaCrop procedure

    OpenAIRE

    Van Gaelen, Hanne; Tsegay, Alemtsehay; Delbecque, Nele; Shrestha, Nirman; Garcia, Magali; Fajardo, Hector; Miranda, Roberto; Vanuytrecht, Eline; Abrha, Berhanu; Diels, Jan; Raes, Dirk

    2015-01-01

    Most crop models make use of a nutrient balance approach for modelling crop response to soil fertility. To counter the vast input data requirements that are typical of these models, the crop water productivity model AquaCrop adopts a semi-quantitative approach. Instead of providing nutrient levels, users of the model provide the soil fertility level as a model input. This level is expressed in terms of the expected impact on crop biomass production, which can be observed in the field or obtai...

  4. A quantitative exposure model simulating human norovirus transmission during preparation of deli sandwiches.

    Science.gov (United States)

    Stals, Ambroos; Jacxsens, Liesbeth; Baert, Leen; Van Coillie, Els; Uyttendaele, Mieke

    2015-03-02

    Human noroviruses (HuNoVs) are a major cause of foodborne gastroenteritis worldwide. They are often transmitted via infected and shedding food handlers manipulating foods such as deli sandwiches. The present study aimed to simulate HuNoV transmission during the preparation of deli sandwiches in a sandwich bar. A quantitative exposure model was developed by combining the GoldSim® and @Risk® software packages. Input data were collected from the scientific literature and from a two-week observational study performed at two sandwich bars. The model included three food handlers working during a three-hour shift on a shared working surface where deli sandwiches are prepared. The model consisted of three components. The first component simulated the preparation of the deli sandwiches and contained the HuNoV reservoirs: locations within the model allowing the accumulation of HuNoV and the action of intervention measures. The second component covered the contamination sources, namely (1) the initial HuNoV-contaminated lettuce used on the sandwiches and (2) HuNoV originating from a shedding food handler. The third component included four possible intervention measures to reduce HuNoV transmission: hand and surface disinfection during preparation of the sandwiches, hand gloving, and hand washing after a restroom visit. A single HuNoV-shedding food handler could cause mean levels of 43±18, 81±37 and 18±7 HuNoV particles on the deli sandwiches, hands and working surfaces, respectively. Introduction of contaminated lettuce as the only source of HuNoV resulted in the presence of 6.4±0.8 and 4.3±0.4 HuNoV particles on the food and hand reservoirs. The inclusion of hand and surface disinfection or hand gloving as a single intervention measure was not effective in the model, as only marginal reductions of HuNoV levels were noticeable in the different reservoirs. High compliance with hand washing after a restroom visit, however, reduced HuNoV presence substantially on all reservoirs.

  5. Mapping of quantitative trait loci by using genetic markers : an overview of biometrical models used

    NARCIS (Netherlands)

    Jansen, Ritsert C.

    1994-01-01

    In crop plants quantitative variation is a feature of many important traits, such as yield, quality or disease resistance. Means of analyzing quantitative variation and especially of uncovering its potential genetic basis are therefore of prime importance for breeding purposes. It has been

  6. MHD-model for low-frequency waves in a tokamak with toroidal plasma rotation and problem of existence of global geodesic acoustic modes

    Energy Technology Data Exchange (ETDEWEB)

    Lakhin, V. P.; Sorokina, E. A., E-mail: sorokina.ekaterina@gmail.com, E-mail: vilkiae@gmail.com; Ilgisonis, V. I. [National Research Centre Kurchatov Institute (Russian Federation); Konovaltseva, L. V. [Peoples’ Friendship University of Russia (Russian Federation)

    2015-12-15

    A set of reduced linear equations for the description of low-frequency perturbations in a toroidally rotating plasma in an axisymmetric tokamak is derived in the framework of ideal magnetohydrodynamics. A model suitable for the study of global geodesic acoustic modes (GGAMs) is designed. An example of the use of the developed model for the derivation of integral conditions for GGAM existence and of the corresponding dispersion relation is presented. The paper is dedicated to the memory of academician V.D. Shafranov.

  7. Multiple-Strain Approach and Probabilistic Modeling of Consumer Habits in Quantitative Microbial Risk Assessment: A Quantitative Assessment of Exposure to Staphylococcal Enterotoxin A in Raw Milk.

    Science.gov (United States)

    Crotta, Matteo; Rizzi, Rita; Varisco, Giorgio; Daminelli, Paolo; Cunico, Elena Cosciani; Luini, Mario; Graber, Hans Ulrich; Paterlini, Franco; Guitian, Javier

    2016-03-01

    Quantitative microbial risk assessment (QMRA) models are extensively applied to inform the management of a broad range of food safety risks. Inevitably, QMRA modeling involves an element of simplification of the biological process of interest. Two features that are frequently simplified or disregarded are the pathogenicity of multiple strains of a single pathogen and consumer behavior at the household level. In this study, we developed a QMRA model with a multiple-strain approach and a consumer phase module (CPM) based on uncertainty distributions fitted from field data. We modeled exposure to staphylococcal enterotoxin A in raw milk in Lombardy; a specific enterotoxin production module was thus included. The model is adaptable and could be used to assess the risk related to other pathogens in raw milk as well as other staphylococcal enterotoxins. The multiple-strain approach, implemented as a multinomial process, allowed the inclusion of variability and uncertainty with regard to pathogenicity at the bacterial level. Data from 301 questionnaires submitted to raw milk consumers were used to obtain uncertainty distributions for the CPM. The distributions were modeled to be easily updatable with further data or evidence. The sources of uncertainty due to the multiple-strain approach and the CPM were identified, and their impact on the output was assessed by comparing specific scenarios to the baseline. When the distributions reflecting the uncertainty in consumer behavior were fixed to the 95th percentile, the risk of exposure increased up to 160 times. This reflects the importance of taking into consideration the diversity of consumers' habits at the household level and the impact that the lack of knowledge about variables in the CPM can have on the final QMRA estimates. The multiple-strain approach lends itself to use in other food matrices besides raw milk and allows the model to better capture the complexity of the real world and to be capable of geographical

  8. Post-Hoc Pattern-Oriented Testing and Tuning of an Existing Large Model: Lessons from the Field Vole

    DEFF Research Database (Denmark)

    Topping, Christopher John; Dalkvist, Trine; Grimm, Volker

    2012-01-01

    environment closely. We therefore conclude that post-hoc POM is a useful and viable way to test a highly complex simulation model, but also warn against the dangers of over-fitting to real world patterns that lack details in their explanatory driving factors. To overcome some of these obstacles we suggest...

  9. Modeling development and quantitative trait mapping reveal independent genetic modules for leaf size and shape.

    Science.gov (United States)

    Baker, Robert L; Leong, Wen Fung; Brock, Marcus T; Markelz, R J Cody; Covington, Michael F; Devisetty, Upendra K; Edwards, Christine E; Maloof, Julin; Welch, Stephen; Weinig, Cynthia

    2015-10-01

    Improved predictions of fitness and yield may be obtained by characterizing the genetic controls and environmental dependencies of organismal ontogeny. Elucidating the shape of growth curves may reveal novel genetic controls that single-time-point (STP) analyses do not because, in theory, infinite numbers of growth curves can result in the same final measurement. We measured leaf lengths and widths in Brassica rapa recombinant inbred lines (RILs) throughout ontogeny. We modeled leaf growth and allometry as function valued traits (FVT), and examined genetic correlations between these traits and aspects of phenology, physiology, circadian rhythms and fitness. We used RNA-seq to construct a SNP linkage map and mapped trait quantitative trait loci (QTL). We found genetic trade-offs between leaf size and growth rate FVT and uncovered differences in genotypic and QTL correlations involving FVT vs STPs. We identified leaf shape (allometry) as a genetic module independent of length and width and identified selection on FVT parameters of development. Leaf shape is associated with venation features that affect desiccation resistance. The genetic independence of leaf shape from other leaf traits may therefore enable crop optimization in leaf shape without negative effects on traits such as size, growth rate, duration or gas exchange. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.

  10. Quantitative studies of animal colour constancy: using the chicken as model

    Science.gov (United States)

    2016-01-01

    Colour constancy is the capacity of visual systems to keep colour perception constant despite changes in the illumination spectrum. Colour constancy has been tested extensively in humans and has also been described in many animals. In humans, colour constancy is often studied quantitatively, but besides humans, this has only been done for the goldfish and the honeybee. In this study, we quantified colour constancy in the chicken by training the birds in a colour discrimination task and testing them in changed illumination spectra to find the largest illumination change in which they were able to remain colour-constant. We used the receptor noise limited model for animal colour vision to quantify the illumination changes, and found that colour constancy performance depended on the difference between the colours used in the discrimination task, the training procedure and the time the chickens were allowed to adapt to a new illumination before making a choice. We analysed literature data on goldfish and honeybee colour constancy with the same method and found that chickens can compensate for larger illumination changes than both. We suggest that future studies on colour constancy in non-human animals could use a similar approach to allow for comparison between species and populations. PMID:27170714
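
    A receptor-noise-limited colour distance of the kind used to quantify the illumination changes can be sketched for the simplest (dichromat) case: receptor contrasts are log quantum-catch ratios, and noise follows a Weber fraction scaled by relative receptor abundance. The Weber fraction, abundances and quantum catches below are illustrative, and the chicken analysis involves more receptor classes.

```python
# Sketch: receptor-noise-limited (RNL) chromatic distance for two receptor types.
import numpy as np

def rnl_distance_dichromat(qA, qB, weber=0.1, abundance=(1.0, 2.0)):
    """Chromatic distance (in JND-like units) between stimuli A and B."""
    df = np.log(np.asarray(qA) / np.asarray(qB))    # receptor contrasts
    rel = np.asarray(abundance) / abundance[0]
    e = weber / np.sqrt(rel)                        # noise per receptor channel
    return abs(df[0] - df[1]) / np.sqrt(e[0] ** 2 + e[1] ** 2)

# quantum catches of two colours under the same illumination (arbitrary units)
print(rnl_distance_dichromat(qA=[0.8, 0.4], qB=[0.7, 0.5]))
```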

  11. Assessing the toxic effects of ethylene glycol ethers using Quantitative Structure Toxicity Relationship models.

    Science.gov (United States)

    Ruiz, Patricia; Mumtaz, Moiz; Gombar, Vijay

    2011-07-15

    Experimental determination of toxicity profiles consumes a great deal of time, money, and other resources. Consequently, businesses, societies, and regulators strive for reliable alternatives such as Quantitative Structure Toxicity Relationship (QSTR) models to fill gaps in toxicity profiles of compounds of concern to human health. The use of glycol ethers and their health effects have recently attracted the attention of international organizations such as the World Health Organization (WHO). The board members of Concise International Chemical Assessment Documents (CICAD) recently identified inadequate testing as well as gaps in toxicity profiles of ethylene glycol mono-n-alkyl ethers (EGEs). The CICAD board requested the ATSDR Computational Toxicology and Methods Development Laboratory to conduct QSTR assessments of certain specific toxicity endpoints for these chemicals. In order to evaluate the potential health effects of EGEs, CICAD proposed a critical QSTR analysis of the mutagenicity, carcinogenicity, and developmental effects of EGEs and other selected chemicals. We report here results of the application of QSTRs to assess rodent carcinogenicity, mutagenicity, and developmental toxicity of four EGEs: 2-methoxyethanol, 2-ethoxyethanol, 2-propoxyethanol, and 2-butoxyethanol and their metabolites. Neither mutagenicity nor carcinogenicity is indicated for the parent compounds, but these compounds are predicted to be developmental toxicants. The predicted toxicity effects were subjected to reverse QSTR (rQSTR) analysis to identify structural attributes that may be the main drivers of the developmental toxicity potential of these compounds. Copyright © 2010. Published by Elsevier Inc.

  12. Quantitative models of persistence and relapse from the perspective of behavioral momentum theory: Fits and misfits.

    Science.gov (United States)

    Nevin, John A; Craig, Andrew R; Cunningham, Paul J; Podlesnik, Christopher A; Shahan, Timothy A; Sweeney, Mary M

    2017-08-01

    We review quantitative accounts of behavioral momentum theory (BMT), its application to clinical treatment, and its extension to post-intervention relapse of target behavior. We suggest that its extension can account for relapse using reinstatement and renewal models, but that its application to resurgence is flawed both conceptually and in its failure to account for recent data. We propose that the enhanced persistence of target behavior engendered by alternative reinforcers is limited to their concurrent availability within a distinctive stimulus context. However, a failure to find effects of stimulus-correlated reinforcer rates in a Pavlovian-to-Instrumental Transfer (PIT) paradigm challenges even a straightforward Pavlovian account of alternative reinforcer effects. BMT has been valuable in understanding basic research findings and in guiding clinical applications and accounting for their data, but alternatives are needed that can account more effectively for resurgence while encompassing basic data on resistance to change as well as other forms of relapse. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Quantitative profiling of brain lipid raft proteome in a mouse model of fragile X syndrome.

    Science.gov (United States)

    Kalinowska, Magdalena; Castillo, Catherine; Francesconi, Anna

    2015-01-01

    Fragile X Syndrome, a leading cause of inherited intellectual disability and autism, arises from transcriptional silencing of the FMR1 gene encoding an RNA-binding protein, Fragile X Mental Retardation Protein (FMRP). FMRP can regulate the expression of approximately 4% of brain transcripts through its role in regulation of mRNA transport, stability and translation, thus providing a molecular rationale for its potential pleiotropic effects on neuronal and brain circuitry function. Several intracellular signaling pathways are dysregulated in the absence of FMRP suggesting that cellular deficits may be broad and could result in homeostatic changes. Lipid rafts are specialized regions of the plasma membrane, enriched in cholesterol and glycosphingolipids, involved in regulation of intracellular signaling. Among transcripts targeted by FMRP, a subset encodes proteins involved in lipid biosynthesis and homeostasis, dysregulation of which could affect the integrity and function of lipid rafts. Using a quantitative mass spectrometry-based approach we analyzed the lipid raft proteome of Fmr1 knockout mice, an animal model of Fragile X syndrome, and identified candidate proteins that are differentially represented in Fmr1 knockout mice lipid rafts. Furthermore, network analysis of these candidate proteins reveals connectivity between them and predicts functional connectivity with genes encoding components of myelin sheath, axonal processes and growth cones. Our findings provide insight to aid identification of molecular and cellular dysfunctions arising from Fmr1 silencing and for uncovering shared pathologies between Fragile X syndrome and other autism spectrum disorders.

  14. Model-independent quantitative measurement of nanomechanical oscillator vibrations using electron-microscope linescans

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Huan; Fenton, J. C.; Chiatti, O. [London Centre for Nanotechnology, University College London, 17–19 Gordon Street, London WC1H 0AH (United Kingdom); Warburton, P. A. [London Centre for Nanotechnology, University College London, 17–19 Gordon Street, London WC1H 0AH (United Kingdom); Department of Electronic and Electrical Engineering, University College London, Torrington Place, London WC1E 7JE (United Kingdom)

    2013-07-15

    Nanoscale mechanical resonators are highly sensitive devices and, therefore, for application as highly sensitive mass balances, they are potentially superior to micromachined cantilevers. The absolute measurement of nanoscale displacements of such resonators remains a challenge, however, since the optical signal reflected from a cantilever whose dimensions are sub-wavelength is at best very weak. We describe a technique for quantitative analysis and fitting of scanning-electron microscope (SEM) linescans across a cantilever resonator, involving deconvolution from the vibrating resonator profile using the stationary resonator profile. This enables determination of the absolute amplitude of nanomechanical cantilever oscillations even when the oscillation amplitude is much smaller than the cantilever width. This technique is independent of any model of secondary-electron emission from the resonator and is, therefore, applicable to resonators with arbitrary geometry and material inhomogeneity. We demonstrate the technique using focussed-ion-beam–deposited tungsten cantilevers of radius ∼60–170 nm inside a field-emission SEM, with excitation of the cantilever by a piezoelectric actuator allowing measurement of the full frequency response. Oscillation amplitudes approaching the size of the primary electron-beam can be resolved. We further show that the optimum electron-beam scan speed is determined by a compromise between deflection of the cantilever at low scan speeds and limited spatial resolution at high scan speeds. Our technique will be an important tool for use in precise characterization of nanomechanical resonator devices.
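
    The linescan idea can be sketched as its forward problem plus a one-parameter fit: a time-averaged scan across a sinusoidally vibrating cantilever equals the stationary profile convolved with the arcsine position distribution of the oscillation, so the amplitude can be recovered by least squares without any emission model. The profiles below are synthetic, and the Gaussian stationary profile is an assumption.

```python
# Sketch: recover oscillation amplitude from a blurred (time-averaged) linescan.
import numpy as np
from scipy.optimize import minimize_scalar

x = np.linspace(-500, 500, 1001)             # scan coordinate, nm
stationary = np.exp(-x**2 / (2 * 80**2))     # stationary linescan (assumed shape)

def blurred(profile, amp):
    # position pdf of a sinusoidal oscillator: arcsine law on [-amp, amp]
    u = x[np.abs(x) < amp]
    pdf = 1.0 / (np.pi * np.sqrt(amp**2 - u**2))
    pdf /= pdf.sum()
    return np.convolve(profile, pdf, mode="same")

measured = blurred(stationary, amp=120.0)    # simulated vibrating-cantilever scan

fit = minimize_scalar(lambda a: np.sum((blurred(stationary, a) - measured) ** 2),
                      bounds=(10, 400), method="bounded")
print(f"recovered oscillation amplitude: {fit.x:.1f} nm")
```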

  15. Synthesis, photodynamic activity, and quantitative structure-activity relationship modelling of a series of BODIPYs.

    Science.gov (United States)

    Caruso, Enrico; Gariboldi, Marzia; Sangion, Alessandro; Gramatica, Paola; Banfi, Stefano

    2017-02-01

    Here we report the synthesis of eleven new BODIPYs (14-24) characterized by the presence of an aromatic ring at the 8 (meso) position and of iodine atoms at the pyrrolic 2,6 positions. These molecules, together with twelve BODIPYs previously reported by us (1-12), represent a large panel of BODIPYs bearing different atoms or groups as substituents on the aromatic moiety. Two physico-chemical features (¹O₂ generation rate and lipophilicity), which can play a fundamental role in their performance as photosensitizers, have been studied. The in vitro photo-induced cell-killing efficacy of the 23 PSs was studied on the SKOV3 cell line by treating the cells for 24 h in the dark and then irradiating for 2 h with a green LED device (fluence 25.2 J/cm²). The cell-killing efficacy was assessed with the MTT test and compared with that of the meso-unsubstituted compound (13). In order to understand the possible effect of the substituents, a predictive quantitative structure-activity relationship (QSAR) regression model, based on theoretical holistic molecular descriptors, was developed. The results clearly indicate that the presence of an aromatic ring is fundamental for an excellent photodynamic response, whereas the electronic effects and the position of the substituents on the aromatic ring do not influence the photodynamic efficacy. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Model-based estimation of quantitative ultrasound variables at the proximal femur.

    Science.gov (United States)

    Dencks, Stefanie; Barkmann, Reinhard; Padilla, Frédéric; Laugier, Pascal; Schmitz, Georg; Glüer, Claus-C

    2008-01-01

    To improve the prediction of the osteoporotic fracture risk at the proximal femur we are developing a scanner for quantitative ultrasound (QUS) measurements at this site. Due to multipath transmission in this complex shaped bone, conventional signal processing techniques developed for QUS measurements at peripheral sites frequently fail. Therefore, we propose a model-based estimation of the QUS variables and analyze the performance of the new algorithm. Applying the proposed method to QUS scans of excised proximal femurs increased the fraction of evaluable signals from approx. 60% (using conventional algorithms) to 97%. The correlation of the standard QUS variables broadband ultrasound attenuation (BUA) and speed of sound (SOS) with the established variable bone mineral density (BMD) reported in previous studies is maintained (BUA/BMD: r² = 0.69; SOS/BMD: r² = 0.71; SOS+BUA/BMD: r² = 0.88). Additionally, different wave types could be clearly detected and characterized in the trochanteric region. The ability to separate superimposed signals with this approach opens up further diagnostic potential for evaluating waves of different sound paths and wave types through bone tissue.

  17. Quantitative Proteomic Profiling of Low-Dose Ionizing Radiation Effects in a Human Skin Model

    Directory of Open Access Journals (Sweden)

    Shawna M. Hengel

    2014-07-01

    Full Text Available To assess responses to low-dose ionizing radiation (LD-IR) exposures potentially encountered during medical diagnostic procedures, nuclear accidents or terrorist acts, a quantitative proteomic approach was used to identify changes in protein abundance in a reconstituted human skin tissue model treated with 0.1 Gy of ionizing radiation. To improve the dynamic range of the assay, subcellular fractionation was employed to remove highly abundant structural proteins and to provide insight into radiation-induced alterations in protein localization. Relative peptide quantification across cellular fractions, control and irradiated samples was performed using 8-plex iTRAQ labeling followed by online two-dimensional nano-scale liquid chromatography and high resolution MS/MS analysis. A total of 107 proteins were detected with statistically significant radiation-induced changes in abundance (>1.5-fold) and/or subcellular localization compared to controls. The top biological pathways identified using bioinformatics include organ development, anatomical structure formation and the regulation of the actin cytoskeleton. From the proteomic data, a change in proteolytic processing and subcellular localization of the skin barrier protein, filaggrin, was identified, and the results were confirmed by western blotting. These data indicate that post-transcriptional regulation of protein abundance, localization and proteolytic processing plays an important role in regulating radiation response in human tissues.

  18. Quantitative profiling of brain lipid raft proteome in a mouse model of fragile X syndrome.

    Directory of Open Access Journals (Sweden)

    Magdalena Kalinowska

    Full Text Available Fragile X Syndrome, a leading cause of inherited intellectual disability and autism, arises from transcriptional silencing of the FMR1 gene encoding an RNA-binding protein, Fragile X Mental Retardation Protein (FMRP). FMRP can regulate the expression of approximately 4% of brain transcripts through its role in regulation of mRNA transport, stability and translation, thus providing a molecular rationale for its potential pleiotropic effects on neuronal and brain circuitry function. Several intracellular signaling pathways are dysregulated in the absence of FMRP suggesting that cellular deficits may be broad and could result in homeostatic changes. Lipid rafts are specialized regions of the plasma membrane, enriched in cholesterol and glycosphingolipids, involved in regulation of intracellular signaling. Among transcripts targeted by FMRP, a subset encodes proteins involved in lipid biosynthesis and homeostasis, dysregulation of which could affect the integrity and function of lipid rafts. Using a quantitative mass spectrometry-based approach we analyzed the lipid raft proteome of Fmr1 knockout mice, an animal model of Fragile X syndrome, and identified candidate proteins that are differentially represented in Fmr1 knockout mice lipid rafts. Furthermore, network analysis of these candidate proteins reveals connectivity between them and predicts functional connectivity with genes encoding components of myelin sheath, axonal processes and growth cones. Our findings provide insight to aid identification of molecular and cellular dysfunctions arising from Fmr1 silencing and for uncovering shared pathologies between Fragile X syndrome and other autism spectrum disorders.

  19. Quantitative prediction of cellular metabolism with constraint-based models: the COBRA Toolbox.

    Science.gov (United States)

    Becker, Scott A; Feist, Adam M; Mo, Monica L; Hannum, Gregory; Palsson, Bernhard Ø; Herrgard, Markus J

    2007-01-01

    The manner in which microorganisms utilize their metabolic processes can be predicted using constraint-based analysis of genome-scale metabolic networks. Herein, we present the constraint-based reconstruction and analysis toolbox, a software package running in the Matlab environment, which allows for quantitative prediction of cellular behavior using a constraint-based approach. Specifically, this software allows predictive computations of both steady-state and dynamic optimal growth behavior, the effects of gene deletions, comprehensive robustness analyses, sampling the range of possible cellular metabolic states and the determination of network modules. Functions enabling these calculations are included in the toolbox, allowing a user to input a genome-scale metabolic model distributed in Systems Biology Markup Language format and perform these calculations with just a few lines of code. The results are predictions of cellular behavior that have been verified as accurate in a growing body of research. After software installation, calculation time is minimal, allowing the user to focus on the interpretation of the computational results.
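    The toolbox itself runs in Matlab; purely as an illustration of the same constraint-based workflow, the community Python port cobrapy exposes analogous calls (the model file name below is hypothetical):

        import cobra
        from cobra.flux_analysis import single_gene_deletion

        # load a genome-scale model distributed in SBML format (file name hypothetical)
        model = cobra.io.read_sbml_model("e_coli_core.xml")

        # steady-state optimal growth via flux balance analysis
        solution = model.optimize()
        print("predicted growth rate:", solution.objective_value)

        # effect of deleting each gene in turn
        print(single_gene_deletion(model).head())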

  20. Quantitative studies of animal colour constancy: using the chicken as model.

    Science.gov (United States)

    Olsson, Peter; Wilby, David; Kelber, Almut

    2016-05-11

    Colour constancy is the capacity of visual systems to keep colour perception constant despite changes in the illumination spectrum. Colour constancy has been tested extensively in humans and has also been described in many animals. In humans, colour constancy is often studied quantitatively, but besides humans, this has only been done for the goldfish and the honeybee. In this study, we quantified colour constancy in the chicken by training the birds in a colour discrimination task and testing them in changed illumination spectra to find the largest illumination change in which they were able to remain colour-constant. We used the receptor noise limited model for animal colour vision to quantify the illumination changes, and found that colour constancy performance depended on the difference between the colours used in the discrimination task, the training procedure and the time the chickens were allowed to adapt to a new illumination before making a choice. We analysed literature data on goldfish and honeybee colour constancy with the same method and found that chickens can compensate for larger illumination changes than both. We suggest that future studies on colour constancy in non-human animals could use a similar approach to allow for comparison between species and populations. © 2016 The Author(s).
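    For orientation, the core quantity in the receptor noise limited model is a chromatic distance ΔS in units of just-noticeable differences. A minimal sketch of the dichromatic case is given below; the chicken is a tetrachromat, for which the full quadratic form of the model is needed, and all values here are illustrative assumptions:

        import numpy as np

        def rnl_distance_dichromat(qA, qB, e):
            # qA, qB: quantum catches of two receptor types for stimuli A and B
            # e: Weber fractions (noise) of the two receptor channels
            df = np.log(np.asarray(qA, float) / np.asarray(qB, float))  # receptor contrasts
            return abs(df[0] - df[1]) / np.hypot(e[0], e[1])            # deltaS in JNDs

        # illustrative quantum catches of the same surface under two illuminants
        print(rnl_distance_dichromat([0.32, 0.45], [0.30, 0.50], e=[0.05, 0.05]))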

  1. Existing air sparging model and literature review for the development of an air sparging optimization decision tool

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-01

    The objectives of this Report are two-fold: (1) to provide overviews of the state-of-the-art and state-of-the-practice with respect to air sparging technology, air sparging models and related or augmentation technologies (e.g., soil vapor extraction); and (2) to provide the basis for the development of the conceptual Decision Tool. The Project Team conducted an exhaustive review of available literature. The complete listing of the documents, numbering several hundred and reviewed as a part of this task, is included in Appendix A. Even with the large amount of material written regarding the development and application of air sparging, there are still significant gaps in the technical community's understanding of the remediation technology. The results of the literature review are provided in Section 2. In Section 3, an overview of seventeen conceptual, theoretical, mathematical and empirical models is presented. Detailed descriptions of each of the models reviewed are provided in Appendix B. Included in Appendix D is a copy of the questionnaire used to compile information about the models. The remaining sections of the document reflect the analysis and synthesis of the information gleaned during the literature and model reviews. The results of these efforts provide the basis for development of the decision tree and conceptual decision tool for determining applicability and optimization of air sparging. The preliminary decision tree and accompanying information provided in Section 6 describe a three-tiered approach for determining air sparging applicability: comparison with established scenarios; calculation of conceptual design parameters; and conducting pilot-scale studies to confirm applicability. The final two sections of this document provide listings of the key success factors which will be used for evaluating the utility of the Decision Tool and descriptions of potential applications for Decision Tool use.

  2. Heterogeneous Structure of Stem Cells Dynamics: Statistical Models and Quantitative Predictions

    Science.gov (United States)

    Bogdan, Paul; Deasy, Bridget M.; Gharaibeh, Burhan; Roehrs, Timo; Marculescu, Radu

    2014-01-01

    Understanding stem cell (SC) population dynamics is essential for developing models that can be used in basic science and medicine, to aid in predicting cell fate. These models can be used as tools, e.g., in studying patho-physiological events at the cellular and tissue level, predicting (mal)functions along the developmental course, and personalized regenerative medicine. Using time-lapse imaging and statistical tools, we show that the dynamics of SC populations involve a heterogeneous structure consisting of multiple sub-population behaviors. Using non-Gaussian statistical approaches, we identify the co-existence of fast and slow dividing subpopulations, and quiescent cells, in stem cells from three species. The mathematical analysis also shows that, instead of developing independently, SCs exhibit a time-dependent fractal behavior as they interact with each other through molecular and tactile signals. These findings suggest that more sophisticated models of SC dynamics should view SC populations as a collective and avoid the simplifying homogeneity assumption by accounting for the presence of more than one dividing sub-population, and their multi-fractal characteristics. PMID:24769917

  3. Characterization of thyroid cancer in mouse models using high-frequency quantitative ultrasound techniques

    Science.gov (United States)

    Lavarello, R. J.; Ridgway, W. R.; Sarwate, S.; Oelze, M. L.

    2013-01-01

    Currently, the evaluation of thyroid cancer relies on the use of fine needle aspiration biopsy because non-invasive imaging methods do not provide sufficient levels of accuracy for the diagnosis of this disease. In this study, the potential of quantitative ultrasound methods for characterizing thyroid tissues was studied using a rodent model ex vivo. A high-frequency ultrasonic scanning system (40 MHz) was used to scan thyroids extracted from mice that had spontaneously developed thyroid lesions (cancerous or benign). Three sets of mice with different predispositions to developing thyroid anomalies (a C-cell adenoma, a papillary thyroid carcinoma (PTC), and a follicular variant papillary thyroid carcinoma (FV-PTC)) were studied. A fourth set of mice did not develop thyroid anomalies (normal mice) and was used as a control. The backscatter coefficient was estimated from excised thyroid lobes for the different mice. From the backscatter coefficient versus frequency (25 to 45 MHz), the effective scatterer diameter (ESD) and effective acoustic concentration (EAC) were estimated. From the envelope of the backscattered signal, the homodyned K distribution was used to estimate the k parameter (ratio of coherent to incoherent signal energy) and the μ parameter (number of scatterers per resolution cell). Statistically significant differences were observed between the malignant thyroids and the normal thyroids based on the ESD, EAC and μ parameters. The mean values of the ESDs were 18.0 ± 0.92, 15.9 ± 0.81, and 21.5 ± 1.80 µm for the PTC, FV-PTC and the normal thyroids, respectively. The mean values of the EACs were 59.4 ± 1.74, 62.7 ± 1.61, and 52.9 ± 3.42 dB (mm−3) for the PTC, FV-PTC and the normal thyroids, respectively. The mean values of the μ parameters were 2.55 ± 0.37, 2.59 ± 0.43, and 1.56 ± 0.99 for the PTC, FV-PTC and the normal thyroids, respectively. Statistically significant differences were observed between the malignant thyroids and the C

  4. Quantitative Validation of the Integrated Medical Model (IMM) for ISS Missions

    Science.gov (United States)

    Young, Millennia; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Goodenow, D. A.; Myers, J. G.

    2016-01-01

    Lifetime Surveillance of Astronaut Health (LSAH) provided observed medical event data on 33 ISS and 111 STS person-missions for use in further improving and validating the Integrated Medical Model (IMM). Using only the crew characteristics from these observed missions, the newest development version, IMM v4.0, will simulate these missions to predict medical events and outcomes. Comparing IMM predictions to the actual observed medical event counts will provide external validation and identify areas of possible improvement. In an effort to improve the power of detecting differences in this validation study, the totals over each program (ISS and STS) will serve as the main quantitative comparison objectives, specifically the following parameters: total medical events (TME), probability of loss of crew life (LOCL), and probability of evacuation (EVAC). Scatter plots of observed versus median predicted TMEs (with error bars reflecting the simulation intervals) will graphically display comparisons, while linear regression will serve as the statistical test of agreement. Two scatter plots will be analyzed: (1) one in which each point reflects a mission, and (2) one in which each point reflects a condition-specific total number of occurrences. The coefficient of determination (R²) resulting from a linear regression with no intercept bias (intercept fixed at zero) will serve as an overall metric of agreement between IMM and the real-world system (RWS). In an effort to identify as many discrepancies as possible for further inspection, the α-level for all statistical tests comparing IMM predictions to observed data will be set to 0.1. This less stringent criterion, along with the multiple testing being conducted, should detect all perceived differences, including many false positive signals resulting from random variation. The results of these analyses will reveal areas of the model requiring adjustment to improve overall IMM output, which will thereby provide better decision support for
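    The agreement metric described above reduces to a zero-intercept regression and its coefficient of determination; a minimal sketch follows (the uncentered total sum of squares used for the no-intercept R² is our convention, not a detail stated in the abstract, and the example values are illustrative):

        import numpy as np

        def no_intercept_agreement(observed, predicted):
            # regression through the origin: observed ~ slope * predicted
            x = np.asarray(predicted, dtype=float)
            y = np.asarray(observed, dtype=float)
            slope = np.sum(x * y) / np.sum(x * x)
            ss_res = np.sum((y - slope * x) ** 2)
            ss_tot = np.sum(y ** 2)  # uncentered, as the intercept is fixed at zero
            return slope, 1.0 - ss_res / ss_tot

        # illustrative observed vs predicted total medical events per mission
        print(no_intercept_agreement([4, 7, 2, 9], [5, 6, 3, 8]))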

  5. A model for the harmonisation of test results of different quantitative D-dimer methods.

    Science.gov (United States)

    Meijer, Piet; Haverkate, Frits; Kluft, Cornelis; de Moerloose, Philippe; Verbruggen, Bert; Spannagl, Michael

    2006-03-01

    The numerical test results of different D-dimer assays vary widely. Because of the complexity of the target analyte as well as the variability in specificity of different D-dimer assays, only harmonisation of the test results seems feasible. A single conversion factor cannot account, for several methods, for the lack of commutability between test results and consensus values at different D-dimer levels, which probably reflects the methods' mutually different responses to high and low levels. We therefore designed a harmonisation model based on the transformation of a method-specific regression line to a reference regression line. We used the data from the measurement of a set of plasma samples with different D-dimer levels by 353 different laboratories using 7 of the most frequently used quantitative D-dimer methods. For each method we calculated the method-specific consensus value for each sample, and the overall median value was also estimated. For each method, linear regression was applied through the method-specific consensus values, using the amount of pooled patient plasma added to the different plasma samples as the independent variable. The line through the overall median values of all 7 methods was used as the reference line. Harmonisation between the methods was obtained by transformation of the method-specific regression line to the reference line. This harmonisation reduced the variability between the method-specific consensus values from about 75% to about 5.5%. Clinical validation of this concept showed significant improvement in the comparability of test results. We conclude that this model is a feasible approach to the harmonisation of D-dimer methods. If the harmonisation procedure is included in the calibration procedure by the manufacturers, customers will automatically obtain harmonised test results.
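    One plausible reading of the line-to-line transformation is sketched below: each method's regression line against the spiked plasma amount is inverted, and the recovered level is mapped onto the reference line (the function and example values are ours, for illustration only):

        def harmonise(x, method_line, reference_line):
            # method_line, reference_line: (intercept, slope) pairs from the
            # per-method and overall-median regressions
            a_m, b_m = method_line
            a_r, b_r = reference_line
            # invert the method line, then evaluate the reference line
            return a_r + b_r * (x - a_m) / b_m

        # a raw result of 1.20 from a method with line (0.05, 0.85),
        # mapped onto a reference line (0.02, 1.00)
        print(harmonise(1.20, (0.05, 0.85), (0.02, 1.00)))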

  6. Numerical run-out modelling used for reassessment of existing permanent avalanche paths in the Krkonose Mts., Czechia

    Science.gov (United States)

    Blahut, Jan; Klimes, Jan; Balek, Jan; Taborik, Petr; Juras, Roman; Pavlasek, Jiri

    2015-04-01

    Run-out modelling of snow avalanches is widely applied in high mountain areas worldwide. This study presents the application of snow avalanche run-out calculations to mid-mountain ranges: the Krkonose, Jeseniky and Kralicky Sneznik Mountains. All of these ranges lie in the northern part of Czechia, close to the border with Poland, and the highest peak reaches only 1602 m a.s.l. Nevertheless, climatic conditions and the regular presence of a snowpack mean that these ranges experience considerable snow avalanche activity every year, sometimes resulting in injuries or even fatalities. As part of an applied project on snow avalanche hazard prediction, a re-assessment of permanent snow avalanche paths was performed based on extensive statistics covering the period from 1961/62 to the present. On each avalanche path, avalanches with different return periods were modelled using the RAMMS code. As a result, an up-to-date snow avalanche hazard map was prepared.

  7. MODIS volcanic ash retrievals vs FALL3D transport model: a quantitative comparison

    Science.gov (United States)

    Corradini, S.; Merucci, L.; Folch, A.

    2010-12-01

    Satellite retrievals and transport models represent the key tools for monitoring the evolution of volcanic clouds. Because of the harmful effects of fine ash particles on aircraft, real-time tracking and forecasting of volcanic clouds is essential for aviation safety. Alongside safety, the economic consequences of airport disruption must also be taken into account. The airport closures due to the recent Icelandic Eyjafjöll eruption caused millions of passengers to be stranded not only in Europe, but across the world; IATA (the International Air Transport Association) estimates that the worldwide airline industry lost a total of about 2.5 billion euros during the disruption. Both safety and economic considerations require reliable and robust ash cloud retrievals and trajectory forecasting, and intercomparison between remote sensing and modeling is required to assure precise and reliable volcanic ash products. In this work we perform a quantitative comparison between Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of volcanic ash cloud mass and Aerosol Optical Depth (AOD) with the FALL3D ash dispersal model. MODIS, aboard the NASA-Terra and NASA-Aqua polar satellites, is a multispectral instrument with 36 spectral bands operating in the VIS-TIR spectral range and spatial resolution varying between 250 and 1000 m at nadir. The MODIS channels centered around 11 and 12 micron have been used for the ash retrievals through the Brightness Temperature Difference algorithm and MODTRAN simulations. FALL3D is a 3-D time-dependent Eulerian model for the transport and deposition of volcanic particles that outputs, among other variables, cloud column mass and AOD. Three MODIS images collected on October 28, 29 and 30 over Mt. Etna volcano during the 2002 eruption have been considered as test cases. The results show a generally good agreement between the retrieved and the modeled volcanic clouds in the first 300 km from the vents. Even if the
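    The Brightness Temperature Difference test itself is simple to state: silicate ash absorbs more strongly near 11 µm than near 12 µm, so ash-contaminated pixels show negative differences. A minimal sketch, with a threshold value that is scene-dependent and assumed here:

        import numpy as np

        def ash_mask(bt11_k, bt12_k, threshold_k=-0.5):
            # flag pixels whose 11-12 micron brightness-temperature
            # difference falls below a (scene-dependent) negative threshold
            btd = np.asarray(bt11_k, float) - np.asarray(bt12_k, float)
            return btd < threshold_k

        print(ash_mask([265.0, 271.2], [267.5, 270.9]))  # -> [True, False]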

  8. Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships (MCBIOS)

    Science.gov (United States)

    Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships. Jie Liu(1,2), Richard Judson(1), Matthew T. Martin(1), Huixiao Hong(3), Imran Shah(1). (1) National Center for Computational Toxicology (NCCT), US EPA, RTP, NC...

  9. Quantitative acid-base physiology using the Stewart model. Does it improve our understanding of what is really wrong?

    NARCIS (Netherlands)

    Derksen, R.; Scheffer, G.J.; Hoeven, J.G. van der

    2006-01-01

    Traditional theories of acid-base balance are based on the Henderson-Hasselbalch equation to calculate proton concentration. The recent revival of quantitative acid-base physiology using the Stewart model has increased our understanding of complicated acid-base disorders, but has also led to several

  10. Physically based dynamic run-out modelling for quantitative debris flow risk assessment: a case study in Tresenda, northern Italy

    Czech Academy of Sciences Publication Activity Database

    Quan Luna, B.; Blahůt, Jan; Camera, C.; Van Westen, C.; Apuani, T.; Jetten, V.; Sterlacchini, S.

    2014-01-01

    Vol. 72, No. 3 (2014), pp. 645-661. ISSN 1866-6280. Institutional support: RVO:67985891. Keywords: debris flow * FLO-2D * run-out * quantitative hazard and risk assessment * vulnerability * numerical modelling. Subject RIV: DB - Geology; Mineralogy. Impact factor: 1.765, year: 2014

  11. Parasite to patient: A quantitative risk model for Trichinella spp. in pork and wild boar meat.

    Science.gov (United States)

    Franssen, Frits; Swart, Arno; van der Giessen, Joke; Havelaar, Arie; Takumi, Katsuhisa

    2017-01-16

    Consumption of raw or inadequately cooked pork meat may result in trichinellosis, a human disease due to nematodes of the genus Trichinella. In many countries worldwide, individual control of pig carcasses at meat inspection is mandatory but incurs high costs given the absence of positive carcasses from pigs reared under controlled housing. EU regulation 2015/1375 implements an alternative risk-based approach, in view of the absence of positive findings in pigs under controlled housing conditions. Moreover, Codex Alimentarius guidelines for the control of Trichinella spp. in meat of suidae have been published (CAC, 2015) and used in conjunction with the OIE Terrestrial Animal Health Code to provide guidance to governments and industry on risk-based control measures to prevent human exposure to Trichinella spp. and to facilitate international pork trade. To further support such a risk-based approach, we model the risk of human trichinellosis due to consumption of meat from infected pigs raised under non-controlled housing and from wild boar, using Quantitative Microbial Risk Assessment (QMRA) methods. Our model quantifies the distribution of Trichinella muscle larvae (ML) in swine, test sensitivity at carcass control, partitioning of edible pork parts, Trichinella ML distribution in edible muscle types, heat inactivation by cooking, and portion sizes. The resulting exposure estimate is combined with a dose-response model for Trichinella species to estimate the incidence of human illness after consumption of infected meat. Parameter estimation is based on experimental and observational datasets. In Poland, which served as an example, we estimated an average incidence of 0.90 (95%CI: 0.00-3.68) trichinellosis cases per million persons per year (Mpy) due to consumption of pork from pigs reared under non-controlled housing, and 1.97 (95%CI: 0.82-4.00) cases per Mpy due to consumption of wild boar. The total estimated incidence of human trichinellosis attributed to
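    The chain from exposure to illness described above can be sketched with an exponential dose-response and Monte Carlo sampling over consumption habits; all parameter values below are illustrative placeholders, not the estimates of the paper:

        import numpy as np

        rng = np.random.default_rng(1)

        def p_illness(larvae_per_gram, portion_g, log10_reduction, r=0.01):
            # dose surviving heat inactivation, then exponential dose-response
            dose = larvae_per_gram * portion_g * 10.0 ** (-log10_reduction)
            return 1.0 - np.exp(-r * dose)

        # Monte Carlo over portion sizes and cooking practices
        portions = rng.lognormal(mean=4.5, sigma=0.5, size=100_000)   # grams
        reductions = rng.uniform(0.0, 3.0, size=100_000)              # log10 kill
        print(f"mean per-portion risk: {p_illness(0.05, portions, reductions).mean():.2e}")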

  12. A rapid and quantitative method to detect human circulating tumor cells in a preclinical animal model.

    Science.gov (United States)

    Tu, Shih-Hsin; Hsieh, Yi-Chen; Huang, Li-Chi; Lin, Chun-Yu; Hsu, Kai-Wen; Hsieh, Wen-Shyang; Chi, Wei-Ming; Lee, Chia-Hwa

    2017-06-23

    As cancer metastasis is the deadliest aspect of cancer, causing 90% of human deaths, evaluating its underlying molecular mechanisms is of major interest to those in the drug development field. Both therapeutic target identification and proof-of-concept experimentation in anti-cancer drug development require appropriate animal models, such as xenograft tumor transplantation in transgenic and knockout mice. In the progression of cancer metastasis, circulating tumor cells (CTCs) are the most critical factor in determining the prognosis of cancer patients. Several studies have demonstrated that measuring CTC-specific markers in a clinical setting (e.g., by flow cytometry) can provide a current status of cancer development in patients. However, this useful technique has rarely been applied to the real-time monitoring of CTCs in preclinical animal models. In this study, we designed a rapid and reliable detection method combining a bioluminescent in vivo imaging system (IVIS) with quantitative polymerase chain reaction (qPCR)-based analysis to measure CTCs in animal blood, using the IVIS Spectrum CT System with 3D imaging on orthotopic breast-tumor-bearing mice. The key to this technique is the use of specific human and mouse GUS primers on DNA/RNA of mouse peripheral blood under an absolute qPCR system. First, the high sensitivity of cancer cell detection on IVIS was demonstrated by measuring luciferase-carrying MDA-MB-231 cells from 5 to 5×10¹¹ cells with excellent correlation (R² = 0.999). Next, the numbers of MDA-MB-231 cells injected via the tail vein and their IVIS radiance signals were strongly correlated with qPCR-calculated copy numbers (R² > 0.99). Furthermore, by applying an orthotopic implantation animal model, we successfully distinguished xenograft tumor-bearing mice from control mice with a significant difference (p < 0
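    Absolute quantification by qPCR rests on inverting a standard curve fitted to serial dilutions of known copy number; a minimal sketch (slope and intercept below are illustrative, not the study's calibration):

        import numpy as np

        def copies_from_ct(ct, slope, intercept):
            # standard curve: Ct = slope * log10(copies) + intercept,
            # inverted here for unknown samples
            return 10.0 ** ((np.asarray(ct, float) - intercept) / slope)

        # a perfectly efficient assay has slope close to -3.32
        print(copies_from_ct([24.1, 27.5], slope=-3.32, intercept=38.0))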

  13. Towards a Quantitative Framework for Modelling the Composition of Clastic Sediments

    Science.gov (United States)

    von Eynatten, H.

    2003-04-01

    The composition of clastic sediments is controlled by a range of processes beginning with initial weathering and erosion of rocks in the source area, followed by abrasion, mixing, and sorting during sediment transport and deposition, and, finally, compaction, authigenesis and intrastratal solution in the course of diagenesis. All of these processes cause specific changes in sediment composition (petrography, chemistry, grain size). Thus, any sediment may be described by summing the incremental changes caused by individual processes, each acting at varying intensity depending on the regional geological conditions (e.g., climate, topography, fluid flow). The aim of this contribution is to outline a quantitative model for describing sediment composition. Data on sediment composition are usually compositional data, meaning that each component is non-negative and all components sum to a constant (usually 100%). The sample space for such compositional data is not real space but the open simplex, and special techniques are necessary to analyse compositional data with statistical rigour. In this contribution an attempt is made to model compositional changes using mathematical operations in the simplex, such as perturbation, power transformation, and non-centered principal component analysis. This modelling approach can be applied to a wide range of processes, and combinations of processes, causing compositional changes in soils, sediments, or rocks. To demonstrate the potential of the approach we have chosen two examples from the literature dealing with the initial part of the sedimentary cycle: (i) chemical weathering of granitoid source rocks, both on a local and on a global scale, and (ii) mechanical weathering of high-grade metamorphic and granitoid source rocks leading to comminution and sorting of chemically unaltered detritus. The two examples, based on chemical major element data, demonstrate the usefulness of the approach for quantifying the degree of weathering. In a
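    The simplex operations named above have compact definitions: perturbation is a component-wise product followed by closure (rescaling to constant sum), and the power transformation is a component-wise power followed by closure. A minimal sketch with an illustrative weathering step (the compositions and mobilities are ours, not from the cited examples):

        import numpy as np

        def closure(x):
            # rescale a composition so its parts sum to 1
            x = np.asarray(x, dtype=float)
            return x / x.sum()

        def perturb(x, p):
            # Aitchison perturbation: component-wise product, then closure
            return closure(np.asarray(x, float) * np.asarray(p, float))

        def power(x, a):
            # Aitchison power transformation: component-wise power, then closure
            return closure(np.asarray(x, dtype=float) ** a)

        rock = closure([0.60, 0.25, 0.15])          # initial composition
        step = closure([0.90, 1.00, 1.15])          # illustrative relative mobilities
        print(perturb(rock, step))                  # composition after one step
        print(perturb(rock, power(step, 3.0)))      # three repetitions of the same step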

  14. Quantitative Modeling of Microbial Population Responses to Chronic Irradiation Combined with Other Stressors.

    Directory of Open Access Journals (Sweden)

    Igor Shuryak

    Full Text Available Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil contaminated by nuclear waste at the Hanford site (USA); (2) fungi isolated from the Chernobyl nuclear power plant (Ukraine) buildings after the accident; (3) yeast subjected to continuous γ-irradiation in the laboratory, where radiation dose rate and cell removal rate were independently varied. We applied generalized linear mixed-effects models to describe the first two data sets, whereas the third data set was amenable to mechanistic modeling using differential equations. Machine learning and information-theoretic approaches were used to select the best-supported formalism(s) among biologically-plausible alternatives. Our analysis suggests the following: (1) Both radionuclides and co-occurring chemical contaminants (e.g. NO2) are important for explaining microbial responses to radioactive contamination. (2) Radionuclides may produce non-monotonic dose responses: stimulation of microbial growth at low concentrations vs. inhibition at higher ones. (3) The extinction-defining critical radiation dose rate is dramatically lowered by additional stressors. (4) Reproduction suppression by radiation can be more important for determining the critical dose rate than radiation-induced cell mortality. In conclusion, the modeling approaches used here on three diverse data sets provide insight into explaining and predicting multi-stressor effects on microbial communities: (1) the most severe effects (e.g. extinction) on microbial populations may occur when unfavorable environmental

  15. Quantitative x-ray microanalysis of model biological samples in the SEM using remote standards and the XPP analytical model.

    Science.gov (United States)

    Marshall, Alan T

    2017-06-01

    It is shown that accurate x-ray microanalysis of frozen-hydrated and dry organic compounds, such as model biological samples, is possible with a silicon drift detector in combination with XPP (exponential model of Pouchou and Pichoir matrix correction) software using 'remote standards'. This type of analysis is also referred to as 'standardless analysis'. Analyses from selected areas or elemental images (maps) were identical. Improvements in x-ray microanalytical hardware and software, together with developments in cryotechniques, have made the quantitative analysis of cryoplaned frozen-hydrated biological samples in the scanning electron microscope a much simpler procedure. The increased effectiveness of pulse pile-up rejection makes the analysis of Na with ultrathin-window detectors more accurate in the presence of the very high concentrations of O from ice. The accurate analysis of Ca (2 mmol kg⁻¹) in the presence of high concentrations of K is possible. Careful sublimation of surface frost from frozen-hydrated samples resulted in a small increase in analysed elemental concentrations, and more prolonged sublimation from the same resurfaced sample and other similar samples resulted in higher element concentrations. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  16. Quantitative modeling of clinical, cellular, and extracellular matrix variables suggest prognostic indicators in cancer: a model in neuroblastoma.

    Science.gov (United States)

    Tadeo, Irene; Piqueras, Marta; Montaner, David; Villamón, Eva; Berbegall, Ana P; Cañete, Adela; Navarro, Samuel; Noguera, Rosa

    2014-02-01

    Risk classification and treatment stratification for cancer patients is restricted by our incomplete picture of the complex and unknown interactions between the patient's organism and tumor tissues (transformed cells supported by tumor stroma). Moreover, all clinical factors and laboratory studies used to indicate treatment effectiveness and outcomes are by their nature a simplification of the biological system of cancer, and cannot yet incorporate all possible prognostic indicators. A multiparametric analysis on 184 tumor cylinders was performed. To highlight the benefit of integrating digitized medical imaging into this field, we present the results of computational studies carried out on quantitative measurements, taken from stromal and cancer cells and various extracellular matrix fibers interpenetrated by glycosaminoglycans, and eight current approaches to risk stratification systems in patients with primary and nonprimary neuroblastoma. New tumor tissue indicators from both fields, the cellular and the extracellular elements, emerge as reliable prognostic markers for risk stratification and could be used as molecular targets of specific therapies. The key to dealing with personalized therapy lies in the mathematical modeling. The use of bioinformatics in patient-tumor-microenvironment data management allows a predictive model in neuroblastoma.

  17. Pathfinder to EXIST: ProtoEXIST

    Science.gov (United States)

    Garson, A. B., III; Allen, B.; Baker, R. G.; Barthelmy, S. D.; Burke, M.; Burnham, J.; Chammas, N.; Collins, J.; Cook, W. R.; Copete, A.; Gehrels, N.; Gauron, T.; Grindlay, J.; Harrison, F. A.; Hong, J.; Howell, J.; Krawczynski, H.; Labov, S.; Said, B.; Sheikh Sheikh, S.

    2008-04-01

    We describe the ProtoEXIST instrument, our first-generation wide-field hard X-ray imaging (20 - 600 keV) balloon-borne telescope. The ProtoEXIST program is a pathfinder for the Energetic X-ray Imaging Survey Telescope (EXIST), a candidate for the Black Hole Finder Probe. ProtoEXIST consists of two independent coded-aperture telescopes using pixellated (2.5 mm pitch) CZT detectors. The two telescopes will provide a performance comparison of two shielding configurations for optimization of the EXIST design. We report on the science goals and designs of both ProtoEXIST and EXIST and their implications for hard X-ray astronomy and astrophysics.

  18. Qualitative and quantitative structure-activity relationship modelling for predicting blood-brain barrier permeability of structurally diverse chemicals.

    Science.gov (United States)

    Gupta, S; Basant, N; Singh, K P

    2015-01-01

    In this study, structure-activity relationship (SAR) models have been established for qualitative and quantitative prediction of the blood-brain barrier (BBB) permeability of chemicals. The structural diversity of the chemicals and the nonlinear structure in the data were tested. The predictive and generalization ability of the developed SAR models was tested through internal and external validation procedures. On the complete data, the qualitative SAR models rendered ternary classification accuracy of >98.15%, while the quantitative SAR models yielded correlation (r²) of >0.926 between the measured and the predicted BBB permeability values with low mean squared error (MSE); in external validation, the models achieved accuracy of >82.7% and r² > 0.905. These results support both the qualitative and quantitative models for predicting the BBB permeability of chemicals. Moreover, these models showed predictive performance superior to those reported earlier in the literature. This demonstrates the appropriateness of the developed SAR models to reliably predict the BBB permeability of new chemicals, which can be used for initial screening of molecules in the drug development process.

  19. Quantitative Modeling of Membrane Transport and Anisogamy by Small Groups Within a Large-Enrollment Organismal Biology Course

    Directory of Open Access Journals (Sweden)

    Eric S. Haag

    2016-12-01

    Full Text Available Quantitative modeling is not a standard part of undergraduate biology education, yet it is routine in the physical sciences. Because of their obvious biophysical aspects, classes in anatomy and physiology offer an opportunity to introduce modeling approaches into the introductory curriculum. Here, we describe two in-class exercises for small groups working within a large-enrollment introductory course in organismal biology. Both build and derive biological insights from quantitative models, implemented using spreadsheets. One exercise models the evolution of anisogamy (i.e., small sperm and large eggs) from an initial state of isogamy; groups of four students work on Excel spreadsheets (one to four laptops per group). The other exercise uses an online simulator to generate data related to membrane transport of a solute, and a cloud-based spreadsheet to analyze them. We provide tips for implementing these exercises gleaned from two years of experience.
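    The membrane-transport exercise can equally be reproduced outside a spreadsheet; a minimal Python sketch of passive diffusion into a cell follows (the equation and all parameter values are our illustrative stand-ins for the online simulator mentioned above):

        # passive diffusion: dC_in/dt = (P * A / V) * (C_out - C_in)
        P = 1e-6                  # membrane permeability, cm/s (illustrative)
        A = 5e-6                  # membrane area, cm^2
        V = 1e-9                  # cell volume, cm^3
        c_out, c_in = 10.0, 0.0   # solute concentrations, mM
        dt, steps = 0.1, 600      # time step (s) and number of steps

        for _ in range(steps):
            c_in += dt * (P * A / V) * (c_out - c_in)
        print(f"intracellular concentration after {steps * dt:.0f} s: {c_in:.2f} mM")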

  20. Mapping loci influencing blood pressure in the Framingham pedigrees using model-free LOD score analysis of a quantitative trait.

    Science.gov (United States)

    Knight, Jo; North, Bernard V; Sham, Pak C; Curtis, David

    2003-12-31

    This paper presents a method of performing model-free LOD-score based linkage analysis on quantitative traits. It is implemented in the QMFLINK program. The method is used to perform a genome screen on the Framingham Heart Study data. A number of markers that show some support for linkage in our study coincide substantially with those implicated in other linkage studies of hypertension. Although the new method needs further testing on additional real and simulated data sets we can already say that it is straightforward to apply and may offer a useful complementary approach to previously available methods for the linkage analysis of quantitative traits.

  1. A bibliography of terrain modeling (geomorphometry), the quantitative representation of topography: supplement 4.0

    Science.gov (United States)

    Pike, Richard J.

    2002-01-01

    Terrain modeling, the practice of ground-surface quantification, is an amalgam of Earth science, mathematics, engineering, and computer science. The discipline is known variously as geomorphometry (or simply morphometry), terrain analysis, and quantitative geomorphology. It continues to grow through myriad applications to hydrology, geohazards mapping, tectonics, sea-floor and planetary exploration, and other fields. Dating nominally to the co-founders of academic geography, Alexander von Humboldt (1808, 1817) and Carl Ritter (1826, 1828), the field was revolutionized late in the 20th century by the computer manipulation of spatial arrays of terrain heights, or digital elevation models (DEMs), which can quantify and portray ground-surface form over large areas (Maune, 2001). Morphometric procedures are implemented routinely by commercial geographic information systems (GIS) as well as specialized software (Harvey and Eash, 1996; Köthe and others, 1996; ESRI, 1997; Drzewiecki et al., 1999; Dikau and Saurer, 1999; Djokic and Maidment, 2000; Wilson and Gallant, 2000; Breuer, 2001; Guth, 2001; Eastman, 2002). The new Earth Surface edition of the Journal of Geophysical Research, specializing in surficial processes, is the latest of many publication venues for terrain modeling. This is the fourth update of a bibliography and introduction to terrain modeling (Pike, 1993, 1995, 1996, 1999) designed to collect the diverse, scattered literature on surface measurement as a resource for the research community. The use of DEMs in science and technology continues to accelerate and diversify (Pike, 2000a). New work appears so frequently that a sampling must suffice to represent the vast literature. This report adds 1636 entries to the 4374 in the four earlier publications. Forty-eight additional entries correct dead Internet links and other errors found in the prior listings. Chronicling the history of terrain modeling, many entries in this report predate the 1999 supplement

  2. Can a quantitative simulation of an Otto engine be accurately rendered by a simple Novikov model with heat leak?

    Science.gov (United States)

    Fischer, A.; Hoffmann, K.-H.

    2004-03-01

    In this case study, a complex Otto engine simulation provides data including, but not limited to, the effects of heat-conduction, exhaust, and frictional losses. These data are used as a benchmark to test whether the Novikov engine with heat leak, a simple endoreversible model, can reproduce the complex engine behavior quantitatively through an appropriate choice of model parameters. The reproduction obtained proves to be of high quality.
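    For orientation, the Novikov engine couples a reversible Carnot core to the hot bath through a finite thermal conductance; adding a direct heat leak between the baths lowers the efficiency without affecting the power. A minimal sketch under these assumptions (parameter values illustrative, not fitted to the Otto simulation):

        import numpy as np

        def novikov_with_leak(T_h, T_c, K, kappa, n=2000):
            # working fluid enters the Carnot core at T_i; heat flows in at
            # q_h = K*(T_h - T_i), while kappa*(T_h - T_c) leaks past the engine
            T_i = np.linspace(T_c + 1e-6, T_h - 1e-6, n)
            q_h = K * (T_h - T_i)
            power = q_h * (1.0 - T_c / T_i)
            efficiency = power / (q_h + kappa * (T_h - T_c))
            best = np.argmax(power)
            return power[best], efficiency[best]

        # without the leak, efficiency at maximum power is 1 - sqrt(T_c/T_h)
        print(novikov_with_leak(T_h=2000.0, T_c=300.0, K=1.0, kappa=0.05))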

  3. Quantitative structure–activity relationship based modeling of substituted indole Schiff bases as inhibitor of COX-2

    OpenAIRE

    Dwivedi, Amrita; Singh, Ajeet; Srivastava, A. K.

    2016-01-01

    We have performed a quantitative structure-activity relationship (QSAR) study of N-1 and C-3 substituted indole Schiff bases to understand the structural features that influence the inhibitory activity toward the cyclooxygenase-2 (COX-2) enzyme. The calculated QSAR results revealed that the drug activity could be modeled using the molecular connectivity indices (0χ, 1χ, 2χ), the Wiener index (W) and the mean Wiener index (WA) as parameters. The predictive ability of the models was cross-validated by evalua...

  4. Statistical Modeling Approach to Quantitative Analysis of Interobserver Variability in Breast Contouring

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jinzhong, E-mail: jyang4@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Woodward, Wendy A.; Reed, Valerie K.; Strom, Eric A.; Perkins, George H.; Tereffe, Welela; Buchholz, Thomas A. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Zhang, Lifei; Balter, Peter; Court, Laurence E. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Li, X. Allen [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin (United States); Dong, Lei [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Scripps Proton Therapy Center, San Diego, California (United States)

    2014-05-01

    Purpose: To develop a new approach for interobserver variability analysis. Methods and Materials: Eight radiation oncologists specializing in breast cancer radiation therapy delineated a patient's left breast “from scratch” and from a template that was generated using deformable image registration. Three of the radiation oncologists had previously received training in the Radiation Therapy Oncology Group (RTOG) consensus contouring atlas for breast cancer. The simultaneous truth and performance level estimation (STAPLE) algorithm was applied to the 8 contours delineated “from scratch” to produce a group consensus contour, and individual Jaccard scores were fitted to a beta distribution model. We also applied this analysis to 2 additional patients, contoured by 9 breast radiation oncologists from 8 institutions. Results: The beta distribution model had a mean of 86.2%, standard deviation (SD) of ±5.9%, a skewness of −0.7, and excess kurtosis of 0.55, exemplifying broad interobserver variability. The 3 RTOG-trained physicians had higher agreement scores than average, indicating that their contours were close to the group consensus contour. One physician had high sensitivity but lower specificity than the others, which implies that this physician tended to contour a structure larger than those of the others. Two other physicians had low sensitivity but specificity similar to the others, which implies that they tended to contour a structure smaller than the others. With this information, they could adjust their contouring practice to be more consistent with others if desired. When contouring from the template, the beta distribution model had a mean of 92.3%, SD ± 3.4%, skewness of −0.79, and excess kurtosis of 0.83, which indicated much better consistency among individual contours. Similar results were obtained for the analysis of the 2 additional patients. Conclusions: The proposed statistical approach was able to measure interobserver variability quantitatively
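    Fitting a beta distribution to agreement scores, as described above, is a one-liner with scipy once location and scale are pinned to the unit interval (the scores below are illustrative, not the study's data):

        import numpy as np
        from scipy import stats

        # Jaccard agreement scores of individual observers vs the consensus contour
        scores = np.array([0.91, 0.88, 0.84, 0.79, 0.90, 0.86, 0.83, 0.87])

        # fit a beta distribution supported on [0, 1]
        a, b, loc, scale = stats.beta.fit(scores, floc=0, fscale=1)
        print(f"alpha={a:.1f}, beta={b:.1f}, "
              f"mean={stats.beta.mean(a, b):.3f}, sd={stats.beta.std(a, b):.3f}")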

  5. Quantitative plant resistance in cultivar mixtures: wheat yellow rust as a modelling case study.

    OpenAIRE

    Sapoukhina, Natalia; Paillard, Sophie; Dedryver-Person, Françoise; Pope De Vallavieille, Claude,

    2013-01-01

    Unlike qualitative plant resistance, which confers immunity to disease, quantitative resistance confers only a reduction in disease severity and this can be nonspecific. Consequently, the outcome of its deployment in cultivar mixtures is not easy to predict, as on the one hand it may reduce the heterogeneity of the mixture, but on the other it may induce competition between nonspecialized strains of the pathogen. To clarify the principles for the successful use of quantitative plant resistanc...

  6. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    Energy Technology Data Exchange (ETDEWEB)

    Rettmann, Maryam E., E-mail: rettmann.maryam@mayo.edu; Holmes, David R.; Camp, Jon J.; Cameron, Bruce M.; Robb, Richard A. [Biomedical Imaging Resource, Mayo Clinic College of Medicine, Rochester, Minnesota 55905 (United States); Kwartowitz, David M. [Department of Bioengineering, Clemson University, Clemson, South Carolina 29634 (United States); Gunawan, Mia [Department of Biochemistry and Molecular and Cellular Biology, Georgetown University, Washington D.C. 20057 (United States); Johnson, Susan B.; Packer, Douglas L. [Division of Cardiovascular Diseases, Mayo Clinic, Rochester, Minnesota 55905 (United States); Dalegrave, Charles [Clinical Cardiac Electrophysiology, Cardiology Division Hospital Sao Paulo, Federal University of Sao Paulo, 04024-002 Brazil (Brazil); Kolasa, Mark W. [David Grant Medical Center, Fairfield, California 94535 (United States)

    2014-02-15

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved

  7. Quantitative multiparametric MRI assessment of glioma response to radiotherapy in a rat model.

    Science.gov (United States)

    Hong, Xiaohua; Liu, Li; Wang, Meiyun; Ding, Kai; Fan, Ying; Ma, Bo; Lal, Bachchu; Tyler, Betty; Mangraviti, Antonella; Wang, Silun; Wong, John; Laterra, John; Zhou, Jinyuan

    2014-06-01

    The inability of structural MRI to accurately measure tumor response to therapy complicates care management for patients with gliomas. The purpose of this study was to assess the potential of several noninvasive functional and molecular MRI biomarkers for the assessment of glioma response to radiotherapy. Fourteen U87 tumor-bearing rats were irradiated using a small-animal radiation research platform (40 or 20 Gy), and 6 rats were used as controls. MRI was performed on a 4.7 T animal scanner, preradiation treatment, as well as at 3, 6, 9, and 14 days postradiation. Image features of the tumors, as well as tumor volumes and animal survival, were quantitatively compared. Structural MRI showed that all irradiated tumors still grew in size during the initial days postradiation. The apparent diffusion coefficient (ADC) values of tumors increased significantly postradiation (40 and 20 Gy), except at day 3 postradiation, compared with preradiation. The tumor blood flow decreased significantly postradiation (40 and 20 Gy), but the relative blood flow (tumor vs contralateral) did not show a significant change at most time points postradiation. The amide proton transfer weighted (APTw) signals of the tumor decreased significantly at all time points postradiation (40 Gy), and also at day 9 postradiation (20 Gy). The blood flow and APTw maps demonstrated tumor features that were similar to those seen on gadolinium-enhanced T1-weighted images. Tumor ADC, blood flow, and APTw were all useful imaging biomarkers by which to predict glioma response to radiotherapy. The APTw signal was most promising for early response assessment in this model. © The Author(s) 2013. Published by Oxford University Press on behalf of the Society for Neuro-Oncology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Pedagogical implications of approaches to study in distance learning: developing models through qualitative and quantitative analysis.

    Science.gov (United States)

    Carnwell, R

    2000-05-01

    The need for flexibility in the delivery of nurse education has been identified by various initiatives including: widening the entry gate; continuous professional development; and the specialist practitioner. Access to degree level programmes is creating the need to acquire academic credit through flexible learning. The aim of this study was to further develop relationships between the need for guidance, materials design and learning styles and strategies and how these impact upon the construction of meaning. The study is based on interviews of 20 female community nurses purposively selected from the 96 respondents who had previously completed a survey questionnaire. The interviews were underpinned by theories relating to learning styles and approaches to study. Of particular concern was how these variables are mediated by student context, personal factors and materials design, to influence the need for support and guidance. The interview transcripts were first analysed using open and axial coding. Three approaches to study emerged from the data - systematic waders, speedy-focusers and global dippers - which were linked to other concepts and categories. Categories were then assigned numerical codes and subjected to logistical regression analysis. The attributes of the three approaches to study, arising from both qualitative and quantitative analysis, are explained in detail. The pedagogical implications of the three approaches to study are explained by their predicted relationships to other variables, such as support and guidance, organization of study, materials design and role of the tutor. The global dipper approach is discussed in more detail due to its association with a variety of predictor variables, not associated with the other two approaches to study. A feedback model is then developed to explore the impact of guidance on the global dipper approach. The paper makes recommendations for guidance to students using different approaches to study in distance

  9. 3D vs 2D laparoscopic systems: Development of a performance quantitative validation model.

    Science.gov (United States)

    Ghedi, Andrea; Donarini, Erica; Lamera, Roberta; Sgroi, Giovanni; Turati, Luca; Ercole, Cesare

    2015-01-01

    The new technology ensures 3D laparoscopic vision by adding depth to the traditional two dimensions; this realistic vision gives the surgeon the feeling of operating in real space. The Hospital of Treviglio-Caravaggio is not a university or scientific institution; when a new 3D laparoscopic system was acquired in 2014, this led to an evaluation of its appropriateness in terms of patient outcome and safety. The project aims to develop a quantitative validation model that ensures a low-cost, reliable measure of the performance of 3D technology versus 2D mode, and to demonstrate how new technologies, such as open-source hardware and software and 3D printing, can support research with no significant cost increase. For these reasons, in order to define criteria of appropriateness in the use of 3D technologies, a study was performed to technically validate the more effective, efficient and safe of the two systems, comparing 3D laparoscopic vision with the traditional 2D mode. 30 surgeons were enrolled to perform an exercise with laparoscopic forceps inside a trainer. Surgeons of different seniority, grouped by type of specialization (e.g., surgery, urology, gynecology), performed the videolaparoscopic exercise with both technologies (2D and 3D) on an anthropometric phantom. The task assigned to each surgeon was to pass 'needle and thread' without touching the metal part in the shortest possible time. Each ring selected for the exercise had a coefficient of difficulty determined by depth, diameter, angle of positioning and point of view. Analysis of the data collected from this exercise mathematically confirmed that the 3D technique ensures a shorter learning curve in novices and greater accuracy in performing the task with respect to 2D.

  10. Quantitative Analysis and Modeling of 3-D TSV-Based Power Delivery Architectures

    Science.gov (United States)

    He, Huanyu

    As 3-D technology enters the commercial production stage, it is critical to understand different 3-D power delivery architectures on the stacked ICs and packages with through-silicon vias (TSVs). Appropriate design, modeling, analysis, and optimization approaches of the 3-D power delivery system are of foremost significance and great practical interest to the semiconductor industry in general. Based on fundamental physics of 3-D integration components, the objective of this thesis work is to quantitatively analyze the power delivery for 3D-IC systems, develop appropriate physics-based models and simulation approaches, understand the key issues, and provide potential solutions for design of 3D-IC power delivery architectures. In this work, a hybrid simulation approach is adopted as the major approach, along with analytical methods, to examine 3-D power networks. Combining electromagnetic (EM) tools and circuit simulators, the hybrid approach is able to analyze and model micrometer-scale components as well as centimeter-scale power delivery systems with high accuracy and efficiency. The parasitic elements of the components on the power delivery can be precisely modeled by full-wave EM solvers. Stack-up circuit models for the 3-D power delivery networks (PDNs) are constructed through a partition and assembly method. With the efficiency advantage of SPICE circuit simulation, the overall 3-D system power performance can be analyzed and the 3-D power delivery architectures can be evaluated in a short computing time. The major power delivery issues are the voltage drop (IR drop) and voltage noise. With a baseline 3-D power delivery architecture, the on-chip PDNs of TSV-based chip stacks are modeled and analyzed for the IR drop and AC noise. The basic design factors are evaluated using the hybrid approach, such as the number of stacked chips, the number of TSVs, and the TSV arrangement. Analytical formulas are also developed to evaluate the IR drop in 3-D chip stacks.
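
    As a rough illustration of the IR-drop analysis described above, the sketch below lumps each tier's TSV array into a single series resistance and accumulates the drop tier by tier; the resistance and current values are illustrative assumptions, not figures from the thesis.

        # Minimal IR-drop sketch for a TSV-based chip stack: each tier's
        # TSV layer is a lumped resistance, and tier i carries the supply
        # current of all tiers at and above it.
        def tsv_resistance(r_single_ohm, n_parallel):
            """Effective resistance of n identical TSVs in parallel."""
            return r_single_ohm / n_parallel

        def ir_drop_per_tier(currents_a, r_tier_ohm):
            """Cumulative IR drop seen by each tier of the stack."""
            drops, v = [], 0.0
            remaining = sum(currents_a)
            for i in currents_a:
                v += remaining * r_tier_ohm   # drop across this TSV layer
                drops.append(v)
                remaining -= i                # current peels off at each tier
            return drops

        # Four stacked chips drawing 0.5 A each through 100 TSVs of 50 mOhm.
        r = tsv_resistance(0.05, 100)
        print(ir_drop_per_tier([0.5] * 4, r))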

  11. A preclinical animal model to assess the effect of pre-existing immunity on AAV-mediated gene transfer.

    Science.gov (United States)

    Li, Hua; Lin, Shih-Wen; Giles-Davis, Wynetta; Li, Yan; Zhou, Dongming; Xiang, Zhi Quan; High, Katherine A; Ertl, Hildegund C J

    2009-07-01

    Hepatic adeno-associated virus (AAV)-serotype 2-mediated gene transfer results in sustained transgene expression in experimental animals but not in human subjects. We hypothesized that loss of transgene expression in humans might be caused by immune memory mechanisms that become reactivated upon AAV vector transfer. Here, we tested the effect of immunological memory to AAV capsid on AAV-mediated gene transfer in a mouse model. Upon hepatic transfer of an AAV2 vector expressing human factor IX (hF.IX), mice immunized with adenovirus (Ad) vectors expressing AAV8 capsid before AAV2 transfer developed less circulating hF.IX and showed a gradual loss of hF.IX gene copies in liver cells compared with control animals. This was not observed in mice immunized with an Ad vector expressing AAV2 capsid before transfer of rAAV8-hF.IX vectors. The lower hF.IX expression was primarily linked to AAV-binding antibodies that lacked AAV-neutralizing activity in vitro rather than to AAV capsid-specific CD8(+) T cells.

  12. The Comparison of Distributed P2P Trust Models Based on Quantitative Parameters in the File Downloading Scenarios

    Directory of Open Access Journals (Sweden)

    Jingpei Wang

    2016-01-01

    Varied P2P trust models have been proposed recently; it is necessary to develop an effective method to evaluate these trust models in order to address both commonality issues (guiding newly generated trust models in theory) and individuality issues (assisting a decision maker in choosing an optimal trust model to implement in a specific context). A new method for analyzing and comparing P2P trust models, based on hierarchical parameter quantization in file-downloading scenarios, is proposed in this paper. Several parameters are extracted from the functional attributes and quality features of the trust relationship, as well as from the requirements of the specific network context and the evaluators. Several distributed P2P trust models are analyzed quantitatively, with the extracted parameters arranged into a hierarchical model. A fuzzy inference method is applied to the hierarchical model of parameters to fuse the evaluated values of the candidate trust models, and the relative optimum is then selected based on the sorted overall quantitative values. Finally, analyses and simulation are performed. The results show that the proposed method is reasonable and effective compared with previous algorithms.
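
    A minimal sketch of the parameter-fusion step, assuming a simple weighted sum in place of the paper's full fuzzy inference: each candidate model is scored on the extracted parameters and the highest overall score wins. The model names, parameters and weights below are placeholders, not values from the paper.

        # Score candidate trust models on extracted parameters and pick the
        # best one under context-dependent weights (all values illustrative).
        candidates = {
            "EigenTrust": {"accuracy": 0.8, "convergence": 0.6, "overhead": 0.5},
            "PeerTrust":  {"accuracy": 0.7, "convergence": 0.8, "overhead": 0.6},
        }
        weights = {"accuracy": 0.5, "convergence": 0.3, "overhead": 0.2}

        def overall_score(scores, weights):
            return sum(weights[k] * v for k, v in scores.items())

        best = max(candidates, key=lambda m: overall_score(candidates[m], weights))
        print(best)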

  13. Differences in midline fascial forces exist following laparoscopic and open transversus abdominis release in a porcine model.

    Science.gov (United States)

    Winder, Joshua S; Lyn-Sue, Jerome; Kunselman, Allen R; Pauli, Eric M

    2017-02-01

    Posterior component separation herniorrhaphy via transversus abdominis release (TAR) permits midline reapproximation of large fascial defects. To date, no report delineates the reduction in tensile force required to reapproximate the midline fascia following TAR. We hypothesized that open and laparoscopic TAR would provide similar reductions in midline reapproximation forces in a porcine model. Under general anesthesia, a 20-cm midline laparotomy was created and bilateral lipocutaneous flaps were raised to expose the anterior rectus sheath. Five stainless steel hooks were placed at 1-cm intervals lateral to the midline at three locations: 5 cm above, at, and 5 cm below the umbilicus bilaterally. Baseline force measurements were taken by pulling each lateral point to midline. Laparoscopic TAR was performed unilaterally by incising the parietal peritoneum and transversus muscle lateral to the linea semilunaris. Open TAR was performed contralaterally, and force measurements were repeated. Comparisons were made to baseline and between the groups. Following laparoscopic TAR, 87 % (13/15) of points showed significant reduction compared to baseline forces, whereas only 20 % (3/15) of open TAR points had significant force reductions. Compared to open TAR, three locations favored the laparoscopic approach [1 cm lateral to midline, 5 cm above the umbilicus (p = 0.04; 95 % CI 0.78-1.00), 2 cm lateral to midline at the umbilicus (p = 0.04; 95 % CI 0.80-1.00), and 1 cm lateral to midline 5 cm below the umbilicus (p = 0.05; 95 % CI 0.79-1.00)]. The mean length of TAR was longer for laparoscopic than open at 27.29 versus 19.55 cm (p < 0.05). Open TAR reduced force at only a few locations, suggesting that the mechanism by which TAR facilitates herniorrhaphy may not solely be through reductions in linea alba tensile forces. At specific locations, laparoscopic TAR provides superior reduction in midline closure force compared to open TAR, likely as a result of a longer muscle release.

  14. Probabilistic quantitative microbial risk assessment model of norovirus from wastewater irrigated vegetables in Ghana using genome copies and fecal indicator ratio conversion for estimating exposure dose.

    Science.gov (United States)

    Owusu-Ansah, Emmanuel de-Graft Johnson; Sampson, Angelina; Amponsah, Samuel K; Abaidoo, Robert C; Dalsgaard, Anders; Hald, Tine

    2017-12-01

    The need to replace the commonly applied fecal indicator conversion ratio (an assumed virus-to-fecal-indicator-organism ratio of 10^-5) in Quantitative Microbial Risk Assessment (QMRA) with models based on quantitative data on the virus of interest has gained prominence due to the different physical and environmental factors that might influence the reliability of using indicator organisms in microbial risk assessment. The challenges facing analytical studies on virus enumeration (genome copies or particles) have contributed to the already existing lack of data in QMRA modelling. This study attempts to fit a QMRA model to genome copies of norovirus data. The model estimates the risk of norovirus infection from the intake of vegetables irrigated with wastewater from different sources. The results were compared to the results of a corresponding model using the fecal indicator conversion ratio to estimate the norovirus count. In all scenarios of using different water sources, the application of the fecal indicator conversion ratio underestimated the norovirus disease burden, measured by Disability Adjusted Life Years (DALYs), when compared to results using the genome copies norovirus data. In some cases the difference was >2 orders of magnitude. All scenarios using genome copies met the 10^-4 DALY per person per year for consumption of vegetables irrigated with wastewater, although these results are considered to be highly conservative risk estimates. The fecal indicator conversion ratio model of stream-water and drain-water sources of wastewater achieved the 10^-6 DALY per person per year threshold, which tends to indicate an underestimation of health risk when compared to using genome copies for estimating the dose. Copyright © 2017 Elsevier B.V. All rights reserved.
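
    The following is a minimal Monte Carlo sketch in the spirit of this QMRA: sample a norovirus dose per serving, apply a dose-response relation, and convert the annual infection risk to a DALY estimate. The dose distribution, the exponential dose-response constant, the consumption pattern and the DALY weight are all illustrative assumptions rather than the paper's fitted values.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        genome_copies_per_g = rng.lognormal(mean=2.0, sigma=1.0, size=n)
        dose = genome_copies_per_g * 10.0          # assume 10 g lettuce/serving

        # Simple exponential dose-response (a stand-in for the beta-Poisson
        # style forms typically used for norovirus).
        p_inf_serving = 1.0 - np.exp(-0.07 * dose)
        p_inf_year = 1.0 - (1.0 - p_inf_serving) ** 104   # 2 servings/week

        dalys = p_inf_year * 9e-4                  # assumed DALYs per infection
        print(np.mean(dalys), np.percentile(dalys, 95))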

  15. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    Science.gov (United States)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy, using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that for all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association with either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
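
    As a sketch of one of the three association models, the snippet below fits a logistic regression linking adiposity features to a binarised outcome (e.g. PFS above or below the median); the feature matrix and labels are synthetic placeholders, not the study's data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(59, 7))            # 7 adiposity features, 59 patients
        y = (rng.random(59) < 0.5).astype(int)  # e.g. PFS above/below median

        model = make_pipeline(StandardScaler(), LogisticRegression())
        model.fit(X, y)
        print(model.predict_proba(X[:3])[:, 1])  # predicted benefit probability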

  16. Demographic Changes and Real Estate Values. A Quantitative Model for Analyzing the Urban-Rural Linkages

    Directory of Open Access Journals (Sweden)

    Massimiliano Bencardino

    2017-03-01

    Vast metropolitan areas include both urban areas and rural outskirts. Between these areas there are links so strong that they cannot be examined separately. Residential functions and working activities are present in the rural outskirts alongside the typical agricultural sector; the production of goods and services for the city therefore requires a combined analysis of the large territory involved. The evolution of the population of such a large territory can be studied in great detail, with reference to single census areas and with the use of Geographic Information Systems (GIS). Such demographic development in turn affects urban real estate values. This work demonstrates the interconnections existing between urban areas and rural outskirts. Data collection on population trends in the Naples metropolitan area and the house prices associated with this area, followed by spatial processing of these data, allows for the establishment of thematic maps from which a model capable of interpreting population development is defined. A study of the statistical correlations shows the consequences that population dynamics produce for property prices. In addition, the diachronic analysis of the sales prices of residential buildings demonstrates that economic functions once exclusive to certain urban or rural territories end up being distributed and integrated.

  17. Do multiquark hadrons exist

    Energy Technology Data Exchange (ETDEWEB)

    Weinstein, J.; Isgur, N.

    1982-03-08

    The qq̄qq̄ system has been examined by solving the four-particle Schroedinger equation variationally. The main findings are that: (1) qq̄qq̄ bound states normally do not exist, (2) the cryptoexotic 0++ sector of this system with KK̄ quantum numbers is probably the only exception to (1), and its bound states can be identified with the S* and delta just below the KK̄ threshold, (3) qq̄qq̄ bound states provide a model for the weak binding and color-singlet clustering observed in nuclei, and (4) there is no indication that this system has strong resonances.

  18. Quantitative structure-activity relationship modeling of polycyclic aromatic hydrocarbon mutagenicity by classification methods based on holistic theoretical molecular descriptors.

    Science.gov (United States)

    Gramatica, Paola; Papa, Ester; Marrocchi, Assunta; Minuti, Lucio; Taticchi, Aldo

    2007-03-01

    Various polycyclic aromatic hydrocarbons (PAHs), ubiquitous environmental pollutants, are recognized mutagens and carcinogens. A homogeneous set of mutagenicity data (TA98 and TA100, +S9) for 32 benzocyclopentaphenanthrenes/chrysenes was modeled by the quantitative structure-activity relationship classification methods k-nearest neighbor and classification and regression tree, using theoretical holistic molecular descriptors. A genetic algorithm provided the selection of the best subset of variables for modeling mutagenicity. The models were validated by leave-one-out and leave-50%-out approaches and show good performance, with sensitivity and specificity in the range of 90-100%. Mutagenicity assessment for these PAHs requires only a few theoretical descriptors of their molecular structure.
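
    A minimal sketch of the k-nearest-neighbor classification step with leave-one-out validation, using synthetic stand-ins for the 32 PAHs and their selected descriptors:

        import numpy as np
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(32, 5))      # 32 PAHs, 5 selected descriptors
        y = rng.integers(0, 2, size=32)   # mutagenic / non-mutagenic labels

        knn = KNeighborsClassifier(n_neighbors=3)
        acc = cross_val_score(knn, X, y, cv=LeaveOneOut()).mean()
        print(f"LOO accuracy: {acc:.2f}")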

  19. Quantitative hydro-geophysical monitoring and coupled modeling of a controlled irrigation experiment

    Science.gov (United States)

    Cassiani, Giorgio; Rossi, Matteo; Manoli, Gabriele; Pasetto, Damiano; Deiana, Rita; Ferraris, Stefano; Strobbia, Claudio; Putti, Mario

    2014-05-01

    Geophysical surveys provide useful indirect information on vadose zone processes. However, their ability to supply a quantitative description of subsurface phenomena remains to be proven. A controlled infiltration experiment is here extensively monitored by both ERT and GPR surveys. The experimental site is located near the campus of the Agricultural Faculty of the University of Turin, in Grugliasco, Italy. The experimental field was chosen for its flat and well-characterized subsoil geometry: the shallow vadose zone is composed of aeolian sand with homogeneous, isotropic properties. In these quasi-ideal conditions the geophysical data are carefully examined to extract the available information on water processes and to identify potentially misleading information. Field ERT data have been compared with numerical simulations using both a traditional uncoupled hydrogeophysical inversion and an innovative Bayesian framework for coupled hydrogeophysical modeling. The coupled data assimilation process is able to estimate reliable hydrological parameters and to reproduce the proper evolution of the water plume in the vadose zone. The uncoupled approach leads to misleading estimates of hydrological quantities, essentially due to the geophysical inversion procedure: the lack of constraints in the inversion process may generate artifacts in the geophysical parameter distributions, which translate into incorrect hydrological states. GPR data are used separately to analyze the capabilities and limitations of this technique in unsaturated environments. GPR surveys at the topographic surface can be misinterpreted without a clear understanding of wave propagation in the soil; where a straightforward interpretation of direct and reflected waves is not possible, guided propagation modes must be examined in depth to obtain useful information on fluid-flow dynamics. The results clearly demonstrate that two key points are

  20. Application of a Bayesian dominance model improves power in quantitative trait genome-wide association analysis.

    Science.gov (United States)

    Bennewitz, Jörn; Edel, Christian; Fries, Ruedi; Meuwissen, Theo H E; Wellmann, Robin

    2017-01-14

    Multi-marker methods, which fit all markers simultaneously, were originally tailored for genomic selection purposes, but have proven to be useful also in association analyses, especially the so-called BayesC Bayesian methods. In a recent study, BayesD extended BayesC towards accounting for dominance effects and improved prediction accuracy and persistence in genomic selection. The current study investigated the power and precision of BayesC and BayesD in genome-wide association studies by means of stochastic simulations and applied these methods to a dairy cattle dataset. The simulation protocol was designed to mimic the genetic architecture of quantitative traits as realistically as possible. Special emphasis was put on the joint distribution of the additive and dominance effects of causative mutations. Additive marker effects were estimated by BayesC, and additive and dominance effects by BayesD. The dependencies between additive and dominance effects were modelled in BayesD by choosing appropriate priors. A sliding-window approach was used: for each window, the window posterior probability of association (following R. Fernando) was calculated and used for inference. The power to map segregating causal effects and the mapping precision were assessed for various marker densities up to full sequence information and various window sizes. Power to map a QTL increased with higher marker densities and larger window sizes; this held true for both methods. Method BayesD had improved power compared to BayesC: the increase in power was between -2 and 8% for causative genes that explained more than 2.5% of the genetic variance. In addition, inspection of the estimates of genomic window dominance variance allowed for inference about the magnitude of dominance at significant associations, which remains hidden in BayesC analysis. Mapping precision was not substantially improved by BayesD. BayesD improved power, but precision only slightly. Application of BayesD needs large
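
    A sketch of how a window posterior probability of association can be computed from MCMC output: the fraction of posterior samples in which the window explains more than a threshold share of the genetic variance. The synthetic samples, the total variance and the 1% threshold are illustrative assumptions, not the study's settings.

        import numpy as np

        rng = np.random.default_rng(0)
        # Posterior effect samples: (n_mcmc_samples, n_markers_in_window).
        effects = rng.normal(0.0, 0.05, size=(2000, 10))
        effects[:, 3] += 0.3            # one causative marker in the window

        window_var = (effects ** 2).sum(axis=1)   # per-sample window variance
        total_var = 5.0                           # assumed total genetic variance
        wppa = np.mean(window_var / total_var > 0.01)  # threshold: 1 % of variance
        print(wppa)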

  1. The influence of pulleys on the quantitative characteristics of medial rectus muscle recessions: the torque vector model.

    Science.gov (United States)

    Miller, Aaron M; Mims, James L

    2006-08-01

    To develop a new pulley-based torque vector mathematical model for medial rectus muscle recessions and compare it, based on known clinical characteristics, to the currently accepted non-pulley length-tension model. The following quantitative characteristics of the results of bilateral medial rectus muscle recessions were chosen for study, to see whether the new torque vector model or the classic length-tension model would better predict them: (1) larger bilateral medial rectus muscle recessions produce more effect per millimeter, with the dose-response curve approximating an exponential shape; (2) the exact location of the preplaced medial rectus muscle suture prior to muscle disinsertion in recessions has minimal effect on the postoperative ocular alignment; and (3) medial rectus muscle recessions of more than 8 mm are likely to produce an early consecutive exotropia. Based on the documented location of the medial rectus muscle pulley, the change in the torque vector per millimeter of medial rectus muscle recession was calculated and shown to have an exponential shape. For all three of the quantitative characteristics chosen, the torque vector model appears to better predict the results of medial rectus muscle recessions when compared with the length-tension model. Many quantitative characteristics of medial rectus muscle recessions are better explained by the torque vector model than by the classical length-tension model, and support the presence, location, and function of the medial rectus muscle pulley. This new understanding of ocular motility mechanics may influence surgical technique and introduce new surgical considerations for correction of ocular motility disorders.

  2. Electrophoretic deposition: a quantitative model for particle deposition and binder formation from alcohol-based suspensions

    NARCIS (Netherlands)

    Beer, De E.; Duval, J.F.L.; Meulenkamp, E.A.

    2000-01-01

    We investigated electrophoretic deposition from a suspension containing positively charged particles, isopropanol, water, and Mg(NO3)2, with the aim of describing the deposition rates of the particles and Mg(OH)2, which is formed due to chemical reactions at the electrode, in terms of quantitative

  3. Modeling optical behavior of birefringent biological tissues for evaluation of quantitative polarized light microscopy

    NARCIS (Netherlands)

    Turnhout, van M.C.; Kranenbarg, S.; Leeuwen, van J.L.

    2009-01-01

    Quantitative polarized light microscopy (qPLM) is a popular tool for the investigation of birefringent architectures in biological tissues. Collagen, the most abundant protein in mammals, is such a birefringent material. Interpretation of results of qPLM in terms of collagen network architecture and

  4. Economic analysis of light brown apple moth using GIS and quantitative modeling

    Science.gov (United States)

    Glenn Fowler; Lynn Garrett; Alison Neeley; Roger Magarey; Dan Borchert; Brian. Spears

    2011-01-01

    We conducted an economic analysis of the light brown apple moth (LBAM) (Epiphyas postvittana (Walker)), whose presence in California has resulted in a regulatory program. Our objective was to quantitatively characterize the economic costs to apple, grape, orange, and pear crops that would result from LBAM's introduction into the continental...

  5. Quantitative methanol-burning lung model for validating gas-exchange measurements over wide ranges of FIO2.

    Science.gov (United States)

    Miodownik, S; Melendez, J; Carlon, V A; Burda, B

    1998-06-01

    The methanol-burning lung model has been used as a technique for generating a predictable ratio of carbon dioxide production (VCO2) to oxygen consumption (VO2) or respiratory quotient (RQ). Although an accurate RQ can be generated, quantitatively predictable and adjustable VO2 and VCO2 cannot be generated. We describe a new burner device in which the combustion rate of methanol is always equal to the infusion rate of fuel over an extended range of O2 concentrations. This permits the assembly of a methanol-burning lung model that is usable with O2 concentrations up to 100% and provides continuously adjustable and quantitative VO2 (69-1,525 ml/min) and VCO2 (46-1,016 ml/min) at a RQ of 0.667.
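
    The RQ of 0.667 follows directly from the stoichiometry of complete methanol combustion, which fixes the ratio of CO2 produced to O2 consumed:

        2\,\mathrm{CH_3OH} + 3\,\mathrm{O_2} \longrightarrow 2\,\mathrm{CO_2} + 4\,\mathrm{H_2O},
        \qquad
        \mathrm{RQ} = \frac{\dot V_{\mathrm{CO_2}}}{\dot V_{\mathrm{O_2}}} = \frac{2}{3} \approx 0.667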

  6. Matlab based automatization of an inverse surface temperature modelling procedure for Greenland ice cores using an existing firn densification and heat diffusion model

    Science.gov (United States)

    Döring, Michael; Kobashi, Takuro; Kindler, Philippe; Guillevic, Myriam; Leuenberger, Markus

    2016-04-01

    In order to study Northern Hemisphere (NH) climate interactions and variability, access to high-resolution surface temperature records of the Greenland ice sheet is an essential requirement. For example, understanding the causes of changes in the strength of the Atlantic meridional overturning circulation (AMOC) and the related effects for the NH [Broecker et al. (1985); Rahmstorf (2002)], or the origin of and processes underlying the so-called Dansgaard-Oeschger events in glacial conditions [Johnsen et al. (1992); Dansgaard et al. (1982)], demands accurate and reproducible temperature data. To reveal the surface temperature history, it is suitable to use the isotopic composition of nitrogen (δ15N) from ancient air extracted from ice cores drilled at the Greenland ice sheet. The measured δ15N record of an ice core can be used as a paleothermometer because the isotopic composition of nitrogen in the atmosphere is nearly constant at orbital timescales and changes only through firn processes [Severinghaus et al. (1998); Mariotti (1983)]. To reconstruct the surface temperature for a specific drilling site, firn models describing gas and temperature diffusion throughout the ice sheet are necessary. For this, an existing firn densification and heat diffusion model [Schwander et al. (1997)] is used. A theoretical δ15N record is generated for different temperature and accumulation-rate scenarios and compared with measurement data in terms of the mean square error (MSE), which finally leads to an optimization problem, namely finding the minimal MSE. The goal of the presented study is a Matlab-based automatization of this inverse modelling procedure. The crucial point is to find the temperature and accumulation-rate input time series that minimize the MSE. For that, we follow two approaches. The first one is a Monte Carlo type input generator which varies each point in the input time series and calculates the MSE. Then the solutions that fulfil a given limit
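
    A compact sketch of the Monte Carlo input-generator idea, with a trivial placeholder standing in for the Schwander et al. firn densification and heat diffusion model: perturb the temperature history at random, rerun the forward model, and keep perturbations that lower the data-model MSE.

        import numpy as np

        def firn_model(temps):
            # Placeholder forward model mapping a surface-temperature
            # history to a d15N series (NOT the real firn physics).
            return 0.01 * (temps - temps.mean())

        rng = np.random.default_rng(0)
        true_temps = np.linspace(-35.0, -25.0, 500)
        measured = firn_model(true_temps) + rng.normal(0, 1e-4, 500)

        def mse(a, b):
            return np.mean((a - b) ** 2)

        temps = np.full(500, -30.0)                 # initial guess, deg C
        best = mse(firn_model(temps), measured)
        for _ in range(20_000):
            trial = temps + rng.normal(0.0, 0.05, temps.size)
            err = mse(firn_model(trial), measured)
            if err < best:                          # accept only improvements
                temps, best = trial, err
        print(best)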

  7. Quantitative modelling of the degradation processes of cement grout. Project CEMMOD

    Energy Technology Data Exchange (ETDEWEB)

    Grandia, Fidel; Galindez, Juan-Manuel; Arcos, David; Molinero, Jorge (Amphos21 Consulting S.L., Barcelona (Spain))

    2010-05-15

    Grout cement is planned to be used in the sealing of water-conducting fractures in the deep geological storage of spent nuclear fuel waste. The integrity of such cementitious materials should be ensured over a time frame of decades to a hundred years at minimum. However, their durability must be quantified, since grout degradation may jeopardize the stability of other components in the repository through the potential release of hyperalkaline plumes. Model prediction of cement alteration has been challenging in recent years, mainly due to the difficulty of reproducing the progressive change in composition of the calcium-silicate-hydrate (CSH) compounds as alteration proceeds. In general, data obtained from laboratory experiments show a rather similar dependence between the pH of the pore water and the Ca-Si ratio of the CSH phases: the Ca-Si ratio decreases as CSH is progressively replaced by Si-enriched phases. An elegant and reasonable approach is the use of solid solution models, even keeping in mind that CSH phases are not crystalline solids but gels. An additional obstacle is the uncertainty in the initial composition of the grout to be considered in the calculations, because usually only the recipe of the low-pH clinker is provided by the manufacturer. The hydration process leads to the formation of new phases and, importantly, creates porosity. A number of solid solution models have been reported in the literature. Most of them assume a strongly non-ideal binary solid solution series to account for the observed changes in the Ca-Si ratios of CSH. However, it is very difficult to reproduce the degradation of CSH over the whole compositional range (commonly Ca/Si = 0.5-2.5) by considering only two end-members and fixed non-ideality parameters. Models with multiple non-ideal end-members, with interaction parameters as a function of solid composition, can solve the problem, but these cannot be managed in the existing codes of reactive transport.

  8. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimise the least-squares difference between measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Also, thanks to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, finding a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
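
    A sketch of the grid-search idea using a one-tissue compartment model as a stand-in kinetic model: candidate (K1, k2) pairs are scored against a noisy target curve in parallel threads. The model, grid and input function are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor
        from itertools import product

        t = np.linspace(0.0, 60.0, 61)                # minutes
        input_fn = np.exp(-t / 10.0)                  # stand-in plasma input

        def one_tissue(K1, k2):
            # Tissue curve as a discrete convolution of the input function
            # with K1 * exp(-k2 t) (dt factor omitted in this sketch).
            return K1 * np.convolve(np.exp(-k2 * t), input_fn)[: t.size]

        rng = np.random.default_rng(0)
        target = one_tissue(0.3, 0.1) + rng.normal(0.0, 0.05, t.size)

        grid = list(product(np.linspace(0.05, 0.6, 12), np.linspace(0.02, 0.3, 15)))

        def score(params):
            return np.mean((one_tissue(*params) - target) ** 2)

        with ThreadPoolExecutor() as pool:            # multi-thread grid search
            errors = list(pool.map(score, grid))
        print(grid[int(np.argmin(errors))])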

  9. Novel Uses of In Vitro Data to Develop Quantitative Biological Activity Relationship Models for in Vivo Carcinogenicity Prediction.

    Science.gov (United States)

    Pradeep, Prachi; Povinelli, Richard J; Merrill, Stephen J; Bozdag, Serdar; Sem, Daniel S

    2015-04-01

    The availability of large in vitro datasets enables better insight into the mode of action of chemicals and better identification of potential mechanism(s) of toxicity. Several studies have shown that not all in vitro assays can contribute as equal predictors of in vivo carcinogenicity for the development of hybrid Quantitative Structure Activity Relationship (QSAR) models. We propose two novel approaches for the use of mechanistically relevant in vitro assay data in the identification of relevant biological descriptors and the development of Quantitative Biological Activity Relationship (QBAR) models for carcinogenicity prediction. We demonstrate that in vitro assay data can be used to develop QBAR models for in vivo carcinogenicity prediction via two case studies corroborated with firm scientific rationale. The case studies demonstrate the similarities between QBAR and QSAR modeling in: (i) the selection of relevant descriptors to be used in the machine learning algorithm, and (ii) the development of a computational model that maps chemical or biological descriptors to a toxic endpoint. The results of both case studies show: (i) improved accuracy and sensitivity, which are especially desirable under regulatory requirements, and (ii) overall adherence to the OECD/REACH guidelines. Such mechanism-based models can be used alongside QSAR models for the prediction of mechanistically complex toxic endpoints. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimise the least-squares difference between measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Also, thanks to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, finding a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  11. Multitasking models for quantitative structure-biological effect relationships: current status and future perspectives to speed up drug discovery.

    Science.gov (United States)

    Speck-Planche, Alejandro; Cordeiro, Maria Natália Dias Soeiro

    2015-03-01

    Drug discovery is the process of designing new candidate medications for the treatment of diseases. Over many years, drugs have been identified serendipitously. Nowadays, chemoinformatics has emerged as a great ally, helping to rationalize drug discovery. In this sense, quantitative structure-activity relationships (QSAR) models have become complementary tools, permitting the efficient virtual screening for a diverse number of pharmacological profiles. Despite the applications of current QSAR models in the search for new drug candidates, many aspects remain unresolved. To date, classical QSAR models are able to predict only one type of biological effect (activity, toxicity, etc.) against only one type of generic target. The present review discusses innovative and evolved QSAR models, which are focused on multitasking quantitative structure-biological effect relationships (mtk-QSBER). Such models can integrate multiple kinds of chemical and biological data, allowing the simultaneous prediction of pharmacological activities, toxicities and/or other safety profiles. The authors strongly believe, given the potential of mtk-QSBER models to simultaneously predict the dissimilar biological effects of chemicals, that they have much value as in silico tools for drug discovery. Indeed, these models can speed up the search for efficacious drugs in a number of areas, including fragment-based drug discovery and drug repurposing.

  12. Target based drug discovery for beta-globin disorders: Drug target prediction using quantitative modelling with hybrid functional Petri nets

    OpenAIRE

    Mehraei, Mani; Bashirov, Rza; Tüzmen, Şükrü

    2016-01-01

    Recent molecular studies provide important clues into treatment of beta-thalassemia, sickle-cell anaemia and other beta-globin disorders revealing that increased production of fetal hemoglobin, that is normally suppressed in adulthood, can ameliorate the severity of these diseases. In this paper, we present a novel approach for drug target prediction for beta-globin disorders. Our approach is centered upon quantitative modelling of interactions in human fetal-to-adult hemoglobin switch net...

  13. Comparative Analysis of Single-Species and Polybacterial Wound Biofilms Using a Quantitative, In Vivo, Rabbit Ear Model

    Science.gov (United States)

    2012-08-08

    Seth, Akhil K; et al. (Northwestern University, Chicago, Illinois, United States of America; Microbiology Branch, US Army Dental and Trauma Research Detachment, Institute of Surgical Research)

  14. A Combinational Strategy of Model Disturbance and Outlier Comparison to Define Applicability Domain in Quantitative Structural Activity Relationship.

    Science.gov (United States)

    Yan, Jun; Zhu, Wei-Wei; Kong, Bo; Lu, Hong-Bing; Yun, Yong-Huan; Huang, Jian-Hua; Liang, Yi-Zeng

    2014-08-01

    In order to define an applicability domain for quantitative structure-activity relationship modeling, a combined strategy of model disturbance and outlier comparison is developed. An indicator named the model disturbance index is defined to estimate the prediction error. Moreover, information on the outliers in the training set is used to filter out unreliable samples in the test set based on "structural similarity". Chromatographic retention index data were used to investigate this approach. A relationship between the model disturbance index and the prediction error can be found. Also, the comparison between the outlier set and the test set can provide additional information about which unknown samples should receive more attention. A novel technique based on model population analysis was used to evaluate the validity of the applicability domain. Finally, three commonly used methods, i.e. leverage, the descriptor-range-based method and the model perturbation method, were compared with the proposed approach. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
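
    One illustrative reading of the model-disturbance idea (not the authors' exact formula): retrain on resampled training sets and take the spread of predictions for a query sample as a reliability indicator, with a large spread suggesting the sample lies outside the reliable domain.

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        X, y = rng.normal(size=(80, 10)), rng.normal(size=80)
        x_new = rng.normal(size=(1, 10))    # query sample to be judged

        preds = []
        for _ in range(200):
            idx = rng.integers(0, len(y), len(y))   # resample training set
            m = Ridge(alpha=1.0).fit(X[idx], y[idx])
            preds.append(m.predict(x_new)[0])

        disturbance = np.std(preds)         # spread across disturbed models
        print(disturbance)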

  15. Defect evolution in cosmology and condensed matter quantitative analysis with the velocity-dependent one-scale model

    CERN Document Server

    Martins, C J A P

    2016-01-01

    This book sheds new light on topological defects in widely differing systems, using the Velocity-Dependent One-Scale Model to better understand their evolution. Topological defects – cosmic strings, monopoles, domain walls or others - necessarily form at cosmological (and condensed matter) phase transitions. If they are stable and long-lived they will be fossil relics of higher-energy physics. Understanding their behaviour and consequences is a key part of any serious attempt to understand the universe, and this requires modelling their evolution. The velocity-dependent one-scale model is the only fully quantitative model of defect network evolution, and the canonical model in the field. This book provides a review of the model, explaining its physical content and describing its broad range of applicability.

  16. Quantitative gene-gene and gene-environment mapping for leaf shape variation using tree-based models.

    Science.gov (United States)

    Fu, Guifang; Dai, Xiaotian; Symanzik, Jürgen; Bushman, Shaun

    2017-01-01

    Leaf shape traits have long been a focus of many disciplines, but the complex genetic and environmental interactive mechanisms regulating leaf shape variation have not yet been investigated in detail. The question of the respective roles of genes and environment and how they interact to modulate leaf shape is a thorny evolutionary problem, and sophisticated methodology is needed to address it. In this study, we investigated a framework-level approach that inputs shape image photographs and genetic and environmental data, and then outputs the relative importance ranks of all variables after integrating shape feature extraction, dimension reduction, and tree-based statistical models. The power of the proposed framework was confirmed by simulation and a Populus szechuanica var. tibetica data set. This new methodology resulted in the detection of novel shape characteristics, and also confirmed some previous findings. The quantitative modeling of a combination of polygenetic, plastic, epistatic, and gene-environment interactive effects, as investigated in this study, will improve the discernment of quantitative leaf shape characteristics, and the methods are ready to be applied to other leaf morphology data sets. Unlike the majority of approaches in the quantitative leaf shape literature, this framework-level approach is data-driven, without assuming any pre-known shape attributes, landmarks, or model structures. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
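
    A minimal sketch of the tree-based ranking step: fit a random forest to genetic and environmental variables against an extracted shape feature, then rank variables by impurity importance. The variable names and data are synthetic placeholders, not from the Populus dataset.

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "marker_A": rng.integers(0, 3, 200),      # genotype codes 0/1/2
            "marker_B": rng.integers(0, 3, 200),
            "elevation": rng.normal(3000, 300, 200),  # environment variable
        })
        shape_pc1 = (0.6 * df.marker_A - 0.002 * df.elevation
                     + rng.normal(0, 0.3, 200))       # extracted shape feature

        rf = RandomForestRegressor(n_estimators=300, random_state=0)
        rf.fit(df, shape_pc1)
        print(sorted(zip(rf.feature_importances_, df.columns), reverse=True))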

  17. Physiological role of Kv1.3 channel in T lymphocyte cell investigated quantitatively by kinetic modeling.

    Directory of Open Access Journals (Sweden)

    Panpan Hou

    Kv1.3 is a delayed rectifier channel abundant in human T lymphocytes. Chronic inflammatory and autoimmune disorders lead to the over-expression of Kv1.3 in T cells. To quantitatively study the regulatory mechanism and physiological function of Kv1.3 in T cells, it is necessary to have a precise kinetic model of Kv1.3. In this study, we first established a kinetic model capable of precisely replicating all the kinetic features of Kv1.3 channels, and then constructed a T-cell model composed of ion channels, including the Ca2+-release-activated calcium (CRAC) channel, intermediate-conductance K+ (IK) channel, TASK channel and Kv1.3 channel, for quantitatively simulating the changes in membrane potential and local Ca2+ signaling messengers during T-cell activation. Based on experimental data from current-clamp recordings, we successfully demonstrated that Kv1.3 dominates the membrane potential of T cells and thereby controls Ca2+ influx via the CRAC channel. Our results revealed that deficient expression of the Kv1.3 channel produces a weaker Ca2+ signal, leading to less efficient secretion. This was the first successful attempt to simulate membrane potential in non-excitable cells, laying a solid basis for quantitatively studying the regulatory mechanisms and physiological roles of channels in non-excitable cells.

  18. Exploring international clinical education in US-based programs: identifying common practices and modifying an existing conceptual model of international service-learning.

    Science.gov (United States)

    Pechak, Celia M; Black, Jill D

    2014-02-01

    Increasingly, physical therapist students complete part of their clinical training outside of their home country. This trend is understudied. The purposes of this study were to: (1) explore, in depth, various international clinical education (ICE) programs; and (2) determine whether the Conceptual Model of Optimal International Service-Learning (ISL) could be applied or adapted to represent ICE. Qualitative content analysis was used to analyze ICE programs and to consider modification of an existing ISL conceptual model for ICE. Fifteen faculty in the United States currently involved in ICE were interviewed. The interview transcriptions were systematically analyzed by two researchers. Three models of ICE practice emerged: (1) a traditional clinical education model, where local clinical instructors (CIs) focus on the development of clinical skills; (2) a global health model, where US-based CIs provide the supervision in the international setting and learning outcomes emphasize global health and cultural competency; and (3) an ICE/ISL hybrid, where US-based CIs supervise the students and the foci include community service. Additionally, the data supported revising the ISL model's essential core conditions, components and consequences for ICE. The ICE conceptual model may provide a useful framework for future ICE program development and research.

  19. A Quantitative bgl Operon Model for E. coli Requires BglF Conformational Change for Sugar Transport

    Science.gov (United States)

    Chopra, Paras; Bender, Andreas

    The bgl operon is responsible for the metabolism of β-glucoside sugars such as salicin or arbutin in E. coli. Its regulatory system involves both positive and negative feedback mechanisms and can be assumed to be more complex than that of the more closely studied lac and trp operons. We have developed a quantitative model for the regulation of the bgl operon which is subject to in silico experiments investigating its behavior under different hypothetical conditions. Upon administration of 5 mM salicin as an inducer, our model shows 80-fold induction, which compares well with the 60-fold induction measured experimentally. Under practical conditions 5-10 mM inducer is employed, which is in line with the minimum inducer concentration of 1 mM required by our model. The necessity of BglF conformational change for sugar transport has been hypothesized previously, and in line with those hypotheses our model shows only minor induction if conformational change is not allowed. Overall, this first quantitative model for the bgl operon gives reasonable predictions that are close to experimental results (where measured). It will be further refined as values of the parameters are determined experimentally. The model was developed in Systems Biology Markup Language (SBML) and it is available from the authors and from the Biomodels repository [www.ebi.ac.uk/biomodels].
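
    The sketch below is a generic two-variable operon ODE with positive feedback, offered only to illustrate how fold-induction can be read off such a model; it is not the authors' SBML implementation and all parameters are invented.

        import numpy as np
        from scipy.integrate import odeint

        def bgl(y, t, salicin):
            mrna, enzyme = y
            # Inducer-dependent positive feedback (antiterminator-like term).
            activation = salicin * enzyme / (1.0 + salicin * enzyme)
            dmrna = 0.01 + 1.0 * activation - 0.5 * mrna   # basal + induced - decay
            denzyme = 0.8 * mrna - 0.1 * enzyme
            return [dmrna, denzyme]

        t = np.linspace(0, 200, 400)
        uninduced = odeint(bgl, [0.02, 0.1], t, args=(0.0,))[-1, 1]
        induced = odeint(bgl, [0.02, 0.1], t, args=(5.0,))[-1, 1]
        print(f"fold induction ~ {induced / uninduced:.0f}")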

  20. 2D Hydrodynamic Based Logic Modeling Tool for River Restoration Decision Analysis: A Quantitative Approach to Project Prioritization

    Science.gov (United States)

    Bandrowski, D.; Lai, Y.; Bradley, N.; Gaeuman, D. A.; Murauskas, J.; Som, N. A.; Martin, A.; Goodman, D.; Alvarez, J.

    2014-12-01

    In the field of river restoration sciences there is a growing need for analytical modeling tools and quantitative processes to help identify and prioritize project sites. 2D hydraulic models have become more common in recent years, and with the availability of robust data sets and computing technology it is now possible to evaluate large river systems at the reach scale. The Trinity River Restoration Program is now analyzing a 40-mile segment of the Trinity River to determine priority and implementation sequencing for its Phase II rehabilitation projects. A comprehensive approach and quantitative tool has recently been developed to analyze this complex river system, referred to as 2D-Hydrodynamic Based Logic Modeling (2D-HBLM). This tool utilizes various hydraulic output parameters combined with biological, ecological, and physical metrics at user-defined spatial scales. These metrics and their associated algorithms are the underpinnings of the 2D-HBLM habitat module used to evaluate geomorphic characteristics, riverine processes, and habitat complexity. The habitat metrics are further integrated into a comprehensive Logic Model framework to perform statistical analyses to assess project prioritization. The Logic Model will analyze various potential project sites by evaluating connectivity using principal component methods. The 2D-HBLM tool will help inform management and decision makers by using a quantitative process to optimize desired response variables while balancing important limiting factors in determining the highest priority locations within the river corridor to implement restoration projects. Effective river restoration prioritization starts with well-crafted goals that identify the biological objectives, address underlying causes of habitat change, and recognize that social, economic, and land use limiting factors may constrain restoration options (Beechie et al. 2008). Applying natural resources management actions, like restoration prioritization, is

  1. Modelling a quantitative ensilability index adapted to forages from wet temperate areas

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Fernandez, A.; Soldado, A.; Roza-Delgado, B. de la; Vicente, F.; Gonzalez-Arrojo, M. A.; Argamenteria, A.

    2013-06-01

    Forage ensilability mainly depends on dry matter (DM), water-soluble carbohydrates (WSC) and buffer capacity (BC) values at harvest time. Based on these parameters and on a collection of 208 forages of known ensilability characteristics, including short- and long-term meadows for grazing, Italian ryegrass, maize, triticale, soybean and faba bean crops, and samples from cereal-legume associations, the objective of this study was to define a quantitative ensilability index (EI) based on a relationship between DM, WSC and BC contents at harvest date, adapted to the characteristics of fodder from wet temperate areas. For this purpose, a discriminant procedure was used to define the EI as a linear combination of DM, WSC and BC of forages at harvest time. The calculated quantitative indexes distinguish five successive ranges of ensilability: high ensilability (EI > +28), medium-high ensilability (+9 < EI ≤ +28), medium ensilability (-28 < EI ≤ +9), medium-low ensilability (-47 ≤ EI ≤ -28) and low ensilability (EI < -47). This quantitative index was externally evaluated and 100% of samples were successfully classified. (Author) 28 refs.
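
    The abstract gives the five EI thresholds but not the discriminant coefficients, so in the sketch below the linear combination uses placeholder coefficients; only the classification thresholds are taken from the paper.

        # Hypothetical EI = a*DM + b*WSC + c*BC + intercept; coefficients are
        # placeholders, the class boundaries follow the published ranges.
        def ensilability_index(dm, wsc, bc, coef=(0.8, 1.2, -0.9), intercept=0.0):
            a, b, c = coef
            return a * dm + b * wsc + c * bc + intercept

        def classify(ei):
            if ei > 28:
                return "high"
            if ei > 9:
                return "medium high"
            if ei > -28:
                return "medium"
            if ei >= -47:
                return "medium low"
            return "low"

        print(classify(ensilability_index(dm=25.0, wsc=12.0, bc=35.0)))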

  2. Multivariate regression models for the simultaneous quantitative analysis of calcium and magnesium carbonates and magnesium oxide through drifts data

    Directory of Open Access Journals (Sweden)

    Marder Luciano

    2006-01-01

    In the present work, multivariate regression models were developed for the quantitative analysis of ternary systems using Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFTS), to determine the concentration by weight of calcium carbonate, magnesium carbonate and magnesium oxide. Nineteen spectra of standard samples, previously defined in a ternary diagram by mixture design, were prepared and their mid-infrared diffuse reflectance spectra were recorded. The partial least squares (PLS) regression method was applied to the model. The spectra set was preprocessed by either mean-centering and variance-scaling (model 2) or mean-centering only (model 1). The results, based on the prediction performance on the external validation set expressed by the RMSEP (root mean square error of prediction), demonstrated that it is possible to develop good models to simultaneously determine calcium carbonate, magnesium carbonate and magnesium oxide content in powdered samples, which can be used in the study of the thermal decomposition of dolomite rocks.
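
    A short sketch of the PLS step on synthetic stand-ins for the DRIFTS spectra: mixture fractions from a ternary design generate spectra, a three-component PLS model is fitted, and RMSEP is computed on a held-out set. The data and noise level are invented for illustration.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        C = rng.dirichlet(np.ones(3), size=19)        # ternary mixture fractions
        S = rng.normal(size=(3, 400))                 # pure-component "spectra"
        X = C @ S + rng.normal(0, 0.01, (19, 400))    # mixture spectra + noise

        Xtr, Xte, Ctr, Cte = train_test_split(X, C, test_size=5, random_state=0)
        pls = PLSRegression(n_components=3)           # centers/scales by default
        pls.fit(Xtr, Ctr)
        rmsep = np.sqrt(np.mean((pls.predict(Xte) - Cte) ** 2, axis=0))
        print(rmsep)                                  # one RMSEP per component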

  3. UPDATE February 2012 - The Food Crises: Predictive validation of a quantitative model of food prices including speculators and ethanol conversion

    CERN Document Server

    Lagi, Marco; Bertrand, Karla Z; Bar-Yam, Yaneer

    2012-01-01

    Increases in global food prices have led to widespread hunger and social unrest---and an imperative to understand their causes. In a previous paper published in September 2011, we constructed for the first time a dynamic model that quantitatively agreed with food prices. Specifically, the model fit the FAO Food Price Index time series from January 2004 to March 2011, inclusive. The results showed that the dominant causes of price increases during this period were investor speculation and ethanol conversion. The model included investor trend following as well as shifting between commodities, equities and bonds to take advantage of increased expected returns. Here, we extend the food prices model to January 2012, without modifying the model but simply continuing its dynamics. The agreement is still precise, validating both the descriptive and predictive abilities of the analysis. Policy actions are needed to avoid a third speculative bubble that would cause prices to rise above recent peaks by the end of 2012.

  4. Preliminary study of the influence of different modelling choices and materials properties uncertainties on the seismic assessment of an existing RC school building

    Science.gov (United States)

    Maracchini, Gianluca; Clementi, Francesco; Quagliarini, Enrico; Lenci, Stefano; Monni, Francesco

    2017-07-01

    This paper studies the influence of some aleatory and epistemic uncertainties on the seismic behaviour of an existing RC school building through a codified sensitivity analysis that uses pushover analyses and a logic-tree approach. The considered epistemic uncertainties, i.e. diaphragm stiffness and the modelling of stairs, do not appear to influence the final assessment in terms of the seismic risk index. Conversely, the aleatory ones, i.e. the concrete and steel mechanical properties, strongly affect the index. For this reason, investigations and tests should focus on the study of the mechanical properties and, in particular, on the columns' concrete mechanical properties, which have the largest impact on the building's seismic response.

  5. ADMET Evaluation in Drug Discovery. Part 17: Development of Quantitative and Qualitative Prediction Models for Chemical-Induced Respiratory Toxicity.

    Science.gov (United States)

    Lei, Tailong; Chen, Fu; Liu, Hui; Sun, Huiyong; Kang, Yu; Li, Dan; Li, Youyong; Hou, Tingjun

    2017-07-03

    As a dangerous end point, respiratory toxicity can cause serious adverse health effects and even death. Meanwhile, it is a common and traditional issue in occupational and environmental protection. Pharmaceutical and chemical industries have a strong urge to develop precise and convenient computational tools to evaluate the respiratory toxicity of compounds as early as possible. Most of the reported theoretical models were developed based on respiratory toxicity data sets with one single symptom, such as respiratory sensitization, and therefore these models may not afford reliable predictions for toxic compounds with other respiratory symptoms, such as pneumonia or rhinitis. Here, based on a diverse data set of mouse intraperitoneal respiratory toxicity characterized by multiple symptoms, a number of quantitative and qualitative prediction models with high reliability were developed by machine learning approaches. First, a four-tier dimension reduction strategy was employed to find an optimal set of 20 molecular descriptors for model building. Then, six machine learning approaches were used to develop the prediction models, including relevance vector machine (RVM), support vector machine (SVM), regularized random forest (RRF), extreme gradient boosting (XGBoost), naïve Bayes (NB), and linear discriminant analysis (LDA). Among all of the models, the SVM regression model shows the most accurate quantitative predictions for the test set (q²ext = 0.707), and the XGBoost classification model achieves the most accurate qualitative predictions for the test set (MCC of 0.644, AUC of 0.893, and global accuracy of 82.62%). The application domains were analyzed, and all of the tested compounds fall within the application domain coverage. We also examined the structural features of the compounds and important fragments with large prediction errors. In conclusion, the SVM regression model and the XGBoost classification model can be employed as accurate prediction tools.
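
    A sketch of the two best-performing model types named above, on synthetic descriptors; scikit-learn's gradient boosting stands in for XGBoost to keep the example dependency-light, and the data are invented placeholders.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 20))            # 20 selected descriptors
        y_cont = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, 500)
        y_bin = (y_cont > 0).astype(int)          # toxic / non-toxic labels

        # Qualitative model (XGBoost stand-in).
        Xtr, Xte, ytr, yte = train_test_split(X, y_bin, random_state=0)
        clf = GradientBoostingClassifier().fit(Xtr, ytr)
        print("accuracy:", clf.score(Xte, yte))

        # Quantitative model (SVM regression).
        Xtr, Xte, ytr, yte = train_test_split(X, y_cont, random_state=0)
        reg = SVR(kernel="rbf").fit(Xtr, ytr)
        print("R^2:", reg.score(Xte, yte))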

  6. Quantitative Analysis of Variability and Uncertainty in Environmental Data and Models. Volume 1. Theory and Methodology Based Upon Bootstrap Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Frey, H. Christopher [North Carolina State University, Raleigh, NC (United States); Rhodes, David S. [North Carolina State University, Raleigh, NC (United States)

    1999-04-30

    This is Volume 1 of a two-volume set of reports describing work conducted at North Carolina State University sponsored by Grant Number DE-FG05-95ER30250 by the U.S. Department of Energy. The title of the project is “Quantitative Analysis of Variability and Uncertainty in Acid Rain Assessments.” The work conducted under sponsorship of this grant pertains primarily to two main topics: (1) development of new methods for quantitative analysis of variability and uncertainty applicable to any type of model; and (2) analysis of variability and uncertainty in the performance, emissions, and cost of electric power plant combustion-based NOx control technologies. These two main topics are reported separately in Volumes 1 and 2.
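
    A minimal bootstrap-simulation sketch of the kind of uncertainty quantification the report develops: resample an (illustrative) emissions dataset with replacement and read a 95% confidence interval for the mean off the bootstrap distribution.

        import numpy as np

        rng = np.random.default_rng(0)
        emissions = rng.lognormal(mean=1.0, sigma=0.6, size=40)  # sample data

        boot_means = np.array([
            rng.choice(emissions, size=emissions.size, replace=True).mean()
            for _ in range(5000)
        ])
        lo, hi = np.percentile(boot_means, [2.5, 97.5])
        print(f"mean = {emissions.mean():.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")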

  7. Modeling real-time PCR kinetics: Richards reparametrized equation for quantitative estimation of European hake (Merluccius merluccius).

    Science.gov (United States)

    Sánchez, Ana; Vázquez, José A; Quinteiro, Javier; Sotelo, Carmen G

    2013-04-10

    Real-time PCR is the most sensitive method for the detection and precise quantification of specific DNA sequences, but it is not usually applied as a quantitative method for seafood. In general, benchmark techniques, mainly the cycle threshold (Ct), are the routine method for quantitative estimations, but they are not the most precise approaches for a standard assay. In the present work, amplification data from European hake (Merluccius merluccius) DNA samples were accurately modeled by three reparametrized sigmoid equations, and the lag-phase parameter (λc) from the four-parameter Richards equation was demonstrated to be the perfect substitute for Ct in PCR quantification. The concentrations of primers and probes were subsequently optimized by means of that selected kinetic parameter. Finally, the linear correlation between DNA concentration and λc was also confirmed.
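
    The sketch below fits a reparametrised sigmoid with an explicit lag-phase parameter to synthetic amplification data; a Zwietering-style modified logistic stands in for the paper's four-parameter Richards equation, and all values are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid_lag(t, fmax, mu, lam):
            """fmax: plateau; mu: max slope; lam: lag phase (cycles)."""
            return fmax / (1.0 + np.exp(4.0 * mu / fmax * (lam - t) + 2.0))

        cycles = np.arange(1, 41, dtype=float)
        rng = np.random.default_rng(0)
        signal = sigmoid_lag(cycles, 1.0, 0.2, 18.0) + rng.normal(0, 0.01, 40)

        (fmax, mu, lam), _ = curve_fit(sigmoid_lag, cycles, signal,
                                       p0=(1.0, 0.1, 15.0))
        print(f"lag parameter ~ {lam:.1f} cycles")  # plays the role of Ct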

  8. EXIST Perspective for SFXTs

    Science.gov (United States)

    Ubertini, Pietro; Sidoli, L.; Sguera, V.; Bazzano, A.

    2009-12-01

    Supergiant Fast X-ray Transients (SFXTs) are one of the most interesting (and unexpected) results of the INTEGRAL mission. They are a new class of HMXBs displaying short hard X-ray outbursts (lasting less than a day) characterized by fast flares (on timescales of a few hours) and a large dynamic range (10^3-10^4). The physical mechanism driving their peculiar behaviour is still unclear and highly debated: some models involve the structure of the supergiant companion's wind (likely clumpy, in a spherical or non-spherical geometry) and the orbital properties (wide separation with an eccentric or circular orbit), while others involve the properties of the neutron star itself and invoke either very low magnetic field values (B < 10^12 G) or very high ones (B > 10^14 G, magnetars). The picture is equally unclear from the observational point of view: no cyclotron lines have been detected in the spectra, so the strength of the neutron star magnetic field is unknown. Orbital periods have been measured in only 4 systems, spanning from 3.3 to 165 days. Even the duty cycle seems to differ from source to source. The Energetic X-ray Imaging Survey Telescope (EXIST), with its hard X-ray all-sky survey and greatly improved limiting sensitivity, will allow us to get a clearer picture of SFXTs. A complete census of their number is essential to enlarge the sample. Long-term, as continuous as possible X-ray monitoring is crucial to (1) obtain the duty cycle, (2) investigate the unknown orbital properties (separation, orbital period, eccentricity), (3) completely cover the whole outburst activity, and (4) search for cyclotron lines in the high-energy spectra. EXIST observations will provide crucial information to test the different models and shed light on the peculiar behaviour of SFXTs.

  9. A functional-structural model of rice linking quantitative genetic information with morphological development and physiological processes.

    Science.gov (United States)

    Xu, Lifeng; Henke, Michael; Zhu, Jun; Kurth, Winfried; Buck-Sorlin, Gerhard

    2011-04-01

    Although quantitative trait loci (QTL) analysis of yield-related traits for rice has developed rapidly, crop models using genotype information have been proposed only relatively recently. As a first step towards a generic genotype-phenotype model, we present here a three-dimensional functional-structural plant model (FSPM) of rice, in which some model parameters are controlled by functions describing the effect of main-effect and epistatic QTLs. The model simulates the growth and development of rice based on selected ecophysiological processes, such as photosynthesis (source process) and organ formation, growth and extension (sink processes). It was devised using GroIMP, an interactive modelling platform based on the Relational Growth Grammar formalism (RGG). RGG rules describe the course of organ initiation and extension resulting in final morphology. The link between the phenotype (as represented by the simulated rice plant) and the QTL genotype was implemented via a data interface between the rice FSPM and the QTLNetwork software, which computes predictions of QTLs from map data and measured trait data. Using plant height and grain yield, it is shown how QTL information for a given trait can be used in an FSPM, computing and visualizing the phenotypes of different lines of a mapping population. Furthermore, we demonstrate how modification of a particular trait feeds back on the entire plant phenotype via the physiological processes considered. We linked a rice FSPM to a quantitative genetic model, thereby employing QTL information to refine model parameters and to visualize the dynamics of development of the entire phenotype as a result of ecophysiological processes, including the trait(s) for which genetic information is available. Possibilities for further extension of the model, for example for ideotype breeding purposes, are discussed.
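
    The central idea, steering FSPM parameters with predicted QTL effects, can be sketched in a few lines. The snippet below is a hypothetical illustration of that linkage, not the GroIMP/QTLNetwork implementation; all QTL names, effect sizes, and the height-to-internode rule are invented.

```python
# Hypothetical sketch of driving an FSPM parameter from QTL predictions:
# a trait value is assembled from a population mean, additive main effects,
# and a pairwise epistatic term, then fed into a structural rule.
# All names and numbers are invented for illustration.

main_effects = {"q1": 2.1, "q2": -0.8, "q3": 1.3}     # cm per favourable allele
epistatic = {("q1", "q3"): 0.6}                        # cm, interaction effect

def predict_trait(genotype, mean=85.0):
    """Predicted plant height (cm) for a genotype coded as {QTL: -1 or +1}."""
    value = mean
    value += sum(eff * genotype[q] for q, eff in main_effects.items())
    value += sum(eff * genotype[a] * genotype[b]
                 for (a, b), eff in epistatic.items())
    return value

# Two lines of a mapping population differing at q1 and q3:
line_a = {"q1": +1, "q2": -1, "q3": +1}
line_b = {"q1": -1, "q2": -1, "q3": -1}
for name, g in [("line A", line_a), ("line B", line_b)]:
    height = predict_trait(g)
    internode_target = height / 14.0   # e.g. feed the FSPM's extension rule
    print(f"{name}: final height {height:.1f} cm, "
          f"internode target {internode_target:.2f} cm")
```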

  10. Developing a foundation for eco-epidemiological assessment of aquatic ecological status over large geographic regions utilizing existing data resources and models.

    Science.gov (United States)

    Kapo, Katherine E; Holmes, Christopher M; Dyer, Scott D; de Zwart, Dick; Posthuma, Leo

    2014-07-01

    Eco-epidemiological studies utilizing existing monitoring program data provide a cost-effective means to bridge the gap between the ecological status and chemical status of watersheds and to develop hypotheses of stressor attribution that can influence the design of higher-tier assessments and subsequent management. The present study describes the process of combining existing data and models to develop a robust starting point for eco-epidemiological analyses of watersheds over large geographic scales. Data resources from multiple federal and local agencies representing a range of biological, chemical, physical, toxicological, and other landscape factors across the state of Ohio, USA (2000-2007), were integrated with the National Hydrography Dataset Plus hydrologic model (US Environmental Protection Agency and US Geological Survey). A variety of variable reduction, selection, and optimization strategies were applied to develop eco-epidemiological data sets for fish and macroinvertebrate communities. The relative importance of landscape variables was compared across spatial scales (local catchment, watershed, near-stream) using conditional inference forests to determine the scales most relevant to variation in biological community condition. Conditional inference forest analysis applied to a holistic set of environmental variables yielded stressor-response hypotheses at the statewide and eco-regional levels. The analysis confirmed the dominant influence of state-level stressors such as physical habitat condition, while highlighting differences in predictive strength of other stressors based on ecoregional and land-use characteristics. This exercise lays the groundwork for subsequent work designed to move closer to causal inference.
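
    The scale-comparison step can be approximated in a few lines. The study used conditional inference forests (an R method, e.g. party/partykit); the sketch below substitutes a scikit-learn random forest with permutation importance as a rough Python analogue, with invented variable names and data.

```python
# Rough analogue of the study's scale comparison: fit a forest to predict a
# biological condition score from landscape variables tagged by spatial scale,
# then aggregate permutation importance per scale. Conditional inference
# forests (used in the study) differ in split selection; this is a sketch.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
features = ["catch_urban", "catch_ag", "wshed_urban", "wshed_forest",
            "near_buffer", "near_habitat"]                     # invented names
scale_of = {f: f.split("_")[0] for f in features}

X = rng.normal(size=(400, len(features)))
y = 0.8 * X[:, 5] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=400)  # toy condition score

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)

by_scale = {}
for f, m in zip(features, imp.importances_mean):
    by_scale[scale_of[f]] = by_scale.get(scale_of[f], 0.0) + m
for scale, total in sorted(by_scale.items(), key=lambda kv: -kv[1]):
    print(f"{scale:6s} total importance {total:.3f}")
```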

  11. Existence of an Intermediate Metallic Phase at the SDW-CDW Crossover Region in the One-Dimensional Holstein-Hubbard Model at Half-Filling

    Directory of Open Access Journals (Sweden)

    Ashok Chatterjee

    2010-01-01

    The Holstein-Hubbard model serves as a useful framework for investigating the interplay between the phonon-induced electron-electron attraction and the direct Coulomb repulsion, and it affords interesting phase diagrams arising from the competition among charge-density-wave (CDW), spin-density-wave (SDW), and superconducting order. However, the detailed nature of the CDW-SDW transition is still not well known. It is generally believed that the system undergoes a direct insulator-to-insulator transition from CDW to SDW as the on-site Coulomb repulsion increases at a given strength of the electron-phonon coupling, and this is the main bottleneck for the polaronic/bipolaronic mechanism of high-temperature superconductivity. We have recently investigated the nature of the transition from the SDW phase to the CDW phase within the framework of a one-dimensional Holstein-Hubbard model at half-filling using a variational method. We find that an intervening metallic phase may exist in the crossover region of the CDW-SDW transition. We have also observed that if the anharmonicity of the phonons is taken into account, this metallic phase widens and the polarons become more mobile, which is a more favorable situation from the point of view of superconductivity. We finally show that an improved variational calculation widens the metallic phase and makes the polarons more mobile, reconfirming the existence of the intermediate metallic phase at the SDW-CDW crossover region.
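
    For reference, the model under discussion is conventionally written as below. This is the standard one-dimensional Holstein-Hubbard Hamiltonian with harmonic phonons (the paper additionally treats anharmonic phonons); sign and coupling conventions vary across the literature.

```latex
% Standard 1D Holstein-Hubbard Hamiltonian (harmonic phonons);
% conventions for the couplings vary between papers.
H = -t \sum_{i,\sigma} \left( c^{\dagger}_{i\sigma} c_{i+1,\sigma} + \mathrm{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
    + \omega_{0} \sum_{i} b^{\dagger}_{i} b_{i}
    + g \sum_{i,\sigma} n_{i\sigma} \left( b^{\dagger}_{i} + b_{i} \right)
```

    Here t is the hopping amplitude, U the on-site Coulomb repulsion, ω0 the phonon frequency, and g the electron-phonon coupling; the SDW-CDW competition is essentially U versus the phonon-mediated on-site attraction of order 2g²/ω0.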

  12. Establishing quantitative relations between mammalian communities, climate regimes, and vegetation density - A diversity-based reference model and case study

    Science.gov (United States)

    Hertler, Christine; Wolf, Dominik; Bruch, Angela; Märker, Michael

    2013-04-01

    A considerable diversity of hominin taxa is described from the Pleistocene of sub-Saharan Africa. Inner-African range expansions of these taxa are primarily addressed by morphological comparisons of the hominin specimens and systematic interpretation of the results. Treating hominin expansion patterns as at least co-determined by ecology and environment requires an assessment of the respective features of paleo-communities as well as of the environments with which they are associated. Challenges in validating and integrating reconstructions of hominin environments and ecologies can be met with well-organized present-day reference models. Modelling the present-day situation makes it possible to assess the relevant variables and to establish interactions among them on a quantitative basis. In a next step, such a model can be applied to classify hominin paleoenvironments, for which not all data sources are available. An example of this approach is introduced here. In order to characterize hominin environments in sub-Saharan Africa, we assessed sets of variables for the composition, structure, and diversity of large mammal communities, climate (temperature and precipitation), and vegetation in African national parks. These data are used to analyse correlations between faunal communities and their environments on a quantitative basis. While information on large mammal communities is frequently available for hominin localities and regional climate features are addressed on the basis of abiotic proxies, information on paleoflora and vegetation is mostly lacking for the Plio-Pleistocene of sub-Saharan Africa. A quantitative reference model therefore offers new options for reconstruction. Such a present-day reference model moreover permits quantification of descriptive terms like 'savanna'. We will introduce a reference model for sub-Saharan Africa and demonstrate its application to the reconstruction of hominin paleoenvironments. The corresponding quantitative characterization of
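
    The quantitative core of such a reference model, correlating community diversity with climate and vegetation across present-day reference sites, can be illustrated briefly. The sketch below is a generic toy, not the authors' analysis; the sites, variables, and data are all invented.

```python
# Generic illustration of a diversity-based reference model: correlate a
# mammal-community diversity index with climate/vegetation variables across
# reference sites (e.g. national parks). All data here are invented.
import numpy as np

rng = np.random.default_rng(4)
n_parks = 60
precip = rng.uniform(200, 1600, n_parks)            # mean annual precipitation (mm)
tree_cover = np.clip(precip / 2000 + rng.normal(0, 0.08, n_parks), 0, 1)
richness = 20 + 0.02 * precip + 15 * tree_cover + rng.normal(0, 3, n_parks)

for name, x in [("precipitation", precip), ("tree cover", tree_cover)]:
    r = np.corrcoef(x, richness)[0, 1]
    print(f"large-mammal richness vs {name}: r = {r:.2f}")

# A fitted relation like this can then be inverted: an observed fossil
# community's richness maps back to a plausible climate/vegetation range.
coef = np.polyfit(precip, richness, 1)               # simple linear reference relation
print("richness ~ %.3f * precip + %.1f" % (coef[0], coef[1]))
```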

  13. Quantitative Evaluation of Dichloroacetic Acid Kinetics in Human -- A Physiologically-Based Pharmacokinetic Modeling Investigation

    Science.gov (United States)

    2008-01-01

    [Only fragments of the model code survive in this record; cleaned up, the DCA model fragments (after Keys, 2004) read:]

      CONSTANT KFC = 0.0       ! First-order metabolism rate constant (/hr/kg); proposed pathway, not active in the human model (Keys, 2004)
      VMAX = VMAXC*BW**0.75    ! Maximum metabolic rate, allometrically scaled by body weight
      KF   = KFC/BW**0.25      ! First-order pathway rate constant, scaled by body weight
      CLR  = CLRC*BW           ! Urinary (renal) clearance
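
    A minimal working analogue of the kinetics these fragments encode, saturable (Michaelis-Menten) metabolism plus allometrically scaled clearance, is sketched below. It is a one-compartment toy rather than the report's physiologically-based model; all parameter values are invented.

```python
# One-compartment toy with Michaelis-Menten metabolism and allometric renal
# clearance, mimicking the structure of the recovered fragments (VMAX scaled
# by BW**0.75, clearance by BW). Parameter values are invented.
import numpy as np
from scipy.integrate import solve_ivp

BW = 70.0                    # body weight (kg)
VD = 0.6 * BW                # volume of distribution (L), assumed
VMAXC, KM = 10.0, 0.5        # max metabolic rate (mg/hr/kg^0.75), affinity (mg/L)
CLRC = 0.02                  # renal clearance constant (L/hr/kg)

VMAX = VMAXC * BW**0.75
CLR = CLRC * BW

def dAdt(t, A):
    C = A[0] / VD                                  # plasma concentration (mg/L)
    metab = VMAX * C / (KM + C)                    # saturable metabolism (mg/hr)
    renal = CLR * C                                # first-order renal loss (mg/hr)
    return [-(metab + renal)]

sol = solve_ivp(dAdt, (0.0, 12.0), [100.0],        # 100 mg IV bolus, 12 hr
                t_eval=np.linspace(0, 12, 7))
for t, a in zip(sol.t, sol.y[0]):
    print(f"t = {t:4.1f} hr  C = {a / VD:6.3f} mg/L")
```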

  14. Hidden Markov Model for quantitative prediction of snowfall and analysis of hazardous snowfall events over Indian Himalaya

    Science.gov (United States)

    Joshi, J. C.; Tankeshwar, K.; Srivastava, Sunita

    2017-04-01

    A Hidden Markov Model (HMM) has been developed for the prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine meteorological variables recorded daily over the 20 winters from 1992 to 2012. The model has six observations and six states. The most probable observation and state sequence has been computed
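
    The final step mentioned, computing the most probable state sequence, is classic Viterbi decoding. The sketch below is a generic six-state, six-observation Viterbi in numpy, not the authors' model; the initial, transition, and emission matrices are invented placeholders.

```python
# Generic Viterbi decoding for a 6-state, 6-observation HMM, illustrating the
# "most probable state sequence" computation; all matrices are invented.
import numpy as np

n_states = n_obs = 6
rng = np.random.default_rng(5)
pi = np.full(n_states, 1.0 / n_states)                   # initial distribution
A = rng.dirichlet(np.ones(n_states), size=n_states)      # transition matrix
B = rng.dirichlet(np.ones(n_obs), size=n_states)         # emission matrix

obs = [0, 2, 5, 1, 1, 3]                                 # observed symbol indices

# Work in log space to avoid underflow on long sequences.
logd = np.log(pi) + np.log(B[:, obs[0]])
back = []
for o in obs[1:]:
    scores = logd[:, None] + np.log(A)                   # scores[i, j]: i -> j
    back.append(scores.argmax(axis=0))
    logd = scores.max(axis=0) + np.log(B[:, o])

# Backtrack the most probable state sequence.
state = int(logd.argmax())
path = [state]
for bp in reversed(back):
    state = int(bp[state])
    path.append(state)
print("most probable state sequence:", path[::-1])
```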