WorldWideScience

Sample records for modeling quantitative experiments

  1. Quantitative explanation of circuit experiments and real traffic using the optimal velocity model

    Science.gov (United States)

    Nakayama, Akihiro; Kikuchi, Macoto; Shibata, Akihiro; Sugiyama, Yuki; Tadaki, Shin-ichi; Yukawa, Satoshi

    2016-04-01

    We have experimentally confirmed that the occurrence of a traffic jam is a dynamical phase transition (Tadaki et al 2013 New J. Phys. 15 103034, Sugiyama et al 2008 New J. Phys. 10 033001). In this study, we investigate whether the optimal velocity (OV) model can quantitatively explain the results of these experiments. The occurrence and non-occurrence of jammed flow in our experiments agree with the predictions of the OV model. We also propose a scaling rule for the parameters of the model. Using this rule, we obtain the critical density as a function of a single parameter. The obtained critical density is consistent with the observed values for highway traffic.
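    The OV model summarized above is compact enough to simulate directly. Below is a minimal Python sketch of a Bando-type OV model on a circular track, in the spirit of the circuit experiments; the OV function and all parameter values are illustrative textbook defaults, not the fitted parameters or the scaling rule from this record.

```python
import numpy as np

def simulate_ov_ring(n_cars=100, track_length=200.0, a=1.0,
                     t_max=300.0, dt=0.05):
    """Euler integration of the optimal velocity (OV) model on a ring road:
    dv_i/dt = a * (V(h_i) - v_i), with h_i the headway to the car ahead.
    The Bando-type OV function and all parameters are illustrative."""
    def V(h):  # optimal velocity as a function of headway
        return np.tanh(h - 2.0) + np.tanh(2.0)

    x = np.linspace(0.0, track_length, n_cars, endpoint=False)
    x += 0.1 * np.random.randn(n_cars)               # small perturbation
    v = V(np.full(n_cars, track_length / n_cars))    # start near uniform flow

    for _ in range(int(t_max / dt)):
        h = (np.roll(x, -1) - x) % track_length      # headway on the ring
        v += a * (V(h) - v) * dt
        x = (x + v * dt) % track_length
    return x, v

x, v = simulate_ov_ring()
print("velocity spread:", v.max() - v.min())         # large spread => jammed flow
```

    With these settings the uniform flow is linearly unstable, so the small perturbation grows into a jam, visible as a large spread between the slowest and fastest vehicles.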

  2. Quantitative nature of overexpression experiments

    Science.gov (United States)

    Moriya, Hisao

    2015-01-01

    Overexpression experiments are sometimes considered qualitative experiments designed to identify novel proteins and study their function. However, in order to draw conclusions regarding protein overexpression through association analyses using large-scale biological data sets, we need to recognize the quantitative nature of overexpression experiments. Here I discuss the quantitative features of two different types of overexpression experiment: absolute and relative. I also introduce the four primary mechanisms involved in growth defects caused by protein overexpression: resource overload, stoichiometric imbalance, promiscuous interactions, and pathway modulation associated with the degree of overexpression. PMID:26543202

  3. Qualitative and quantitative analyses of the echolocation strategies of bats on the basis of mathematical modelling and laboratory experiments.

    Science.gov (United States)

    Aihara, Ikkyu; Fujioka, Emyo; Hiryu, Shizuko

    2013-01-01

    Prey pursuit by an echolocating bat was studied theoretically and experimentally. First, a mathematical model was proposed to describe the flight dynamics of a bat and a single prey. In this model, the flight angle of the bat was affected by two angles related to the flight path of the single moving prey, that is, the angle from the bat to the prey and the flight angle of the prey. Numerical simulation showed that the success rate of prey capture was high when the bat mainly used the angle to the prey to minimize the distance to the prey, and also used the flight angle of the prey to minimize the difference between its own flight direction and that of the prey. Second, parameters in the model were estimated from experimental data obtained from video recordings taken while a Japanese horseshoe bat (Rhinolophus ferrumequinum nippon) pursued a moving moth (Goniocraspidum pryeri) in a flight chamber. One of the estimated parameter values, which represents the ratio in the use of the two angles, was consistent with the optimal value from the numerical simulation. This agreement between the numerical simulation and the parameter estimation suggests that a bat chooses an effective flight path for successful prey capture by using the two angles. Finally, the mathematical model was extended to include a bat and two prey. Parameter estimation of the extended model based on laboratory experiments revealed the existence of the bat's dynamical attention towards the two prey, that is, simultaneous pursuit of both prey and selective pursuit of one prey at a time. Thus, our mathematical model contributes not only to quantitative analysis of effective foraging, but also to qualitative evaluation of a bat's dynamical flight strategy during multiple prey pursuit.
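    To make the two-angle steering idea concrete, here is a schematic Python sketch of a planar pursuit in which the bat's heading relaxes toward a weighted blend of the bearing to the prey and the prey's flight angle. The weight w stands in for the estimated ratio in the use of the two angles; the dynamics, gains, speeds and trajectories are invented for illustration and are not the authors' equations.

```python
import numpy as np

def simulate_pursuit(w=0.7, k=4.0, dt=0.01, t_max=5.0):
    """Schematic 2D pursuit: the bat steers toward a weighted blend of
    (i) the bearing from bat to prey and (ii) the prey's flight angle.
    Everything here (dynamics, gains, speeds) is an illustrative guess."""
    bat, theta_b, v_b = np.array([0.0, 0.0]), 0.0, 5.0
    prey, v_p = np.array([3.0, 2.0]), 3.0

    for step in range(int(t_max / dt)):
        theta_p = 0.8 * np.sin(0.5 * step * dt)      # prey wiggles its heading
        dx, dy = prey - bat
        bearing = np.arctan2(dy, dx)                 # angle from bat to prey
        target = w * bearing + (1.0 - w) * theta_p   # blend of the two angles
        err = np.arctan2(np.sin(target - theta_b), np.cos(target - theta_b))
        theta_b += k * err * dt                      # relax heading toward target
        bat = bat + v_b * np.array([np.cos(theta_b), np.sin(theta_b)]) * dt
        prey = prey + v_p * np.array([np.cos(theta_p), np.sin(theta_p)]) * dt
        if np.linalg.norm(prey - bat) < 0.2:         # capture radius
            return True, step * dt
    return False, t_max

caught, t = simulate_pursuit()
print(f"captured: {caught} at t = {t:.2f} s")
```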

  5. Influence factors and prediction of stormwater runoff of urban green space in Tianjin, China: laboratory experiment and quantitative theory model.

    Science.gov (United States)

    Yang, Xu; You, Xue-Yi; Ji, Min; Nima, Ciren

    2013-01-01

    The effects of limiting factors such as rainfall intensity, rainfall duration, grass type and vegetation coverage on the stormwater runoff of urban green space were investigated in Tianjin. A prediction equation for stormwater runoff was established using quantitative theory and laboratory data from soil columns. It was validated against three field experiments, with relative errors between predicted and measured stormwater runoff of 1.41%, 1.52% and 7.35%, respectively. These results imply that the prediction equation can be used to forecast the stormwater runoff of urban green space. Range and variance analysis indicated that the order of influence of the limiting factors is rainfall intensity > grass type > rainfall duration > vegetation coverage. The least runoff of green land in the present study was obtained with the combination of a rainfall intensity of 60.0 mm/h, a duration of 60.0 min, the grass Festuca arundinacea, and a vegetation coverage of 90.0%. When the intensity and duration of rainfall are 60.0 mm/h and 90.0 min, the predicted volumetric runoff coefficient is 0.23 with Festuca arundinacea at 90.0% vegetation coverage. The present approach indicates that green space is an effective means of reducing stormwater runoff, and the conclusions are mainly applicable to Tianjin and to semi-arid areas where precipitation falls mainly in summer with long intervals between rainfall events.
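    The "quantitative theory" referred to here is a regression on categorical factor levels (quantification theory type I, i.e. dummy-variable least squares). The sketch below shows the mechanics on made-up data; the factor levels echo the abstract, but the runoff values and resulting category scores are placeholders, not the paper's laboratory measurements.

```python
import numpy as np

# Factor levels echo the abstract; the response values are placeholders.
levels = {"intensity": ["30mm/h", "60mm/h"],
          "duration": ["30min", "60min"],
          "grass": ["Festuca arundinacea", "other"],
          "coverage": ["60%", "90%"]}

def encode(run):
    """One-hot encode a (intensity, duration, grass, coverage) tuple."""
    cols = []
    for factor, value in zip(levels, run):
        cols += [1.0 if value == lv else 0.0 for lv in levels[factor]]
    return cols

runs = [("30mm/h", "30min", "Festuca arundinacea", "60%"),
        ("60mm/h", "60min", "other", "90%"),
        ("60mm/h", "30min", "Festuca arundinacea", "90%"),
        ("30mm/h", "60min", "other", "60%")]
runoff = np.array([0.10, 0.35, 0.18, 0.22])          # placeholder responses

X = np.array([encode(r) for r in runs])
beta, *_ = np.linalg.lstsq(X, runoff, rcond=None)    # least-squares category scores
print("predicted runoff:", X @ beta)
```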

  6. Applications of advanced kinetic collisional radiative modeling and Bremsstrahlung emission to quantitative impurity analysis on the National Spherical Torus Experiment

    Science.gov (United States)

    Muñoz Burgos, J. M.; Tritz, K.; Stutman, D.; Bell, R. E.; LeBlanc, B. P.; Sabbagh, S. A.

    2015-12-01

    An advanced kinetic collisional radiative model is used to predict beam-into-plasma charge-exchange visible and extreme UV (XUV, ~50-700 Å) light emission to quantify impurity density profiles on NSTX. This kinetic model is first benchmarked by predicting line-of-sight integrated emission for the visible λ = 5292.0 Å line of carbon (C VI n = 8 → 7) and comparing these predictions to absolutely calibrated measurements from the active CHarge-Exchange Recombination Spectroscopy (CHERS) diagnostic on NSTX. Once benchmarked, the model is used to predict charge-exchange emission for the 182.1 Å line of carbon (C VI n = 3 → 2), which is used to scale Bremsstrahlung continuum emission in the UV/XUV region. The scaled Bremsstrahlung emission serves as a basis for estimating an absolute intensity calibration curve of the XUV Transmission Grating-based Imaging Spectrometer (TGIS) diagnostic installed on the National Spherical Torus Experiment (NSTX and its upgrade, NSTX-U). The TGIS diagnostic operates in the wavelength region ~50-700 Å and is used to measure impurity spectra from charge-exchange emission. Impurity densities are estimated by fitting synthetic emission from the kinetic charge-exchange model to TGIS spectral measurements.

  7. Compositional and Quantitative Model Checking

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2010-01-01

    This paper gives a survey of a compositional model checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based on the existence of a quotient construction, allowing a property phi of a parallel system A||B to be transformed into a sufficient and necessary quotient-property phi/A to be satisfied by the remaining component B. Given a model checking problem involving a network P1||...||Pn and a property phi, the method gradually moves (by quotienting) components Pi from the network into the formula phi. Crucial to the success of the method is the ability to manage the size of the intermediate quotient-properties by a suitable collection of efficient minimization heuristics.

  8. Quantitative assessment of hyperacute cerebral infarction with intravoxel incoherent motion MR imaging: Initial experience in a canine stroke model.

    Science.gov (United States)

    Gao, Qian-Qian; Lu, Shan-Shan; Xu, Xiao-Quan; Wu, Cheng-Jiang; Liu, Xing-Long; Liu, Sheng; Shi, Hai-Bin

    2017-08-01

    To evaluate the feasibility of intravoxel incoherent motion (IVIM) imaging for the measurement of diffusion and perfusion parameters in hyperacute stroke. An embolic ischemic model was established with an autologous thrombus in 20 beagles. IVIM imaging was performed on a 3.0 Tesla platform at 4.5 h and 6 h after embolization. Ten b values from 0 to 900 s/mm² were fitted with a bi-exponential model to extract the perfusion fraction f, diffusion coefficient D, and pseudo-diffusion coefficient D*. Additionally, the apparent diffusion coefficient (ADC) was calculated using the mono-exponential model with all the b values. Statistical analysis was performed using the pairwise Student's t test and Pearson's correlation test. A significant decrease in f and D was observed in the ischemic area when compared with the contralateral side at 4.5 h and 6 h after embolization (P < 0.01 for all). No significant difference was observed in D* between the two sides at either time point (P = 0.086 and 0.336, respectively). In the stroke area, f at 6 h was significantly lower than that at 4.5 h (P = 0.016). A significantly positive correlation was detected between ADC and D in both the stroke and contralateral sides at 4.5 h and 6 h (P < 0.001 for both). A significant correlation between ADC and f was only observed in the contralateral side at 4.5 h and 6 h (P = 0.019 and 0.021, respectively). IVIM imaging can simultaneously evaluate the diffusion and microvascular perfusion characteristics of hyperacute stroke. Level of Evidence: 2. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:550-556. © 2016 International Society for Magnetic Resonance in Medicine.
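    The bi-exponential IVIM model named in this abstract, S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D), is easy to reproduce on synthetic data. The sketch below fits it with scipy and also computes the mono-exponential ADC over all b-values; the b-value set is a plausible ten-point choice, while the signal, noise level and fitting bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, D, Dstar):
    """Bi-exponential IVIM signal model, S(b)/S0."""
    return f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D)

b = np.array([0, 25, 50, 75, 100, 200, 400, 600, 800, 900], float)  # s/mm^2
signal = ivim(b, 0.10, 0.8e-3, 10e-3)                # synthetic tissue curve
signal *= 1 + 0.01 * np.random.randn(b.size)         # add measurement noise

p0 = (0.1, 1e-3, 1e-2)                               # initial guess: f, D, D*
bounds = ([0.0, 1e-4, 1e-3], [0.5, 3e-3, 1e-1])
(f, D, Dstar), _ = curve_fit(ivim, b, signal, p0=p0, bounds=bounds)
print(f"f = {f:.3f}, D = {D:.2e}, D* = {Dstar:.2e} mm^2/s")

# Mono-exponential ADC over all b-values, as in the abstract
ADC = -np.polyfit(b, np.log(signal), 1)[0]
print(f"ADC = {ADC:.2e} mm^2/s")
```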

  9. Building a Database for a Quantitative Model

    Science.gov (United States)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not help link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
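    A minimal sketch of the linking idea: each Basic Event carries a unique metadata key that resolves to a data-source row holding the prior, and the manipulations (here, a conjugate Gamma-Poisson Bayesian update against operating experience) live next to the data. All keys, sources and numbers below are invented placeholders, and a real implementation would sit in Excel rather than pandas.

```python
import pandas as pd

# Invented placeholder records: each Basic Event keys into one data source.
sources = pd.DataFrame([
    {"key": "BE-001", "source": "handbook",    "alpha": 0.5, "beta_hours": 1e5},
    {"key": "BE-002", "source": "vendor test", "alpha": 2.0, "beta_hours": 4e5},
]).set_index("key")

# Accumulated operating experience: (observed failures, hours)
experience = {"BE-001": (1, 2.0e5), "BE-002": (0, 1.5e5)}

def updated_rate(key):
    """Gamma(alpha, beta) prior + Poisson likelihood -> Gamma posterior mean."""
    prior = sources.loc[key]
    fails, hours = experience[key]
    return (prior["alpha"] + fails) / (prior["beta_hours"] + hours)

for key in sources.index:
    print(key, f"posterior mean failure rate = {updated_rate(key):.2e} /h")
```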

  10. Quantitative structure - mesothelioma potency model ...

    Science.gov (United States)

    Cancer potencies of mineral and synthetic elongated particle (EP) mixtures, including asbestos fibers, are influenced by changes in fiber dose composition, bioavailability, and biodurability in combination with relevant cytotoxic dose-response relationships. A unique and comprehensive rat intra-pleural (IP) dose characterization data set with a wide variety of EP size, shape, crystallographic, chemical, and bio-durability properties facilitated extensive statistical analyses of 50 rat IP exposure test results for evaluation of alternative dose pleural mesothelioma response models. Utilizing logistic regression, maximum likelihood evaluations of thousands of alternative dose metrics based on hundreds of individual EP dimensional variations within each test sample, four major findings emerged: (1) data for simulations of short-term EP dose changes in vivo (mild acid leaching) provide superior predictions of tumor incidence compared to non-acid leached data; (2) the sum of the EP surface areas (ΣSA) from these mildly acid-leached samples provides the optimum holistic dose response model; (3) progressive removal of dose associated with very short and/or thin EPs significantly degrades the resultant ΣEP or ΣSA dose-based predictive model fits, as judged by Akaike's Information Criterion (AIC); and (4) alternative, biologically plausible model adjustments provide evidence for reduced potency of EPs with length/width (aspect) ratios 80 µm. Regar
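    The core statistical move described here, logistic regression of tumor incidence on competing dose metrics compared by AIC, can be illustrated in a few lines. In the sketch below the data are synthetic and generated so that a surface-area metric drives incidence; the variable names sum_SA and sum_EP are stand-ins for the ΣSA and ΣEP metrics, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50
sum_SA = rng.lognormal(mean=1.0, sigma=1.0, size=n)  # summed surface-area dose
sum_EP = rng.lognormal(mean=2.0, sigma=1.0, size=n)  # summed particle-count dose
p_true = 1 / (1 + np.exp(-(np.log(sum_SA) - 1.0)))   # incidence driven by SA
tumor = rng.binomial(1, p_true)

def fit(dose):
    X = sm.add_constant(np.log(dose))                # log-dose logistic regression
    return sm.Logit(tumor, X).fit(disp=False)

for name, dose in [("sum SA", sum_SA), ("sum EP", sum_EP)]:
    print(name, "AIC =", round(fit(dose).aic, 1))    # lower AIC = better metric
```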

  12. Design and optimization of reverse-transcription quantitative PCR experiments.

    Science.gov (United States)

    Tichopad, Ales; Kitchen, Rob; Riedmaier, Irmgard; Becker, Christiane; Ståhlberg, Anders; Kubista, Mikael

    2009-10-01

    Quantitative PCR (qPCR) is a valuable technique for accurately and reliably profiling and quantifying gene expression. Typically, samples obtained from the organism of study have to be processed via several preparative steps before qPCR. We estimated the errors of sample withdrawal and extraction, reverse transcription (RT), and qPCR that are introduced into measurements of mRNA concentrations. We performed hierarchically arranged experiments with 3 animals, 3 samples, 3 RT reactions, and 3 qPCRs and quantified the expression of several genes in solid tissue, blood, cell culture, and single cells. A nested ANOVA design was used to model the experiments, and relative and absolute errors were calculated with this model for each processing level in the hierarchical design. We found that intersubject differences became easily confounded by sample heterogeneity for single cells and solid tissue. In cell cultures and blood, the noise from the RT and qPCR steps contributed substantially to the overall error because the sampling noise was less pronounced. We recommend the use of sample replicates in preference to any other replicates when working with solid tissue, cell cultures, and single cells, and we recommend the use of RT replicates when working with blood. We show how an optimal sampling plan can be calculated for a limited budget.
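    The heart of the nested-ANOVA idea is separating the variance contributed at each processing level. A minimal two-level sketch (animals and samples only, on simulated Cq values) using classical expected-mean-square formulas is given below; the real design nests four levels, and the variances used here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
a, s = 3, 3                                  # 3 animals, 3 samples per animal
sigma_animal, sigma_sample = 1.0, 0.5        # invented true SDs on the Cq scale
animal_effect = rng.normal(0, sigma_animal, size=a)
cq = 20 + animal_effect[:, None] + rng.normal(0, sigma_sample, size=(a, s))

grand = cq.mean()
ms_animal = s * ((cq.mean(axis=1) - grand) ** 2).sum() / (a - 1)
ms_sample = ((cq - cq.mean(axis=1, keepdims=True)) ** 2).sum() / (a * (s - 1))

var_sample = ms_sample                       # E[MS_sample] = sigma_s^2
var_animal = (ms_animal - ms_sample) / s     # E[MS_animal] = sigma_s^2 + s*sigma_a^2
print(f"sample-level variance ~ {var_sample:.2f}, animal-level ~ {var_animal:.2f}")
```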

  13. Quantitative interface models for simulating microstructure evolution

    International Nuclear Information System (INIS)

    Zhu, J.Z.; Wang, T.; Zhou, S.H.; Liu, Z.K.; Chen, L.Q.

    2004-01-01

    To quantitatively simulate microstructural evolution in real systems, we investigated three different interface models: a sharp-interface model implemented by the software DICTRA and two diffuse-interface models which use either physical order parameters or artificial order parameters. A particular example is considered, the diffusion-controlled growth of a γ′ precipitate in a supersaturated γ matrix in Ni-Al binary alloys. All three models use the thermodynamic and kinetic parameters from the same databases. The temporal evolution profiles of composition from different models are shown to agree with each other. The focus is on examining the advantages and disadvantages of each model as applied to microstructure evolution in alloys.

  14. Quantitative system validation in model driven design

    DEFF Research Database (Denmark)

    Hermanns, Hilger; Larsen, Kim Guldstrand; Raskin, Jean-Francois

    2010-01-01

    The European STREP project Quasimodo develops theory, techniques and tool components for handling quantitative constraints in model-driven development of real-time embedded systems, covering in particular real-time, hybrid and stochastic aspects. This tutorial highlights the advances made...

  15. Recent trends in social systems quantitative theories and quantitative models

    CERN Document Server

    Hošková-Mayerová, Šárka; Soitu, Daniela-Tatiana; Kacprzyk, Janusz

    2017-01-01

    The papers collected in this volume focus on new perspectives on individuals, society, and science, specifically in the field of socio-economic systems. The book is the result of a scientific collaboration among experts from "Alexandru Ioan Cuza" University of Iaşi (Romania), "G. d'Annunzio" University of Chieti-Pescara (Italy), "University of Defence" of Brno (Czech Republic), and "Pablo de Olavide" University of Sevilla (Spain). The heterogeneity of the contributions presented in this volume reflects the variety and complexity of social phenomena. The book is divided into four Sections as follows. The first Section deals with recent trends in social decisions. Specifically, it aims to understand which are the driving forces of social decisions. The second Section focuses on the social and public sphere. Indeed, it is oriented toward recent developments in social systems and control. Trends in quantitative theories and models are described in Section 3, where many new formal, mathematical-statistical to...

  16. [Teaching quantitative methods in public health: the EHESP experience].

    Science.gov (United States)

    Grimaud, Olivier; Astagneau, Pascal; Desvarieux, Moïse; Chambaud, Laurent

    2014-01-01

    Many scientific disciplines, including epidemiology and biostatistics, are used in the field of public health. These quantitative sciences are fundamental tools necessary for the practice of future professionals. What then should be the minimum quantitative sciences training, common to all future public health professionals? By comparing the teaching models developed in Columbia University and those in the National School of Public Health in France, the authors recognize the need to adapt teaching to the specific competencies required for each profession. They insist that all public health professionals, whatever their future career, should be familiar with quantitative methods in order to ensure that decision-making is based on a reflective and critical use of quantitative analysis.

  17. Expert judgement models in quantitative risk assessment

    International Nuclear Information System (INIS)

    Rosqvist, T.; Tuominen, R.

    1999-01-01

    Expert judgement is a valuable source of information in risk management. In particular, risk-based decision making relies significantly on quantitative risk assessment, which requires numerical data describing initiator event frequencies and conditional probabilities in the risk model. Such data are seldom found in databases and have to be elicited from qualified experts. In this report, we discuss some modelling approaches to expert judgement in risk modelling. A classical and a Bayesian expert model are presented and applied to real-case expert judgement data. The cornerstone of the models is the log-normal distribution, which is argued to be a satisfactory choice for modelling degree-of-belief type probability distributions with respect to the unknown parameters in a risk model. Expert judgements are qualified according to bias, dispersion, and dependency, which are treated differently in the classical and Bayesian approaches. The differences are pointed out and related to the application task. Differences in the results obtained from the two approaches, as applied to real-case expert judgement data, are discussed, as is the role of a degree-of-belief type probability in risk decision making.
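    One simple way to see the role of the log-normal here: treat each expert's judgement as a log-normal degree-of-belief distribution given by a median and an error factor, and pool on the log scale. The sketch below uses moment matching under linear pooling; the medians, error factors and weights are invented, and the report's classical and Bayesian treatments are more elaborate than this.

```python
import numpy as np

experts = [          # (median [1/yr], error factor p95/median, weight) - invented
    (1e-4, 10.0, 1.0),
    (3e-4, 3.0, 1.0),
    (5e-5, 10.0, 0.5),
]

z95 = 1.645          # standard-normal 95th percentile
mus = np.array([np.log(m) for m, ef, w in experts])
sigmas = np.array([np.log(ef) / z95 for m, ef, w in experts])
weights = np.array([w for m, ef, w in experts])

mu_pool = np.average(mus, weights=weights)   # pooled log-median
# Moment-matched variance of the weighted mixture on the log scale
var_pool = np.average(sigmas**2 + (mus - mu_pool)**2, weights=weights)
print(f"pooled median = {np.exp(mu_pool):.2e}/yr, "
      f"pooled error factor = {np.exp(z95 * np.sqrt(var_pool)):.1f}")
```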

  18. Physics of Hard Spheres Experiment: Significant and Quantitative Findings Made

    Science.gov (United States)

    Doherty, Michael P.

    2000-01-01

    Direct examination of atomic interactions is difficult. One powerful approach to visualizing atomic interactions is to study near-index-matched colloidal dispersions of microscopic plastic spheres, which can be probed by visible light. Such spheres interact through hydrodynamic and Brownian forces, but they feel no direct force before an infinite repulsion at contact. Through the microgravity flight of the Physics of Hard Spheres Experiment (PHaSE), researchers have sought a more complete understanding of the entropically driven disorder-order transition in hard-sphere colloidal dispersions. The experiment was conceived by Professors Paul M. Chaikin and William B. Russel of Princeton University. Microgravity was required because, on Earth, index-matched colloidal dispersions often cannot be density matched, resulting in significant settling over the crystallization period. This settling makes them a poor model of the equilibrium atomic system, where the effect of gravity is truly negligible. For this purpose, a customized light-scattering instrument was designed, built, and flown by the NASA Glenn Research Center at Lewis Field on the space shuttle (missions STS-83 and STS-94). This instrument performed both static and dynamic light scattering, with sample oscillation for determining rheological properties. Scattered light from a 532-nm laser was recorded either by a 10-bit charge-coupled device (CCD) camera from a concentric screen covering angles of 0° to 60° or by sensitive avalanche photodiode detectors, which convert the photons into binary data from which two correlators compute autocorrelation functions. The sample cell was driven by a direct-current servomotor to allow sinusoidal oscillation for the measurement of rheological properties. Significant microgravity research findings include the observation of beautiful dendritic crystals, the crystallization of a "glassy phase" sample in microgravity that did not crystallize for over 1 year in 1g

  19. Global Quantitative Modeling of Chromatin Factor Interactions

    Science.gov (United States)

    Zhou, Jian; Troyanskaya, Olga G.

    2014-01-01

    Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the "chromatin codes") remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles; we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896

  20. Quantitative Modeling of Landscape Evolution, Treatise on Geomorphology

    NARCIS (Netherlands)

    Temme, A.J.A.M.; Schoorl, J.M.; Claessens, L.F.G.; Veldkamp, A.; Shroder, F.S.

    2013-01-01

    This chapter reviews quantitative modeling of landscape evolution – which means that not just model studies but also modeling concepts are discussed. Quantitative modeling is contrasted with conceptual or physical modeling, and four categories of model studies are presented. Procedural studies focus

  1. Physiologically based quantitative modeling of unihemispheric sleep.

    Science.gov (United States)

    Kedziora, D J; Abeysuriya, R G; Phillips, A J K; Robinson, P A

    2012-12-07

    Unihemispheric sleep has been observed in numerous species, including birds and aquatic mammals. While knowledge of its functional role has been improved in recent years, the physiological mechanisms that generate this behavior remain poorly understood. Here, unihemispheric sleep is simulated using a physiologically based quantitative model of the mammalian ascending arousal system. The model includes mutual inhibition between wake-promoting monoaminergic nuclei (MA) and sleep-promoting ventrolateral preoptic nuclei (VLPO), driven by circadian and homeostatic drives as well as cholinergic and orexinergic input to MA. The model is extended here to incorporate two distinct hemispheres and their interconnections. It is postulated that inhibitory connections between VLPO nuclei in opposite hemispheres are responsible for unihemispheric sleep, and it is shown that contralateral inhibitory connections promote unihemispheric sleep while ipsilateral inhibitory connections promote bihemispheric sleep. The frequency of alternating unihemispheric sleep bouts is chiefly determined by sleep homeostasis and its corresponding time constant. It is shown that the model reproduces dolphin sleep, and that the sleep regimes of humans, cetaceans, and fur seals, the latter both terrestrially and in a marine environment, require only modest changes in contralateral connection strength and homeostatic time constant. It is further demonstrated that fur seals can potentially switch between their terrestrial bihemispheric and aquatic unihemispheric sleep patterns by varying just the contralateral connection strength. These results provide experimentally testable predictions regarding the differences between species that sleep bihemispherically and unihemispherically. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. The quantitative modelling of human spatial habitability

    Science.gov (United States)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the uses of space as a medium of social communication).

  3. Statistical aspects of quantitative real-time PCR experiment design.

    Science.gov (United States)

    Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales

    2010-04-01

    Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect, the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue-types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, which gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.
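    The calculation powerNest automates can be sketched in a few lines: propagate pilot-study variance components into the variance of a group mean for a candidate design, then convert to prospective power with a normal approximation. The component values, effect size and candidate designs below are placeholders, and this is not the powerNest code itself.

```python
import numpy as np
from scipy import stats

# Pilot-study variance components on the Cq scale (invented values)
sigma2_bio, sigma2_rt, sigma2_pcr = 0.60, 0.15, 0.05
effect, alpha = 1.0, 0.05                    # expected group difference in Cq

def power(n_bio, n_rt, n_pcr):
    """Normal-approximation power of a two-group comparison of means."""
    var_mean = (sigma2_bio / n_bio
                + sigma2_rt / (n_bio * n_rt)
                + sigma2_pcr / (n_bio * n_rt * n_pcr))
    se = np.sqrt(2 * var_mean)               # two groups of equal size
    z = stats.norm.ppf(1 - alpha / 2)
    return 1 - stats.norm.cdf(z - effect / se)

for design in [(4, 2, 2), (8, 1, 1), (6, 2, 1)]:   # (bio, RT, qPCR) replicates
    print(design, f"power = {power(*design):.2f}")
```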

  4. Simulation - modeling - experiment

    International Nuclear Information System (INIS)

    2004-01-01

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  5. Quantitative Analysis Of Acoustic Emission From Rock Fracture Experiments

    Science.gov (United States)

    Goodfellow, Sebastian David

    This thesis aims to advance the methods of quantitative acoustic emission (AE) analysis by calibrating sensors, characterizing sources, and applying the results to solve engineering problems. In the first part of this thesis, we built a calibration apparatus and successfully calibrated two commercial AE sensors. The ErgoTech sensor was found to have broadband velocity sensitivity and the Panametrics V103 was sensitive to surface normal displacement. These calibration results were applied to two AE data sets from rock fracture experiments in order to characterize the sources of AE events. The first data set was from an in situ rock fracture experiment conducted at the Underground Research Laboratory (URL). The Mine-By experiment was a large-scale excavation response test where both AE (10 kHz - 1 MHz) and microseismicity (MS) (1 Hz - 10 kHz) were monitored. Using the calibration information, magnitude, stress drop, dimension and energy were successfully estimated for 21 AE events recorded in the tensile region of the tunnel wall. Magnitudes were in the range of -7.5; stress drops were within the range commonly observed for induced seismicity in the field (0.1 - 10 MPa). The second data set was AE collected during a true-triaxial deformation experiment, where the objectives were to characterize laboratory AE sources and identify issues related to moving the analysis from ideal in situ conditions to more complex laboratory conditions in terms of the ability to conduct quantitative AE analysis. We found AE magnitudes in the range of -7.8, and stress release was within the expected range of 0.1 - 10 MPa. We identified four major challenges to quantitative analysis in the laboratory, which inhibited our ability to study parameter scaling (M0 ∝ fc^-3 scaling). These challenges were (1) limited knowledge of attenuation, which we proved was continuously evolving, (2) the use of a narrow frequency band for acquisition, (3) the inability to identify P and S waves given the small

  6. Toward quantitative modeling of silicon phononic thermocrystals

    Energy Technology Data Exchange (ETDEWEB)

    Lacatena, V. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France); IEMN UMR CNRS 8520, Institut d'Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d'Ascq (France); Haras, M.; Robillard, J.-F., E-mail: jean-francois.robillard@isen.iemn.univ-lille1.fr; Dubois, E. [IEMN UMR CNRS 8520, Institut d'Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d'Ascq (France); Monfray, S.; Skotnicki, T. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France)

    2015-03-16

    The wealth of technological patterning technologies of deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of 'thermocrystals' or 'nanophononic crystals' that introduce regular nano-scale inclusions using a pitch scale in between the thermal phonons mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced down to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known “electron crystal-phonon glass” dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution from bulk, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After having discussed the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.
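    The Green-Kubo step mentioned above turns a heat-flux time series from molecular dynamics into a thermal conductivity. Below is a minimal sketch for a single flux component under one common convention, kappa = V/(kB·T²) ∫⟨J(0)J(t)⟩dt; the flux trace is synthetic AR(1) noise standing in for MD output, and the volume, temperature and noise scale are illustrative assumptions.

```python
import numpy as np

kB = 1.380649e-23
T, V = 300.0, (20e-9) ** 3                   # temperature (K), volume (m^3)
dt = 1e-15                                   # 1 fs sampling interval

rng = np.random.default_rng(2)
n = 50_000
J = np.zeros(n)                              # heat-flux component (W/m^2)
for i in range(1, n):                        # AR(1): exponentially decaying ACF
    J[i] = 0.99 * J[i - 1] + rng.normal(0.0, 1e9)

def autocorr(x, max_lag):
    x = x - x.mean()
    return np.array([np.mean(x[:len(x) - k] * x[k:]) for k in range(max_lag)])

acf = autocorr(J, max_lag=1000)              # <J(0)J(t)> out to 1 ps
kappa = V / (kB * T**2) * np.trapz(acf, dx=dt)
print(f"kappa ~ {kappa:.1f} W/(m K)")
```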

  7. The life review experience: Qualitative and quantitative characteristics.

    Science.gov (United States)

    Katz, Judith; Saadon-Grosman, Noam; Arzy, Shahar

    2017-02-01

    The life-review experience (LRE) is a most intriguing mental phenomenon that has fascinated humans since time immemorial. In LRE one vividly sees a succession of one's own life-events. While reports of LRE are abundant in the medical, psychological and popular literature, not much is known about LRE's cognitive and psychological basis. Moreover, while LRE is known as part of the phenomenology of near-death experience, its manifestation in the general population and in other circumstances is still to be investigated. In a first step we studied the phenomenology of LRE by means of in-depth qualitative interviews of 7 people who underwent full LRE. In a second step we extracted the main characteristics of LRE to develop a questionnaire and an LRE-score that best reflects LRE phenomenology. This questionnaire was then administered to 264 participants of diverse ages and backgrounds, and the resulting score was further subjected to statistical analyses. Qualitative analysis showed the LRE to manifest several subtypes of characteristics in terms of order, continuity, the covered period, extension to the future, valence, emotions, and perspective taking. Quantitative results in the normal population showed a normal distribution of the LRE-score over participants. Re-experiencing one's own life-events, the so-called LRE, is a phenomenon with well-defined characteristics, and its subcomponents may also be evident in healthy people. This suggests that a representation of life-events as a continuum exists in the cognitive system, and may be further expressed in extreme conditions of psychological and physiological stress. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Adaptive microfluidic gradient generator for quantitative chemotaxis experiments

    Science.gov (United States)

    Anielski, Alexander; Pfannes, Eva K. B.; Beta, Carsten

    2017-03-01

    Chemotactic motion in a chemical gradient is an essential cellular function that controls many processes in the living world. For a better understanding and more detailed modelling of the underlying mechanisms of chemotaxis, quantitative investigations in controlled environments are needed. We developed a setup that allows us to separately address the dependencies of the chemotactic motion on the average background concentration and on the gradient steepness of the chemoattractant. In particular, both the background concentration and the gradient steepness can be kept constant at the position of the cell while it moves along in the gradient direction. This is achieved by generating a well-defined chemoattractant gradient using flow photolysis. In this approach, the chemoattractant is released by a light-induced reaction from a caged precursor in a microfluidic flow chamber upstream of the cell. The flow photolysis approach is combined with an automated real-time cell tracker that determines changes in the cell position and triggers movement of the microscope stage such that the cell motion is compensated and the cell remains at the same position in the gradient profile. The gradient profile can be either determined experimentally using a caged fluorescent dye or may be alternatively determined by numerical solutions of the corresponding physical model. To demonstrate the function of this adaptive microfluidic gradient generator, we compare the chemotactic motion of Dictyostelium discoideum cells in a static gradient and in a gradient that adapts to the position of the moving cell.

  9. Quantitative modelling of the biomechanics of the avian syrinx

    DEFF Research Database (Denmark)

    Elemans, Coen P. H.; Larsen, Ole Næsbye; Hoffmann, Marc R.

    2003-01-01

    We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e. the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts...

  10. A Quantitative Software Risk Assessment Model

    Science.gov (United States)

    Lee, Alice

    2002-01-01

    This slide presentation reviews a risk assessment model as applied to software development. The presentation uses graphs to demonstrate basic concepts of software reliability. It also discusses the application of the risk model to the software development life cycle.

  11. Quantitative Determination of Aluminum in Deodorant Brands: A Guided Inquiry Learning Experience in Quantitative Analysis Laboratory

    Science.gov (United States)

    Sedwick, Victoria; Leal, Anne; Turner, Dea; Kanu, A. Bakarr

    2018-01-01

    The monitoring of metals in commercial products is essential for protecting public health against the hazards of metal toxicity. This article presents a guided inquiry (GI) experimental lab approach in a quantitative analysis lab class that enabled students to determine the levels of aluminum in deodorant brands. The utility of a GI experimental…

  12. MARKETING MODELS APPLICATION EXPERIENCE

    Directory of Open Access Journals (Sweden)

    A. Yu. Rymanov

    2011-01-01

    Marketing models are used for the assessment of such marketing elements as sales volume, market share, market attractiveness, advertising costs, product pushing and selling, profit, and profitability. A classification of buying-process decision-making models is presented. SWOT- and GAP-based models are best suited for sales assessments. Lately, there is a tendency to move from assessment on the basis of financial indices to assessment on the basis of non-financial ones. From the marketing viewpoint, the most important are models of long-term company activities and of consumer attraction, as well as operative models of market attractiveness.

  13. Modelling Urban Experiences

    DEFF Research Database (Denmark)

    Jantzen, Christian; Vetner, Mikael

    2008-01-01

    How can urban designers develop an emotionally satisfying environment not only for today's users but also for coming generations? Which devices can they use to elicit interesting and relevant urban experiences? This paper attempts to answer these questions by analyzing the design of Zuidas, a new...

  14. Quantitative Experiments to Explain the Change of Seasons

    Science.gov (United States)

    Testa, Italo; Busarello, Gianni; Puddu, Emanuella; Leccia, Silvio; Merluzzi, Paola; Colantonio, Arturo; Moretti, Maria Ida; Galano, Silvia; Zappia, Alessandro

    2015-01-01

    The science education literature shows that students have difficulty understanding what causes the seasons. Incorrect explanations are often due to a lack of knowledge about the physical mechanisms underlying this phenomenon. To address this, we present a module in which the students engage in quantitative measurements with a photovoltaic panel to…

  15. Quantitative models for sustainable supply chain management

    DEFF Research Database (Denmark)

    Brandenburg, M.; Govindan, Kannan; Sarkis, J.

    2014-01-01

    Sustainability, the consideration of environmental factors and social aspects, in supply chain management (SCM) has become a highly relevant topic for researchers and practitioners. The application of operations research methods and related models, i.e. formal modeling, for closed-loop SCM and reverse logistics has been effectively reviewed in previously published research. This situation is in contrast to the understanding and review of mathematical models that focus on environmental or social factors in forward supply chains (SC), which has seen less investigation. To evaluate developments...

  18. The design of a quantitative western blot experiment.

    Science.gov (United States)

    Taylor, Sean C; Posch, Anton

    2014-01-01

    Western blotting is a technique that has been in practice for more than three decades that began as a means of detecting a protein target in a complex sample. Although there have been significant advances in both the imaging and reagent technologies to improve sensitivity, dynamic range of detection, and the applicability of multiplexed target detection, the basic technique has remained essentially unchanged. In the past, western blotting was used simply to detect a specific target protein in a complex mixture, but now journal editors and reviewers are requesting the quantitative interpretation of western blot data in terms of fold changes in protein expression between samples. The calculations are based on the differential densitometry of the associated chemiluminescent and/or fluorescent signals from the blots and this now requires a fundamental shift in the experimental methodology, acquisition, and interpretation of the data. We have recently published an updated approach to produce quantitative densitometric data from western blots (Taylor et al., 2013) and here we summarize the complete western blot workflow with a focus on sample preparation and data analysis for quantitative western blotting.
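    The quantitative interpretation the authors call for reduces, at its core, to simple normalization arithmetic on densitometric values. A minimal sketch with invented band densities: background-subtract, normalize each lane to its loading control, then express fold change against the control mean.

```python
import numpy as np

lanes = ["ctrl-1", "ctrl-2", "treat-1", "treat-2"]
target = np.array([1200.0, 1100.0, 2600.0, 2400.0])   # target-band densities
loading = np.array([900.0, 850.0, 880.0, 910.0])      # loading-control densities
background = 100.0                                    # per-blot background

norm = (target - background) / (loading - background) # per-lane normalization
fold = norm / norm[:2].mean()                         # fold change vs. control mean
for lane, f in zip(lanes, fold):
    print(f"{lane}: {f:.2f}-fold vs. control")
```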

  19. A Quantitative Model of Expert Transcription Typing

    Science.gov (United States)

    1993-03-08

    1-3), how degradation of the text away from normal prose affects the rate of typing (phenomena 4-6), patterns of interkey intervals (phenomena 7-11) ... A more detailed analysis of this phenomenon is based on the work of West and Sabban (1932). They used progressively degraded copy to test "the ... company: Analytic modelling applied to real-world problems. In D. Diaper, D. Gilmore, G. Cockton, & B. Shackel (Eds.), Human-Computer Interaction INTERACT

  20. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

    OpenAIRE

    Cobbs, Gary

    2012-01-01

    Background: Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most pote...
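    The "constant efficiency" baseline that these kinetic models improve on can be written down directly: F_c = F0·(1+E)^c fitted to the exponential phase. The sketch below does exactly that on synthetic data; a kinetic model in the spirit of this record would instead let the efficiency fall as reagents are consumed.

```python
import numpy as np
from scipy.optimize import curve_fit

cycles = np.arange(12, 22)                   # exponential-phase cycles
F = 1e-3 * 1.92 ** cycles                    # synthetic: F0 = 1e-3, 1+E = 1.92
F *= 1 + 0.02 * np.random.randn(cycles.size)

def expo(c, F0, E):
    """Constant-efficiency amplification: F_c = F0 * (1 + E)^c."""
    return F0 * (1 + E) ** c

(F0, E), _ = curve_fit(expo, cycles, F, p0=(1e-3, 0.9))
print(f"estimated efficiency E = {E:.2f}, F0 = {F0:.1e}")
```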

  1. Quantitative Models and Analysis for Reactive Systems

    DEFF Research Database (Denmark)

    Thrane, Claus

    The majority of modern software and hardware systems are reactive systems, where input provided by the user (possibly another system) and the output of the system are exchanged continuously throughout the (possibly) indefinite execution of the system. Natural examples include control systems, mobile ... energy consumption, latency, mean-time to failure, and cost. For systems integrated in mass-market products, the ability to quantify trade-offs between performance and robustness, under given technical and economic constraints, is of strategic importance. ... by the environment in which they are embedded. This thesis studies the semantics and properties of a model-based framework for reactive systems, in which models and specifications are assumed to contain quantifiable information, such as references to time or energy. Our goal is to develop a theory of approximation ... in terms of a new mathematical basis for systems modeling which can encompass behavioural properties as well as environmental constraints. They continue by pointing out that continuous performance and robustness measures are paramount when dealing with physical resource levels such as clock frequency ...

  2. Quantitative comparisons of analogue models of brittle wedge dynamics

    Science.gov (United States)

    Schreurs, Guido

    2010-05-01

    Analogue model experiments are widely used to gain insights into the evolution of geological structures. In this study, we present a direct comparison of experimental results of 14 analogue modelling laboratories using prescribed set-ups. A quantitative analysis of the results will document the variability among models and will allow an appraisal of reproducibility and limits of interpretation. This has direct implications for comparisons between structures in analogue models and natural field examples. All laboratories used the same frictional analogue materials (quartz and corundum sand) and prescribed model-building techniques (sieving and levelling). Although each laboratory used its own experimental apparatus, the same type of self-adhesive foil was used to cover the base and all the walls of the experimental apparatus in order to guarantee identical boundary conditions (i.e. identical shear stresses at the base and walls). Three experimental set-ups using only brittle frictional materials were examined. In each of the three set-ups the model was shortened by a vertical wall, which moved with respect to the fixed base and the three remaining sidewalls. The minimum width of the model (dimension parallel to mobile wall) was also prescribed. In the first experimental set-up, a quartz sand wedge with a surface slope of ~20° was pushed by a mobile wall. All models conformed to the critical taper theory, maintained a stable surface slope and did not show internal deformation. In the next two experimental set-ups, a horizontal sand pack consisting of alternating quartz sand and corundum sand layers was shortened from one side by the mobile wall. In one of the set-ups a thin rigid sheet covered part of the model base and was attached to the mobile wall (i.e. a basal velocity discontinuity distant from the mobile wall). In the other set-up a basal rigid sheet was absent and the basal velocity discontinuity was located at the mobile wall. In both types of experiments

  3. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

    Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  4. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...

  5. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters (1992–2012). There are six ...
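
    The record does not include the model's parameters, but the core computation is compact. Below is a minimal sketch of how a fitted HMM yields a two-day-ahead prediction: filter the observed sequence with the forward algorithm, then propagate the state distribution two steps through the transition matrix. All matrices and the three snowfall classes are hypothetical placeholders, not the published model.

```python
import numpy as np

# Hypothetical HMM: states could represent snowfall classes (none/light/heavy).
A = np.array([[0.7, 0.2, 0.1],    # state transition probabilities
              [0.3, 0.5, 0.2],
              [0.2, 0.4, 0.4]])
B = np.array([[0.8, 0.15, 0.05],  # P(observation symbol | state)
              [0.2, 0.6, 0.2],
              [0.05, 0.35, 0.6]])
pi = np.array([0.6, 0.3, 0.1])    # initial state distribution

def forward_filter(obs):
    """Forward algorithm: returns P(state_t | obs_1..t)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

obs = [0, 0, 1, 2, 1]                  # a few days of discretized observations
state_now = forward_filter(obs)
state_in_two_days = state_now @ A @ A  # propagate the chain two steps ahead
print("P(snowfall class, day+2):", state_in_two_days)
```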

  6. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation

    NARCIS (Netherlands)

    Ribba, B.; Grimm, H. P.; Agoram, B.; Davies, M. R.; Gadkar, K.; Niederer, S.; van Riel, N.; Timmis, J.; van der Graaf, P. H.

    2017-01-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early

  7. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters (1992–2012). There are six ...

  8. Measures of morally injurious experiences: A quantitative comparison.

    Science.gov (United States)

    Lancaster, Steven L; Irene Harris, J

    2018-03-28

    A recent body of literature has examined the psychological effects of perpetrating or failing to prevent acts that violate one's sense of right and wrong. The objective of this study was to examine and compare correlations between the two most widely used instruments measuring this construct in a sample of military veterans and relevant psychosocial variables. Individuals (N = 182) who reported military combat experience completed the Moral Injury Events Scale and the Moral Injury Questionnaire-Military Version, along with measures of combat exposure, depression, posttraumatic stress disorder, alcohol concerns, anger, guilt, and shame. Results indicate similar correlations between the morally injurious experiences instruments and negative psychosocial variables, but different correlations with combat exposure. Implications for further research in the conceptualization and treatment of morally injurious experiences are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Generalized PSF modeling for optimized quantitation in PET imaging

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-ud-Din, Hassan; Karakatsanis, Nicolas A.; Jha, Abhinav K.; Casey, Michael E.; Kadrmas, Dan J.; Rahmim, Arman

    2017-06-01

    Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF

  10. Statistical aspects of quantitative real-time PCR experiment design

    Czech Academy of Sciences Publication Activity Database

    Kitchen, R.R.; Kubista, Mikael; Tichopád, Aleš

    2010-01-01

    Vol. 50, No. 4 (2010), pp. 231-236 ISSN 1046-2023 R&D Projects: GA AV ČR IAA500520809 Institutional research plan: CEZ:AV0Z50520701 Keywords: Real-time PCR * Experiment design * Nested analysis of variance Subject RIV: EB - Genetics; Molecular Biology Impact factor: 4.527, year: 2010

  11. Quantitative modelling in design and operation of food supply systems

    NARCIS (Netherlands)

    Beek, van P.

    2004-01-01

    During the last two decades food supply systems not only got interest of food technologists but also from the field of Operations Research and Management Science. Operations Research (OR) is concerned with quantitative modelling and can be used to get insight into the optimal configuration and

  12. Modeling Logistic Performance in Quantitative Microbial Risk Assessment

    NARCIS (Netherlands)

    Rijgersberg, H.; Tromp, S.O.; Jacxsens, L.; Uyttendaele, M.

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage

  13. Modeling Users' Experiences with Interactive Systems

    CERN Document Server

    Karapanos, Evangelos

    2013-01-01

    Over the past decade the field of Human-Computer Interaction has evolved from the study of the usability of interactive products towards a more holistic understanding of how they may mediate desired human experiences. This book identifies the notion of diversity in users' experiences with interactive products and proposes methods and tools for modeling this along two levels: (a) interpersonal diversity in users' responses to early conceptual designs, and (b) the dynamics of users' experiences over time. The Repertory Grid Technique is proposed as an alternative to standardized psychometric scales for modeling interpersonal diversity in users' responses to early concepts in the design process, and new Multi-Dimensional Scaling procedures are introduced for modeling such complex quantitative data. iScale, a tool for the retrospective assessment of users' experiences over time, is proposed as an alternative to longitudinal field studies, and a semi-automated technique for the analysis of the elicited exper...

  14. Quantitative and logic modelling of gene and molecular networks

    Science.gov (United States)

    Le Novère, Nicolas

    2015-01-01

    Behaviours of complex biomolecular systems are often irreducible to the elementary properties of their individual components. Explanatory and predictive mathematical models are therefore useful for fully understanding and precisely engineering cellular functions. The development and analyses of these models require their adaptation to the problems that need to be solved and the type and amount of available genetic or molecular data. Quantitative and logic modelling are among the main methods currently used to model molecular and gene networks. Each approach comes with inherent advantages and weaknesses. Recent developments show that hybrid approaches will become essential for further progress in synthetic biology and in the development of virtual organisms. PMID:25645874

  15. Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.

    Science.gov (United States)

    Moray, Neville; Groeger, John; Stanton, Neville

    2017-02-01

    This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement and provide quantitative guidance for the design of safer railway systems' speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.

  16. Quantitative comparison of canopy conductance models using a Bayesian approach

    Science.gov (United States)

    Samanta, S.; Clayton, M. K.; Mackay, D. S.; Kruger, E. L.; Ewers, B. E.

    2008-09-01

    A quantitative model comparison methodology based on deviance information criterion, a Bayesian measure of the trade-off between model complexity and goodness of fit, is developed and demonstrated by comparing semiempirical transpiration models. This methodology accounts for parameter and prediction uncertainties associated with such models and facilitates objective selection of the simplest model, out of available alternatives, which does not significantly compromise the ability to accurately model observations. We use this methodology to compare various Jarvis canopy conductance model configurations, embedded within a larger transpiration model, against canopy transpiration measured by sap flux. The results indicate that descriptions of the dependence of stomatal conductance on vapor pressure deficit, photosynthetic radiation, and temperature, as well as the gradual variation in canopy conductance through the season are essential in the transpiration model. Use of soil moisture was moderately significant, but only when used with a hyperbolic vapor pressure deficit relationship. Subtle differences in model quality could be clearly associated with small structural changes through the use of this methodology. The results also indicate that increments in model complexity are not always accompanied by improvements in model quality and that such improvements are conditional on model structure. Possible application of this methodology to compare complex semiempirical models of natural systems in general is also discussed.
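
    As an illustration of the deviance information criterion underlying this comparison, the sketch below computes DIC from posterior samples for a toy Gaussian model, using DIC = D_bar + p_D with p_D = D_bar - D(theta_bar). The data, the mock posterior, and the assumed known variance are all stand-ins, not the transpiration models of the study.

```python
import numpy as np

# Toy DIC computation from posterior samples (all data hypothetical).
rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=50)   # observations
# Mock posterior for the mean of a Gaussian with known sigma = 1.
theta_samples = rng.normal(y.mean(), 1 / np.sqrt(len(y)), size=4000)

def deviance(theta, y, sigma=1.0):
    # D(theta) = -2 * log-likelihood of the data under the model
    return -2 * np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                       - (y - theta)**2 / (2 * sigma**2))

D = np.array([deviance(t, y) for t in theta_samples])
D_bar = D.mean()                                  # posterior mean deviance
p_D = D_bar - deviance(theta_samples.mean(), y)   # effective number of parameters
DIC = D_bar + p_D                                 # lower DIC = better trade-off
print(f"D_bar={D_bar:.1f}, p_D={p_D:.2f}, DIC={DIC:.1f}")
```

    Comparing DIC values across candidate model configurations is what allows the objective selection of the simplest adequate model that the abstract describes.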

  17. Quantitative versus qualitative modeling: a complementary approach in ecosystem study.

    Science.gov (United States)

    Bondavalli, C; Favilla, S; Bodini, A

    2009-02-01

    Natural disturbance or human perturbation acts upon ecosystems by changing some dynamical parameters of one or more species. Foreseeing these modifications is necessary before embarking on an intervention: predictions may help to assess management options and define hypotheses for interventions. Models become valuable tools for studying and making predictions only when they capture the types of interactions and their magnitudes. Quantitative models are more precise and specific about a system, but require a large effort in model construction. Because of this, ecological systems very often remain only partially specified, and one possible approach to their description and analysis comes from qualitative modelling. Qualitative models yield predictions as directions of change in species abundance, but in complex systems these predictions are often ambiguous, being the result of opposite actions exerted on the same species by way of multiple pathways of interactions. Again, to avoid such ambiguities one needs to know the intensity of all links in the system. One way to make link magnitude explicit in a way that can be used in qualitative analysis is described in this paper and takes advantage of another type of ecosystem representation: ecological flow networks. These flow diagrams contain the structure, the relative position and the connections between the components of a system, and the quantity of matter flowing along every connection. In this paper it is shown how these ecological flow networks can be used to produce a quantitative model similar to its qualitative counterpart. Analyzed through the apparatus of loop analysis, this quantitative model yields predictions that are by no means ambiguous, solving in an elegant way the basic problem of qualitative analysis. The approach adopted in this work is still preliminary and we must be careful in its application.
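
    A minimal sketch of the quantitative counterpart of loop analysis described here: once link magnitudes are fixed (e.g. scaled from a flow network), the steady-state response of species i to a sustained (press) input on species j is proportional to entry (i, j) of -inv(A), where A is the community (Jacobian) matrix. The matrix values below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical community (Jacobian) matrix: a_ij = effect of species j on
# species i; magnitudes could be scaled from flow-network data.
A = np.array([[-0.5,  0.0, -0.3],
              [ 0.4, -0.2,  0.0],
              [ 0.0,  0.3, -0.6]])

# For dx/dt = A x + b, a sustained (press) change db_j shifts the
# equilibrium by dx = -inv(A) db, so column j of -inv(A) gives the response.
response = -np.linalg.inv(A)
print(np.sign(response))   # predicted directions of change (qualitative)
print(response)            # magnitudes resolve the qualitative ambiguities
```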

  18. A Review on Quantitative Models for Sustainable Food Logistics Management

    Directory of Open Access Journals (Sweden)

    M. Soysal

    2012-12-01

    Full Text Available Over the last two decades, food logistics systems have seen the transition from a focus on traditional supply chain management to food supply chain management, and successively, to sustainable food supply chain management. The main aim of this study is to identify key logistical aims in these three phases and analyse currently available quantitative models to point out modelling challenges in sustainable food logistics management (SFLM). A literature review of quantitative studies is conducted, and qualitative studies are also consulted to understand the key logistical aims more clearly and to identify relevant system scope issues. Results show that research on SFLM has been progressively developing according to the needs of the food industry. However, the intrinsic characteristics of food products and processes have not yet been handled properly in the identified studies. The majority of the works reviewed have not addressed sustainability problems, apart from a few recent studies. Therefore, the study concludes that new and advanced quantitative models are needed that take specific SFLM requirements from practice into consideration to support business decisions and capture food supply chain dynamics.

  19. Genomic value prediction for quantitative traits under the epistatic model

    Directory of Open Access Journals (Sweden)

    Xu Shizhong

    2011-01-01

    Full Text Available Abstract Background Most quantitative traits are controlled by multiple quantitative trait loci (QTL). The contribution of each locus may be negligible but the collective contribution of all loci is usually significant. Genome selection that uses markers of the entire genome to predict the genomic values of individual plants or animals can be more efficient than selection on phenotypic values and pedigree information alone for genetic improvement. When a quantitative trait is contributed by epistatic effects, using all markers (main effects) and marker pairs (epistatic effects) to predict the genomic values of plants can achieve the maximum efficiency for genetic improvement. Results In this study, we created 126 recombinant inbred lines of soybean and genotyped 80 markers across the genome. We applied the genome selection technique to predict the genomic value of somatic embryo number (a quantitative trait) for each line. Cross validation analysis showed that the squared correlation coefficient between the observed and predicted embryo numbers was 0.33 when only main (additive) effects were used for prediction. When the interaction (epistatic) effects were also included in the model, the squared correlation coefficient reached 0.78. Conclusions This study provided an excellent example for the application of genome selection to plant breeding.
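
    A sketch of the genome-selection idea under stated assumptions: ridge regression (a stand-in for the paper's estimation method) over main-effect markers alone versus markers plus all pairwise interaction terms, scored by cross-validated squared correlation. Genotypes and phenotypes are simulated, with one epistatic effect planted so the second model should score higher, mirroring the 0.33 versus 0.78 pattern qualitatively rather than reproducing it.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Simulated stand-in data matching the stated dimensions: 126 RILs, 80 markers.
rng = np.random.default_rng(1)
n_lines, n_markers = 126, 80
X = rng.choice([0.0, 1.0], size=(n_lines, n_markers))  # inbred-line genotypes

pairs = list(combinations(range(n_markers), 2))
X_epi = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])
# Phenotype with additive effects plus one planted epistatic interaction.
y = X[:, :5] @ rng.normal(size=5) + 2.0 * X[:, 3] * X[:, 7] \
    + rng.normal(size=n_lines)

for name, feats in [("additive", X),
                    ("additive+epistatic", np.hstack([X, X_epi]))]:
    # Ridge shrinkage handles p >> n, as genome selection methods must.
    y_hat = cross_val_predict(Ridge(alpha=10.0), feats, y, cv=5)
    print(name, "squared correlation:", np.corrcoef(y, y_hat)[0, 1] ** 2)
```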

  20. A Transformative Model for Undergraduate Quantitative Biology Education

    Science.gov (United States)

    Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions. PMID:20810949

  1. A transformative model for undergraduate quantitative biology education.

    Science.gov (United States)

    Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions.

  2. Bridging experiments, models and simulations

    DEFF Research Database (Denmark)

    Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca

    2012-01-01

    understanding of living organisms and also how they can reduce, replace, and refine animal experiments. A fundamental requirement to fulfill these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present...... of biovariability; 2) testing and developing robust techniques and tools as a prerequisite to conducting physiological investigations; 3) defining and adopting standards to facilitate the interoperability of experiments, models, and simulations; 4) and understanding physiological validation as an iterative process...... that contributes to defining the specific aspects of cardiac electrophysiology the MSE system targets, rather than being only an external test, and that this is driven by advances in experimental and computational methods and the combination of both....

  3. QuantUM: Quantitative Safety Analysis of UML Models

    Directory of Open Access Journals (Sweden)

    Florian Leitner-Fischer

    2011-07-01

    Full Text Available When developing a safety-critical system it is essential to obtain an assessment of different design alternatives. In particular, an early safety assessment of the architectural design of a system is desirable. In spite of the plethora of available formal quantitative analysis methods, it is still difficult for software and system architects to integrate these techniques into their everyday work. This is mainly due to the lack of methods that can be directly applied to architecture level models, for instance given as UML diagrams. Also, it is necessary that the description methods used do not require a profound knowledge of formal methods. Our approach bridges this gap and improves the integration of quantitative safety analysis methods into the development process. All inputs of the analysis are specified at the level of a UML model. This model is then automatically translated into the analysis model, and the results of the analysis are consequently represented on the level of the UML model. Thus the analysis model and the formal methods used during the analysis are hidden from the user. We illustrate the usefulness of our approach using an industrial strength case study.

  4. Debris flows: Experiments and modelling

    Science.gov (United States)

    Turnbull, Barbara; Bowman, Elisabeth T.; McElwaine, Jim N.

    2015-01-01

    Debris flows and debris avalanches are complex, gravity-driven currents of rock, water and sediments that can be highly mobile. This combination of component materials leads to a rich morphology and unusual dynamics, exhibiting features of both granular materials and viscous gravity currents. Although extreme events such as those at Kolka Karmadon in North Ossetia (2002) [1] and Huascarán (1970) [2] strongly motivate us to understand how such high levels of mobility can occur, smaller events are ubiquitous and capable of endangering infrastructure and life, requiring mitigation. Recent progress in modelling debris flows has seen the development of multiphase models that can start to provide clues of the origins of the unique phenomenology of debris flows. However, the spatial and temporal variations that debris flows exhibit make this task challenging and laboratory experiments, where boundary and initial conditions can be controlled and reproduced, are crucial both to validate models and to inspire new modelling approaches. This paper discusses recent laboratory experiments on debris flows and the state of the art in numerical models.

  5. Quantitative analysis of a wind energy conversion model

    International Nuclear Information System (INIS)

    Zucker, Florian; Gräbner, Anna; Strunz, Andreas; Meyn, Jan-Peter

    2015-01-01

    A rotor of 12 cm diameter is attached to a precision electric motor, used as a generator, to make a model wind turbine. Output power of the generator is measured in a wind tunnel with up to 15 m s⁻¹ air velocity. The maximum power is 3.4 W; the power conversion factor from kinetic to electric energy is c_p = 0.15. The v³ power law is confirmed. The model illustrates several technically important features of industrial wind turbines quantitatively. (paper)
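
    The reported numbers are mutually consistent, as this back-of-the-envelope check shows: with an assumed standard air density of 1.2 kg m⁻³, the kinetic power through the 12 cm rotor disc at 15 m s⁻¹ is about 23 W, so 3.4 W of electrical output gives c_p of roughly 0.15.

```python
import numpy as np

rho = 1.2          # air density, kg/m^3 (assumed standard value)
r = 0.06           # rotor radius, m (12 cm diameter)
A = np.pi * r**2   # swept area, m^2
v = 15.0           # wind speed, m/s
P_el = 3.4         # measured electrical power, W

P_kin = 0.5 * rho * A * v**3   # kinetic power through the rotor disc (v^3 law)
c_p = P_el / P_kin             # power conversion factor
print(f"c_p = {c_p:.2f}")      # ~0.15, matching the reported value
```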

  6. Towards Quantitative Systems Pharmacology Models of Chemotherapy-Induced Neutropenia.

    Science.gov (United States)

    Craig, M

    2017-05-01

    Neutropenia is a serious toxic complication of chemotherapeutic treatment. For years, mathematical models have been developed to better predict hematological outcomes during chemotherapy in both the traditional pharmaceutical sciences and mathematical biology disciplines. An increasing number of quantitative systems pharmacology (QSP) models that combine systems approaches, physiology, and pharmacokinetics/pharmacodynamics have been successfully developed. Here, I detail the shift towards QSP efforts, emphasizing the importance of incorporating systems-level physiological considerations in pharmacometrics. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  7. Quantitative analysis of a wind energy conversion model

    Science.gov (United States)

    Zucker, Florian; Gräbner, Anna; Strunz, Andreas; Meyn, Jan-Peter

    2015-03-01

    A rotor of 12 cm diameter is attached to a precision electric motor, used as a generator, to make a model wind turbine. Output power of the generator is measured in a wind tunnel with up to 15 m s⁻¹ air velocity. The maximum power is 3.4 W; the power conversion factor from kinetic to electric energy is c_p = 0.15. The v³ power law is confirmed. The model illustrates several technically important features of industrial wind turbines quantitatively.

  8. Frequency-Domain Response Analysis for Quantitative Systems Pharmacology Models.

    Science.gov (United States)

    Schulthess, Pascal; Post, Teun M; Yates, James; van der Graaf, Piet H

    2017-11-28

    Drug dosing regimen can significantly impact drug effect and, thus, the success of treatments. Nevertheless, trial and error is still the most commonly used method by conventional pharmacometric approaches to optimize dosing regimen. In this tutorial, we utilize four distinct classes of quantitative systems pharmacology models to introduce frequency-domain response analysis, a method widely used in electrical and control engineering that allows the analytical optimization of drug treatment regimen from the dynamics of the model. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
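
    To make the method concrete, here is a minimal sketch of frequency-domain response analysis applied to a one-compartment linear PK model (a stand-in for the tutorial's QSP models; all parameter values hypothetical): dosing at period T injects input harmonics at omega = 2*pi/T, and the transfer-function gain |H(i*omega)| shows how strongly each dosing frequency appears in the concentration.

```python
import numpy as np

# One-compartment linear PK model: dC/dt = -ke*C + u(t)/V.
ke, V = 0.1, 10.0   # elimination rate (1/h) and volume (L), hypothetical

def gain(omega):
    # Transfer function from input rate to concentration:
    # H(i*omega) = (1/V) / (i*omega + ke)
    return np.abs((1.0 / V) / (1j * omega + ke))

# Dosing at period T contributes harmonics at omega = 2*pi/T.
for T in [6.0, 12.0, 24.0]:   # dosing intervals in hours
    w = 2 * np.pi / T
    print(f"T={T:4.0f} h  |H(iw)|={gain(w):.4f}")
# The low-pass gain shows how faster dosing attenuates concentration ripple,
# the kind of regimen insight the tutorial derives analytically.
```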

  9. From classical genetics to quantitative genetics to systems biology: modeling epistasis.

    Directory of Open Access Journals (Sweden)

    David L Aylor

    2008-03-01

    Full Text Available Gene expression data has been used in lieu of phenotype in both classical and quantitative genetic settings. These two disciplines have separate approaches to measuring and interpreting epistasis, which is the interaction between alleles at different loci. We propose a framework for estimating and interpreting epistasis from a classical experiment that combines the strengths of each approach. A regression analysis step accommodates the quantitative nature of expression measurements by estimating the effect of gene deletions plus any interaction. Effects are selected by significance such that a reduced model describes each expression trait. We show how the resulting models correspond to specific hierarchical relationships between two regulator genes and a target gene. These relationships are the basic units of genetic pathways and genomic system diagrams. Our approach can be extended to analyze data from a variety of experiments, multiple loci, and multiple environments.

  10. Quantitative insight into models of Hedgehog signal transduction.

    Science.gov (United States)

    Farzan, Shohreh F; Ogden, Stacey K; Robbins, David J

    2010-01-01

    The Hedgehog (Hh) signaling pathway is an essential regulator of embryonic development and a key factor in carcinogenesis.(1,2) Hh, a secreted morphogen, activates intracellular signaling events via downstream effector proteins, which translate the signal to regulate target gene transcription.(3,4) In a recent publication, we quantitatively compared two commonly accepted models of Hh signal transduction.(5) Each model requires a different ratio of signaling components to be feasible. Thus, we hypothesized that knowing the steady-state ratio of core signaling components might allow us to distinguish between models. We reported vast differences in the molar concentrations of endogenous effectors of Hh signaling, with Smo present in limiting concentrations.(5) This extra view summarizes the implications of this endogenous ratio in relation to current models of Hh signaling and places our results in the context of recent work describing the involvement of the guanine nucleotide binding protein Gαi and Cos2 motility.

  11. A Colorimetric Analysis Experiment Not Requiring a Spectrophotometer: Quantitative Determination of Albumin in Powdered Egg White

    Science.gov (United States)

    Charlton, Amanda K.; Sevcik, Richard S.; Tucker, Dorie A.; Schultz, Linda D.

    2007-01-01

    A general science experiment for high school chemistry students might serve as an excellent review of the concepts of solution preparation, solubility, pH, and qualitative and quantitative analysis of a common food product. The students could learn to use safe laboratory techniques, collect and analyze data using proper scientific methodology and…

  12. Model for Quantitative Evaluation of Enzyme Replacement Treatment

    Directory of Open Access Journals (Sweden)

    Radeva B.

    2009-12-01

    Full Text Available Gaucher disease is the most frequent lysosomal disorder. Its enzyme replacement treatment is an important advance of modern biotechnology that has been used successfully in recent years. Evaluating the optimal dose for each patient is important for both health and economic reasons: enzyme replacement is the most expensive treatment and must be administered continuously and without interruption. Since 2001, enzyme replacement therapy with Cerezyme*Genzyme has been formally available in Bulgaria, but after some time it was interrupted for 1-2 months, and the patients' doses were not optimal. The aim of our work is to find a mathematical model for quantitative evaluation of ERT of Gaucher disease. The model applies the software package "Statistika 6" to the individual data of 5-year-old children with Gaucher disease treated with Cerezyme. The model output made it possible to evaluate quantitatively the individual trends in the development of the disease of each child and their correlations. On the basis of these results, we might recommend suitable changes in ERT.

  13. Quantitative aspects and dynamic modelling of glucosinolate metabolism

    DEFF Research Database (Denmark)

    Vik, Daniel

    and ecologically important glucosinolate (GLS) compounds of cruciferous plants – including the model plant Arabidopsis thaliana – have been studied extensively with regards to their biosynthesis and degradation. However, efforts to construct a dynamic model unifying the regulatory aspects have not been made......Advancements in ‘omics technologies now allow acquisition of enormous amounts of quantitative information about biomolecules. This has led to the emergence of new scientific sub‐disciplines e.g. computational, systems and ‘quantitative’ biology. These disciplines examine complex biological...... behaviour through computational and mathematical approaches and have resulted in substantial insights and advances in molecular biology and physiology. Capitalizing on the accumulated knowledge and data, it is possible to construct dynamic models of complex biological systems, thereby initiating the so...

  14. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  15. Quantifying Zika: Advancing the Epidemiology of Zika With Quantitative Models.

    Science.gov (United States)

    Keegan, Lindsay T; Lessler, Justin; Johansson, Michael A

    2017-12-16

    When Zika virus (ZIKV) emerged in the Americas, little was known about its biology, pathogenesis, and transmission potential, and the scope of the epidemic was largely hidden, owing to generally mild infections and no established surveillance systems. Surges in congenital defects and Guillain-Barré syndrome alerted the world to the danger of ZIKV. In the context of limited data, quantitative models were critical in reducing uncertainties and guiding the global ZIKV response. Here, we review some of the models used to assess the risk of ZIKV-associated severe outcomes, the potential speed and size of ZIKV epidemics, and the geographic distribution of ZIKV risk. These models provide important insights and highlight significant unresolved questions related to ZIKV and other emerging pathogens. Published by Oxford University Press for the Infectious Diseases Society of America 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  16. Polymorphic ethyl alcohol as a model system for the quantitative study of glassy behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, H.E.; Schober, H.; Gonzalez, M.A. [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France); Bermejo, F.J.; Fayos, R.; Dawidowski, J. [Consejo Superior de Investigaciones Cientificas, Madrid (Spain); Ramos, M.A.; Vieira, S. [Universidad Autonoma de Madrid (Spain)

    1997-04-01

    The nearly universal transport and dynamical properties of amorphous materials or glasses are investigated. Reasonably successful phenomenological models have been developed to account for these properties as well as the behaviour near the glass transition, but quantitative microscopic models have had limited success. One hindrance to these investigations has been the lack of a material which exhibits glass-like properties in more than one phase at a given temperature. This report presents results of neutron-scattering experiments for one such material, ordinary ethyl alcohol, which promises to be a model system for future investigations of glassy behaviour. (author). 8 refs.

  17. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    Science.gov (United States)

    Cobbs, Gary

    2012-08-16

    Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of
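
    The paper's equilibrium formulae are too long to reproduce here, but the stepwise structure they feed can be sketched: per-cycle efficiency is recomputed from the remaining primer pool, so the simulated curve rises exponentially and then saturates. This illustrative iteration is not the paper's Model 1 or Model 2, and all concentrations and constants are hypothetical.

```python
import numpy as np

# Illustrative stepwise qPCR model: efficiency falls as primers deplete,
# giving the familiar exponential-then-plateau amplification curve.
T0 = 1e-12   # initial target concentration (M), hypothetical
P0 = 2e-7    # initial primer concentration (M), hypothetical

T, P = T0, P0
curve = []
for cycle in range(40):
    eff = P / (P + 5e-8)    # efficiency limited by remaining primer (assumed form)
    dT = eff * T            # newly synthesized strands this cycle
    T += dT
    P = max(P - dT, 0.0)    # primer consumption (stoichiometry simplified)
    curve.append(T)

curve = np.array(curve)
cq = np.argmax(curve > 1e-9)   # crude Cq: first cycle crossing a threshold
print("Cq ≈", cq)
```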

  18. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

    Directory of Open Access Journals (Sweden)

    Cobbs Gary

    2012-08-01

    Full Text Available Abstract Background Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Results Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the

  19. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  20. Quantitative modeling of the ionospheric response to geomagnetic activity

    Directory of Open Access Journals (Sweden)

    T. J. Fuller-Rowell

    2000-07-01

    Full Text Available A physical model of the coupled thermosphere and ionosphere has been used to determine the accuracy of model predictions of the ionospheric response to geomagnetic activity, and assess our understanding of the physical processes. The physical model is driven by empirical descriptions of the high-latitude electric field and auroral precipitation, as measures of the strength of the magnetospheric sources of energy and momentum to the upper atmosphere. Both sources are keyed to the time-dependent TIROS/NOAA auroral power index. The output of the model is the departure of the ionospheric F region from the normal climatological mean. A 50-day interval towards the end of 1997 has been simulated with the model for two cases. The first simulation uses only the electric fields and auroral forcing from the empirical models, and the second has an additional source of random electric field variability. In both cases, output from the physical model is compared with F-region data from ionosonde stations. Quantitative model/data comparisons have been performed to move beyond the conventional "visual" scientific assessment, in order to determine the value of the predictions for operational use. For this study, the ionosphere at two ionosonde stations has been studied in depth, one each from the northern and southern mid-latitudes. The model clearly captures the seasonal dependence in the ionospheric response to geomagnetic activity at mid-latitude, reproducing the tendency for decreased ion density in the summer hemisphere and increased densities in winter. In contrast to the "visual" success of the model, the detailed quantitative comparisons, which are necessary for space weather applications, are less impressive. The accuracy, or value, of the model has been quantified by evaluating the daily standard deviation, the root-mean-square error, and the correlation coefficient between the data and model predictions. The modeled quiet-time variability, or standard
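
    The three skill scores named at the end are straightforward to compute; the sketch below does so for placeholder arrays standing in for observed and modeled F-region departures from climatology. A model adds value when its RMSE falls below the standard deviation of the observations, i.e. when it beats climatology.

```python
import numpy as np

# Placeholder data: daily departures of an ionospheric quantity (e.g. foF2)
# from its climatological mean, observed vs modeled.
rng = np.random.default_rng(2)
obs = rng.normal(0, 1, 300)                    # observed departures
model = 0.7 * obs + rng.normal(0, 0.5, 300)    # imperfect model prediction

rmse = np.sqrt(np.mean((model - obs) ** 2))    # root-mean-square error
corr = np.corrcoef(obs, model)[0, 1]           # correlation coefficient
print(f"std(obs)={obs.std():.2f}, rmse={rmse:.2f}, r={corr:.2f}")
# Operational value requires rmse < std(obs): the model must beat climatology.
```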

  1. Deep ocean model penetrator experiments

    International Nuclear Information System (INIS)

    Freeman, T.J.; Burdett, J.R.F.

    1986-01-01

    Preliminary trials of experimental model penetrators in the deep ocean have been conducted as an international collaborative exercise by participating members (national bodies and the CEC) of the Engineering Studies Task Group of the Nuclear Energy Agency's Seabed Working Group. This report describes and gives the results of these experiments, which were conducted at two deep ocean study areas in the Atlantic: Great Meteor East and the Nares Abyssal Plain. Velocity profiles of penetrators of differing dimensions and weights have been determined as they free-fell through the water column and impacted the sediment. These velocity profiles are used to determine the final embedment depth of the penetrators and the resistance to penetration offered by the sediment. The results are compared with predictions of embedment depth derived from elementary models of a penetrator impacting with a sediment. It is tentatively concluded that once the resistance to penetration offered by a sediment at a particular site has been determined, this quantity can be used to successfully predict the embedment that penetrators of differing sizes and weights would achieve at the same site
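
    An elementary embedment model of the kind the report compares against can be sketched as a force balance along depth: m*v*dv/dz = W - R, with W the submerged weight and R a constant sediment resistance, integrated from the impact velocity until v = 0. All parameter values below are hypothetical, not the trial data.

```python
import numpy as np

m = 1800.0            # penetrator mass, kg (hypothetical)
v0 = 50.0             # impact velocity at the seabed, m/s (hypothetical)
W = m * 9.81 * 0.6    # submerged weight, N (assumed buoyancy factor)
R = 2.0e5             # constant sediment resistance, N (site-dependent)

v, z, dz = v0, 0.0, 0.001
while v > 0.0:
    # Quasi-static force balance along depth z: m*v*dv/dz = W - R
    v2 = v * v + 2 * dz * (W - R) / m
    if v2 <= 0.0:
        break
    v = np.sqrt(v2)
    z += dz
print(f"embedment depth ≈ {z:.1f} m")
```

    Once R is back-calculated from one measured velocity profile at a site, the same balance predicts embedment for penetrators of other sizes and weights, which is the report's tentative conclusion.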

  2. Accounting for genetic interactions improves modeling of individual quantitative trait phenotypes in yeast.

    Science.gov (United States)

    Forsberg, Simon K G; Bloom, Joshua S; Sadhu, Meru J; Kruglyak, Leonid; Carlborg, Örjan

    2017-04-01

    Experiments in model organisms report abundant genetic interactions underlying biologically important traits, whereas quantitative genetics theory predicts, and data support, the notion that most genetic variance in populations is additive. Here we describe networks of capacitating genetic interactions that contribute to quantitative trait variation in a large yeast intercross population. The additive variance explained by individual loci in a network is highly dependent on the allele frequencies of the interacting loci. Modeling of phenotypes for multilocus genotype classes in the epistatic networks is often improved by accounting for the interactions. We discuss the implications of these results for attempts to dissect genetic architectures and to predict individual phenotypes and long-term responses to selection.

  3. Towards Quantitative Spatial Models of Seabed Sediment Composition.

    Directory of Open Access Journals (Sweden)

    David Stephens

    Full Text Available There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom's parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method.
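
    A sketch of the modelling chain described here, under stated assumptions: transform the (mud, sand, gravel) fractions to two additive log-ratios, regress each on environmental predictors with a random forest, and back-transform predictions so they sum to one. Mock data replace the legacy grain-size samples, and the two per-ratio forests stand in for whatever multivariate variant the authors used.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Mock stand-ins: 500 samples, 6 environmental predictors
# (e.g. depth, current stress, reflectance), and compositional responses.
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 6))
comp = rng.dirichlet([2.0, 5.0, 1.0], size=500)   # (mud, sand, gravel) fractions

# Additive log-ratio (alr) transform, with gravel as the denominator part.
alr = np.log(comp[:, :2] / comp[:, 2:3])
models = [RandomForestRegressor(n_estimators=200, random_state=0)
          .fit(X, alr[:, k]) for k in range(2)]

# Back-transform predictions to compositions that sum to one.
pred = np.column_stack([m.predict(X) for m in models])
e = np.exp(np.column_stack([pred, np.zeros(len(pred))]))
comp_hat = e / e.sum(axis=1, keepdims=True)
print(comp_hat[:3])   # predicted (mud, sand, gravel) fractions
```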

  4. Quantitative Modeling of Human-Environment Interactions in Preindustrial Time

    Science.gov (United States)

    Sommer, Philipp S.; Kaplan, Jed O.

    2017-04-01

    Quantifying human-environment interactions and anthropogenic influences on the environment prior to the Industrial revolution is essential for understanding the current state of the earth system. This is particularly true for the terrestrial biosphere, but marine ecosystems and even climate were likely modified by human activities centuries to millennia ago. Direct observations are however very sparse in space and time, especially as one considers prehistory. Numerical models are therefore essential to produce a continuous picture of human-environment interactions in the past. Agent-based approaches, while widely applied to quantifying human influence on the environment in localized studies, are unsuitable for global spatial domains and Holocene timescales because of computational demands and large parameter uncertainty. Here we outline a new paradigm for the quantitative modeling of human-environment interactions in preindustrial time that is adapted to the global Holocene. Rather than attempting to simulate agency directly, the model is informed by a suite of characteristics describing those things about society that cannot be predicted on the basis of environment, e.g., diet, presence of agriculture, or range of animals exploited. These categorical data are combined with the properties of the physical environment in a coupled human-environment model. The model is, at its core, a dynamic global vegetation model with a module for simulating crop growth that is adapted for preindustrial agriculture. This allows us to simulate yield and calories for feeding both humans and their domesticated animals. We couple this basic caloric availability with a simple demographic model to calculate potential population, and, constrained by labor requirements and land limitations, we create scenarios of land use and land cover on a moderate-resolution grid. We further implement a feedback loop where anthropogenic activities lead to changes in the properties of the physical

  5. Quantitative Modelling of Trace Elements in Hard Coal.

    Science.gov (United States)

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models constructed. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools for coal quality assessment.
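
    The workflow is standard enough to sketch: fit a Partial Least Squares regression of a trace-element concentration on the coal and ash parameters, and report the cross-validated prediction error relative to the response range. The simulated data below merely mimic the stated dimensions (132 samples, 24 parameters); the robust PLS variant the study required for outlier handling is not shown.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Simulated stand-ins for 132 coal samples x 24 physical/chemical parameters.
rng = np.random.default_rng(4)
X = rng.normal(size=(132, 24))
y = X @ rng.normal(size=24) + rng.normal(scale=0.5, size=132)  # e.g. As content

pls = PLSRegression(n_components=5)                 # latent components to tune
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()  # cross-validated predictions
rmsecv = np.sqrt(np.mean((y_cv - y) ** 2))
print(f"RMSECV = {rmsecv:.2f} "
      f"({100 * rmsecv / (y.max() - y.min()):.1f}% of the response range)")
```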

  6. Experiments beyond the standard model

    International Nuclear Information System (INIS)

    Perl, M.L.

    1984-09-01

    This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to these subjects. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model. I will use some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology; I call these Experimental Needs. 92 references

  7. Melanoma screening: Informing public health policy with quantitative modelling.

    Directory of Open Access Journals (Sweden)

    Stephen Gilmore

    Full Text Available Australia and New Zealand share the highest incidence rates of melanoma worldwide. Despite the substantial increase in public and physician awareness of melanoma in Australia over the last 30 years (a result of the publicly funded mass media campaigns introduced in the early 1980s), mortality has steadily increased during this period. This increased mortality has led investigators to question the relative merits of primary versus secondary prevention; that is, sensible sun exposure practices versus early detection. Increased melanoma vigilance on the part of the public and among physicians has resulted in large increases in public health expenditure, primarily from screening costs and increased rates of office surgery. Has this attempt at secondary prevention been effective? Unfortunately, epidemiologic studies addressing the causal relationship between the level of secondary prevention and mortality are prohibitively difficult to implement: it is currently unknown whether increased melanoma surveillance reduces mortality, and if so, whether such an approach is cost-effective. Here I address the issue of secondary prevention of melanoma with respect to incidence and mortality (and cost per life saved) by developing a Markov model of melanoma epidemiology based on Australian incidence and mortality data. The advantages of developing a methodology that can determine constraint-based surveillance outcomes are twofold: first, it can address the issue of effectiveness; and second, it can quantify the trade-off between cost and utilisation of medical resources on one hand, and reduced morbidity and lives saved on the other. With respect to melanoma, implementing the model facilitates the quantitative determination of the relative effectiveness and trade-offs associated with different levels of secondary and tertiary prevention, both retrospectively and prospectively. For example, I show that the surveillance enhancement that began in
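
    A minimal Markov cohort sketch of this modelling approach (states, cycle length and all transition probabilities are hypothetical, not the calibrated Australian model): the cohort is propagated through annual transition cycles, and varying the screening-driven detection probability quantifies the mortality effect of increased surveillance.

```python
import numpy as np

# Hypothetical annual transition matrix; rows sum to one.
# States: healthy, undetected melanoma, detected (treated), dead.
P = np.array([
    [0.9989, 0.0010, 0.0,    0.0001],   # healthy
    [0.0,    0.93,   0.06,   0.01],     # undetected -> detected via screening
    [0.0,    0.0,    0.998,  0.002],    # detected/treated
    [0.0,    0.0,    0.0,    1.0],      # dead (absorbing)
])
state = np.array([1.0, 0.0, 0.0, 0.0])  # cohort starts healthy

for year in range(30):                   # annual cycles
    state = state @ P
print("30-year state occupancy:", np.round(state, 4))
# Raising the screening-driven 0.06 detection probability and re-running
# quantifies deaths averted per unit of surveillance effort.
```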

  8. First principles pharmacokinetic modeling: A quantitative study on Cyclosporin

    DEFF Research Database (Denmark)

    Mošat', Andrej; Lueshen, Eric; Heitzig, Martina

    2013-01-01

    renal and hepatic clearances, elimination half-life, and mass transfer coefficients, to establish drug biodistribution dynamics in all organs and tissues. This multi-scale model satisfies first principles and conservation of mass, species and momentum.Prediction of organ drug bioaccumulation...... as a function of cardiac output, physiology, pathology or administration route may be possible with the proposed PBPK framework. Successful application of our model-based drug development method may lead to more efficient preclinical trials, accelerated knowledge gain from animal experiments, and shortened time-to-market...

  9. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.
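
    The statistical core that PVeStA parallelizes can be sketched independently of Maude: simulate the probabilistic model repeatedly, treat each run as a Bernoulli sample of the property, and stop when the confidence interval is tight enough. The toy Markov chain, horizon, and thresholds below are placeholders; the independent per-sample simulations are exactly what the tool distributes to parallel workers.

```python
import numpy as np

# Toy discrete Markov chain: state 2 represents "failure" (absorbing).
rng = np.random.default_rng(5)
P = {0: [(0.9, 0), (0.1, 1)],
     1: [(0.5, 0), (0.5, 2)],
     2: [(1.0, 2)]}

def simulate(horizon=50):
    s = 0
    for _ in range(horizon):
        probs, nxt = zip(*P[s])
        s = rng.choice(nxt, p=probs)
        if s == 2:
            return 1.0   # property "reach failure within horizon" holds
    return 0.0

samples = []
while True:
    samples.append(simulate())
    n = len(samples)
    if n >= 100:
        half = 1.96 * np.std(samples) / np.sqrt(n)   # ~95% CI half-width
        if half < 0.01:
            break
print(f"P(failure) ≈ {np.mean(samples):.3f} ± {half:.3f} (n={n})")
```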

  10. Incorporation of caffeine into a quantitative model of fatigue and sleep.

    Science.gov (United States)

    Puckeridge, M; Fulcher, B D; Phillips, A J K; Robinson, P A

    2011-03-21

    A recent physiologically based model of human sleep is extended to incorporate the effects of caffeine on sleep-wake timing and fatigue. The model includes the sleep-active neurons of the hypothalamic ventrolateral preoptic area (VLPO), the wake-active monoaminergic brainstem populations (MA), their interactions with cholinergic/orexinergic (ACh/Orx) input to MA, and circadian and homeostatic drives. We model two effects of caffeine on the brain due to competitive antagonism of adenosine (Ad): (i) a reduction in the homeostatic drive and (ii) an increase in cholinergic activity. By comparing the model output to experimental data, constraints are determined on the parameters that describe the action of caffeine on the brain. In accord with experiment, the ranges of these parameters imply significant variability in caffeine sensitivity between individuals, with caffeine's effectiveness in reducing fatigue being highly dependent on an individual's tolerance, and past caffeine and sleep history. Although there are wide individual differences in caffeine sensitivity and thus in parameter values, once the model is calibrated for an individual it can be used to make quantitative predictions for that individual. A number of applications of the model are examined, using exemplar parameter values, including: (i) quantitative estimation of the sleep loss and the delay to sleep onset after taking caffeine for various doses and times; (ii) an analysis of the system's stable states showing that the wake state during sleep deprivation is stabilized after taking caffeine; and (iii) comparing model output successfully to experimental values of subjective fatigue reported in a total sleep deprivation study examining the reduction of fatigue with caffeine. This model provides a framework for quantitatively assessing optimal strategies for using caffeine, on an individual basis, to maintain performance during sleep deprivation. Copyright © 2011 Elsevier Ltd. All rights reserved.
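
    As a toy illustration of the kind of quantitative prediction such a model enables, the sketch below simulates a fatigue level in which caffeine attenuates a saturating homeostatic drive and is eliminated with first-order kinetics. This is a minimal sketch under assumed values (half-life, EC50, a Hill-type attenuation law); it is not the VLPO/MA physiological model of the record above.

      # Minimal sketch, NOT the published model: caffeine scales down a
      # saturating homeostatic sleep drive; all parameters are assumptions.
      import numpy as np

      T_HALF_H = 5.0                       # assumed caffeine plasma half-life (hours)
      K_ELIM = np.log(2) / T_HALF_H

      def caffeine_level(t_h, doses):
          """Summed first-order decay of doses given as [(time_h, mg), ...]."""
          return sum(mg * np.exp(-K_ELIM * (t_h - t0)) for t0, mg in doses if t_h >= t0)

      def homeostatic_drive(t_h, tau_rise=18.0):
          """Saturating rise of sleep pressure during continuous wakefulness."""
          return 1.0 - np.exp(-t_h / tau_rise)

      def effective_fatigue(t_h, doses, ec50_mg=100.0):
          """Caffeine divides down the homeostatic drive (hypothetical Hill-type law)."""
          return homeostatic_drive(t_h) / (1.0 + caffeine_level(t_h, doses) / ec50_mg)

      doses = [(16.0, 200.0)]              # 200 mg taken 16 h after waking
      for t in range(0, 25, 4):
          print(f"t={t:2d} h  fatigue={effective_fatigue(float(t), doses):.3f}")

    Calibrating the equivalents of T_HALF_H and ec50_mg against an individual's data is what would turn such a toy into a usable predictor, which is the per-individual calibration step the abstract emphasises.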

  11. Modeling the Experience of Emotion

    OpenAIRE

    Broekens, Joost

    2009-01-01

    Affective computing has proven to be a viable field of research comprised of a large number of multidisciplinary researchers resulting in work that is widely published. The majority of this work consists of computational models of emotion recognition, computational modeling of causal factors of emotion and emotion expression through rendered and robotic faces. A smaller part is concerned with modeling the effects of emotion, formal modeling of cognitive appraisal theory and models of emergent...

  12. CSML2SBML: a novel tool for converting quantitative biological pathway models from CSML into SBML.

    Science.gov (United States)

    Li, Chen; Nagasaki, Masao; Ikeda, Emi; Sekiya, Yayoi; Miyano, Satoru

    2014-07-01

    CSML and SBML are XML-based model definition standards developed with the aim of creating exchange formats for modeling, visualizing and simulating biological pathways. In this article we report a release of a format convertor for quantitative pathway models, namely CSML2SBML. It translates models encoded in CSML into SBML without loss of structural and kinetic information. Simulation and parameter estimation of the resulting SBML model can be carried out with the compliant tool CellDesigner for further analysis. The convertor is based on the standards CSML version 3.0 and SBML Level 2 Version 4. In our experiments, 11 out of 15 pathway models in the CSML model repository and 228 models in the Macrophage Pathway Knowledgebase (MACPAK) are successfully converted to SBML models. The consistency of the resulting models is validated by the libSBML Consistency Check of CellDesigner. Furthermore, a converted SBML model, with the kinetic parameters translated from the CSML model, reproduces in CellDesigner the same dynamics as the original CSML model running on Cell Illustrator. CSML2SBML, along with instructions and examples for use, is available at http://csml2sbml.csml.org. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  13. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee

    2016-01-01

    Full Text Available Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives to achieve operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods to increase supply chain visibility are still ambiguous. Based on the extant research on supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using the Z score from Six Sigma methodology to evaluate and improve the level of supply chain visibility.
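
    The process-capability arithmetic behind such a Z-score model can be sketched in a few lines. In this hedged example, the visibility metric (the delay before a shipment event becomes visible in the system), its data, and the specification limit are all invented; the paper's model is richer than this.

      # Six Sigma-style Z score for a hypothetical visibility metric.
      import statistics

      delays_h = [2.1, 3.4, 1.8, 2.9, 4.2, 2.5, 3.1, 2.2]  # observed update delays (h)
      usl_h = 6.0                                           # upper spec limit (assumed)

      mean = statistics.mean(delays_h)
      sd = statistics.stdev(delays_h)
      z = (usl_h - mean) / sd    # higher Z means a more capable, more "visible" process
      print(f"mean={mean:.2f} h, sd={sd:.2f} h, Z={z:.2f}")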

  14. The impact of negative childbirth experience on future reproductive decisions: A quantitative systematic review.

    Science.gov (United States)

    Shorey, Shefaly; Yang, Yen Yen; Ang, Emily

    2018-02-02

    The aim of this study was to systematically retrieve, critique and synthesize available evidence regarding the association between negative childbirth experiences and future reproductive decisions. A child's birth is often a joyous event; however, a proportion of women undergo negative childbirth experiences that have long-term implications for their reproductive decisions. A systematic review of quantitative studies was undertaken using the Joanna Briggs Institute's methods. A search was carried out in CINAHL Plus with Full Text, Embase, PsycINFO, PubMed, Scopus and Web of Science from January 1996 - July 2016. Studies that fulfilled the inclusion criteria were assessed by two independent reviewers using the Joanna Briggs Institute's Critical Appraisal Tools. Data were extracted under subheadings adapted from the institute's data extraction forms. Twelve studies, which examined either one or more influences of negative childbirth experiences, were identified. The included studies were either cohort or cross-sectional designs. Five studies observed positive associations between prior negative childbirth experiences and decisions to not have another child, three studies found positive associations between negative childbirth experiences and decisions to delay a subsequent birth, and six studies found positive associations between negative childbirth experiences and maternal requests for caesarean section in subsequent pregnancies. To gain a holistic understanding of negative childbirth experiences, a suitable definition and validated measuring tools should be used to study this phenomenon. Future studies or reviews should include a qualitative component and/or the exploration of specific factors such as cultural and regional differences that influence childbirth experiences. © 2018 John Wiley & Sons Ltd.

  15. Determination of Calcium in Cereal with Flame Atomic Absorption Spectroscopy: An Experiment for a Quantitative Methods of Analysis Course

    Science.gov (United States)

    Bazzi, Ali; Kreuz, Bette; Fischer, Jeffrey

    2004-01-01

    An experiment for the determination of calcium in cereal using a two-increment standard addition method in conjunction with flame atomic absorption spectroscopy (FAAS) is demonstrated. The experiment is intended to introduce students to the principles of atomic absorption spectroscopy, giving them hands-on experience using quantitative methods of…

  16. Being a quantitative interviewer: qualitatively exploring interviewers' experiences in a longitudinal cohort study

    Directory of Open Access Journals (Sweden)

    Derrett Sarah

    2011-12-01

    Full Text Available Abstract Background Many studies of health outcomes rely on data collected by interviewers administering highly-structured (quantitative) questionnaires to participants. Little appears to be known about the experiences of such interviewers. This paper explores interviewer experiences of working on a longitudinal study in New Zealand (the Prospective Outcomes of Injury Study - POIS). Interviewers administer highly-structured questionnaires to participants, usually by telephone, and enter data into a secure computer program. The research team had expectations of interviewers including: consistent questionnaire administration, timeliness, proportions of potential participants recruited and an empathetic communication style. This paper presents results of a focus group to qualitatively explore with the team of interviewers their experiences, problems encountered, strategies, support systems used and training. Methods A focus group with interviewers involved in the POIS interviews was held; it was audio-recorded and transcribed. The analytical method was thematic, with output intended to be descriptive and interpretive. Results Nine interviewers participated in the focus group (average time in interviewer role was 31 months). Key themes were: 1) the positive aspects of the quantitative interviewer role (i.e. relationships and resilience, insights gained, and participants' feedback), 2) difficulties interviewers encountered and solutions identified (i.e. stories lost or incomplete, forgotten appointments, telling the stories, acknowledging distress, stories reflected and debriefing and support), and 3) meeting POIS researcher expectations (i.e. performance standards, time-keeping, dealing exclusively with the participant and maintaining privacy). Conclusions Interviewers demonstrated great skill in the way they negotiated research team expectations whilst managing the relationships with participants. Interviewers found it helpful to have a research protocol in

  17. A Transformative Model for Undergraduate Quantitative Biology Education

    OpenAIRE

    Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematic...

  18. A theoretical quantitative model for evolution of cancer chemotherapy resistance

    Directory of Open Access Journals (Sweden)

    Gatenby Robert A

    2010-04-01

    Full Text Available Abstract Background Disseminated cancer remains a nearly uniformly fatal disease. While a number of effective chemotherapies are available, tumors inevitably evolve resistance to these drugs, ultimately resulting in treatment failure and cancer progression. Causes for chemotherapy failure in cancer treatment reside in multiple levels: poor vascularization, hypoxia, intratumoral high interstitial fluid pressure, and phenotypic resistance to drug-induced toxicity through upregulated xenobiotic metabolism or DNA repair mechanisms and silencing of apoptotic pathways. We propose that in order to understand the evolutionary dynamics that allow tumors to develop chemoresistance, a comprehensive quantitative model must be used to describe the interactions of cell resistance mechanisms and tumor microenvironment during chemotherapy. Ultimately, the purpose of this model is to identify the best strategies to treat different types of tumor (tumor microenvironment, genetic/phenotypic tumor heterogeneity, tumor growth rate, etc.). We predict that the most promising strategies are those that are both cytotoxic and apply a selective pressure for a phenotype that is less fit than that of the original cancer population. This strategy, known as double bind, is different from the selection process imposed by standard chemotherapy, which tends to produce a resistant population that simply upregulates xenobiotic metabolism. In order to achieve this goal we propose to simulate different tumor progression and therapy strategies (chemotherapy and glucose restriction) targeting stabilization of tumor size and minimization of chemoresistance. Results This work confirms the prediction of previous mathematical models and simulations that suggested that administration of chemotherapy with the goal of tumor stabilization instead of eradication would yield better results (longer subject survival) than the use of maximum tolerated doses. Our simulations also indicate that the

  19. Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    DEFF Research Database (Denmark)

    ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting...... particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLAN) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLAN semantics based on discrete-time Markov chains....... The Maude implementation of PFLAN is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average...

  20. Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    Directory of Open Access Journals (Sweden)

    Maurice H. ter Beek

    2015-04-01

    Full Text Available We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLan with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLan) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLan semantics based on discrete-time Markov chains. The Maude implementation of PFLan is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average cost of products.
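
    To make the underlying technique concrete, here is a toy statistical-model-checking loop: it estimates, by Monte Carlo sampling of a small discrete-time Markov chain, the probability of reaching a "malfunction" state within a step bound, together with a normal-approximation confidence interval. The chain and the bound are invented; tools such as MultiVeStA add sequential statistical tests and distributed sampling on top of this idea.

      # Toy statistical model checking of an invented 3-state DTMC.
      import random

      # States: 0 = ok, 1 = degraded, 2 = malfunction (absorbing)
      P = {0: [(0.95, 0), (0.04, 1), (0.01, 2)],
           1: [(0.10, 0), (0.80, 1), (0.10, 2)],
           2: [(1.00, 2)]}

      def reaches_malfunction(n_steps):
          s = 0
          for _ in range(n_steps):
              r, acc = random.random(), 0.0
              for p, nxt in P[s]:
                  acc += p
                  if r < acc:
                      s = nxt
                      break
              if s == 2:
                  return True
          return False

      samples = 100_000
      hits = sum(reaches_malfunction(50) for _ in range(samples))
      p_hat = hits / samples
      half_width = 1.96 * (p_hat * (1 - p_hat) / samples) ** 0.5
      print(f"P(malfunction within 50 steps) ~ {p_hat:.4f} +/- {half_width:.4f}")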

  1. Herd immunity and pneumococcal conjugate vaccine: a quantitative model.

    Science.gov (United States)

    Haber, Michael; Barskey, Albert; Baughman, Wendy; Barker, Lawrence; Whitney, Cynthia G; Shaw, Kate M; Orenstein, Walter; Stephens, David S

    2007-07-20

    Invasive pneumococcal disease in older children and adults declined markedly after introduction in 2000 of the pneumococcal conjugate vaccine for young children. An empirical quantitative model was developed to estimate the herd (indirect) effects on the incidence of invasive disease among persons ≥5 years of age induced by vaccination of young children with 1, 2, or ≥3 doses of the pneumococcal conjugate vaccine, Prevnar (PCV7), containing serotypes 4, 6B, 9V, 14, 18C, 19F and 23F. From 1994 to 2003, cases of invasive pneumococcal disease were prospectively identified in Georgia Health District-3 (eight metropolitan Atlanta counties) by Active Bacterial Core surveillance (ABCs). From 2000 to 2003, vaccine coverage levels of PCV7 for children aged 19-35 months in Fulton and DeKalb counties (of Atlanta) were estimated from the National Immunization Survey (NIS). Based on incidence data and the estimated average number of doses received by 15 months of age, a Poisson regression model was fit, describing the trend in invasive pneumococcal disease in groups not targeted for vaccination (i.e., adults and older children) before and after the introduction of PCV7. Highly significant declines in all the serotypes contained in PCV7 in all unvaccinated populations (5-19, 20-39, 40-64, and >64 years) from 2000 to 2003 were found under the model. No significant change in incidence was seen from 1994 to 1999, indicating rates were stable prior to vaccine introduction. Among unvaccinated persons 5+ years of age, the modeled incidence of disease caused by PCV7 serotypes as a group dropped 38.4%, 62.0%, and 76.6% for 1, 2, and 3 doses, respectively, received on average by the population of children by the time they are 15 months of age. Incidence of serotypes 14 and 23F had consistent significant declines in all unvaccinated age groups. In contrast, the herd immunity effects on vaccine-related serotype 6A incidence were inconsistent. Increasing trends of non
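
    The regression machinery described above can be sketched briefly: a Poisson regression of incidence counts on the average number of toddler doses, with population size entering as an exposure offset. All numbers below are fabricated placeholders, not the ABCs surveillance data, and the real analysis involves more covariates and care.

      # Hedged sketch of a Poisson regression with an exposure offset.
      import numpy as np
      import statsmodels.api as sm

      avg_doses = np.array([0, 0, 0, 0, 0, 0, 0.8, 1.6, 2.3, 2.7])       # assumed, 1994-2003
      cases = np.array([130, 128, 133, 127, 131, 129, 110, 85, 60, 45])  # fabricated counts
      population = np.full_like(cases, 1_000_000)                        # fabricated exposure

      X = sm.add_constant(avg_doses)
      fit = sm.GLM(cases, X, family=sm.families.Poisson(),
                   offset=np.log(population)).fit()
      # exp(slope) is the incidence rate ratio per additional average dose
      print("rate ratio per dose:", np.exp(fit.params[1]))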

  2. Numerical experiments modelling turbulent flows

    Science.gov (United States)

    Trefilík, Jiří; Kozel, Karel; Příhoda, Jaromír

    2014-03-01

    The work aims at investigation of the possibilities of modelling transonic flows mainly in external aerodynamics. New results are presented and compared with reference data and previously achieved results. For the turbulent flow simulations two modifications of the basic k - ω model are employed: SST and TNT. The numerical solution was achieved by using the MacCormack scheme on structured non-orthogonal grids. Artificial dissipation was added to improve the numerical stability.

  3. Numerical experiments modelling turbulent flows

    Directory of Open Access Journals (Sweden)

    Trefilík Jiří

    2014-03-01

    Full Text Available The work aims at investigation of the possibilities of modelling transonic flows mainly in external aerodynamics. New results are presented and compared with reference data and previously achieved results. For the turbulent flow simulations two modifications of the basic k – ω model are employed: SST and TNT. The numerical solution was achieved by using the MacCormack scheme on structured non-orthogonal grids. Artificial dissipation was added to improve the numerical stability.

  4. Quantitative experiment of unsaturated water vadose through two-layer porous media

    International Nuclear Information System (INIS)

    Wang Zhiming; Yao Laigen; Jiang Hong; Li Shushen

    2003-01-01

    Understanding unsaturated water vadose through two-layer porous media is of great significance for the design of covers for near-surface repositories of radioactive waste. The device, method and results of a quantitative experiment on unsaturated water vadose through two-layer porous media, made up of loess (fine particle layer) and quartz sand (coarse particle layer), are introduced in this paper. The experiment shows that detouring flow of infiltration water occurs when unsaturated water infiltrating from the loess encounters the quartz sand layer, even when the quartz sand layer is very thin. The relative detouring flow amount decreases with increasing sprinkling rate and increases with the thickness of the quartz sand layer. Moreover, the experiment shows that some of the detouring flow water moves close to the lower surface of the quartz sand layer. From the results deduced from this experiment, the thickness of the quartz sand layer at which detouring flow does not occur is less than or equal to 1 mm, and the sprinkling rate at which the relative detouring flow amount reaches 100% is less than 5 mm/d when the thickness of the quartz sand layer is greater than or equal to 2 mm.

  5. Director gliding in a nematic liquid crystal layer: Quantitative comparison with experiments

    Science.gov (United States)

    Mema, E.; Kondic, L.; Cummings, L. J.

    2018-03-01

    The interaction between nematic liquid crystals and polymer-coated substrates may lead to slow reorientation of the easy axis (so-called "director gliding") when a prolonged external field is applied. We consider the experimental evidence of zenithal gliding observed by Joly et al. [Phys. Rev. E 70, 050701 (2004), 10.1103/PhysRevE.70.050701] and Buluy et al. [J. Soc. Inf. Disp. 14, 603 (2006), 10.1889/1.2235686] as well as azimuthal gliding observed by S. Faetti and P. Marianelli [Liq. Cryst. 33, 327 (2006), 10.1080/02678290500512227], and we present a simple, physically motivated model that captures the slow dynamics of gliding, both in the presence of an electric field and after the electric field is turned off. We make a quantitative comparison of our model results and the experimental data and conclude that our model explains the gliding evolution very well.

  6. An energetic model for macromolecules unfolding in stretching experiments

    Science.gov (United States)

    De Tommasi, D.; Millardi, N.; Puglisi, G.; Saccomandi, G.

    2013-01-01

    We propose a simple approach, based on the minimization of the total (entropic plus unfolding) energy of a two-state system, to describe the unfolding of multi-domain macromolecules (proteins, silks, polysaccharides, nanopolymers). The model is fully analytical and enlightens the role of the different energetic components regulating the unfolding evolution. As an explicit example, we compare the analytical results with a titin atomic force microscopy stretch-induced unfolding experiment showing the ability of the model to quantitatively reproduce the experimental behaviour. In the thermodynamic limit, the sawtooth force–elongation unfolding curve degenerates to a constant force unfolding plateau. PMID:24047874
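
    A generic two-state sketch of this energy-minimisation idea follows: at each imposed extension, the number of unfolded domains is chosen to minimise the sum of a worm-like-chain stretching energy and a per-domain unfolding energy, which reproduces a sawtooth force-extension curve. The parameter values are rough titin-like guesses, and the worm-like-chain interpolation formula stands in for the authors' analytical treatment.

      # Two-state unfolding sketch with assumed, titin-like parameters.
      import numpy as np

      kT = 4.11e-21          # J at room temperature
      p = 0.4e-9             # persistence length (m), assumed
      L_folded = 60e-9       # contour length, all domains folded (m), assumed
      dL = 28e-9             # contour length gained per unfolded domain (m), assumed
      dG = 20 * kT           # unfolding free energy per domain, assumed
      n_domains = 8

      def wlc_force(x, L):
          """Marko-Siggia interpolation for a worm-like chain."""
          z = min(x / L, 0.999)
          return (kT / p) * (0.25 / (1 - z) ** 2 - 0.25 + z)

      def stretch_energy(x, L, steps=400):
          xs = np.linspace(0, x, steps)
          return np.trapz([wlc_force(xi, L) for xi in xs], xs)

      for x in np.linspace(10e-9, 250e-9, 13):
          candidates = [(stretch_energy(x, L_folded + n * dL) + n * dG, n)
                        for n in range(n_domains + 1)
                        if x < 0.99 * (L_folded + n * dL)]   # chain is inextensible
          _, n_best = min(candidates)
          f = wlc_force(x, L_folded + n_best * dL)
          print(f"x={x*1e9:6.1f} nm  n_unfolded={n_best}  f={f*1e12:6.1f} pN")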

  7. Experience economy meets business model design

    DEFF Research Database (Denmark)

    Gudiksen, Sune Klok; Smed, Søren Graakjær; Poulsen, Søren Bolvig

    2012-01-01

    companies automatically get a higher price when offering an experience setting to the customer illustrated by the coffee example. Organizations that offer experiences still have an advantage but when an increasing number of organizations enter the experience economy the competition naturally gets tougher......Through the last decade the experience economy has found solid ground and manifested itself as a parameter where businesses and organizations can differentiate from competitors. The fundamental premise is the one found in Pine & Gilmore's model from 1999 over 'the progression of economic value' where...... produced, designed or staged experience that gains the most profit or creates return of investment. It becomes more obvious that other parameters in the future can be a vital part of the experience economy and one of these is business model innovation. Business model innovation is about continuous...

  8. Quantitative evaluation of ultrasonic sound fields in anisotropic austenitic welds using 2D ray tracing model

    Science.gov (United States)

    Kolkoori, S. R.; Rahaman, M.-U.; Chinta, P. K.; Kreutzbruck, M.; Prager, J.

    2012-05-01

    Ultrasonic investigation of inhomogeneous anisotropic materials such as austenitic welds is complicated because their columnar grain structure leads to curved energy paths, beam splitting and asymmetrical beam profiles. A ray tracing model has a potential advantage in analyzing the ultrasonic sound field propagation and therewith optimizing the inspection parameters. In this contribution we present a 2D ray tracing model to predict energy ray paths, ray amplitudes and travel times for the three wave modes quasi longitudinal, quasi shear vertical, and shear horizontal waves in austenitic weld materials. Inhomogeneity in the austenitic weld material is represented by discretizing the inhomogeneous region into several homogeneous layers. At each interface between the layers the reflection and transmission problem is computed and yields energy direction, amplitude and energy coefficients. The ray amplitudes are computed accurately by taking into account directivity, divergence and density of rays, phase relations as well as transmission coefficients. Ultrasonic sound fields obtained from the ray tracing model are compared quantitatively with the 2D Elastodynamic Finite Integration Technique (EFIT). The excellent agreement between both models confirms the validity of the presented ray tracing results. Experiments are conducted on austenitic weld samples with a longitudinal beam transducer as the transmitting probe, and amplitudes at the rear surface are scanned by means of electrodynamical probes. Finally, the ray tracing model results are also validated through the experiments.
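
    The layer discretisation can be illustrated with a deliberately simplified ray tracer: Snell's law applied across a stack of homogeneous layers by conserving the ray parameter. The sketch below is isotropic, so it omits the direction-dependent phase velocities, beam splitting and amplitude bookkeeping that the actual model handles; the velocities and thicknesses are invented.

      # Isotropic layered-medium ray tracing via a conserved ray parameter.
      import math

      layer_velocity = [5900.0, 5700.0, 5400.0, 5100.0]   # m/s, assumed per layer
      layer_thickness = [5e-3, 5e-3, 5e-3, 5e-3]          # m, assumed

      def trace(theta0_deg):
          """Return (lateral offset, travel time) of a ray entering at theta0."""
          p = math.sin(math.radians(theta0_deg)) / layer_velocity[0]  # ray parameter
          x = t = 0.0
          for v, h in zip(layer_velocity, layer_thickness):
              s = p * v                     # Snell: sin(theta)/v conserved
              if abs(s) >= 1.0:
                  raise ValueError("total internal reflection")
              theta = math.asin(s)
              x += h * math.tan(theta)
              t += (h / math.cos(theta)) / v
          return x, t

      offset, time = trace(30.0)
      print(f"exit offset = {offset*1e3:.2f} mm, travel time = {time*1e6:.2f} us")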

  9. Spectral Quantitative Analysis Model with Combining Wavelength Selection and Topology Structure Optimization

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2016-01-01

    Full Text Available Spectroscopy is an efficient and widely used quantitative analysis method. In this paper, a spectral quantitative analysis model combining wavelength selection and topology structure optimization is proposed. In the proposed method, a backpropagation neural network is adopted for building the component prediction model, and the simultaneous optimization of the wavelength selection and the topology structure of the neural network is realized by nonlinear adaptive evolutionary programming (NAEP). The hybrid chromosome in the binary scheme of NAEP has three parts. The first part represents the topology structure of the neural network, the second part represents the selection of wavelengths in the spectral data, and the third part represents the parameters of mutation of NAEP. Two real flue gas datasets are used in the experiments. In order to demonstrate the effectiveness of the methods, partial least squares with the full spectrum, partial least squares combined with a genetic algorithm, the uninformative variable elimination method, a backpropagation neural network with the full spectrum, a backpropagation neural network combined with a genetic algorithm, and the proposed method are all used for building the component prediction model. Experimental results verify that the proposed method predicts more accurately and robustly as a practical spectral analysis tool.

  10. The database for reaching experiments and models.

    Directory of Open Access Journals (Sweden)

    Ben Walker

    Full Text Available Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis.

  11. Interrater reliability of quantitative ultrasound using force feedback among examiners with varied levels of experience

    Directory of Open Access Journals (Sweden)

    Michael O. Harris-Love

    2016-06-01

    Full Text Available Background. Quantitative ultrasound measures are influenced by multiple external factors including examiner scanning force. Force feedback may foster the acquisition of reliable morphometry measures under a variety of scanning conditions. The purpose of this study was to determine the reliability of force-feedback image acquisition and morphometry over a range of examiner-generated forces using a muscle tissue-mimicking ultrasound phantom. Methods. Sixty material thickness measures were acquired from a muscle tissue mimicking phantom using B-mode ultrasound scanning by six examiners with varied experience levels (i.e., experienced, intermediate, and novice). Estimates of interrater reliability and measurement error with force feedback scanning were determined for the examiners. In addition, criterion-based reliability was determined using material deformation values across a range of examiner scanning forces (1–10 Newtons) via automated and manually acquired image capture methods using force feedback. Results. All examiners demonstrated acceptable interrater reliability (intraclass correlation coefficient, ICC = .98, p < .001) and criterion-based reliability (ICC > .90, p < .001), independent of their level of experience. The measurement error among all examiners was 1.5%–2.9% across all applied stress conditions. Conclusion. Manual image capture with force feedback may aid the reliability of morphometry measures across a range of examiner scanning forces, and allow for consistent performance among examiners with differing levels of experience.
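
    For readers unfamiliar with the statistic, the sketch below computes a two-way random-effects ICC(2,1) in the Shrout and Fleiss sense from an invented ratings matrix (phantom sites by examiners); it is illustrative only and not the study's analysis code.

      # ICC(2,1) from the two-way ANOVA mean squares.
      import numpy as np

      ratings = np.array([[10.1, 10.3, 10.0],
                          [12.4, 12.6, 12.5],
                          [ 9.8,  9.9, 10.1],
                          [11.5, 11.2, 11.4],
                          [13.0, 13.3, 13.1]])   # 5 invented sites x 3 examiners
      n, k = ratings.shape

      grand = ratings.mean()
      ms_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # targets
      ms_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
      resid = (ratings - ratings.mean(axis=1, keepdims=True)
                       - ratings.mean(axis=0, keepdims=True) + grand)
      ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))

      icc21 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                    + k * (ms_cols - ms_err) / n)
      print(f"ICC(2,1) = {icc21:.3f}")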

  12. Quantitative analysis of prediction models for hot cracking in ...

    Indian Academy of Sciences (India)

    A Rodríguez-Prieto

    2017-11-16

    Nov 16, 2017 ... enhancing safety margins and adding greater precision to quantitative accident prediction [45]. One deterministic methodology is the stringency level (SL) approach, which is recognized as a valuable decision tool in the selection of standardized materials specifications to prevent potential failures [3].

  13. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    forecasting of quantitative snowfall at 10 meteorological stations in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. At these stations of the Snow and Avalanche Study Establishment (SASE), snow and meteorological data are recorded twice daily at 08:30 and 17:30 hrs since more than last four decades ...

  14. A Transformative Model for Undergraduate Quantitative Biology Education

    Science.gov (United States)

    Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The "BIO2010" report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3)…

  15. Quantitative modelling and analysis of a Chinese smart grid: a stochastic model checking case study

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    2014-01-01

    that require novel methods and applications. One of the important issues in this context is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese smart grid implementation as a case study and address the verification problem for performance and energy......Cyber-physical systems integrate information and communication technology with the physical elements of a system, mainly for monitoring and controlling purposes. The conversion of the traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues... consumption. We employ a stochastic model checking approach and present our modelling and analysis study using the PRISM model checker....

  16. ASSETS MANAGEMENT - A CONCEPTUAL MODEL DECOMPOSING VALUE FOR THE CUSTOMER AND A QUANTITATIVE MODEL

    Directory of Open Access Journals (Sweden)

    Susana Nicola

    2015-03-01

    Full Text Available In this paper we describe the application of a modeling framework, the so-called Conceptual Model Decomposing Value for the Customer (CMDVC), in a Footwear Industry case study, to ascertain the usefulness of this approach. The value networks were used to identify the participants, both tangible and intangible deliverables/endogenous and exogenous assets, and the analysis of their interactions as the indication for an adequate value proposition. The quantitative model of benefits and sacrifices, using the Fuzzy AHP method, enables the discussion of how the CMDVC can be applied and used in the enterprise environment and provides new relevant relations between perceived benefits (PBs).
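
    As a hedged illustration of the AHP step, the sketch below derives priority weights from a crisp pairwise-comparison matrix via the principal eigenvector and checks consistency. The paper uses a Fuzzy AHP variant, so treat this crisp version, and its example matrix of three hypothetical benefit criteria, as assumptions for illustration only.

      # Crisp AHP: priority vector from the principal eigenvector.
      import numpy as np

      A = np.array([[1.0, 3.0, 5.0],      # pairwise comparisons (Saaty 1-9 scale)
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()                        # priority weights

      n = A.shape[0]
      ci = (eigvals[k].real - n) / (n - 1)  # consistency index
      ri = 0.58                             # Saaty's random index for n = 3
      print("weights:", np.round(w, 3), " CR =", round(ci / ri, 3))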

  17. Qualitative and Quantitative Features of Music Reported to Support Peak Mystical Experiences during Psychedelic Therapy Sessions

    Directory of Open Access Journals (Sweden)

    Frederick S. Barrett

    2017-07-01

    Full Text Available Psilocybin is a classic (serotonergic) hallucinogen (“psychedelic” drug) that may occasion mystical experiences (characterized by a profound feeling of oneness or unity) during acute effects. Such experiences may have therapeutic value. Research and clinical applications of psychedelics usually include music listening during acute drug effects, based on the expectation that music will provide psychological support during the acute effects of psychedelic drugs, and may even facilitate the occurrence of mystical experiences. However, the features of music chosen to support the different phases of drug effects are not well-specified. As a result, there is currently neither real guidance for the selection of music nor standardization of the music used to support clinical trials with psychedelic drugs across various research groups or therapists. A description of the features of music found to be supportive of mystical experience will allow for the standardization and optimization of the delivery of psychedelic drugs in both research trials and therapeutic contexts. To this end, we conducted an anonymous survey of individuals with extensive experience administering psilocybin or psilocybin-containing mushrooms under research or therapeutic conditions, in order to identify the features of commonly used musical selections that have been found by therapists and research staff to be supportive of mystical experiences within a psilocybin session. Ten respondents yielded 24 unique recommendations of musical stimuli supportive of peak effects with psilocybin, and 24 unique recommendations of musical stimuli supportive of the period leading up to a peak experience. Qualitative analysis (expert rating of musical and music-theoretic features of the recommended stimuli) and quantitative analysis (using signal processing and music-information retrieval methods) of 22 of these stimuli yielded a description of peak period music that was characterized by regular

  18. Qualitative and Quantitative Features of Music Reported to Support Peak Mystical Experiences during Psychedelic Therapy Sessions

    Science.gov (United States)

    Barrett, Frederick S.; Robbins, Hollis; Smooke, David; Brown, Jenine L.; Griffiths, Roland R.

    2017-01-01

    Psilocybin is a classic (serotonergic) hallucinogen (“psychedelic” drug) that may occasion mystical experiences (characterized by a profound feeling of oneness or unity) during acute effects. Such experiences may have therapeutic value. Research and clinical applications of psychedelics usually include music listening during acute drug effects, based on the expectation that music will provide psychological support during the acute effects of psychedelic drugs, and may even facilitate the occurrence of mystical experiences. However, the features of music chosen to support the different phases of drug effects are not well-specified. As a result, there is currently neither real guidance for the selection of music nor standardization of the music used to support clinical trials with psychedelic drugs across various research groups or therapists. A description of the features of music found to be supportive of mystical experience will allow for the standardization and optimization of the delivery of psychedelic drugs in both research trials and therapeutic contexts. To this end, we conducted an anonymous survey of individuals with extensive experience administering psilocybin or psilocybin-containing mushrooms under research or therapeutic conditions, in order to identify the features of commonly used musical selections that have been found by therapists and research staff to be supportive of mystical experiences within a psilocybin session. Ten respondents yielded 24 unique recommendations of musical stimuli supportive of peak effects with psilocybin, and 24 unique recommendations of musical stimuli supportive of the period leading up to a peak experience. Qualitative analysis (expert rating of musical and music-theoretic features of the recommended stimuli) and quantitative analysis (using signal processing and music-information retrieval methods) of 22 of these stimuli yielded a description of peak period music that was characterized by regular, predictable

  19. Modeling a High Explosive Cylinder Experiment

    Science.gov (United States)

    Zocher, Marvin A.

    2017-06-01

    Cylindrical assemblies constructed from high explosives encased in an inert confining material are often used in experiments aimed at calibrating and validating continuum level models for the so-called equation of state (constitutive model for the spherical part of the Cauchy tensor). Such is the case in the work to be discussed here. In particular, work will be described involving the modeling of a series of experiments involving PBX-9501 encased in a copper cylinder. The objective of the work is to test and perhaps refine a set of phenomenological parameters for the Wescott-Stewart-Davis reactive burn model. The focus of this talk will be on modeling the experiments, which turned out to be non-trivial. The modeling is conducted using ALE methodology.

  20. Modeling Choice and Valuation in Decision Experiments

    Science.gov (United States)

    Loomes, Graham

    2010-01-01

    This article develops a parsimonious descriptive model of individual choice and valuation in the kinds of experiments that constitute a substantial part of the literature relating to decision making under risk and uncertainty. It suggests that many of the best known "regularities" observed in those experiments may arise from a tendency for…

  1. Firn Model Intercomparison Experiment (FirnMICE)

    DEFF Research Database (Denmark)

    Lundin, Jessica M.D.; Stevens, C. Max; Arthern, Robert

    2017-01-01

    Evolution of cold dry snow and firn plays important roles in glaciology; however, the physical formulation of a densification law is still an active research topic. We forced eight firn-densification models and one seasonal-snow model in six different experiments by imposing step changes in temperature...

  2. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments.

    Science.gov (United States)

    Van Valen, David A; Kudo, Takamasa; Lane, Keara M; Macklin, Derek N; Quach, Nicolas T; DeFelice, Mialy M; Maayan, Inbal; Tanouchi, Yu; Ashley, Euan A; Covert, Markus W

    2016-11-01

    Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as the cytoplasms of individual bacterial and mammalian cells from phase contrast images, without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that requires less curation time, generalizes to a multiplicity of cell types, from bacteria to mammalian cells, and expands live-cell imaging capabilities to include multi-cell type systems.
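
    The core idea, per-pixel classification by a fully convolutional network, can be sketched compactly. The toy PyTorch model below maps a one-channel image to per-pixel class scores (e.g., background / boundary / cell interior) and trains with ordinary cross-entropy; it is a schematic stand-in, not the authors' published architecture or training pipeline.

      # Toy fully convolutional net for per-pixel segmentation (schematic only).
      import torch
      import torch.nn as nn

      class TinySegNet(nn.Module):
          def __init__(self, n_classes=3):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
              )
              self.classify = nn.Conv2d(32, n_classes, 1)  # 1x1 conv -> class scores

          def forward(self, x):                            # x: (batch, 1, H, W)
              return self.classify(self.features(x))       # (batch, n_classes, H, W)

      net = TinySegNet()
      image = torch.randn(2, 1, 64, 64)          # stand-in microscope crops
      labels = torch.randint(0, 3, (2, 64, 64))  # stand-in per-pixel annotations
      loss = nn.CrossEntropyLoss()(net(image), labels)
      loss.backward()
      print("logits shape:", tuple(net(image).shape), " loss:", float(loss))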

  3. Quartz Crystal Microbalance Model for Quantitatively Probing the Deformation of Adsorbed Particles at Low Surface Coverage.

    Science.gov (United States)

    Gillissen, Jurriaan J J; Jackman, Joshua A; Tabaei, Seyed R; Yoon, Bo Kyeong; Cho, Nam-Joon

    2017-11-07

    Characterizing the deformation of nanoscale, soft-matter particulates at solid-liquid interfaces is a demanding task, and there are limited experimental options to perform quantitative measurements in a nonperturbative manner. Previous attempts, based on the quartz crystal microbalance (QCM) technique, focused on the high surface coverage regime and modeled the adsorbed particles as a homogeneous film, while not considering the coupling between particles and surrounding fluid and hence resulting in an underestimation of the known particle height. In this work, we develop a model for the hydrodynamic coupling between adsorbed particles and surrounding fluid in the limit of a low surface coverage, which can be used to extract shape information from QCM measurement data. We tackle this problem by using hydrodynamic simulations of an ellipsoidal particle on an oscillating surface. From the simulation results, we derived a phenomenological relation between the aspect ratio r of the absorbed particles and the slope and intercept of the line that fits instantaneous, overtone-dependent QCM data on (δ/a, -Δf/n) coordinates where δ is the viscous penetration depth, a is the particle radius, Δf is the QCM frequency shift, and n is the overtone number. The model was applied to QCM measurement data pertaining to the adsorption of 34 nm radius, fluid-phase and gel-phase liposomes onto a titanium oxide-coated surface. The osmotic pressure across the liposomal bilayer was varied to induce shape deformation. By combining these results with a membrane bending model, we determined the membrane bending energy for the gel-phase liposomes, and the results are consistent with literature values. In summary, a phenomenological model is presented and validated in order to show for the first time that QCM experiments can quantitatively measure the deformation of adsorbed particles at low surface coverage.
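
    The data reduction implied by the model can be sketched as follows: compute the viscous penetration depth for each overtone, place the instantaneous responses on (delta/a, -Δf/n) coordinates, and fit a line. The frequency shifts below are fabricated, and the paper's phenomenological mapping from (slope, intercept) to the aspect ratio r is deliberately not reproduced here.

      # Overtone-resolved QCM data placed on (delta/a, -df/n) coordinates.
      import numpy as np

      f0 = 5e6                                  # fundamental frequency (Hz), assumed
      overtones = np.array([3, 5, 7, 9, 11])
      df_hz = np.array([-36.0, -30.5, -26.8, -24.1, -22.0])  # fabricated shifts
      a = 34e-9                                 # particle radius (m), from the record

      eta, rho = 8.9e-4, 997.0                  # water viscosity (Pa s) and density
      delta = np.sqrt(eta / (np.pi * rho * f0 * overtones))  # penetration depth (m)

      slope, intercept = np.polyfit(delta / a, -df_hz / overtones, 1)
      print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
      # In the paper, (slope, intercept) are mapped to the aspect ratio r via a
      # phenomenological relation calibrated against hydrodynamic simulations.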

  4. Quantitative Modeling of Acid Wormholing in Carbonates- What Are the Gaps to Bridge

    KAUST Repository

    Qiu, Xiangdong

    2013-01-01

    Carbonate matrix acidization extends a well's effective drainage radius by dissolving rock and forming conductive channels (wormholes) from the wellbore. Wormholing is a dynamic process that involves a balance between the acid injection rate and the reaction rate. Generally, the injection rate is well defined where injection profiles can be controlled, whereas the reaction rate can be difficult to obtain due to its complex dependency on interstitial velocity, fluid composition, rock surface properties, etc. Conventional wormhole propagation models largely ignore the impact of reaction products; when such models are implemented in a job design, significant errors can result in the treatment fluid schedule, rate, and volume. A more accurate method to simulate carbonate matrix acid treatments would accommodate the effect of reaction products on reaction kinetics. It is the purpose of this work to properly account for these effects. This is an important step in achieving quantitative predictability of wormhole penetration during an acidizing treatment. This paper describes the laboratory procedures taken to obtain the reaction-product-impacted kinetics at downhole conditions using a rotating disk apparatus, and how this new set of kinetics data was implemented in a 3D wormholing model to predict wormhole morphology and penetration velocity. The model explains some of the differences in wormhole morphology observed in limestone core flow experiments where injection pressure impacts the mass transfer of hydrogen ions to the rock surface. The model uses a CT-scan-rendered porosity field to capture the finer details of the rock fabric and then simulates the fluid flow through the rock coupled with reactions. Such a validated model can serve as a base to scale up to the near-wellbore reservoir and 3D radial flow geometry, allowing a more quantitative acid treatment design.

  5. Modeling of laser-driven hydrodynamics experiments

    Science.gov (United States)

    di Stefano, Carlos; Doss, Forrest; Rasmus, Alex; Flippo, Kirk; Desjardins, Tiffany; Merritt, Elizabeth; Kline, John; Hager, Jon; Bradley, Paul

    2017-10-01

    Correct interpretation of hydrodynamics experiments driven by a laser-produced shock depends strongly on an understanding of the time-dependent effect of the irradiation conditions on the flow. In this talk, we discuss the modeling of such experiments using the RAGE radiation-hydrodynamics code. The focus is an instability experiment consisting of a period of relatively-steady shock conditions in which the Richtmyer-Meshkov process dominates, followed by a period of decaying flow conditions, in which the dominant growth process changes to Rayleigh-Taylor instability. The use of a laser model is essential for capturing the transition.

  6. Ecology-centered experiences among children and adolescents: A qualitative and quantitative analysis

    Science.gov (United States)

    Orton, Judy

    living things) and environmental responsibility support (i.e., support through the availability of environmentally responsible models) predict EAB? As predicted, results showed that ecology-centered experiences predicted EAB; yet, when environmental responsibility support was taken into consideration, ecology-centered experiences no longer predicted EAB. These findings suggested environmental responsibility support was a stronger predictor than ecology-centered experiences. Finally, do age and gender predict EAB? Consistent with previous research (e.g., Alp, Ertepiner, Tekkaya, & Yilmaz, 2006), age and gender significantly predicted EAB.

  7. A Review of Quantitative Situation Assessment Models for Nuclear Power Plant Operators

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Seong, Poong Hyun

    2009-01-01

    Situation assessment is the process of developing situation awareness, and situation awareness is defined as 'the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future.' Situation awareness is an important element influencing human actions because human decision making is based on the result of situation assessment or situation awareness. There are many models of situation awareness, and they can be categorized as qualitative or quantitative. Because the effects of input factors on situation awareness can be investigated through quantitative models, quantitative models are more useful than qualitative ones for the design of operator interfaces, automation strategies, training programs, and so on. This study presents a review of two quantitative models of situation assessment (SA) for nuclear power plant operators

  8. Quantitative Modeling of Membrane Transport and Anisogamy by Small Groups Within a Large-Enrollment Organismal Biology Course

    Directory of Open Access Journals (Sweden)

    Eric S. Haag

    2016-12-01

    Full Text Available Quantitative modeling is not a standard part of undergraduate biology education, yet is routine in the physical sciences. Because of the obvious biophysical aspects, classes in anatomy and physiology offer an opportunity to introduce modeling approaches to the introductory curriculum. Here, we describe two in-class exercises for small groups working within a large-enrollment introductory course in organismal biology. Both build and derive biological insights from quantitative models, implemented using spreadsheets. One exercise models the evolution of anisogamy (i.e., small sperm and large eggs) from an initial state of isogamy. Groups of four students work on Excel spreadsheets (from one to four laptops per group). The other exercise uses an online simulator to generate data related to membrane transport of a solute, and a cloud-based spreadsheet to analyze them. We provide tips for implementing these exercises gleaned from two years of experience.
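
    The membrane-transport exercise amounts to stepping a first-order flux law, dC_in/dt = (PA/V)(C_out - C_in), exactly as a spreadsheet does row by row. A minimal forward-Euler sketch, with all parameter values as placeholders rather than the course's actual numbers:

      # Passive diffusion into a cell, stepped like a spreadsheet column.
      P = 1e-6      # membrane permeability (m/s), assumed
      A = 5e-9      # membrane area (m^2), assumed
      V = 1e-15     # cell volume (m^3), assumed
      c_out = 10.0  # external concentration (mM), held constant
      c_in = 0.0    # initial internal concentration (mM)
      dt = 0.001    # time step (s)

      for step in range(5001):
          if step % 1000 == 0:
              print(f"t={step*dt:5.2f} s  C_in={c_in:6.3f} mM")
          c_in += dt * (P * A / V) * (c_out - c_in)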

  9. Assessment for Improvement: Two Models for Assessing a Large Quantitative Reasoning Requirement

    Directory of Open Access Journals (Sweden)

    Mary C. Wright

    2015-03-01

    Full Text Available We present two models for assessment of a large and diverse quantitative reasoning (QR) requirement at the University of Michigan. These approaches address two key challenges in assessment: (1) dissemination of findings for curricular improvement and (2) resource constraints associated with measurement of large programs. Approaches we present for data collection include convergent validation of self-report surveys, as well as use of mixed methods and learning analytics. Strategies we present for dissemination of findings include meetings with instructors to share data and best practices, sharing of results through social media, and use of easily accessible dashboards. These assessment approaches may be of particular interest to universities with large numbers of students engaging in a QR experience, projects that involve multiple courses with diverse instructional goals, or those who wish to promote evidence-based curricular improvement.

  10. Seismic-refraction field experiments on Galapagos Islands: A quantitative tool for hydrogeology

    Science.gov (United States)

    Adelinet, M.; Domínguez, C.; Fortin, J.; Violette, S.

    2018-01-01

    Due to their complex structure and the difficulty of collecting data, the hydrogeology of basaltic islands remains poorly understood, and the Galapagos islands are no exception. Geophysics offers the possibility of describing the subsurface of these islands and quantifying the hydrodynamical properties of their ground layers, which can be useful for building robust hydrogeological models. In this paper, we present seismic refraction data acquired on Santa Cruz and San Cristobal, the two main inhabited islands of the Galapagos. We investigated sites in several hydrogeological contexts, located at different altitudes and at different distances from the coast. At each site, a 2D P-wave velocity profile is built, highlighting unsaturated and saturated volcanic layers. At the coastal sites, seawater intrusion is identified and the basal aquifer is characterized in terms of variations in compressional sound wave velocities according to saturation state. At the highland sites, the limits between soils and lava flows are identified. On San Cristobal Island, the 2D velocity profile obtained at a mid-slope site (altitude 150 m) indicates the presence of a near-surface freshwater aquifer, which is in agreement with previous geophysical studies and the hydrogeological conceptual model developed for this island. The originality of our paper is the use of velocity data to compute field porosity based on poroelasticity theory and the Biot-Gassmann equations. Given that porosity is a key parameter in quantitative hydrogeological models, this is a step forward to a better understanding of shallow fluid flows within a complex structure, such as the Galapagos volcanoes.
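
    As a simple stand-in for the velocity-to-porosity step, the sketch below inverts the Wyllie time-average relation, 1/Vp = phi/Vf + (1 - phi)/Vm, for porosity. The paper itself uses Biot-Gassmann poroelasticity, and the matrix and fluid velocities here are assumed values for a water-saturated basalt.

      # Porosity from P-wave velocity via the Wyllie time-average equation.
      def wyllie_porosity(vp, v_matrix=5500.0, v_fluid=1500.0):
          """All velocities in m/s; returns the time-average porosity estimate."""
          return (1.0 / vp - 1.0 / v_matrix) / (1.0 / v_fluid - 1.0 / v_matrix)

      for vp in (2000.0, 3000.0, 4000.0):
          print(f"Vp = {vp:.0f} m/s  ->  phi = {wyllie_porosity(vp):.2f}")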

  11. Argonne Bubble Experiment Thermal Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-12-03

    This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium. The experiment was repeated at several electron beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came the closest to reaching a steady-state condition. The aim of the study is to compare results of the calculation with experimental measurements to determine the validity of the CFD model.

  12. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Science.gov (United States)

    2011-05-18

    ... COMMISSION NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection... issued for public comment a document entitled: NUREG/CR-XXXX, ``Development of Quantitative Software... development of regulatory guidance for using risk information related to digital systems in the licensing...

  13. Dynamics of childhood growth and obesity development and validation of a quantitative mathematical model

    Science.gov (United States)

    Clinicians and policy makers need the ability to predict quantitatively how childhood bodyweight will respond to obesity interventions. We developed and validated a mathematical model of childhood energy balance that accounts for healthy growth and development of obesity, and that makes quantitative...

  14. Linear regression models for quantitative assessment of left ...

    African Journals Online (AJOL)


    2008-07-04

    Jul 4, 2008 ... computed. Linear regression models for the prediction of left ventricular structures were established. Prediction models for ... study aimed at establishing linear regression models that could be used in the prediction ..... Is white coat hypertension associated with arterial disease or left ventricular hypertrophy?

  15. Mode I Failure of Armor Ceramics: Experiments and Modeling

    Science.gov (United States)

    Meredith, Christopher; Leavy, Brian

    2017-06-01

    The pre-notched edge on impact (EOI) experiment is a technique for benchmarking the damage and fracture of ceramics subjected to projectile impact. A cylindrical projectile impacts the edge of a thin rectangular plate with a pre-notch on the opposite edge. Tension is generated at the notch tip, resulting in the initiation and propagation of a mode I crack back toward the impact edge. The crack can be quantitatively measured using an optical method called Digital Gradient Sensing, which quantifies the crack-tip deformation via two orthogonal surface slopes, measured from the small deflections of light rays reflected from a specularly reflective surface around the crack. The deflections in ceramics are small, so the high speed camera needs to have a very high pixel count. This work reports on the results from pre-crack EOI experiments of SiC and B4C plates. The experimental data are quantitatively compared to impact simulations using an advanced continuum damage model. The Kayenta ceramic model in Alegra will be used to compare fracture propagation speeds, bifurcations, and inhomogeneous initiation of failure. This will provide insight into the driving mechanisms required for the macroscale failure modeling of ceramics.

  16. CFD and FEM modeling of PPOOLEX experiments

    Energy Technology Data Exchange (ETDEWEB)

    Paettikangas, T.; Niemi, J.; Timperi, A. (VTT Technical Research Centre of Finland (Finland))

    2011-01-15

    A large-break LOCA experiment performed with the PPOOLEX experimental facility is analysed with CFD calculations. Simulation of the first 100 seconds of the experiment is performed by using the Euler-Euler two-phase model of FLUENT 6.3. In wall condensation, the condensing water forms a film layer on the wall surface, which is modelled by mass transfer from the gas phase to the liquid water phase in the near-wall grid cell. The direct-contact condensation in the wetwell is modelled with simple correlations. The wall condensation and direct-contact condensation models are implemented with user-defined functions in FLUENT. Fluid-Structure Interaction (FSI) calculations of the PPOOLEX experiments and of a realistic BWR containment are also presented. Two-way coupled FSI calculations of the experiments have been numerically unstable with explicit coupling. A linear perturbation method is therefore used for preventing the numerical instability. The method is first validated against numerical data and against the PPOOLEX experiments. Preliminary FSI calculations are then performed for a realistic BWR containment by modeling a sector of the containment and one blowdown pipe. For the BWR containment, one- and two-way coupled calculations as well as calculations with LPM are carried out. (Author)

  17. What should a quantitative model of masking look like and why would we want it?

    Science.gov (United States)

    Francis, Gregory

    2008-07-15

    Quantitative models of backward masking appeared almost as soon as computing technology was available to simulate them, and continued interest in masking has led to the development of new models. Despite this long history, the impact of the models on the field has been limited because they have fundamental shortcomings. This paper discusses these shortcomings and outlines what future quantitative models should look like. It also discusses several issues about modeling and how a model could be used by researchers to better explore masking and other aspects of cognition.

  18. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments

    Science.gov (United States)

    Van Valen, David A.; Lane, Keara M.; Quach, Nicolas T.; Maayan, Inbal

    2016-01-01

    Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation, depend on approaches that are difficult to share between labs, and are unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei, as well as the cytoplasms of individual bacterial and mammalian cells from phase-contrast images, without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that requires less curation time, is generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expands live-cell imaging capabilities to include multi-cell-type systems. PMID:27814364
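
    The core idea, a fully convolutional network emitting a per-pixel class score, can be sketched in a few lines of PyTorch. The architecture below is a toy illustration of the approach, not the networks published in the paper.

        import torch
        import torch.nn as nn

        # Minimal fully convolutional network for per-pixel (cell vs background)
        # classification; layer sizes are illustrative placeholders.
        class TinySegNet(nn.Module):
            def __init__(self, n_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, n_classes, 1),  # 1x1 conv -> per-pixel scores
                )
            def forward(self, x):
                return self.features(x)

        net = TinySegNet()
        phase_image = torch.randn(1, 1, 128, 128)  # one single-channel image
        logits = net(phase_image)                  # (1, 2, 128, 128) score maps
        mask = logits.argmax(dim=1)                # per-pixel segmentation mask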

  19. Improving quantitative precipitation nowcasting with a local ensemble transform Kalman filter radar data assimilation system: observing system simulation experiments

    Directory of Open Access Journals (Sweden)

    Chih-Chien Tsai

    2014-03-01

    This study develops a Doppler radar data assimilation system, which couples the local ensemble transform Kalman filter with the Weather Research and Forecasting model. The benefits of this system to quantitative precipitation nowcasting (QPN) are evaluated with observing system simulation experiments on Typhoon Morakot (2009), which brought record-breaking rainfall and extensive damage to central and southern Taiwan. The results indicate that the assimilation of radial velocity and reflectivity observations improves the three-dimensional winds and rain-mixing ratio most significantly because of the direct relations in the observation operator. The patterns of spiral rainbands become more consistent between different ensemble members after radar data assimilation. The rainfall intensity and distribution during the 6-hour deterministic nowcast are also improved, especially for the first 3 hours. The nowcasts with and without radar data assimilation have similar evolution trends driven by synoptic-scale conditions. Furthermore, we carry out a series of sensitivity experiments to develop proper assimilation strategies, in which a mixed localisation method is proposed for the first time and found to give further QPN improvement in this typhoon case.
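
    The analysis step at the core of the (L)ETKF can be written compactly in NumPy. The sketch below implements the standard ensemble transform update (no localisation, linear observation operator) following the usual formulation in the literature; it is a generic illustration, not the authors' system.

        import numpy as np

        def etkf_update(X, y_obs, H, R):
            """X: (n, k) state ensemble; y_obs: (m,) observations;
            H: (m, n) linear obs operator; R: (m, m) obs error covariance."""
            n, k = X.shape
            x_mean = X.mean(axis=1)
            Xp = X - x_mean[:, None]            # state perturbations
            Y = H @ X
            y_mean = Y.mean(axis=1)
            Yp = Y - y_mean[:, None]            # obs-space perturbations
            C = Yp.T @ np.linalg.inv(R)
            Pa = np.linalg.inv((k - 1) * np.eye(k) + C @ Yp)
            w_mean = Pa @ C @ (y_obs - y_mean)  # mean update weights
            # symmetric square root gives the analysis perturbation weights
            evals, evecs = np.linalg.eigh((k - 1) * Pa)
            W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
            return x_mean[:, None] + Xp @ (w_mean[:, None] + W)

        # tiny random example: 10 state variables, 5 members, 3 observations
        Xa = etkf_update(np.random.randn(10, 5), np.random.randn(3),
                         np.random.randn(3, 10), np.eye(3))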

  20. A quantitative risk-based model for reasoning over critical system properties

    Science.gov (United States)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reeasoning and decision making that spans many of the critical properties such as security, safety, survivability, fault tolerance, and real-time.

  1. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  2. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  3. Quantitative skills as a graduate learning outcome of university science degree programmes: student performance explored through theplanned-enacted-experiencedcurriculum model

    Science.gov (United States)

    Matthews, Kelly E.; Adams, Peter; Goos, Merrilyn

    2016-07-01

    Application of mathematical and statistical thinking and reasoning, typically referred to as quantitative skills, is essential for university bioscience students. First, this study developed an assessment task intended to gauge graduating students' quantitative skills. The Quantitative Skills Assessment of Science Students (QSASS) was the result, which examined 10 mathematical and statistical sub-topics. Second, the study established an evidential baseline of students' quantitative skills performance and confidence levels by piloting the QSASS with 187 final-year biosciences students at a research-intensive university. The study is framed within the planned-enacted-experienced curriculum model and contributes to science reform efforts focused on enhancing the quantitative skills of university graduates, particularly in the biosciences. The results found, on average, weak performance and low confidence on the QSASS, suggesting divergence between academics' intentions and students' experiences of learning quantitative skills. Implications for curriculum design and future studies are discussed.

  4. Linear approaches to intramolecular Förster resonance energy transfer probe measurements for quantitative modeling.

    Directory of Open Access Journals (Sweden)

    Marc R Birtwistle

    Numerous unimolecular, genetically-encoded Förster Resonance Energy Transfer (FRET) probes for monitoring biochemical activities in live cells have been developed over the past decade. As these probes allow for collection of high-frequency, spatially resolved data on signaling events in live cells and tissues, they are an attractive technology for obtaining data to develop quantitative, mathematical models of spatiotemporal signaling dynamics. However, to be useful for such purposes the observed FRET from such probes should be related to a biological quantity of interest through a defined mathematical relationship, which is straightforward when this relationship is linear, and can be difficult otherwise. First, we show that only in rare circumstances is the observed FRET linearly proportional to a biochemical activity. Therefore in most cases FRET measurements should only be compared either to explicitly modeled probes or to concentrations of products of the biochemical activity, but not to activities themselves. Importantly, we find that FRET measured by standard intensity-based, ratiometric methods is inherently non-linear with respect to the fraction of probes undergoing FRET. Alternatively, we find that quantifying FRET either via (1) fluorescence lifetime imaging (FLIM) or (2) ratiometric methods where the donor emission intensity is divided by the directly-excited acceptor emission intensity (denoted R_alt) is linear with respect to the fraction of probes undergoing FRET. This linearity property allows one to calculate the fraction of active probes based on the FRET measurement. Thus, our results suggest that either FLIM or ratiometric methods based on R_alt are the preferred techniques for obtaining quantitative data from FRET probe experiments for mathematical modeling purposes.
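
    A small numerical illustration of the linearity result (all intensities below are invented): as the fraction f of probes undergoing FRET rises, the standard acceptor/donor ratio curves, while R_alt remains an affine-linear function of f.

        import numpy as np

        f = np.linspace(0, 1, 11)   # fraction of probes undergoing FRET
        E = 0.4                     # assumed FRET efficiency of the probe
        I_D0, I_A0, I_Adir = 100.0, 20.0, 50.0  # illustrative intensities

        I_D = I_D0 * (1 - f * E)        # donor emission quenched by FRET
        I_A = I_A0 + I_D0 * f * E       # sensitised acceptor emission
        R_std = I_A / I_D               # standard ratio: nonlinear in f
        R_alt = I_D / I_Adir            # alternative ratio: linear in f
        print(np.round(R_std, 3))       # curved sequence
        print(np.round(R_alt, 3))       # evenly spaced sequence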

  5. Linear regression models for quantitative assessment of left ...

    African Journals Online (AJOL)

    Changes in left ventricular structures and function have been reported in cardiomyopathies. No prediction models have been established in this environment. This study established regression models for prediction of left ventricular structures in normal subjects. A sample of normal subjects was drawn from a large urban ...

  6. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    used to simulate large-scale atmospheric circulation patterns and for determining the effect of changes ... to simulate precipitation and snow cover over the Himalaya. Though this model underestimated precipitation ... Wilks D and Wilby R 1999 The weather generation game: A review of stochastic weather models; Progr. Phys.

  7. A Quantitative Causal Model Theory of Conditional Reasoning

    Science.gov (United States)

    Fernbach, Philip M.; Erb, Christopher D.

    2013-01-01

    The authors propose and test a causal model theory of reasoning about conditional arguments with causal content. According to the theory, the acceptability of modus ponens (MP) and affirming the consequent (AC) reflect the conditional likelihood of causes and effects based on a probabilistic causal model of the scenario being judged. Acceptability…

  8. Digital clocks: simple Boolean models can quantitatively describe circadian systems.

    Science.gov (United States)

    Akman, Ozgur E; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J; Ghazal, Peter

    2012-09-07

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day-night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we anticipate
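
    To make the logic-model idea concrete, here is a minimal synchronous Boolean ring oscillator (three mutually repressing genes). It is a toy network for illustration, not one of the fitted clock circuits from the paper.

        # Toy three-gene Boolean ring: each gene represses the next, updated
        # synchronously. With three repressive links the state cycles with
        # period 6, a crude analogue of a sustained rhythm.
        def step(state):
            a, b, c = state
            return (not c, not a, not b)

        state = (True, False, False)
        for t in range(7):
            print(t, tuple(int(s) for s in state))
            state = step(state)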

  9. Digital clocks: simple Boolean models can quantitatively describe circadian systems

    Science.gov (United States)

    Akman, Ozgur E.; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J.; Ghazal, Peter

    2012-01-01

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day–night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we

  10. EFSA Panel on Biological Hazards (BIOHAZ); Scientific Opinion on Reflecting on the experiences and lessons learnt from modelling on biological hazards

    DEFF Research Database (Denmark)

    Hald, Tine

    Quantitative analysis of scientific evidence involves the collection of data and modelling of a situation or process under consideration and this protocol is the basis of quantitative microbial risk assessments (QMRA). The lessons and experiences from quantitative risk assessments and modelling u...

  11. Salicylate Detection by Complexation with Iron(III) and Optical Absorbance Spectroscopy: An Undergraduate Quantitative Analysis Experiment

    Science.gov (United States)

    Mitchell-Koch, Jeremy T.; Reid, Kendra R.; Meyerhoff, Mark E.

    2008-01-01

    An experiment for the undergraduate quantitative analysis laboratory involving applications of visible spectrophotometry is described. Salicylate, a component found in several medications, as well as the active by-product of aspirin decomposition, is quantified. The addition of excess iron(III) to a solution of salicylate generates a deeply…

  12. Hydrolysis Studies and Quantitative Determination of Aluminum Ions Using ²⁷Al NMR: An Undergraduate Analytical Chemistry Experiment

    Science.gov (United States)

    Curtin, Maria A.; Ingalls, Laura R.; Campbell, Andrew; James-Pederson, Magdalena

    2008-01-01

    This article describes a novel experiment focused on metal ion hydrolysis and the equilibria related to metal ions in aqueous systems. Using ²⁷Al NMR, the students become familiar with NMR spectroscopy as a quantitative analytical tool for the determination of aluminum by preparing a standard calibration curve using standard aluminum…

  13. Numerical modeling of shock-sensitivity experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bowman, A.L.; Forest, C.A.; Kershner, J.D.; Mader, C.L.; Pimbley, G.H.

    1981-01-01

    The Forest Fire rate model of shock initiation of heterogeneous explosives has been used to study several experiments commonly performed to measure the sensitivity of explosives to shock and to study initiation by explosive-formed jets. The minimum priming charge test, the gap test, the shotgun test, sympathetic detonation, and jet initiation have been modeled numerically using the Forest Fire rate in the reactive hydrodynamic codes SIN and 2DE.

  14. Bicycle Rider Control: Observations, Modeling & Experiments

    OpenAIRE

    Kooijman, J.D.G.

    2012-01-01

    Bicycle designers traditionally develop bicycles based on experience and trial and error. Adopting modern engineering tools to model bicycle and rider dynamics and control is another method for developing bicycles. This method has the potential to evaluate the complete design space, and thereby develop well handling bicycles for specific user groups in a much shorter time span. The recent benchmarking of the Whipple bicycle model for the balance and steer of a bicycle is an opening enabling t...

  15. Modelling of isotope exchange experiments in JET

    International Nuclear Information System (INIS)

    Ehrenberg, J.

    1987-01-01

    Isotope exchange experiments from hydrogen to deuterium in JET are theoretically described by employing a simple global isotope exchange model. Experimental results for discharges with limiter temperatures around 250 °C can be approximated by this model if an additional slow diffusion process of hydrogen in the limiter bulk is assumed. In discharges where thermal desorption occurs due to higher limiter temperatures (≳ 1000 °C) (post-carbonisation discharges), the changeover process seems to be predominantly governed by thermal processes. (orig.)

  16. 77 FR 41985 - Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A...

    Science.gov (United States)

    2012-07-17

    ...] Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A... Influenza Disease Models to Quantitatively Evaluate the Benefits and Risks of Vaccines: A Technical Workshop... model to quantitatively estimate the benefits and risks of a hypothetical influenza vaccine, and to seek...

  17. Rock physics models for constraining quantitative interpretation of ultrasonic data for biofilm growth and development

    Science.gov (United States)

    Alhadhrami, Fathiya Mohammed

    This study examines the use of rock physics modeling for quantitative interpretation of seismic data in the context of microbial growth and biofilm formation in unconsolidated sediment. The impetus for this research comes from geophysical experiments by Davis et al. (2010) and Kwon and Ajo-Franklin (2012). These studies observed that microbial growth has a small effect on P-wave velocities (VP) but a large effect on seismic amplitudes. Davis et al. (2010) and Kwon and Ajo-Franklin (2012) speculated that the amplitude variations were due to a combination of rock mechanical changes from accumulation of microbial growth-related features such as biofilms. A more definite conclusion can be drawn by developing rock physics models that connect rock properties to seismic amplitudes. The primary objective of this work is to provide an explanation for the high amplitude attenuation due to biofilm growth. The results suggest that biofilm formation in the Davis et al. (2010) experiment exhibits two growth styles: a loadbearing style, where biofilm behaves like an additional mineral grain, and a non-loadbearing style, where the biofilm grows into the pore spaces. In the loadbearing mode, the biofilms contribute to the stiffness of the sediments; we refer to this style as "filler." In the non-loadbearing mode, the biofilms contribute only a change in the density of the sediments without affecting their strength; we refer to this style as "mushroom." Both growth styles appear to change permeability more than the moduli or the density. As a result, while the VP velocity remains relatively unchanged, the amplitudes can change significantly depending on biofilm saturation. Interpreting seismic data from biofilm growth in terms of rock physics models provides greater insight into sediment-fluid interaction. The models in turn can be used to understand microbial enhanced oil recovery and to assist in solving environmental issues such as creating bio

  18. Inference of quantitative models of bacterial promoters from time-series reporter gene data.

    Science.gov (United States)

    Stefan, Diana; Pinel, Corinne; Pinhal, Stéphane; Cinquemani, Eugenio; Geiselmann, Johannes; de Jong, Hidde

    2015-01-01

    The inference of regulatory interactions and quantitative models of gene regulation from time-series transcriptomics data has been extensively studied and applied to a range of problems in drug discovery, cancer research, and biotechnology. The application of existing methods is commonly based on implicit assumptions on the biological processes under study. First, the measurements of mRNA abundance obtained in transcriptomics experiments are taken to be representative of protein concentrations. Second, the observed changes in gene expression are assumed to be solely due to transcription factors and other specific regulators, while changes in the activity of the gene expression machinery and other global physiological effects are neglected. While convenient in practice, these assumptions are often not valid and bias the reverse engineering process. Here we systematically investigate, using a combination of models and experiments, the importance of this bias and possible corrections. We measure in real time and in vivo the activity of genes involved in the FliA-FlgM module of the E. coli motility network. From these data, we estimate protein concentrations and global physiological effects by means of kinetic models of gene expression. Our results indicate that correcting for the bias of commonly-made assumptions improves the quality of the models inferred from the data. Moreover, we show by simulation that these improvements are expected to be even stronger for systems in which protein concentrations have longer half-lives and the activity of the gene expression machinery varies more strongly across conditions than in the FliA-FlgM module. The approach proposed in this study is broadly applicable when using time-series transcriptome data to learn about the structure and dynamics of regulatory networks. In the case of the FliA-FlgM module, our results demonstrate the importance of global physiological effects and the active regulation of FliA and FlgM half-lives for
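
    The kind of kinetic model referred to above can be written, in its simplest form, as dp/dt = beta*a(t) - gamma*p, mapping promoter activity a(t) to reporter protein concentration p(t). The sketch below integrates this equation with forward Euler; all parameter values and the activity profile are arbitrary placeholders, not estimates from the paper.

        import numpy as np

        beta, gamma = 2.0, 0.1   # synthesis and degradation rates (assumed)
        dt, T = 0.1, 100.0
        t = np.arange(0.0, T, dt)
        a = 0.5 * (1 + np.sin(2 * np.pi * t / 50.0))  # hypothetical activity
        p = np.zeros_like(t)
        for i in range(1, len(t)):
            # forward-Euler step of dp/dt = beta * a(t) - gamma * p
            p[i] = p[i - 1] + dt * (beta * a[i - 1] - gamma * p[i - 1])
        print(f"final reporter level: {p[-1]:.2f}")

    Note that a small gamma (long protein half-life) makes p(t) a strongly low-pass-filtered version of a(t), which is exactly why equating mRNA or reporter signals with activities can bias inference.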

  19. Quantitative modeling of chronic myeloid leukemia: insights from radiobiology

    Science.gov (United States)

    Radivoyevitch, Tomas; Hlatky, Lynn; Landaw, Julian

    2012-01-01

    Mathematical models of chronic myeloid leukemia (CML) cell population dynamics are being developed to improve CML understanding and treatment. We review such models in light of relevant findings from radiobiology, emphasizing 3 points. First, the CML models almost all assert that the latency time, from CML initiation to diagnosis, is at most ∼10 years. Meanwhile, current radiobiologic estimates, based on Japanese atomic bomb survivor data, indicate a substantially higher maximum, suggesting longer-term relapses and extra resistance mutations. Second, different CML models assume different numbers, between 400 and 10^6, of normal HSCs. Radiobiologic estimates favor values > 10^6 for the number of normal cells (often assumed to be the HSCs) that are at risk for a CML-initiating BCR-ABL translocation. Moreover, there is some evidence for an HSC dead-band hypothesis, consistent with HSC numbers being very different across different healthy adults. Third, radiobiologists have found that sporadic (background, age-driven) chromosome translocation incidence increases with age during adulthood. BCR-ABL translocation incidence increasing with age would provide a hitherto underanalyzed contribution to observed background adult-onset CML incidence acceleration with age, and would cast some doubt on stage-number inferences from multistage carcinogenesis models in general. PMID:22353999

  20. Evaluating quantitative and qualitative models: An application for nationwide water erosion assessment in Ethiopia

    NARCIS (Netherlands)

    Sonneveld, B.G.J.S.; Keyzer, M.A.; Stroosnijder, L

    2011-01-01

    This paper tests the candidacy of one qualitative response model and two quantitative models for a nationwide water erosion hazard assessment in Ethiopia. After a descriptive comparison of model characteristics the study conducts a statistical comparison to evaluate the explanatory power of the

  1. Evaluating quantitative and qualitative models: an application for nationwide water erosion assessment in Ethiopia

    NARCIS (Netherlands)

    Sonneveld, B.G.J.S.; Keyzer, M.A.; Stroosnijder, L.

    2011-01-01

    This paper tests the candidacy of one qualitative response model and two quantitative models for a nationwide water erosion hazard assessment in Ethiopia. After a descriptive comparison of model characteristics the study conducts a statistical comparison to evaluate the explanatory power of the

  2. A suite of models to support the quantitative assessment of spread in pest risk analysis

    NARCIS (Netherlands)

    Robinet, C.; Kehlenbeck, H.; Werf, van der W.

    2012-01-01

    In the frame of the EU project PRATIQUE (KBBE-2007-212459 Enhancements of pest risk analysis techniques) a suite of models was developed to support the quantitative assessment of spread in pest risk analysis. This dataset contains the model codes (R language) for the four models in the suite. Three

  3. Complementary social science? Quali-quantitative experiments in a Big Data world

    Directory of Open Access Journals (Sweden)

    Anders Blok

    2014-08-01

    The rise of Big Data in the social realm poses significant questions at the intersection of science, technology, and society, including in terms of how new large-scale social databases are currently changing the methods, epistemologies, and politics of social science. In this commentary, we address such epochal ("large-scale") questions by way of a (situated) experiment: at the Danish Technical University in Copenhagen, an interdisciplinary group of computer scientists, physicists, economists, sociologists, and anthropologists (including the authors) is setting up a large-scale data infrastructure, meant to continually record the digital traces of social relations among an entire freshman class of students (N > 1000). At the same time, fieldwork is carried out on friendship (and other) relations amongst the same group of students. On this basis, the question we pose is the following: what kind of knowledge is obtained on this social micro-cosmos via the Big (computational, quantitative) and Small (embodied, qualitative) Data, respectively? How do the two relate? Invoking Bohr's principle of complementarity as an analogy, we hypothesize that social relations, as objects of knowledge, depend crucially on the type of measurement device deployed. At the same time, however, we also expect new interferences and polyphonies to arise at the intersection of Big and Small Data, provided that these are, so to speak, mixed with care. These questions, we stress, are important not only for the future of social science methods but also for the type of societal (self-)knowledge that may be expected from new large-scale social databases.

  4. The place of quantitative energy models in a prospective approach

    International Nuclear Information System (INIS)

    Taverdet-Popiolek, N.

    2009-01-01

    Futurology above all depends on having the right mind set. Gaston Berger summarizes the prospective approach in five main thrusts: prepare for the distant future, be open-minded (have a systems and multidisciplinary approach), carry out in-depth analyses (draw out the actors which are really determinant for the future, as well as established trends), take risks (imagine risky but flexible projects) and finally think about humanity, futurology being a technique at the service of man to help him build a desirable future. On the other hand, forecasting is based on quantified models so as to deduce 'conclusions' about the future. In the field of energy, models are used to draw up scenarios which allow, for instance, measuring the medium- or long-term effects of energy policies on greenhouse gas emissions or global welfare. Scenarios are shaped by the model's inputs (parameters, sets of assumptions) and outputs. Resorting to a model or projecting by scenario is useful in a prospective approach, as it ensures coherence for most of the variables that have been identified through systems analysis and that the mind on its own has difficulty grasping. Interpretation of each scenario must be carried out in the light of the underlying framework of assumptions (the backdrop) developed during the prospective stage. When the horizon is far away (very long term), the worlds imagined by the futurologist contain breaks (technological, behavioural and organizational) which are hard to integrate into the models. It is here that the main limit to the use of models in futurology lies. (author)

  5. Improved Mental Acuity Forecasting with an Individualized Quantitative Sleep Model

    Directory of Open Access Journals (Sweden)

    Brent D. Winslow

    2017-04-01

    Sleep impairment significantly alters human brain structure and cognitive function, but available evidence suggests that adults in developed nations are sleeping less. A growing body of research has sought to use sleep to forecast cognitive performance by modeling the relationship between the two, but has generally focused on vigilance rather than other cognitive constructs affected by sleep, such as reaction time, executive function, and working memory. Previous modeling efforts have also utilized subjective, self-reported sleep durations and were restricted to laboratory environments. In the current effort, we addressed these limitations by employing wearable systems and mobile applications to gather objective sleep information, assess multi-construct cognitive performance, and model/predict changes to mental acuity. Thirty participants were recruited for participation in the study, which lasted 1 week. Using the Fitbit Charge HR and a mobile version of the automated neuropsychological assessment metric called CogGauge, we gathered a series of features and utilized the unified model of performance to predict mental acuity based on sleep records. Our results suggest that individuals poorly rate their sleep duration, supporting the need for objective sleep metrics to model circadian changes to mental acuity. Participant compliance in using the wearable throughout the week and responding to the CogGauge assessments was 80%. Specific biases were identified in temporal metrics across mobile devices and operating systems and were excluded from the mental acuity metric development. Individualized prediction of mental acuity consistently outperformed group modeling. This effort indicates the feasibility of creating an individualized, mobile assessment and prediction of mental acuity, compatible with the majority of current mobile devices.

  6. Improved Mental Acuity Forecasting with an Individualized Quantitative Sleep Model.

    Science.gov (United States)

    Winslow, Brent D; Nguyen, Nam; Venta, Kimberly E

    2017-01-01

    Sleep impairment significantly alters human brain structure and cognitive function, but available evidence suggests that adults in developed nations are sleeping less. A growing body of research has sought to use sleep to forecast cognitive performance by modeling the relationship between the two, but has generally focused on vigilance rather than other cognitive constructs affected by sleep, such as reaction time, executive function, and working memory. Previous modeling efforts have also utilized subjective, self-reported sleep durations and were restricted to laboratory environments. In the current effort, we addressed these limitations by employing wearable systems and mobile applications to gather objective sleep information, assess multi-construct cognitive performance, and model/predict changes to mental acuity. Thirty participants were recruited for participation in the study, which lasted 1 week. Using the Fitbit Charge HR and a mobile version of the automated neuropsychological assessment metric called CogGauge, we gathered a series of features and utilized the unified model of performance to predict mental acuity based on sleep records. Our results suggest that individuals poorly rate their sleep duration, supporting the need for objective sleep metrics to model circadian changes to mental acuity. Participant compliance in using the wearable throughout the week and responding to the CogGauge assessments was 80%. Specific biases were identified in temporal metrics across mobile devices and operating systems and were excluded from the mental acuity metric development. Individualized prediction of mental acuity consistently outperformed group modeling. This effort indicates the feasibility of creating an individualized, mobile assessment and prediction of mental acuity, compatible with the majority of current mobile devices.
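
    A common way to formalize such sleep/performance relationships is a two-process structure: homeostatic sleep pressure that builds during wake and dissipates during sleep, plus a circadian oscillation. The sketch below is a generic illustration of that structure with made-up constants; it is not the unified model of performance used in the study.

        import numpy as np

        # Toy two-process performance model; all constants are illustrative.
        hours = np.arange(0, 48, 0.5)
        awake = (hours % 24) < 16            # assume 16 h wake / 8 h sleep
        S = np.zeros_like(hours)             # homeostatic sleep pressure
        for i in range(1, len(hours)):
            # pressure saturates toward 1 during wake, decays during sleep
            dS = 0.05 * (1 - S[i - 1]) if awake[i] else -0.3 * S[i - 1]
            S[i] = S[i - 1] + 0.5 * dS
        C = 0.1 * np.sin(2 * np.pi * (hours - 18) / 24)  # circadian term
        acuity = 1.0 - S + C                 # higher = better predicted acuity
        print(f"predicted acuity range: {acuity.min():.2f} to {acuity.max():.2f}")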

  7. Bicycle Rider Control : Observations, Modeling & Experiments

    NARCIS (Netherlands)

    Kooijman, J.D.G.

    2012-01-01

    Bicycle designers traditionally develop bicycles based on experience and trial and error. Adopting modern engineering tools to model bicycle and rider dynamics and control is another method for developing bicycles. This method has the potential to evaluate the complete design space, and thereby

  8. Quantitative Results from Shockless Compression Experiments on Solids to Multi-Megabar Pressure

    Science.gov (United States)

    Davis, Jean-Paul; Brown, Justin; Knudson, Marcus; Lemke, Raymond

    2015-03-01

    Quasi-isentropic, shockless ramp-wave experiments promise accurate equation-of-state (EOS) data in the solid phase at relatively low temperatures and multi-megabar pressures. In this range of pressure, isothermal diamond-anvil techniques have limited pressure accuracy due to reliance on theoretical EOS of calibration standards, thus accurate quasi-isentropic compression data would help immensely in constraining EOS models. Multi-megabar shockless compression experiments using the Z Machine at Sandia as a magnetic drive with stripline targets continue to be performed on a number of solids. New developments will be presented in the design and analysis of these experiments, including topics such as 2-D and magneto-hydrodynamic (MHD) effects and the use of LiF windows. Results will be presented for tantalum and/or gold metals, with comparisons to independently developed EOS. * Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  9. Essays on Quantitative Marketing Models and Monte Carlo Integration Methods

    NARCIS (Netherlands)

    R.D. van Oest (Rutger)

    2005-01-01

    textabstractThe last few decades have led to an enormous increase in the availability of large detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for

  10. Quantitative Research: A Dispute Resolution Model for FTC Advertising Regulation.

    Science.gov (United States)

    Richards, Jef I.; Preston, Ivan L.

    Noting the lack of a dispute mechanism for determining whether an advertising practice is truly deceptive without generating the costs and negative publicity produced by traditional Federal Trade Commission (FTC) procedures, this paper proposes a model based upon early termination of the issues through jointly commissioned behavioral research. The…

  11. Quantitative modeling of human performance in complex, dynamic systems

    National Research Council Canada - National Science Library

    Baron, Sheldon; Kruser, Dana S; Huey, Beverly Messick

    1990-01-01

    ... Sheldon Baron, Dana S. Kruser, and Beverly Messick Huey, editors. Panel on Human Performance Modeling, Committee on Human Factors, Commission on Behavioral and Social Sciences and Education, National Research Council. National Academy Press, Washington, D.C., 1990.

  12. A quantitative risk model for early lifecycle decision making

    Science.gov (United States)

    Feather, M. S.; Cornford, S. L.; Dunphy, J.; Hicks, K.

    2002-01-01

    Decisions made in the earliest phases of system development have the most leverage to influence the success of the entire development effort, and yet must be made when information is incomplete and uncertain. We have developed a scalable cost-benefit model to support this critical phase of early-lifecycle decision-making.

  13. Quantitative Comparison Between Crowd Models for Evacuation Planning and Evaluation

    NARCIS (Netherlands)

    Viswanathan, V.; Lee, C.E.; Lees, M.H.; Cheong, S.A.; Sloot, P.M.A.

    2014-01-01

    Crowd simulation is rapidly becoming a standard tool for evacuation planning and evaluation. However, the many crowd models in the literature are structurally different, and few have been rigorously calibrated against real-world egress data, especially in emergency situations. In this paper we

  14. Quantitative properties of clustering within modern microscopic nuclear models

    International Nuclear Information System (INIS)

    Volya, A.; Tchuvil’sky, Yu. M.

    2016-01-01

    A method for studying cluster spectroscopic properties of nuclear fragmentation, such as spectroscopic amplitudes, cluster form factors, and spectroscopic factors, is developed on the basis of modern precision nuclear models that take into account the mixing of large-scale shell-model configurations. Alpha-cluster channels are considered as an example. A mathematical proof of the need for taking into account the channel-wave-function renormalization generated by exchange terms of the antisymmetrization operator (Fliessbach effect) is given. Examples where this effect is confirmed by a high quality of the description of experimental data are presented. By and large, the method in question extends substantially the possibilities for studying clustering phenomena in nuclei and for improving the quality of their description.

  15. Quantitative modeling of selective lysosomal targeting for drug design

    DEFF Research Database (Denmark)

    Trapp, Stefan; Rosania, G.; Horobin, R.W.

    2008-01-01

    Lysosomes are acidic organelles and are involved in various diseases, the most prominent being malaria. Accumulation of molecules in the cell by diffusion from the external solution into cytosol, lysosome and mitochondrium was calculated with the Fick–Nernst–Planck equation. The cell model considers the diffusion of neutral and ionic molecules across biomembranes, protonation to mono- or bivalent ions, adsorption to lipids, and electrical attraction or repulsion. Based on simulation results, high and selective accumulation in lysosomes was found for weak mono- and bivalent bases with intermediate to high ... predicted by the model and three were close. Five of the antimalarial drugs were lipophilic weak dibasic compounds. The predicted optimum properties for a selective accumulation of weak bivalent bases in lysosomes are consistent with experimental values and are more accurate than any prior calculation...
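
    For a monoprotic weak base, the classic pH-partitioning ("ion trapping") approximation, which assumes that only the neutral species permeates membranes, already captures why weak bases concentrate in acidic lysosomes; the paper's Fick–Nernst–Planck cell model refines this picture. A minimal sketch, with assumed pH values:

        # Accumulation ratio (lysosome/cytosol) for a monoprotic weak base,
        # assuming only the neutral form equilibrates across the membrane.
        def lysosome_accumulation(pKa, pH_cyt=7.2, pH_lys=5.0):
            return (1 + 10 ** (pKa - pH_lys)) / (1 + 10 ** (pKa - pH_cyt))

        for pKa in (6.0, 8.0, 10.0):
            print(f"pKa {pKa}: ~{lysosome_accumulation(pKa):.0f}-fold accumulation")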

  16. Quantitative Risk Modeling of Fire on the International Space Station

    Science.gov (United States)

    Castillo, Theresa; Haught, Megan

    2014-01-01

    The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.

  17. Quantitative chemical-shift MR imaging cutoff value: Benign versus malignant vertebral compression – Initial experience

    Directory of Open Access Journals (Sweden)

    Dalia Z. Zidan

    2014-09-01

    Conclusion: Quantitative chemical shift MR imaging could be a valuable addition to standard MR imaging techniques and represent a rapid problem solving tool in differentiating benign from malignant vertebral compression, especially in patients with known primary malignancies.

  18. Clinical experience of rehabilitation therapists with chronic diseases: a quantitative approach.

    NARCIS (Netherlands)

    Rijken, P.M.; Dekker, J.

    1998-01-01

    Objectives: To provide an overview of the numbers of patients with selected chronic diseases treated by rehabilitation therapists (physical therapists, occupational therapists, exercise therapists and podiatrists). The study was performed to get quantitative information on the degree to which

  19. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    Science.gov (United States)

    Wu, Zujian; Pang, Wei; Coghill, George M

    2015-01-01

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework makes it feasible to learn the relationships between biochemical reactants qualitatively and to make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
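
    The quantitative stage described above amounts to a simulated annealing search over kinetic rates. The generic loop below illustrates the technique; the cost function is a placeholder standing in for the comparison of simulated and target time courses, not the paper's objective.

        import math
        import random

        def cost(rates, target=(1.0, 0.5)):
            # placeholder objective: squared distance to hypothetical rates
            return sum((r - t) ** 2 for r, t in zip(rates, target))

        rates = [random.random() for _ in range(2)]
        temp = 1.0
        for step in range(5000):
            candidate = [max(0.0, r + random.gauss(0, 0.05)) for r in rates]
            delta = cost(candidate) - cost(rates)
            # accept improvements always, and worse moves with Boltzmann prob.
            if delta < 0 or random.random() < math.exp(-delta / temp):
                rates = candidate
            temp *= 0.999    # geometric cooling schedule
        print([round(r, 3) for r in rates])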

  20. Refining Grasp Affordance Models by Experience

    DEFF Research Database (Denmark)

    Detry, Renaud; Kraft, Dirk; Buch, Anders Glent

    2010-01-01

    grasps. These affordances are represented probabilistically with grasp densities, which correspond to continuous density functions defined on the space of 6D gripper poses. A grasp density characterizes an object’s grasp affordance; densities are linked to visual stimuli through registration with a visual model of the object they characterize. We explore a batch-oriented, experience-based learning paradigm where grasps sampled randomly from a density are performed, and an importance-sampling algorithm learns a refined density from the outcomes of these experiences. The first such learning cycle...

  1. Interdiffusion of the aluminum magnesium system. Quantitative analysis and numerical model; Interdiffusion des Aluminium-Magnesium-Systems. Quantitative Analyse und numerische Modellierung

    Energy Technology Data Exchange (ETDEWEB)

    Seperant, Florian

    2012-03-21

    Aluminum coatings are a promising approach to protect magnesium alloys against corrosion and thereby making them accessible to a variety of technical applications. Thermal treatment enhances the adhesion of the aluminium coating on magnesium by interdiffusion. For a deeper understanding of the diffusion process at the interface, a quantitative description of the Al-Mg system is necessary. On the basis of diffusion experiments with infinite reservoirs of aluminum and magnesium, the interdiffusion coefficients of the intermetallic phases of the Al-Mg-system are calculated with the Sauer-Freise method for the first time. To solve contradictions in the literature concerning the intrinsic diffusion coefficients, the possibility of a bifurcation of the Kirkendall plane is considered. Furthermore, a physico-chemical description of interdiffusion is provided to interpret the observed phase transitions. The developed numerical model is based on a temporally varied discretization of the space coordinate. It exhibits excellent quantitative agreement with the experimentally measured concentration profile. This confirms the validity of the obtained diffusion coefficients. Moreover, the Kirkendall shift in the Al-Mg system is simulated for the first time. Systems with thin aluminum coatings on magnesium also exhibit a good correlation between simulated and experimental concentration profiles. Thus, the diffusion coefficients are also valid for Al-coated systems. Hence, it is possible to derive parameters for a thermal treatment by simulation, resulting in an optimized modification of the magnesium surface for technical applications.
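
    A schematic analogue of such an interdiffusion calculation is an explicit finite-difference solution of a 1-D diffusion couple with a concentration-dependent interdiffusion coefficient. The D(c) below is invented for illustration; it is not the coefficient derived in the thesis.

        import numpy as np

        nx, dx, dt, steps = 200, 1e-6, 1e-3, 5000   # grid/time step (illustrative)
        c = np.zeros(nx)
        c[:nx // 2] = 1.0                            # Al-rich half vs Mg-rich half

        def D(c):
            # hypothetical concentration-dependent coefficient, m^2/s
            return 1e-13 * (1 + 4 * c * (1 - c))     # enhanced at intermediate c

        for _ in range(steps):
            Dm = 0.5 * (D(c[1:]) + D(c[:-1]))        # D at cell interfaces
            flux = -Dm * np.diff(c) / dx             # Fick's first law
            c[1:-1] -= dt * np.diff(flux) / dx       # conservative update
        print(np.round(c[::25], 3))                  # smoothed concentration profile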

  2. How plants manage food reserves at night: quantitative models and open questions

    Directory of Open Access Journals (Sweden)

    Antonio eScialdone

    2015-03-01

    In order to cope with night-time darkness, plants during the day allocate part of their photosynthate for storage, often as starch. This stored reserve is then degraded at night to sustain metabolism and growth. However, night-time starch degradation must be tightly controlled, as over-rapid turnover results in premature depletion of starch before dawn, leading to starvation. Recent experiments in Arabidopsis have shown that starch degradation proceeds at a constant rate during the night and is set such that starch reserves are exhausted almost precisely at dawn. Intriguingly, this pattern is robust with the degradation rate being adjusted to compensate for unexpected changes in the time of darkness onset. While a fundamental role for the circadian clock is well established, the underlying mechanisms controlling starch degradation remain poorly characterized. Here, we discuss recent quantitative models that have been proposed to explain how plants can compute the appropriate starch degradation rate, a process that requires an effective arithmetic division calculation. We review experimental confirmation of the models, and describe aspects that require further investigation. Overall, the process of night-time starch degradation necessitates a fundamental metabolic role for the circadian clock and, more generally, highlights how cells process information in order to optimally manage their resources.
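
    The "arithmetic division" at the heart of these models can be stated in one line: degrade starch at rate r = S/T, where S is the starch present at dusk and T the anticipated time remaining until dawn. A trivial sketch with made-up numbers:

        def degradation_rate(starch, hours_to_dawn):
            # divide remaining reserves by remaining night to exhaust at dawn
            return starch / hours_to_dawn

        S = 100.0                              # starch at dusk (arbitrary units)
        print(degradation_rate(S, 12.0))       # 8.33 units/h for a 12 h night
        # an unexpectedly early onset of darkness (16 h night) lowers the rate:
        print(degradation_rate(S, 16.0))       # 6.25 units/h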

  3. How plants manage food reserves at night: quantitative models and open questions.

    Science.gov (United States)

    Scialdone, Antonio; Howard, Martin

    2015-01-01

    In order to cope with night-time darkness, plants during the day allocate part of their photosynthate for storage, often as starch. This stored reserve is then degraded at night to sustain metabolism and growth. However, night-time starch degradation must be tightly controlled, as over-rapid turnover results in premature depletion of starch before dawn, leading to starvation. Recent experiments in Arabidopsis have shown that starch degradation proceeds at a constant rate during the night and is set such that starch reserves are exhausted almost precisely at dawn. Intriguingly, this pattern is robust with the degradation rate being adjusted to compensate for unexpected changes in the time of darkness onset. While a fundamental role for the circadian clock is well-established, the underlying mechanisms controlling starch degradation remain poorly characterized. Here, we discuss recent quantitative models that have been proposed to explain how plants can compute the appropriate starch degradation rate, a process that requires an effective arithmetic division calculation. We review experimental confirmation of the models, and describe aspects that require further investigation. Overall, the process of night-time starch degradation necessitates a fundamental metabolic role for the circadian clock and, more generally, highlights how cells process information in order to optimally manage their resources.

  4. Probabilistic Quantitative Precipitation Forecasting over East China using Bayesian Model Averaging

    Science.gov (United States)

    Yang, Ai; Yuan, Huiling

    2014-05-01

    Bayesian model averaging (BMA) is a post-processing method that weights the predictive probability density functions (PDFs) of individual ensemble members. This study investigates the BMA method for calibrating quantitative precipitation forecasts (QPFs) from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database. The QPFs over East Asia during summer (June-August) 2008-2011 are generated from six operational ensemble prediction systems (EPSs), including ECMWF, UKMO, NCEP, CMC, JMA, and CMA, and from multi-center ensembles of their combinations. The satellite-based precipitation estimate product TRMM 3B42 V7 is used as the verification dataset. In the BMA post-processing for precipitation forecasts, the PDF matching method is first applied to bias-correct systematic errors in each forecast member, by adjusting PDFs of forecasts to match PDFs of observations. Next, a logistic regression and a two-parameter gamma distribution are used to fit the probability of rainfall occurrence and the precipitation distribution. Through these two steps, the BMA post-processing systematically bias-corrects the ensemble forecasts. The 60-70% cumulative distribution function (CDF) predictions estimate moderate precipitation better than the raw ensemble mean, while the 90% upper boundary of the BMA CDF predictions can be set as a threshold for an extreme precipitation alarm. In general, the BMA method is more capable in multi-center ensemble post-processing, improving probabilistic QPFs (PQPFs) with better ensemble spread and reliability. KEYWORDS: Bayesian model averaging (BMA); post-processing; ensemble forecast; TIGGE
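
    The BMA predictive distribution itself is a weighted mixture of per-member component PDFs, here gammas for precipitation amount. In the sketch below the weights and gamma parameters are placeholders; in practice they are estimated from training forecasts (typically by EM).

        import numpy as np
        from scipy import stats

        y = np.linspace(0.1, 30, 300)         # precipitation amounts (mm)
        weights = np.array([0.5, 0.3, 0.2])   # assumed member weights, sum to 1
        shapes = [2.0, 1.5, 3.0]              # per-member gamma shape (assumed)
        scales = [2.0, 4.0, 1.5]              # per-member gamma scale (assumed)

        # BMA mixture PDF: p(y) = sum_k w_k * gamma_k(y)
        pdf = sum(w * stats.gamma.pdf(y, a, scale=s)
                  for w, a, s in zip(weights, shapes, scales))
        cdf = np.cumsum(pdf) * (y[1] - y[0])  # numerical CDF for quantiles
        print(f"approx. 90th percentile: {y[np.searchsorted(cdf, 0.9)]:.1f} mm")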

  5. Modeling variability in porescale multiphase flow experiments

    Energy Technology Data Exchange (ETDEWEB)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e.,fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rate. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

  6. Modeling variability in porescale multiphase flow experiments

    Science.gov (United States)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

  7. A quantitative confidence signal detection model: 1. Fitting psychometric functions

    Science.gov (United States)

    Yi, Yongwoo

    2016-01-01

    Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. PMID:26763777
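
    Conventional analysis of such data fits a psychometric function to forced-choice responses. The sketch below fits a cumulative Gaussian with SciPy to synthetic stimulus levels and response proportions (all numbers invented), returning the threshold and spread parameters.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def psychometric(x, mu, sigma):
            # cumulative-Gaussian psychometric function
            return norm.cdf(x, loc=mu, scale=sigma)

        levels = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # stimulus intensities
        p_correct = np.array([0.1, 0.25, 0.55, 0.85, 0.98])  # observed proportions
        (mu, sigma), _ = curve_fit(psychometric, levels, p_correct, p0=[2.0, 1.0])
        print(f"threshold (mu) = {mu:.2f}, spread (sigma) = {sigma:.2f}")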

  8. High-response piezoelectricity modeled quantitatively near a phase boundary

    Science.gov (United States)

    Newns, Dennis M.; Kuroda, Marcelo A.; Cipcigan, Flaviu S.; Crain, Jason; Martyna, Glenn J.

    2017-01-01

    Interconversion of mechanical and electrical energy via the piezoelectric effect is fundamental to a wide range of technologies. The discovery in the 1990s of giant piezoelectric responses in certain materials has therefore opened new application spaces, but the origin of these properties remains a challenge to our understanding. A key role is played by the presence of a structural instability in these materials at compositions near the "morphotropic phase boundary" (MPB) where the crystal structure changes abruptly and the electromechanical responses are maximal. Here we formulate a simple, unified theoretical description which accounts for extreme piezoelectric response, its observation at compositions near the MPB, accompanied by ultrahigh dielectric constant and mechanical compliances with rather large anisotropies. The resulting model, based upon a Landau free energy expression, is capable of treating the important domain engineered materials and is found to be predictive while maintaining simplicity. It therefore offers a general and powerful means of accounting for the full set of signature characteristics in these functional materials including volume conserving sum rules and strong substrate clamping effects.
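
    For orientation, Landau-type treatments of this kind expand the free energy in a polarization order parameter; the susceptibility, and with it the piezoelectric response, grows large as the quadratic coefficient softens near the structural instability. A generic single-order-parameter form is shown below as an assumption for illustration (the paper's expression for anisotropic, domain-engineered materials is necessarily richer):

        F(P) = F_0 + \tfrac{a}{2}\,(T - T_c)\,P^{2} + \tfrac{b}{4}\,P^{4}
               + \tfrac{c}{6}\,P^{6} - E\,P,
        \qquad
        \chi = \left( \frac{\partial^{2} F}{\partial P^{2}} \right)^{-1}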

  9. The erythrocyte sedimentation rates: some model experiments.

    Science.gov (United States)

    Cerny, L C; Cerny, E L; Granley, C R; Compolo, F; Vogels, M

    1988-01-01

    In order to obtain a better understanding of the erythrocyte sedimentation rate (ESR), several models are presented. The first directs attention to the importance of geometrical models to represent the structure of mixtures. Here it is our intention to understand the effect of the structure on the packing of red blood cells. In this part of the study, "Cheerios" (trademark General Mills) are used as a macroscopic model. It is interesting that a random sampling of "Cheerios" has the same volume distribution curve that is found for erythrocytes with a Coulter Sizing Apparatus. In order to examine the effect of rouleaux formation, the "Cheerios" are stacked one on top of another and then glued. Rouleaux of 2, 3, 4, 5, 7 and 10 discs were used. In order to examine a more realistic biological model, the experiments of Dintenfass were used. These investigations were performed in a split-capillary photo viscometer using whole blood from patients with a variety of diseases. The novel part of this research is the fact that the work was performed at 1g and at near zero gravity in the space shuttle "Discovery." The size of the aggregates and/or rouleaux clearly showed a dependence upon the gravity of the experiment. The purpose of this model was to examine the condition of self-similarity and fractal behavior. Calculations are reported which clearly indicate general agreement in the magnitude of the fractal dimension among the "Cheerios" model, the "Discovery" experiment, and the values determined with the automatic sedimentimeter. The final aspect of this work examines the surface texture of the sedimentation tube. A series of tubes were designed with "roughened" interiors. A comparison of the sedimentation rates clearly indicates a more rapid settling in "roughened" tubes than in ones with a smooth interior surface.

  10. Quantitative analysis of crossflow model of the COBRA-IV.1 code

    International Nuclear Information System (INIS)

    Lira, C.A.B.O.

    1983-01-01

    Based on experimental data from a rod bundle test section, the crossflow model of the COBRA-IV.1 code was quantitatively analysed. The analysis showed that it is possible to establish some operational conditions in which the results of the theoretical model are acceptable. (author) [pt

  11. Development of probabilistic models for quantitative pathway analysis of plant pests introduction for the EU territory

    NARCIS (Netherlands)

    Douma, J.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Roques, A.; Werf, van der W.

    2015-01-01

    The aim of this report is to provide EFSA with probabilistic models for quantitative pathway analysis of plant pest introduction for the EU territory through non-edible plant products or plants. We first provide a conceptualization of two types of pathway models. The individual based PM simulates an

  12. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
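
    The time-step sensitivity noted in this record is easy to picture in code. A minimal sketch of an overdamped, forward-Euler vertex update with a placeholder force function (real vertex models derive forces from cell area and perimeter elasticity and include length-threshold rules for rearrangements):

        import numpy as np

        def vertex_step(positions, force_fn, dt):
            """One forward-Euler update of an overdamped vertex model:
            friction * dx/dt = F(x), so x <- x + dt * F(x)."""
            return positions + dt * force_fn(positions)

        # Placeholder force: every vertex relaxes toward the tissue centroid
        def toy_force(x):
            return x.mean(axis=0) - x

        x = np.random.rand(50, 2)  # 50 vertices in the plane
        for _ in range(1000):
            # Too large a dt can overshoot and suppress or distort the
            # cell rearrangements the model is meant to capture
            x = vertex_step(x, toy_force, dt=0.01)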

  13. Detection of Prostate Cancer: Quantitative Multiparametric MR Imaging Models Developed Using Registered Correlative Histopathology.

    Science.gov (United States)

    Metzger, Gregory J; Kalavagunta, Chaitanya; Spilseth, Benjamin; Bolan, Patrick J; Li, Xiufeng; Hutter, Diane; Nam, Jung W; Johnson, Andrew D; Henriksen, Jonathan C; Moench, Laura; Konety, Badrinath; Warlick, Christopher A; Schmechel, Stephen C; Koopmeiners, Joseph S

    2016-06-01

    Purpose To develop multiparametric magnetic resonance (MR) imaging models to generate a quantitative, user-independent, voxel-wise composite biomarker score (CBS) for detection of prostate cancer by using coregistered correlative histopathologic results, and to compare performance of CBS-based detection with that of single quantitative MR imaging parameters. Materials and Methods Institutional review board approval and informed consent were obtained. Patients with a diagnosis of prostate cancer underwent multiparametric MR imaging before surgery for treatment. All MR imaging voxels in the prostate were classified as cancer or noncancer on the basis of coregistered histopathologic data. Predictive models were developed by using more than one quantitative MR imaging parameter to generate CBS maps. Model development and evaluation of quantitative MR imaging parameters and CBS were performed separately for the peripheral zone and the whole gland. Model accuracy was evaluated by using the area under the receiver operating characteristic curve (AUC), and confidence intervals were calculated with the bootstrap procedure. The improvement in classification accuracy was evaluated by comparing the AUC for the multiparametric model and the single best-performing quantitative MR imaging parameter at the individual level and in aggregate. Results Quantitative T2, apparent diffusion coefficient (ADC), volume transfer constant (K(trans)), reflux rate constant (kep), and area under the gadolinium concentration curve at 90 seconds (AUGC90) were significantly different between cancer and noncancer voxels (P models demonstrated the best performance in both the peripheral zone (AUC, 0.85; P = .010 vs ADC alone) and whole gland (AUC, 0.77; P = .043 vs ADC alone). Individual-level analysis showed statistically significant improvement in AUC in 82% (23 of 28) and 71% (24 of 34) of patients with peripheral-zone and whole-gland models, respectively, compared with ADC alone. Model-based CBS

  14. Can’t Count or Won’t Count? Embedding Quantitative Methods in Substantive Sociology Curricula: A Quasi-Experiment

    Science.gov (United States)

    Williams, Malcolm; Sloan, Luke; Cheung, Sin Yi; Sutton, Carole; Stevens, Sebastian; Runham, Libby

    2015-01-01

    This paper reports on a quasi-experiment in which quantitative methods (QM) are embedded within a substantive sociology module. Through measuring student attitudes before and after the intervention alongside control group comparisons, we illustrate the impact that embedding has on the student experience. Our findings are complex and even contradictory. Whilst the experimental group were less likely to be distrustful of statistics and appreciate how QM inform social research, they were also less confident about their statistical abilities, suggesting that through ‘doing’ quantitative sociology the experimental group are exposed to the intricacies of method and their optimism about their own abilities is challenged. We conclude that embedding QM in a single substantive module is not a ‘magic bullet’ and that a wider programme of content and assessment diversification across the curriculum is preferential. PMID:27330225

  15. Can't Count or Won't Count? Embedding Quantitative Methods in Substantive Sociology Curricula: A Quasi-Experiment.

    Science.gov (United States)

    Williams, Malcolm; Sloan, Luke; Cheung, Sin Yi; Sutton, Carole; Stevens, Sebastian; Runham, Libby

    2016-06-01

    This paper reports on a quasi-experiment in which quantitative methods (QM) are embedded within a substantive sociology module. Through measuring student attitudes before and after the intervention alongside control group comparisons, we illustrate the impact that embedding has on the student experience. Our findings are complex and even contradictory. Whilst the experimental group were less likely to be distrustful of statistics and appreciate how QM inform social research, they were also less confident about their statistical abilities, suggesting that through 'doing' quantitative sociology the experimental group are exposed to the intricacies of method and their optimism about their own abilities is challenged. We conclude that embedding QM in a single substantive module is not a 'magic bullet' and that a wider programme of content and assessment diversification across the curriculum is preferential.

  16. Interpretation of Quantitative Structure-Activity Relationship Models: Past, Present, and Future.

    Science.gov (United States)

    Polishchuk, Pavel

    2017-11-27

    This paper is an overview of the most significant and impactful interpretation approaches of quantitative structure-activity relationship (QSAR) models, their development, and application. The evolution of the interpretation paradigm from "model → descriptors → (structure)" to "model → structure" is indicated. The latter makes all models interpretable regardless of machine learning methods or descriptors used for modeling. This opens wide prospects for application of corresponding interpretation approaches to retrieve structure-property relationships captured by any models. Issues of separate approaches are discussed as well as general issues and prospects of QSAR model interpretation.

  17. Debris Thermal Hydraulics Modeling of QUENCH Experiments

    International Nuclear Information System (INIS)

    Kisselev, Arcadi E.; Kobelev, Gennadii V.; Strizhov, Valerii F.; Vasiliev, Alexander D.

    2006-01-01

    Porous debris formation and behavior in QUENCH experiments (QUENCH-02, QUENCH-03) plays a considerable role, and its adequate modeling is important for thermal analysis. This work is aimed at the development of a numerical module able to model the thermal hydraulics and heat transfer phenomena occurring during the high-temperature stage of a severe accident with the formation of a debris region and molten pool. The original approach for debris evolution is developed from classical principles using a set of parameters including debris porosity; average particle diameter; temperatures and mass fractions of solid, liquid and gas phases; specific interface areas between different phases; effective thermal conductivity of each phase, including radiative heat conductivity; and mass and energy fluxes through the interfaces. The debris model is based on the system of continuity, momentum and energy conservation equations, which consider the dynamics of volume-averaged velocities and temperatures of the fluid, solid and gaseous phases of the porous debris. Different mechanisms of debris formation are considered, including degradation of fuel rods according to temperature criteria, taking into consideration correlations between rod layer thicknesses; degradation of rod layer structure due to thermal expansion of melted materials inside intact rod cladding; debris formation due to a sharp temperature drop of previously melted material during reflood; and transition to debris of material from elements lying above. The porous debris model was implemented in the best-estimate numerical code RATEG/SVECHA/HEFEST developed for modeling thermal hydraulics and severe accident phenomena in a reactor. The model is used for calculation of QUENCH experiments. The results obtained by the model are compared to experimental data concerning different aspects of thermal behavior: thermal hydraulics of porous debris, radiative heat transfer in a porous medium, the generalized melting and refreezing
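
    For orientation, the volume-averaged solid-phase energy balance in such porous-debris models typically takes a form like the one below, with porosity φ, effective conductivity k_eff (including the radiative contribution), interfacial heat-transfer coefficient h_sf acting over specific interface area a_sf, and volumetric source Q. This generic form is an assumption for illustration, not the code's actual equation set:

        (1-\varphi)\,\rho_s c_s\,\frac{\partial T_s}{\partial t}
        = \nabla \cdot \left( k_{\mathrm{eff}}\, \nabla T_s \right)
        + h_{sf}\, a_{sf}\,\left( T_f - T_s \right) + Q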

  18. Utilization of high-accuracy FTICR-MS data in protein quantitation experiments

    Czech Academy of Sciences Publication Activity Database

    Strohalm, Martin; Novák, Petr; Pompach, Petr; Man, Petr; Kavan, Daniel; Witt, M.; Džubák, P.; Hajdúch, M.; Havlíček, Vladimír

    2009-01-01

    Roč. 44, č. 11 (2009), s. 1565-1570 ISSN 1076-5174 R&D Projects: GA MŠk LC07017 Institutional research plan: CEZ:AV0Z50200510 Keywords : mass spectrometry * protein quantitation * workflow Subject RIV: EE - Microbiology, Virology Impact factor: 3.411, year: 2009

  19. Opinion Formation by Social Influence: From Experiments to Modeling.

    Science.gov (United States)

    Chacoma, Andrés; Zanette, Damián H

    2015-01-01

    Predicting different forms of collective behavior in human populations, as the outcome of individual attitudes and their mutual influence, is a question of major interest in social sciences. In particular, processes of opinion formation have been theoretically modeled on the basis of a formal similarity with the dynamics of certain physical systems, giving rise to an extensive collection of mathematical models amenable to numerical simulation or even to exact solution. Empirical ground for these models is however largely missing, which confines them to the level of mere metaphors of the real phenomena they aim at explaining. In this paper we present results of an experiment which quantifies the change in the opinions given by a subject on a set of specific matters under the influence of others. The setup is a variant of a recently proposed experiment, where the subject's confidence in his or her opinion was evaluated as well. In our realization, which records the quantitative answers of 85 subjects to 20 questions before and after an influence event, the focus is put on characterizing the change in answers and confidence induced by such influence. Similarities and differences with the previous version of the experiment are highlighted. We find that confidence changes are to a large extent independent of any other recorded quantity, while opinion changes are strongly modulated by the original confidence. On the other hand, opinion changes are not influenced by the initial difference with the reference opinion. The typical time scales on which opinion varies are moreover substantially longer than those of confidence change. Experimental results are then used to estimate parameters for a dynamical agent-based model of opinion formation in a large population. In the context of the model, we study the convergence to full consensus and the effect of opinion leaders on the collective distribution of opinions.

  20. Argonne Bubble Experiment Thermal Model Development III

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-11

    This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development” and “Argonne Bubble Experiment Thermal Model Development II”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at beam power levels between 6 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was recorded. The previous report described the Monte-Carlo N-Particle (MCNP) calculations and Computational Fluid Dynamics (CFD) analysis performed on the as-built solution vessel geometry. The CFD simulations in the current analysis were performed using Ansys Fluent, Ver. 17.2. The same power profiles determined from MCNP calculations in earlier work were used for the 12 and 15 kW simulations. The primary goal of the current work is to calculate the temperature profiles for the 12 and 15 kW cases using reasonable estimates for the gas generation rate, based on images of the bubbles recorded during the irradiations. Temperature profiles resulting from the CFD calculations are compared to experimental measurements.

  1. Quantitative modeling of gene networks of biological systems using fuzzy Petri nets and fuzzy sets

    Directory of Open Access Journals (Sweden)

    Raed I. Hamed

    2018-01-01

    Full Text Available Quantitative modeling of biological systems has become an essential computational approach in the design of novel and the analysis of existing biological systems. However, kinetic data describing the system's dynamics need to be known in order to obtain relevant results with conventional modeling techniques, and such data are often imprecise or even impossible to obtain. Here, we present a quantitative fuzzy logic modeling approach that can cope with unknown kinetic data and thus produce relevant results even when dynamic information is incomplete or only vaguely defined. Moreover, the approach can be used in combination with existing state-of-the-art quantitative modeling techniques in just certain parts of the system, i.e., where data are absent. The case study of the methodology proposed in this paper is performed on a nine-gene network model. We propose a kind of fuzzy Petri net (FPN) model based on fuzzy sets to handle the quantitative modeling of biological systems. Tests of our model show that it is practical and quite powerful for data imitation and for the reasoning of fuzzy expert systems.
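
    A minimal sketch of the max-min firing semantics commonly used in fuzzy Petri nets may make the approach concrete; the places, rule, and certainty factor below are invented for illustration and are not taken from the paper:

        def fire_transition(marking, inputs, outputs, cf):
            """Fire one fuzzy Petri net transition: the produced truth degree
            is the minimum of the input degrees scaled by the rule's
            certainty factor (a common max-min FPN semantics)."""
            degree = min(marking[p] for p in inputs) * cf
            for p in outputs:
                marking[p] = max(marking[p], degree)  # accumulate by max
            return marking

        # Invented rule: gene A active AND repressor low => gene B expressed
        marking = {"A_active": 0.8, "repressor_low": 0.6, "B_expressed": 0.0}
        fire_transition(marking, ["A_active", "repressor_low"], ["B_expressed"], cf=0.9)
        print(marking["B_expressed"])  # 0.54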

  2. A Quantitative Geochemical Target for Modeling the Formation of the Earth and Moon

    Science.gov (United States)

    Boyce, Jeremy W.; Barnes, Jessica J.; McCubbin, Francis M.

    2017-01-01

    The past decade has been one of geochemical, isotopic, and computational advances that are bringing the laboratory measurement and computational modeling neighborhoods of the Earth-Moon community into ever closer proximity. We are now, however, in the position to become even better neighbors: modelers can generate testable hypotheses for geochemists, and geochemists can provide quantitative targets for modelers. Here we present a robust example of the latter based on Cl isotope measurements of mare basalts.

  3. A Quantitative bgl Operon Model for E. coli Requires BglF Conformational Change for Sugar Transport

    Science.gov (United States)

    Chopra, Paras; Bender, Andreas

    The bgl operon is responsible for the metabolism of β-glucoside sugars such as salicin or arbutin in E. coli. Its regulatory system involves both positive and negative feedback mechanisms and it can be assumed to be more complex than that of the more closely studied lac and trp operons. We have developed a quantitative model for the regulation of the bgl operon which is subject to in silico experiments investigating its behavior under different hypothetical conditions. Upon administration of 5mM salicin as an inducer our model shows 80-fold induction, which compares well with the 60-fold induction measured experimentally. Under practical conditions 5-10mM inducer are employed, which is in line with the minimum inducer concentration of 1mM required by our model. The necessity of BglF conformational change for sugar transport has been hypothesized previously, and in line with those hypotheses our model shows only minor induction if conformational change is not allowed. Overall, this first quantitative model for the bgl operon gives reasonable predictions that are close to experimental results (where measured). It will be further refined as values of the parameters are determined experimentally. The model was developed in Systems Biology Markup Language (SBML) and it is available from the authors and from the Biomodels repository [www.ebi.ac.uk/biomodels].
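
    The induction behaviour reported above can be caricatured with a toy inducible-operon ODE. This is a generic saturating-response sketch with made-up parameters, not the authors' SBML model (which includes the bgl system's positive and negative feedback loops):

        import numpy as np
        from scipy.integrate import odeint

        def operon(y, t, inducer, k_max, K, k_deg, basal):
            """Toy inducible-operon kinetics: expression rises with a
            saturating response to inducer and decays first order."""
            expr = y[0]
            return [basal + k_max * inducer / (K + inducer) - k_deg * expr]

        t = np.linspace(0.0, 100.0, 500)
        # Arbitrary parameters chosen only to illustrate fold induction
        off = odeint(operon, [0.01], t, args=(0.0, 1.0, 1.0, 0.1, 0.001))
        on = odeint(operon, [0.01], t, args=(5.0, 1.0, 1.0, 0.1, 0.001))
        fold_induction = on[-1, 0] / off[-1, 0]  # ratio of steady states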

  4. The 'OMITRON' and 'MODEL OMITRON' proposed experiments

    International Nuclear Information System (INIS)

    Sestero, A.

    1997-12-01

    In the present paper the main features of the OMITRON and MODEL OMITRON proposed high-field tokamaks are illustrated. Of the two, OMITRON is an ambitious experiment, aimed at attaining plasma burning conditions. Its key physics issues are discussed, and a comparison is carried out with the corresponding physics features of ignition experiments such as IGNITOR and ITER. Chief asset and chief challenge, in both OMITRON and MODEL OMITRON, is the conspicuous 20 Tesla toroidal field value on the plasma axis. The advanced engineering features which permit such a reward in terms of toroidal magnet performance are discussed in convenient depth and detail. As for the small, propaedeutic device MODEL OMITRON, among its goals one must rank the testing in vivo of key engineering issues that are vital for the larger and more expensive parent device. Besides that, however, as indicated by scoping studies performed ad hoc, the smaller machine is found capable of a number of quite interesting physics investigations in its own right.

  5. Implementation of a combined association-linkage model for quantitative traits in linear mixed model procedures of statistical packages

    NARCIS (Netherlands)

    Beem, A. Leo; Boomsma, Dorret I.

    2006-01-01

    A transmission disequilibrium test for quantitative traits which combines association and linkage analyses is currently available in several dedicated software packages. We describe how to implement such models in linear mixed model procedures that are available in widely used statistical packages

  6. Quantitative surface topography determination by Nomarski reflection microscopy. 2: Microscope modification, calibration, and planar sample experiments

    International Nuclear Information System (INIS)

    Hartman, J.S.; Gordon, R.L.; Lessor, D.L.

    1980-01-01

    The application of reflective Nomarski differential interference contrast microscopy for the determination of quantitative sample topography data is presented. The discussion includes a review of key theoretical results presented previously plus the experimental implementation of the concepts using a commercial Nomarski microscope. The experimental work included the modification and characterization of a commercial microscope to allow its use for obtaining quantitative sample topography data. System usage for the measurement of slopes on flat planar samples is also discussed. The discussion has been designed to provide the theoretical basis, a physical insight, and a cookbook procedure for implementation, to make these results of value both to those interested in the microscope theory and to those interested in its practical usage in the metallography laboratory.

  7. A proposed experiment on ball lightning model

    Energy Technology Data Exchange (ETDEWEB)

    Ignatovich, Vladimir K., E-mail: v.ignatovi@gmail.com [Frank Laboratory for Neutron Physics, Joint Institute for Nuclear Research, Dubna 141980 (Russian Federation); Ignatovich, Filipp V. [1565 Jefferson Rd., 420, Rochester, NY 14623 (United States)

    2011-09-19

    Highlights: → We propose to put a glass sphere inside an excited gas. → A light ray is then launched inside the glass in a whispering gallery mode. → If the light is resonant with the gas excitation, it is amplified at every reflection. → Within milliseconds the light in the glass is amplified and melts the glass. → A liquid shell held together by electrostriction forces is the ball lightning model. -- Abstract: We propose an experiment for strong light amplification at multiple total reflections from active gaseous media.

  8. Experiments for foam model development and validation.

    Energy Technology Data Exchange (ETDEWEB)

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F. (Honeywell Federal Manufacturing and Technologies, Kansas City Plant, Kansas City, MO); Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  9. Experience with the CMS Event Data Model

    Energy Technology Data Exchange (ETDEWEB)

    Elmer, P.; /Princeton U.; Hegner, B.; /CERN; Sexton-Kennedy, L.; /Fermilab

    2009-06-01

    The re-engineered CMS EDM was presented at CHEP in 2006. Since that time we have gained a lot of operational experience with the chosen model. We will present some of our findings, and attempt to evaluate how well it is meeting its goals. We will discuss some of the new features that have been added since 2006 as well as some of the problems that have been addressed. Also discussed is the level of adoption throughout CMS, which spans the trigger farm up to the final physics analysis. Future plans, in particular dealing with schema evolution and scaling, will be discussed briefly.

  10. The quantitative and qualitative analysis of knowledge production in physical education: the northeast Brazilian experience

    Directory of Open Access Journals (Sweden)

    Silvio Sánchez Gamboa

    2014-07-01

    Full Text Available The controversy between analytic and interpretative epistemologies is present in research on knowledge production and is expressed in the analysis of the methods used in scientific research. The thematic project KNOWLEDGE PRODUCTION IN PHYSICAL EDUCATION: the impact of postgraduate programs in the south and southeast of Brazil on the training and production of the masters and doctors who work in higher education in the northeast region, funded by the São Paulo Research Foundation (FAPESP, Proc. 2012/50019-7), addresses the relation between quantitative and qualitative methods and highlights the following. (1) The scientific field of physical education builds its theoretical and methodological bases on the natural sciences (physics, biology, mechanics) and the human and social sciences (pedagogy, sociology, psychology), which hold opposing views of the validity of knowledge: the former seek to preserve objectivity and use mathematical and statistical language, while the latter, charged with subjectivity, prefer interpretation and polysemic language. (2) The analysis draws on 750 questionnaires completed by teachers with master's and doctoral degrees who work in 126 physical education courses in nine states of northeast Brazil; it shows that the epistemological dualism is overcome by examining the object of knowledge, since phenomena reveal numerous determinations and dimensions, both quantitative and qualitative, which cannot be separated in the process of knowing. (3) On the basis of this dialectical unity of opposites, the research addressed the dilemma between quantitative and qualitative approaches through the construction of quantitative and qualitative indicators to characterize theoretical and methodological trends, the use of scientometric tools (data analysis) to identify the evolution of theories, authors, schools of thought and research networks, and the use of epistemological categories.

  11. Implementation of the model project: Ghanaian experience

    International Nuclear Information System (INIS)

    Schandorf, C.; Darko, E.O.; Yeboah, J.; Asiamah, S.D.

    2003-01-01

    Upgrading of the legal infrastructure has been the most time consuming and frustrating part of the implementation of the Model project due to the unstable system of governance and rule of law coupled with the low priority given to legislation on technical areas such as safe applications of Nuclear Science and Technology in medicine, industry, research and teaching. Dwindling Governmental financial support militated against physical and human resource infrastructure development and operational effectiveness. The trend over the last five years has been to strengthen the revenue generation base of the Radiation Protection Institute through good management practices to ensure a cost effective use of the limited available resources for a self-reliant and sustainable radiation and waste safety programme. The Ghanaian experience regarding the positive and negative aspects of the implementation of the Model Project is highlighted. (author)

  12. Bucky gel actuator displacement: experiment and model

    International Nuclear Information System (INIS)

    Ghamsari, A K; Zegeye, E; Woldesenbet, E; Jin, Y

    2013-01-01

    Bucky gel actuator (BGA) is a dry electroactive nanocomposite which is driven with a few volts. BGA’s remarkable features make this tri-layered actuator a potential candidate for morphing applications. However, most of these applications would require a better understanding of the effective parameters that influence the BGA displacement. In this study, various sets of experiments were designed to investigate the effect of several parameters on the maximum lateral displacement of BGA. Two input parameters, voltage and frequency, and three material/design parameters, carbon nanotube type, thickness, and weight fraction of constituents were selected. A new thickness ratio term was also introduced to study the role of individual layers on BGA displacement. A model was established to predict BGA maximum displacement based on the effect of these parameters. This model showed good agreement with reported results from the literature. In addition, an important factor in the design of BGA-based devices, lifetime, was investigated. (paper)

  13. Forces between permanent magnets: experiments and model

    International Nuclear Information System (INIS)

    González, Manuel I

    2017-01-01

    This work describes a very simple, low-cost experimental setup designed for measuring the force between permanent magnets. The experiment consists of placing one of the magnets on a balance, attaching the other magnet to a vertical height gauge, aligning carefully both magnets and measuring the load on the balance as a function of the gauge reading. A theoretical model is proposed to compute the force, assuming uniform magnetisation and based on laws and techniques accessible to undergraduate students. A comparison between the model and the experimental results is made, and good agreement is found at all distances investigated. In particular, it is also found that the force behaves as r^-4 at large distances, as expected. (paper)

  14. Forces between permanent magnets: experiments and model

    Science.gov (United States)

    González, Manuel I.

    2017-03-01

    This work describes a very simple, low-cost experimental setup designed for measuring the force between permanent magnets. The experiment consists of placing one of the magnets on a balance, attaching the other magnet to a vertical height gauge, aligning carefully both magnets and measuring the load on the balance as a function of the gauge reading. A theoretical model is proposed to compute the force, assuming uniform magnetisation and based on laws and techniques accessible to undergraduate students. A comparison between the model and the experimental results is made, and good agreement is found at all distances investigated. In particular, it is also found that the force behaves as r^-4 at large distances, as expected.
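
    The r^-4 behaviour at large separations is what the point-dipole approximation predicts. A short sketch of the coaxial dipole-dipole force law, valid far from the magnets (the paper's uniform-magnetisation model also covers the short-range regime):

        import numpy as np

        MU0 = 4e-7 * np.pi  # vacuum permeability (T m/A)

        def dipole_force(m1, m2, r):
            """Attractive force between two coaxial magnetic dipoles a
            distance r apart: F = 3 mu0 m1 m2 / (2 pi r^4), i.e. the
            r^-4 law seen at large separations."""
            return 3.0 * MU0 * m1 * m2 / (2.0 * np.pi * r**4)

        # Example: two ~1 A m^2 moments, 5 cm apart -> force in newtons
        print(dipole_force(1.0, 1.0, 0.05))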

  15. Modeling reproducibility of porescale multiphase flow experiments

    Science.gov (United States)

    Ling, B.; Tartakovsky, A. M.; Bao, J.; Oostrom, M.; Battiato, I.

    2017-12-01

    Multi-phase flow in porous media is widely encountered in geological systems. Understanding immiscible fluid displacement is crucial for processes including, but not limited to, CO2 sequestration, non-aqueous phase liquid contamination and oil recovery. Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rate. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

  16. Quantitative modelling of interaction of propafenone with sodium channels in cardiac cells

    Czech Academy of Sciences Publication Activity Database

    Pásek, Michal; Šimurda, J.

    2004-01-01

    Roč. 42, č. 2 (2004), s. 151-157 ISSN 0140-0118 R&D Projects: GA ČR GP204/02/D129 Institutional research plan: CEZ:AV0Z2076919 Keywords : cardiac cell * sodium current block * quantitative modelling Subject RIV: BO - Biophysics Impact factor: 1.070, year: 2004

  17. Quantitative analyses and modelling to support achievement of the 2020 goals for nine neglected tropical diseases

    NARCIS (Netherlands)

    T.D. Hollingsworth (T. Déirdre); E.R. Adams (Emily R.); R.M. Anderson (Roy); K. Atkins (Katherine); S. Bartsch (Sarah); M-G. Basáñez (María-Gloria); M. Behrend (Matthew); D.J. Blok (David); L.A.C. Chapman (Lloyd A. C.); L.E. Coffeng (Luc); O. Courtenay (Orin); R.E. Crump (Ron E.); S.J. de Vlas (Sake); A.P. Dobson (Andrew); L. Dyson (Louise); H. Farkas (Hajnal); A.P. Galvani (Alison P.); M. Gambhir (Manoj); D. Gurarie (David); M.A. Irvine (Michael A.); S. Jervis (Sarah); M.J. Keeling (Matt J.); L. Kelly-Hope (Louise); C. King (Charles); B.Y. Lee (Bruce Y.); E.A. le Rutte (Epke); T.M. Lietman (Thomas M.); M. Ndeffo-Mbah (Martial); G.F. Medley (Graham F.); E. Michael (Edwin); A. Pandey (Abhishek); J.K. Peterson (Jennifer K.); A. Pinsent (Amy); T.C. Porco (Travis C.); J.H. Richardus (Jan Hendrik); L. Reimer (Lisa); K.S. Rock (Kat S.); B.K. Singh (Brajendra K.); W.A. Stolk (Wilma); S. Swaminathan (Subramanian); S.J. Torr (Steve J.); J. Townsend (Jeffrey); J. Truscott (James); M. Walker (Martin); A. Zoueva (Alexandra)

    2015-01-01

    textabstractQuantitative analysis and mathematical models are useful tools in informing strategies to control or eliminate disease. Currently, there is an urgent need to develop these tools to inform policy to achieve the 2020 goals for neglected tropical diseases (NTDs). In this paper we give an

  18. A quantitative model of the cardiac ventricular cell incorporating the transverse-axial tubular system

    Czech Academy of Sciences Publication Activity Database

    Pásek, Michal; Christé, G.; Šimurda, J.

    2003-01-01

    Roč. 22, č. 3 (2003), s. 355-368 ISSN 0231-5882 R&D Projects: GA ČR GP204/02/D129 Institutional research plan: CEZ:AV0Z2076919 Keywords : cardiac cell * tubular system * quantitative modelling Subject RIV: BO - Biophysics Impact factor: 0.794, year: 2003

  19. Modeling of Carbon Migration During JET Injection Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Strachan, J. D.; Likonen, J.; Coad, P.; Rubel, M.; Widdowson, A.; Airila, M.; Andrew, P.; Brezinsek, S.; Corrigan, G.; Esser, H. G.; Jachmich, S.; Kallenbach, A.; Kirschner, A.; Kreter, A.; Matthews, G. F.; Philipps, V.; Pitts, R. A.; Spence, J.; Stamp, M.; Wiesen, S.

    2008-10-15

    JET has performed two dedicated carbon migration experiments on the final run day of separate campaigns (2001 and 2004) using ¹³CH₄ methane injected into repeated discharges. The EDGE2D/NIMBUS code modelled the carbon migration in both experiments. This paper describes this modelling and identifies a number of important migration pathways: (1) deposition and erosion near the injection location, (2) migration through the main chamber SOL, (3) migration through the private flux region aided by E x B drifts, and (4) neutral migration originating near the strike points. In H-Mode, type I ELMs are calculated to influence the migration by enhancing erosion during the ELM peak and increasing the long-range migration immediately following the ELM. The erosion/re-deposition cycle along the outer target leads to a multistep migration of ¹³C towards the separatrix which is called 'walking'. This walking created carbon neutrals at the outer strike point and led to ¹³C deposition in the private flux region. Although several migration pathways have been identified, quantitative analyses are hindered by experimental uncertainty in divertor leakage, and the lack of measurements at locations such as gaps and shadowed regions.

  20. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT.

    Science.gov (United States)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R; La Riviere, Patrick J; Alessio, Adam M

    2014-04-07

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivates this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)-1, cardiac output = 3, 5, 8 L min-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5% and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This

  1. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    Science.gov (United States)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivates this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)-1, cardiac output = 3, 5, 8 L min-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5% and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This suggests that
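
    Of the kinetic models compared in these records, the two-compartment exchange picture is the simplest to sketch. A toy implementation, assuming a gamma-variate arterial input and placeholder parameter values (the study's iodine-exchange generator and estimation pipeline are far more detailed):

        import numpy as np
        from scipy.integrate import odeint

        def two_compartment(y, t, ca_fn, F, PS, vp, ve):
            """Generic two-compartment exchange kinetics: a plasma space fed
            by the arterial input at flow F, exchanging with the interstitium
            at permeability-surface product PS."""
            cp, ce = y
            ca = ca_fn(t)
            dcp = (F * (ca - cp) + PS * (ce - cp)) / vp
            dce = PS * (cp - ce) / ve
            return [dcp, dce]

        # Gamma-variate arterial input, a common analytic stand-in
        ca_fn = lambda t: 5.0 * t * np.exp(-t / 4.0)

        t = np.linspace(0.0, 60.0, 600)
        sol = odeint(two_compartment, [0.0, 0.0], t,
                     args=(ca_fn, 1.0, 0.3, 0.05, 0.25))
        c_tissue = 0.05 * sol[:, 0] + 0.25 * sol[:, 1]  # total tissue signal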

  2. Ecology-Centered Experiences among Children and Adolescents: A Qualitative and Quantitative Analysis

    Science.gov (United States)

    Orton, Judy

    2013-01-01

    The present research involved two studies that considered "ecology-centered experiences" (i.e., experiences with living things) as a factor in children's environmental attitudes and behaviors and adolescents' ecological understanding. The first study (Study 1) examined how a community garden provides children in an urban setting the…

  3. Mathematical Model of Nicholson’s Experiment

    Directory of Open Access Journals (Sweden)

    Sergey D. Glyzin

    2017-01-01

    Full Text Available Considered is a mathematical model of insect population dynamics, and an attempt is made to explain the classical experimental results of Nicholson with its help. In the first section of the paper Nicholson's experiment is described and dynamic equations for its modeling are chosen. A priori estimates of the model parameters can be made more precise by means of a local analysis of the dynamical system, which is carried out in the second section. For the parameter values found there, the loss of stability of the equilibrium of the problem leads to the bifurcation of a stable two-dimensional torus. Numerical simulations based on the estimates from the second section allow the classical Nicholson experiment to be explained; its detailed theoretical substantiation is given in the last section. There, the largest Lyapunov exponent is computed for an attractor of the system. The way this exponent changes allows the region of the model parameter search to be narrowed further. Justification of this experiment was made possible only by the combination of analytical and numerical methods in studying the equations of insect population dynamics. At the same time, the analytical approach made it possible to perform the numerical analysis in a rather narrow region of the parameter space, into which one could not get on the basis of general considerations alone.
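
    For readers unfamiliar with the system, Nicholson's blowfly data are classically described by the delayed-recruitment equation of Gurney and co-workers, and it is presumably a model of this family that the paper analyses. A minimal Euler-integration sketch with the classic parameter values:

        import numpy as np

        def nicholson_blowflies(P, N0, delta, tau, dt, t_end, history=500.0):
            """Euler integration of the Nicholson blowflies delay equation
            N'(t) = P*N(t-tau)*exp(-N(t-tau)/N0) - delta*N(t)."""
            n_lag = int(round(tau / dt))
            n_steps = int(round(t_end / dt))
            N = np.full(n_steps + n_lag, history)  # constant pre-history
            for i in range(n_lag, n_steps + n_lag - 1):
                Nd = N[i - n_lag]  # delayed population
                N[i + 1] = N[i] + dt * (P * Nd * np.exp(-Nd / N0) - delta * N[i])
            return N[n_lag:]

        # Classic parameter regime producing sustained quasi-cyclic outbreaks
        traj = nicholson_blowflies(P=8.0, N0=600.0, delta=0.175, tau=15.0,
                                   dt=0.01, t_end=300.0)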

  4. Wedge Experiment Modeling and Simulation for Reactive Flow Model Calibration

    Science.gov (United States)

    Maestas, Joseph T.; Dorgan, Robert J.; Sutherland, Gerrit T.

    2017-06-01

    Wedge experiments are a typical method for generating pop-plot data (run-to-detonation distance versus input shock pressure), which is used to assess an explosive material's initiation behavior. Such data can be utilized to calibrate reactive flow models by running hydrocode simulations and successively tweaking model parameters until a match with experiment is achieved. Typical simulations are performed in 1D and use a flyer impact to achieve the prescribed shock loading pressure. In this effort, a wedge experiment performed at the Army Research Lab (ARL) was modeled using CTH (SNL hydrocode) in 1D, 2D, and 3D space in order to determine whether there was any justification for using simplified models. A simulation was also performed using the BCAT code (CTH companion tool) that assumes a plate impact shock loading. Results from the simulations were compared to experimental data and show that the shock imparted into an explosive specimen is accurately captured with 2D and 3D simulations, but changes significantly in 1D space and with the BCAT tool. The difference in shock profile is shown to only affect numerical predictions for large run distances. This is attributed to incorrectly capturing the energy fluence for detonation waves versus flat shock loading. Portions of this work were funded through the Joint Insensitive Munitions Technology Program.

  5. Statistical analysis of probabilistic models of software product lines with quantitative constraints

    DEFF Research Database (Denmark)

    Beek, M.H. ter; Legay, A.; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra whose operational behaviour interacts with a store of constraints, neatly separating product configuration from product behaviour. The resulting probabilistic configurations and behaviour converge seamlessly in a semantics based on DTMCs, thus enabling quantitative analyses ranging from the likelihood of certain behaviour to the expected average cost of products. This is supported by a Maude implementation of QFLan, integrated with the SMT solver Z3 and the distributed statistical model checker MultiVeStA. Our approach is illustrated with a bikes product line case study.
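
    The core idea of statistical model checking over a DTMC semantics can be sketched in a few lines: estimate the probability of a property by sampling runs of the chain rather than solving it exactly. A toy example with an invented three-state chain (QFLan and MultiVeStA handle far richer models and property languages):

        import numpy as np

        def smc_estimate(P, start, target, horizon, n_runs, seed=1):
            """Statistical model checking on a DTMC: Monte Carlo estimate of
            the probability of reaching `target` within `horizon` steps."""
            rng = np.random.default_rng(seed)
            n = P.shape[0]
            hits = 0
            for _ in range(n_runs):
                s = start
                for _ in range(horizon):
                    if s == target:
                        break
                    s = rng.choice(n, p=P[s])
                hits += int(s == target)
            return hits / n_runs

        # Invented three-state chain; state 2 is the (absorbing) target
        P = np.array([[0.5, 0.4, 0.1],
                      [0.0, 0.6, 0.4],
                      [0.0, 0.0, 1.0]])
        print(smc_estimate(P, start=0, target=2, horizon=20, n_runs=10000))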

  6. Development of life story experience (LSE) scales for migrant dentists in Australia: a sequential qualitative-quantitative study.

    Science.gov (United States)

    Balasubramanian, M; Spencer, A J; Short, S D; Watkins, K; Chrisopoulos, S; Brennan, D S

    2016-09-01

    The integration of qualitative and quantitative approaches introduces new avenues to bridge strengths and address weaknesses of both methods. To develop measure(s) for migrant dentist experiences in Australia through a mixed methods approach. The sequential qualitative-quantitative design involved first the harvesting of data items from a qualitative study, followed by a national survey of migrant dentists in Australia. Statements representing unique experiences in migrant dentists' life stories were deployed in the survey questionnaire, using a five-point Likert scale. Factor analysis was used to examine component factors. Eighty-two statements from 51 participants were harvested from the qualitative analysis. A total of 1,022 of 1,977 migrant dentists (response rate 54.5%) returned completed questionnaires. Factor analysis supported an initial eight-factor solution; further scale development and reliability analysis led to five scales with a final list of 38 life story experience (LSE) items. Three scales were based on home-country events: health system and general lifestyle concerns (LSE1; 10 items), society and culture (LSE4; 4 items) and career development (LSE5; 4 items). Two scales included migrant experiences in Australia: appreciation towards the Australian way of life (LSE2; 13 items) and settlement concerns (LSE3; 7 items). The five life story experience scales provide the necessary conceptual clarity and empirical grounding to explore migrant dentist experiences in Australia. Being based on original migrant dentist narrations, these scales have the potential to offer in-depth insights for policy makers and support future research on dentist migration.

  7. [Feasibility of the extended application of near infrared universal quantitative models].

    Science.gov (United States)

    Lei, De-Qing; Hu, Chang-Qin; Feng, Yan-Chun; Feng, Fang

    2010-11-01

    Construction of a successful near infrared analysis model is a complex task. It consumes a lot of manpower and material resources, and is restricted by sample collection and model optimization. It is therefore important to study the extended application of existing near infrared (NIR) models. In this paper, a cephradine capsules universal quantitative model was used as an example to study the feasibility of its extended application. Slope/bias correction and piecewise direct standardization correction methods were used to adapt the universal model to predict intermediates in the manufacturing processes of cephradine capsules, such as the content of the powder blend or granules. The results showed that the corrected NIR universal quantitative model can be used for process control, although the results of model correction by slope/bias or piecewise direct standardization were not as good as those of model updating. They also indicated that the model corrected by slope/bias is better than that corrected by piecewise direct standardization. Model correction provides a new application for NIR universal models in process control.
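
    Slope/bias correction itself is a one-line linear recalibration: fit a slope and intercept between the universal model's predictions and reference values on a small transfer set, then apply that mapping to new predictions. A minimal sketch with invented transfer-set numbers:

        import numpy as np

        def slope_bias_correction(y_pred, y_ref):
            """Fit y_ref ~ slope*y_pred + bias on a small transfer set and
            return a corrector for new model predictions."""
            slope, bias = np.polyfit(y_pred, y_ref, 1)
            return lambda y_new: slope * y_new + bias

        # Invented transfer set: NIR predictions vs. reference assay values
        y_pred = np.array([95.2, 98.1, 100.4, 102.0, 104.8])
        y_ref = np.array([94.0, 97.5, 100.0, 102.2, 105.5])
        correct = slope_bias_correction(y_pred, y_ref)
        print(correct(101.3))  # corrected prediction for a new sample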

  8. Determinations of Carbon Dioxide by Titration: New Experiments for General, Physical, and Quantitative Analysis Courses.

    Science.gov (United States)

    Crossno, S. K.; And Others

    1996-01-01

    Presents experiments involving the analysis of commercial products such as carbonated beverages and antacids that illustrate the principles of acid-base reactions and present interesting problems in stoichiometry for students. (JRH)

  9. Photogrammetry experiments with a model eye.

    Science.gov (United States)

    Rosenthal, A R; Falconer, D G; Pieper, I

    1980-01-01

    Digital photogrammetry was performed on stereophotographs of the optic nerve head of a modified Zeiss model eye in which optic cups of varying depths could be simulated. Experiments were undertaken to determine the impact of both photographic and ocular variables on the photogrammetric measurements of cup depth. The photogrammetric procedure tolerates refocusing, repositioning, and realignment as well as small variations in the geometric position of the camera. Progressive underestimation of cup depth was observed with increasing myopia, while progressive overestimation was noted with increasing hyperopia. High cylindrical errors at axis 90 degrees led to significant errors in cup depth estimates, while high cylindrical errors at axis 180 degrees did not materially affect the accuracy of the analysis. Finally, cup depths were seriously underestimated when the pupil diameter was less than 5.0 mm. PMID:7448139

  10. Immunization experiments using the rodent caries model.

    Science.gov (United States)

    Smith, D J; Taubman, M A

    1976-04-01

    Taken together, the immunization experiments which have been performed in the rat caries model system appear to suggest a correlation between the presence of salivary antibody to S mutans and reductions in caries caused by these bacteria. However, the multifactorial nature of this disease does not permit at present the conclusion that the presence of this antibody is both necessary and sufficient to give rise to the demonstrated effects on pathogenesis. To clarify the role of salivary antibody, several refinements may be required in the current model. Immunization procedures that elicit only a local antibody response would both simplify interpretations of effects and would be more desirable for use as a vaccine. Such procedures might include intraductal installation of antigen in the parotid gland which has been demonstrated to result in this type of response. An additional refinement stems from the knowledge that the kinds of immunization procedures currently used stimulated both cellular immune and soluble antibody systems, potentially giving rise to a rather broad spectrum of immune responses. Therefore, it might be useful to study the effects on S mutans pathogenesis in rats in which certain of these responses have been repressed, for example, by thymectomy, antilymphocyte serum, and so on. Also, each of these approaches would be measurably enhanced by more sensitive techniques to monitor immunological events in the oral cavity. Refinements in the selection and use of relevant antigens of S mutans also are necessary to delineate the in vivo mechanism of immunological interference in the pathogenesis of cariogenic streptococci. Approaches involve the use of purified GTF antigens or cell surface antigens both in the investigation of these mechanisms in in vitro models using antibody specifically directed to these antigens and in rat immunization experiments using immunogenic preparations of these materials. In addition, alterations in the diet and challenge

  11. Development of quantitative atomic modeling for tungsten transport study Using LHD plasma with tungsten pellet injection

    International Nuclear Information System (INIS)

    Murakami, I.; Sakaue, H.A.; Suzuki, C.; Kato, D.; Goto, M.; Tamura, N.; Sudo, S.; Morita, S.

    2014-10-01

    Quantitative tungsten study with reliable atomic modeling is important for the successful achievement of ITER and fusion reactors. We have developed tungsten atomic modeling for understanding the tungsten behavior in fusion plasmas. The modeling is applied to the analysis of tungsten spectra observed from currentless plasmas of the Large Helical Device (LHD) with tungsten pellet injection. We found that extreme ultraviolet (EUV) lines of W24+ to W33+ ions are very sensitive to electron temperature (Te) and useful for examining the tungsten behavior in edge plasmas. Based on the first quantitative analysis of the measured spatial profile of the W44+ ion, the tungsten concentration is determined to be n(W44+)/ne = 1.4×10⁻⁴ and the total radiation loss is estimated as ∼4 MW, roughly half the total NBI power. (author)

  12. Group Active Engagements Using Quantitative Modeling of Physiology Concepts in Large-Enrollment Biology Classes

    Directory of Open Access Journals (Sweden)

    Karen L. Carleton

    2016-12-01

    Full Text Available Organismal Biology is the third introductory biology course taught at the University of Maryland. Students learn about the geometric, physical, chemical, and thermodynamic constraints that are common to all life, and their implications for the evolution of multicellular organisms based on a common genetic “toolbox.” An additional goal is helping students to improve their scientific logic and comfort with quantitative modeling. We recently developed group active engagement exercises (GAEs) for this Organismal Biology class. Currently, our class is built around twelve GAE activities implemented in an auditorium lecture hall in a large enrollment class. The GAEs examine scientific concepts using a variety of models including physical models, qualitative models, and Excel-based quantitative models. Three quantitative GAEs give students an opportunity to build their understanding of key physiological ideas. (1) The Escape from Planet Ranvier exercise reinforces student understanding that membrane permeability means that ions move through open channels in the membrane. (2) The Stressing and Straining exercise requires students to quantify the elastic modulus from data gathered either in class or from scientific literature. (3) In the Leveraging Your Options exercise, students learn about lever systems and apply this knowledge to biological systems.

  13. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
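
    The sub-model idea lends itself to a compact sketch: train separate regressions on composition-restricted subsets plus a full-range model, then blend the sub-model predictions according to the full-range estimate. The split threshold, component count, and linear blending ramp below are illustrative assumptions, not the ChemCam team's exact recipe.

```python
# Minimal sketch of the "sub-model" method with PLS regression:
# low- and high-range sub-models blended via a full-range first pass.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def train_submodels(X, y, split=50.0, n_components=5):
    low, high = y < split, y >= split
    return {
        "full": PLSRegression(n_components).fit(X, y),
        "low":  PLSRegression(n_components).fit(X[low], y[low]),
        "high": PLSRegression(n_components).fit(X[high], y[high]),
    }

def blended_predict(models, X, split=50.0, width=10.0):
    ref = models["full"].predict(X).ravel()   # first-pass estimate picks the blend
    lo = models["low"].predict(X).ravel()
    hi = models["high"].predict(X).ravel()
    # weight ramps linearly from the low to the high sub-model around the split
    w = np.clip((ref - (split - width)) / (2 * width), 0.0, 1.0)
    return (1 - w) * lo + w * hi
```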

  14. Quantitative Velocity Field Measurements in Reduced-Gravity Combustion Science and Fluid Physics Experiments

    Science.gov (United States)

    Greenberg, Paul S.; Wernet, Mark P.

    1999-01-01

    Systems have been developed and demonstrated for performing quantitative velocity measurements in reduced gravity combustion science and fluid physics investigations. The unique constraints and operational environments inherent to reduced-gravity experimental facilities pose special challenges to the development of hardware and software systems. Both point and planar velocimetric capabilities are described, with particular attention being given to the development of systems to support the International Space Station laboratory. Emphasis has been placed on optical methods, primarily arising from the sensitivity of the phenomena of interest to intrusive probes. Limitations on available power, volume, data storage, and attendant expertise have motivated the use of solid-state sources and detectors, as well as efficient analysis capabilities emphasizing interactive data display and parameter control.

  15. Delirium superimposed on dementia: A quantitative and qualitative evaluation of informal caregivers and health care staff experience.

    Science.gov (United States)

    Morandi, Alessandro; Lucchi, Elena; Turco, Renato; Morghen, Sara; Guerini, Fabio; Santi, Rossana; Gentile, Simona; Meagher, David; Voyer, Philippe; Fick, Donna M; Schmitt, Eva M; Inouye, Sharon K; Trabucchi, Marco; Bellelli, Giuseppe

    2015-10-01

    Delirium superimposed on dementia is common and potentially distressing for patients, caregivers, and health care staff. We quantitatively and qualitatively assessed the experience of informal caregivers and staff (staff nurses, nurse aides, physical therapists) caring for patients with delirium superimposed on dementia. Caregivers' and staff experience was evaluated three days after delirium superimposed on dementia resolution (T0) with a standardized questionnaire (quantitative interview) and open-ended questions (qualitative interview); caregivers were also evaluated at 1-month follow-up (T1). A total of 74 subjects were included: 33 caregivers and 41 health care staff (8 staff nurses, 20 physical therapists, 13 staff nurse aides/health care assistants). Overall, at both T0 and T1, the distress level was moderate among caregivers and mild among health care staff. Caregivers reported, at both T0 and T1, higher distress related to deficits of sustained attention and orientation, hypokinesia/psychomotor retardation, incoherence and delusions. The distress of health care staff related to each specific item of the Delirium-O-Meter was relatively low, except for the physical therapists, who reported higher levels of distress on deficits of sustained/shifting attention and orientation, apathy, hypokinesia/psychomotor retardation, incoherence, delusions, hallucinations, and anxiety/fear. The qualitative evaluation identified important categories of caregivers' and staff feelings related to the delirium experience. This study provides information on the implications of the experience of delirium for caregivers and staff. The distress related to delirium superimposed on dementia underlines the importance of providing continuous training, support and experience for both caregivers and health care staff to improve the care of patients with delirium superimposed on dementia. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Quantitative chemogenomics: machine-learning models of protein-ligand interaction.

    Science.gov (United States)

    Andersson, Claes R; Gustafsson, Mats G; Strömbergsson, Helena

    2011-01-01

    Chemogenomics is an emerging interdisciplinary field that lies at the interface of biology, chemistry, and informatics. Most of the currently used drugs are small molecules that interact with proteins. Understanding protein-ligand interaction is therefore central to drug discovery and design. In the subfield of chemogenomics known as proteochemometrics, protein-ligand-interaction models are induced from data matrices that consist of both protein and ligand information along with some experimentally measured variable. The two general aims of this quantitative multi-structure-property-relationship modeling (QMSPR) approach are to exploit sparse/incomplete information sources and to obtain more general models covering larger parts of the protein-ligand space than traditional approaches that focus mainly on specific targets or ligands. The data matrices, usually obtained from multiple sparse/incomplete sources, typically contain series of proteins and ligands together with quantitative information about their interactions. A useful model should ideally be easy to interpret and generalize well to new unseen protein-ligand combinations. Resolving this requires sophisticated machine-learning methods for model induction, combined with adequate validation. This review is intended to provide a guide to methods and data sources suitable for this kind of protein-ligand-interaction modeling. An overview of the modeling process is presented, including data collection, protein and ligand descriptor computation, data preprocessing, machine-learning-model induction and validation. Concerns and issues specific to each step in this kind of data-driven modeling will be discussed. © 2011 Bentham Science Publishers

  18. Quantitative Analysis of Intra Urban Growth Modeling using socio economic agents by combining cellular automata model with agent based model

    Science.gov (United States)

    Singh, V. K.; Jha, A. K.; Gupta, K.; Srivastav, S. K.

    2017-12-01

    Recent studies indicate that modeling at finer spatial resolutions significantly improves the representation of urban land use dynamics. Geo-computational models such as cellular automata and agent based models have provided evidence for quantifying urban growth patterns within an urban boundary. In recent studies, socio-economic factors such as demography, education rate, household density, current-year parcel price, and distance to roads, schools, hospitals, commercial centers and police stations are considered to be the major factors influencing the Land Use Land Cover (LULC) pattern of a city. These factors bear a unidirectional relation to the land use pattern, which makes it difficult to analyze the spatial aspects of model results both quantitatively and qualitatively. In this study, a cellular automata model is combined with an agent based model to evaluate the impact of socio-economic factors on the land use pattern. Dehradun, an Indian city, is selected as the case study. Socio-economic factors were collected from a field survey, the Census of India, and the Directorate of Economic Census, Uttarakhand, India. A 3×3 simulation window is used to assess the impact on LULC. Cellular automata model results are examined to identify hot spot areas within the urban area, while the agent based model uses a logistic regression approach to identify the correlation between each factor and LULC and to classify the available area into low density, medium density, or high density residential, or commercial area. In the modeling phase, transition rules, neighborhood effects, and cell change factors are used to improve the representation of built-up classes. Significant improvement is observed in the built-up classes, from 84% to 89%. After incorporating the agent based model with the cellular automata model, the accuracy improved further, from 89% to 94%, across three urban classes (low density, medium density, and commercial).
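
    The core cellular-automata step described above can be sketched in a few lines: a cell converts to built-up when its 3×3 neighborhood is sufficiently developed and a suitability score (standing in for the socio-economic factors) is high enough. The thresholds and the suitability layer are assumptions for illustration only.

```python
# Illustrative CA transition step for urban growth on a 0/1 built-up grid.
import numpy as np

def ca_step(built, suitability, density_thr=3, suit_thr=0.6):
    """built: 0/1 grid; suitability: same-shape grid of scores in [0, 1]."""
    padded = np.pad(built, 1)
    n, m = built.shape
    # number of built-up neighbours in the 3x3 window (excluding the cell itself)
    neighbours = sum(
        padded[1 + di : n + 1 + di, 1 + dj : m + 1 + dj]
        for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)
    )
    convert = (built == 0) & (neighbours >= density_thr) & (suitability > suit_thr)
    return np.where(convert, 1, built)
```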

  19. ITER transient consequences for material damage: modelling versus experiments

    Science.gov (United States)

    Bazylev, B.; Janeschitz, G.; Landman, I.; Pestchanyi, S.; Loarte, A.; Federici, G.; Merola, M.; Linke, J.; Zhitlukhin, A.; Podkovyrov, V.; Klimov, N.; Safronov, V.

    2007-03-01

    Carbon-fibre composite (CFC) and tungsten macrobrush armours are foreseen as PFCs for the ITER divertor. In ITER the main mechanisms of metallic armour damage remain surface melting and melt motion erosion. In the case of CFC armour, due to the rather different heat conductivities of CFC fibres, a noticeable erosion of the PAN bundles may occur at rather small heat loads. Experiments carried out in the plasma gun facility QSPA-T for ITER-like edge localized mode (ELM) heat loads also demonstrated significant erosion of the frontal and lateral brush edges. Numerical simulations of the CFC and tungsten (W) macrobrush target damage, accounting for the heat loads at the face and lateral brush edges, were carried out for QSPA-T conditions using the three-dimensional (3D) code PHEMOBRID. The modelling results of CFC damage are in good qualitative and quantitative agreement with the experiments. An estimation of the droplet splashing caused by the Kelvin-Helmholtz (KH) instability was performed.

  1. Quantitative Investigations of Biodiesel Fuel Using Infrared Spectroscopy: An Instrumental Analysis Experiment for Undergraduate Chemistry Students

    Science.gov (United States)

    Ault, Andrew P.; Pomeroy, Robert

    2012-01-01

    Biodiesel has gained attention in recent years as a renewable fuel source due to its reduced greenhouse gas and particulate emissions, and it can be produced within the United States. A laboratory experiment designed for students in an upper-division undergraduate laboratory is described to study biodiesel production and biodiesel mixing with…

  2. Quantitative evaluation of macromolecular crystallization experiments using 1,8-ANS fluorescence

    NARCIS (Netherlands)

    Watts, David; Müller-Dieckmann, Jochen; Tsakanova, Gohar; Lamzin, Victor S; Groves, Matthew R

    Modern X-ray structure analysis and advances in high-throughput robotics have allowed a significant increase in the number of conditions screened for a given sample volume. An efficient evaluation of the increased amount of crystallization trials in order to identify successful experiments is now

  3. Genesis of the professional-patient relationship in early practical experience: qualitative and quantitative study

    NARCIS (Netherlands)

    Scavenius, Michael; Schmidt, Sonja; Klazinga, Niek

    2006-01-01

    CONTEXT: As a rule, undergraduate medical students experience everyday work in health care as spectators. They are not allowed to participate in real-life interaction between professionals and patients. We report on an exception to this rule. OBJECTIVES: The aim of this study was to examine

  4. A Quantitative Analysis of the Work Experiences of Adults with Visual Impairments in Nigeria

    Science.gov (United States)

    Wolffe, Karen E.; Ajuwon, Paul M.; Kelly, Stacy M.

    2013-01-01

    Introduction: Worldwide, people with visual impairments often struggle to gain employment. This study attempts to closely evaluate the work experiences of employed individuals with visual impairments living in one of the world's most populous developing nations, Nigeria. Methods: The researchers developed a questionnaire that assessed personal and…

  5. The MIQE Guidelines: Minimum Information for Publication of Quantitative Real-Time PCR Experiments

    Czech Academy of Sciences Publication Activity Database

    Bustin, S.A.; Benes, V.; Garson, J.A.; Hellemans, J.; Huggett, J.; Kubista, Mikael; Mueller, R.; Nolan, T.; Pfaffl, M.W.; Shipley, G.L.; Vandesompele, J.; Wittwer, C.T.

    2009-01-01

    Roč. 55, č. 4 (2009), s. 611-622 ISSN 0009-9147 R&D Projects: GA AV ČR IAA500520809 Institutional research plan: CEZ:AV0Z50520701 Keywords : qPCR * MIQE * publication of experiments data Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 6.263, year: 2009

  6. Gravimetric Analysis of Bismuth in Bismuth Subsalicylate Tablets: A Versatile Quantitative Experiment for Undergraduate Laboratories

    Science.gov (United States)

    Davis, Eric; Cheung, Ken; Pauls, Steve; Dick, Jonathan; Roth, Elijah; Zalewski, Nicole; Veldhuizen, Christopher; Coeler, Joel

    2015-01-01

    In this laboratory experiment, lower- and upper-division students dissolved bismuth subsalicylate tablets in acid and precipitated the resultant Bi[superscript 3+] in solution with sodium phosphate for a gravimetric determination of bismuth subsalicylate in the tablets. With a labeled concentration of 262 mg/tablet, the combined data from three…

  7. [Study on temperature correctional models of quantitative analysis with near infrared spectroscopy].

    Science.gov (United States)

    Zhang, Jun; Chen, Hua-cai; Chen, Xing-dan

    2005-06-01

    The effect of environment temperature on quantitative analysis by near infrared spectroscopy was studied. The temperature correction model was calibrated with 45 wheat samples at different environment temperatures, with temperature included as an external variable. The constant temperature model was calibrated with 45 wheat samples at the same temperature. The predicted results of the two models for the protein contents of wheat samples at different temperatures were compared. The results showed that the mean standard error of prediction (SEP) of the temperature correction model was 0.333, whereas the SEP of the constant temperature (22 degrees C) model increased as the temperature difference grew, reaching 0.602 when the model was used at 4 degrees C. This suggests that the temperature correction model improves the analysis precision.
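
    The correction strategy amounts to appending the measurement temperature to each spectrum as an extra predictor before calibration. A minimal sketch, assuming a PLS calibration and hypothetical variable names:

```python
# Temperature as an external variable in a PLS calibration (illustrative).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def fit_temperature_corrected(spectra, temps, protein, n_components=8):
    # append the measurement temperature as one extra predictor column
    X = np.column_stack([spectra, temps])
    return PLSRegression(n_components=n_components).fit(X, protein)

def predict_protein(model, spectra, temps):
    X = np.column_stack([spectra, temps])
    return model.predict(X).ravel()
```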

  8. Study on quantitative reliability analysis by multilevel flow models for nuclear power plants

    International Nuclear Information System (INIS)

    Yang Ming; Zhang Zhijian

    2011-01-01

    Multilevel Flow Models (MFM) is a goal-oriented system modeling method. MFM explicitly describes how a system performs the required functions under stated conditions for a stated period of time. This paper presents a novel system reliability analysis method based on MFM (MRA). The proposed method allows describing the system knowledge at different levels of abstraction, which makes the reliability model easy to understand, establish, modify and extend. The success probabilities of all main goals and sub-goals can be obtained from a single quantitative analysis. The proposed method is suitable for system analysis and scheme comparison for complex industrial systems such as nuclear power plants. (authors)
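
    The one-pass propagation of success probabilities can be illustrated with a toy goal tree: a goal succeeds if all of its AND-children succeed, or if any OR-child succeeds, assuming independent basic functions. The tree below is invented for illustration and is not an MFM of a real plant.

```python
# Toy goal-oriented reliability propagation over an AND/OR goal tree.
def goal_probability(node):
    kind, children = node["kind"], node.get("children", [])
    if kind == "basic":
        return node["p"]
    probs = [goal_probability(c) for c in children]
    if kind == "and":                 # all sub-goals must succeed
        prod = 1.0
        for p in probs:
            prod *= p
        return prod
    fail = 1.0                        # "or": 1 - product of failure probabilities
    for p in probs:
        fail *= 1.0 - p
    return 1.0 - fail

cooling = {"kind": "or", "children": [
    {"kind": "basic", "p": 0.95},     # main pump (hypothetical)
    {"kind": "basic", "p": 0.90},     # backup pump (hypothetical)
]}
plant = {"kind": "and", "children": [cooling, {"kind": "basic", "p": 0.99}]}
print(goal_probability(plant))        # success probability of the top goal
```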

  9. A quantitative analysis of instabilities in the linear chiral sigma model

    International Nuclear Information System (INIS)

    Nemes, M.C.; Nielsen, M.; Oliveira, M.M. de; Providencia, J. da

    1990-08-01

    We present a method to construct a complete set of stationary states corresponding to small amplitude motion which naturally includes the continuum solution. The energy-weighted sum rule (EWSR) is shown to provide a quantitative criterion on the importance of instabilities, which are known to occur in nonasymptotically free theories. Our results for the linear σ model should be valid for a large class of models. A unified description of baryon and meson properties in terms of the linear σ model is also given. (author)

  10. Modelling Framework and the Quantitative Analysis of Distributed Energy Resources in Future Distribution Networks

    DEFF Research Database (Denmark)

    Han, Xue; Sandels, Claes; Zhu, Kun

    2013-01-01

    operation will be changed by various parameters of DERs. This article proposed a modelling framework for an overview analysis on the correlation between DERs. Furthermore, to validate the framework, the authors described the reference models of different categories of DERs with their unique characteristics......, comprising distributed generation, active demand and electric vehicles. Subsequently, quantitative analysis was made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation...

  11. QSAR DataBank repository: open and linked qualitative and quantitative structure-activity relationship models.

    Science.gov (United States)

    Ruusmann, V; Sild, S; Maran, U

    2015-01-01

    Structure-activity relationship models have been used to gain insight into chemical and physical processes in biomedicine, toxicology, biotechnology, etc. for almost a century. They have been recognized as valuable tools in decision support workflows for qualitative and quantitative predictions. The main obstacle preventing broader adoption of quantitative structure-activity relationships [(Q)SARs] is that published models are still relatively difficult to discover, retrieve and redeploy in a modern computer-oriented environment. This publication describes a digital repository that makes in silico (Q)SAR-type descriptive and predictive models archivable, citable and usable in a novel way for most common research and applied science purposes. The QSAR DataBank (QsarDB) repository aims to make the processes and outcomes of in silico modelling work transparent, reproducible and accessible. Briefly, the models are represented in the QsarDB data format and stored in a content-aware repository (a.k.a. smart repository). Content awareness has two dimensions. First, models are organized into collections and then into collection hierarchies based on their metadata. Second, the repository is not only an environment for browsing and downloading models (the QDB archive) but also offers integrated services, such as model analysis and visualization and prediction making. The QsarDB repository unlocks the potential of descriptive and predictive in silico (Q)SAR-type models by allowing new and different types of collaboration between model developers and model users. The key enabling factor is the representation of (Q)SAR models in the QsarDB data format, which makes it easy to preserve and share all relevant data, information and knowledge. Model developers can become more productive by effectively reusing prior art. Model users can make more confident decisions by relying on supporting information that is larger and more diverse than before. Furthermore, the smart repository

  12. Vulnerability of Russian regions to natural risk: experience of quantitative assessment

    Directory of Open Access Journals (Sweden)

    E. Petrova

    2006-01-01

    Full Text Available One of the important tracks leading to natural risk prevention, disaster mitigation or the reduction of losses due to natural hazards is the vulnerability assessment of an 'at-risk' region. The majority of researchers propose to assess vulnerability according to an expert evaluation of several qualitative characteristics, scoring each of them usually using three ratings: low, average, and high. Unlike these investigations, we attempted a quantitative vulnerability assessment using multidimensional statistical methods. Cluster analysis for all 89 Russian regions revealed five different types of region, each characterized by a single (rarely two) prevailing factor causing increased vulnerability. These factors are: the sensitivity of the technosphere to unfavorable influences; a 'human factor'; a high volume of stored toxic waste, which increases the possibility of natural disasters (NDs) with serious consequences; low per capita GRP, which determines reduced prevention and protection costs; and the heightened susceptibility of regions to natural disasters, which can be aggravated by unfavorable social processes. The proposed methods permitted us to find differences in the prevailing risk (vulnerability) factor for the region types, which helps to show where risk management should focus.
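
    The clustering step is conventional and easy to sketch: standardize the regional indicators, then partition the 89 regions into five types with k-means. The indicator matrix below is synthetic; the paper's actual variable set is not reproduced.

```python
# Region typing by k-means on standardized vulnerability indicators (synthetic data).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
indicators = rng.normal(size=(89, 5))   # 89 regions x 5 indicators (placeholder)

X = StandardScaler().fit_transform(indicators)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
for k in range(5):
    print(f"type {k}: {np.sum(labels == k)} regions")
```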

  13. Refining the statistical model for quantitative immunostaining of surface-functionalized nanoparticles by AFM.

    Science.gov (United States)

    MacCuspie, Robert I; Gorka, Danielle E

    2013-10-01

    Recently, an atomic force microscopy (AFM)-based approach for quantifying the number of biological molecules conjugated to a nanoparticle surface at low number densities was reported. The number of target molecules conjugated to the analyte nanoparticle can be determined with single nanoparticle fidelity using antibody-mediated self-assembly to decorate the analyte nanoparticles with probe nanoparticles (i.e., quantitative immunostaining). This work refines the statistical models used to quantitatively interpret the observations when AFM is used to image the resulting structures. The refinements add terms to the previous statistical models to account for the physical sizes of the analyte nanoparticles, conjugated molecules, antibodies, and probe nanoparticles. Thus, a more physically realistic statistical computation can be implemented for a given sample of known qualitative composition, using the software scripts provided. Example AFM data sets, using horseradish peroxidase conjugated to gold nanoparticles, are presented to illustrate how to implement this method successfully.
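
    One plausible reading of such a counting model, sketched under loud assumptions: if an analyte particle carries n conjugated molecules and each is independently decorated by a probe nanoparticle with efficiency q (reduced by, e.g., steric blocking from finite probe and antibody sizes), the observed probe count per analyte is binomial. This generic sketch is not the paper's exact model.

```python
# Binomial counting sketch for probe nanoparticles observed per analyte particle.
from math import comb

def probe_count_pmf(n_conjugated, q):
    # P(observe k probes | n conjugated molecules, per-site decoration efficiency q)
    return [comb(n_conjugated, k) * q**k * (1 - q) ** (n_conjugated - k)
            for k in range(n_conjugated + 1)]

pmf = probe_count_pmf(n_conjugated=4, q=0.6)   # illustrative numbers
print([round(p, 3) for p in pmf])              # k = 0..4
```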

  14. Tannin structural elucidation and quantitative ³¹P NMR analysis. 1. Model compounds.

    Science.gov (United States)

    Melone, Federica; Saladino, Raffaele; Lange, Heiko; Crestini, Claudia

    2013-10-02

    Tannins and flavonoids are secondary metabolites of plants that display a wide array of biological activities. This peculiarity is related to the inhibition of extracellular enzymes that occurs through the complexation of peptides by tannins. Not only the nature of these interactions, but more fundamentally also the structure of these heterogeneous polyphenolic molecules are not completely clear. This first paper describes the development of a new analytical method for the structural characterization of tannins on the basis of tannin model compounds employing an in situ labeling of all labile H groups (aliphatic OH, phenolic OH, and carboxylic acids) with a phosphorus reagent. The ³¹P NMR analysis of ³¹P-labeled samples allowed the unprecedented quantitative and qualitative structural characterization of hydrolyzable tannins, proanthocyanidins, and catechin tannin model compounds, forming the foundations for the quantitative structural elucidation of a variety of actual tannin samples described in part 2 of this series.

  15. Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model

    Science.gov (United States)

    Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi

    2017-09-01

    Residues of harmful dyes such as Auramine O (AO) in herbal and food products threaten human health, so fast and sensitive techniques for detecting these residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used for the quantitative determination of AO, combined with an improved partial least-squares regression (PLSR) model, in this paper. The absorbance of herbal samples with different concentrations was obtained by THz-TDS in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences between concentrations, provided a clear criterion for selecting the input interval, and improved the accuracy of the detection results. The experimental results indicate that the combination of THz spectroscopy and 2DCOS-PLSR is an effective quantitative analysis method.
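
    The 2DCOS ingredient reduces to a covariance map across frequencies: mean-center the concentration-perturbed spectra and form the synchronous correlation matrix, from which input intervals for the PLSR model can be chosen. A minimal sketch:

```python
# Synchronous 2D correlation spectrum (the core of 2DCOS).
import numpy as np

def synchronous_2dcos(spectra):
    """spectra: (n_concentrations, n_frequencies) absorbance matrix."""
    dyn = spectra - spectra.mean(axis=0)          # dynamic spectra
    return dyn.T @ dyn / (spectra.shape[0] - 1)   # (n_freq, n_freq) correlation map
```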

  16. Infrequent near death experiences in severe brain injury survivors - A quantitative and qualitative study

    Directory of Open Access Journals (Sweden)

    Yongmei Hou

    2013-01-01

    Full Text Available Background: Near death experiences (NDE) are receiving increasing attention from the scientific community, not only because they provide a glimpse of the complexity of mind-brain interactions in 'near-death' circumstances but also because they have significant and long-lasting effects on various psychological aspects of the survivors. The overall incidence of NDEs reported in the literature has varied widely, from a modest figure of 10% to around 35%, even up to an incredible figure of 72%, in persons who have faced a close brush with death. Somewhat similar to this range of difference in incidences are the differences prevalent in the opinions that theorists and researchers around the world harbor for explaining this phenomenon. Nonetheless, objective evidence has most strongly supported physiological theories. A wide range of physiological processes have been targeted to explain NDEs. These include cerebral anoxia, chemical alterations like hypercapnia, presence of endorphins, ketamine, and serotonin, or abnormal activity of the temporal lobe or the limbic system. Although the physiological theories of NDEs have revolved around derangements of the brain, no study to date has evaluated near-death experiences in patients where the specific injury has been to the brain. Most have evaluated NDEs in cardiac-arrest patients. Post-traumatic coma is one such state about which the literature seriously lacks any information related to NDEs. Patients recollecting any memory of their post-traumatic coma are valuable assets for NDE researchers and need special attention. Materials and Methods: Our present study was aimed at collecting this valuable information from survivors of severe head injury after a prolonged coma. The study was conducted in the head injury department of Guangdong 999 Brain Hospital, Guangzhou, China. Patients included in the study were those who recovered from the posttraumatic

  17. Estimating marginal properties of quantitative real-time PCR data using nonlinear mixed models

    DEFF Research Database (Denmark)

    Gerhard, Daniel; Bremer, Melanie; Ritz, Christian

    2014-01-01

    A unified modeling framework based on a set of nonlinear mixed models is proposed for flexible modeling of gene expression in real-time PCR experiments. Focus is on estimating the marginal or population-based derived parameters: cycle thresholds and ΔΔc(t), but retaining the conditional mixed mod...
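
    The per-well building block of such models is a sigmoidal fit to the fluorescence curve, from which a cycle threshold is derived. The sketch below fits a four-parameter logistic with scipy; the mixed-model layer that shares parameters across wells is omitted, and the threshold choice is an assumption.

```python
# Four-parameter logistic fit to a qPCR fluorescence curve and Ct read-off.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, bottom, top, slope, xmid):
    return bottom + (top - bottom) / (1.0 + np.exp(-slope * (x - xmid)))

def fit_ct(cycles, fluorescence, threshold=0.2):
    p0 = [fluorescence.min(), fluorescence.max(), 0.5, np.median(cycles)]
    params, _ = curve_fit(logistic4, cycles, fluorescence, p0=p0, maxfev=10000)
    bottom, top, slope, xmid = params
    frac = threshold * (top - bottom)     # threshold level above the baseline
    # invert the logistic at fluorescence = bottom + frac
    return xmid - np.log((top - bottom) / frac - 1.0) / slope
```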

  18. [Application of DOSC combined with SBC in batches transfer of NIR quantitative model].

    Science.gov (United States)

    Jia, Yi-Fei; Zhang, Ying-Ying; Xu, Bing; Wang, An-Dong; Zhan, Xue-Yan

    2017-06-01

    A near infrared model established under one set of conditions can be applied to new sample states, environmental conditions or instrument states through model transfer. Spectral background correction and model updating are the two main data processing approaches to NIR quantitative model transfer. Orthogonal signal regression (OSR) is a background-correction method in which virtual standard spectra are used to fit a linear relation between master-batch and slave-batch spectra, mapping the slave-batch spectra onto the master-batch spectra to realize the transfer of the near infrared quantitative model. However, this approach requires the virtual standard spectra to be representative; otherwise large errors occur in the regression. Therefore, a direct orthogonal signal correction-slope and bias correction (DOSC-SBC) method was proposed in this paper to address the failure of PLS models to accurately predict the content of target components across different batches of the formula, and to analyze the differences in spectral background between samples from different sources and the resulting prediction errors of PLS models. The DOSC method was used to eliminate spectral background differences unrelated to the target value; combined with the SBC method, the systematic errors between different batches of samples were corrected so that the NIR quantitative model could be transferred between batches. After the DOSC-SBC method was applied to the water extraction and ethanol precipitation steps in the preparation of Lonicerae Japonicae Flos, the prediction error for new batches of samples decreased from 32.3% to 7.30% and from 237% to 4.34%, with significantly improved prediction accuracy, so that the target component in new batch samples can be quickly quantified. The DOSC-SBC model transfer method has realized the transfer of NIR quantitative models between different batches, and this method does
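
    The SBC half of the method is simple enough to sketch: regress the master model's predictions on reference values for a few transfer samples from the new batch, then apply the fitted slope and bias to subsequent predictions. DOSC preprocessing of the spectra is assumed to happen upstream and is not shown.

```python
# Slope and bias correction (SBC) for model transfer between batches.
import numpy as np

def fit_sbc(y_pred_transfer, y_ref_transfer):
    # linear fit of reference values against the master model's predictions
    slope, bias = np.polyfit(y_pred_transfer, y_ref_transfer, deg=1)
    return slope, bias

def apply_sbc(y_pred_new, slope, bias):
    return slope * y_pred_new + bias
```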

  19. Characterization of Cavitation Effects in Therapeutic Ultrasound: Sonophoresis Experiments and Quantitative Emission Measurements

    Science.gov (United States)

    Rich, Kyle Thomas

    Fundamental to the use of ultrasound for therapeutic benefit is a comprehensive understanding and identification of the underlying mechanisms. Specifically, consequential bioeffects during therapeutic ultrasound commonly coincide with the onset of microbubble cavitation, especially for drug-delivery applications. Hence, there is a need for monitoring and characterization techniques that provide quantitative metrics for assessing cavitation activity during ultrasound exposure in order to monitor treatment progress, identify interactions of cavitation with tissue, and provide dosimetry metrics for avoiding potentially harmful exposures, both for therapeutic and diagnostic purposes. The primary goal of the work presented in this dissertation was to characterize the role of cavitation during sonophoresis using quantitative and system-independent approaches. First, this goal was accomplished using traditional passive cavitation detection techniques to monitor cavitation emissions during in vitro intermediate- (IFS, insonation frequency f₀ = 0.1–1 MHz) and high-frequency sonophoresis (HFS, f₀ > 1 MHz) treatments of porcine skin samples in Chapter 2. The relative intensity of subharmonic acoustic emissions from stable cavitation occurring near the skin surface was measured using a single-element PCD and was shown to correspond with reductions in skin resistivity, a surrogate measure of permeability, for all sonophoresis treatments. However, the acoustic emissions measured during sonophoresis provided incommensurable quantities between the different treatment regimes due to unaccounted frequency-dependent variations in the sensitivity of the PCD and diffraction effects in the cavitation-radiated pressure field received by the PCD. Second, methods were developed and employed to characterize the wideband absolute receive sensitivity of single-element focused and unfocused receivers in Chapter 3. By employing these characterization techniques and by accounting for the

  20. Variable selection in near infrared spectroscopy for quantitative models of homologous analogs of cephalosporins

    Directory of Open Access Journals (Sweden)

    Yan-Chun Feng

    2014-07-01

    Full Text Available Two universal spectral ranges (4550–4100 cm⁻¹ and 6190–5510 cm⁻¹) for the construction of quantitative models of homologous analogs of cephalosporins were proposed by evaluating the performance of five spectral ranges and their combinations, using three data sets of cephalosporins for injection, i.e., cefuroxime sodium, ceftriaxone sodium and cefoperazone sodium. Subsequently, the proposed ranges were validated using eight calibration sets of other homologous analogs of cephalosporins for injection, namely cefmenoxime hydrochloride, ceftezole sodium, cefmetazole, cefoxitin sodium, cefotaxime sodium, cefradine, cephazolin sodium and ceftizoxime sodium. All the quantitative models constructed for the eight kinds of cephalosporins using these universal ranges could fulfill the requirements for quick quantification. After that, the competitive adaptive reweighted sampling (CARS) algorithm and infrared (IR)–near infrared (NIR) two-dimensional (2D) correlation spectral analysis were used to establish the scientific basis of these two spectral ranges as the universal regions for the construction of quantitative models of cephalosporins. The CARS algorithm demonstrated that the ranges of 4550–4100 cm⁻¹ and 6190–5510 cm⁻¹ included key wavenumbers that could be attributed to content changes of cephalosporins. The IR–NIR 2D spectral analysis showed that certain wavenumbers in these two regions have strong correlations to the structures of those cephalosporins that are easy to degrade.

  1. Qualitative and quantitative guidelines for the comparison of environmental model predictions

    International Nuclear Information System (INIS)

    Scott, M.

    1995-03-01

    The question of how to assess or compare predictions from a number of models is one of concern in the validation of models, in understanding the effects of different models and model parameterizations on model output, and ultimately in assessing model reliability. Comparison of model predictions with observed data is the basic tool of model validation while comparison of predictions amongst different models provides one measure of model credibility. The guidance provided here is intended to provide qualitative and quantitative approaches (including graphical and statistical techniques) to such comparisons for use within the BIOMOVS II project. It is hoped that others may find it useful. It contains little technical information on the actual methods but several references are provided for the interested reader. The guidelines are illustrated on data from the VAMP CB scenario. Unfortunately, these data do not permit all of the possible approaches to be demonstrated since predicted uncertainties were not provided. The questions considered are concerned with a) intercomparison of model predictions and b) comparison of model predictions with the observed data. A series of examples illustrating some of the different types of data structure and some possible analyses have been constructed. A bibliography of references on model validation is provided. It is important to note that the results of the various techniques discussed here, whether qualitative or quantitative, should not be considered in isolation. Overall model performance must also include an evaluation of model structure and formulation, i.e. conceptual model uncertainties, and results for performance measures must be interpreted in this context. Consider a number of models which are used to provide predictions of a number of quantities at a number of time points. In the case of the VAMP CB scenario, the results include predictions of total deposition of Cs-137 and time dependent concentrations in various
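
    Two of the simple quantitative comparison measures such guidelines typically recommend are the root-mean-square error and the geometric-mean bias between predictions and observations. A minimal sketch, assuming positive-valued quantities such as activity concentrations:

```python
# Basic prediction-vs-observation comparison statistics.
import numpy as np

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def geometric_mean_bias(pred, obs):
    # > 1 indicates systematic overprediction; requires positive values
    return float(np.exp(np.mean(np.log(pred / obs))))
```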

  2. Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome.

    Science.gov (United States)

    Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina

    2018-01-01

    The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer, and (iii) risk assessment for breast cancer.
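
    The two-level pattern described above can be caricatured in a few lines: reduce each image to a panel of simple intensity features, then feed the features to a machine-learning model. Real CaPTk panels (texture, shape, kinetics, connectomics) are far richer; this is only the skeleton, with hypothetical inputs.

```python
# Two-level skeleton: feature extraction, then a multivariate ML model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def intensity_features(image):
    # level 1: a tiny intensity-histogram panel per image
    v = np.asarray(image, dtype=float).ravel()
    return [v.mean(), v.std(), np.percentile(v, 10), np.percentile(v, 90)]

def train_signature(images, outcomes):
    # level 2: machine-learning model over the feature panel
    X = np.array([intensity_features(im) for im in images])
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, outcomes)
```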

  3. Modeling thermal dilepton radiation for SIS experiments

    Energy Technology Data Exchange (ETDEWEB)

    Seck, Florian [TU Darmstadt (Germany); Collaboration: HADES-Collaboration

    2016-07-01

    Dileptons are radiated during the whole time evolution of a heavy-ion collision and leave the interaction zone unaffected. Thus they carry valuable information about the hot and dense medium created in those collisions to the detector. Realistic dilepton emission rates and an accurate description of the fireball's space-time evolution are needed to properly describe the contribution of in-medium signals to the dilepton invariant mass spectrum. In this presentation we demonstrate how this can be achieved at SIS collision energies. The framework is implemented into the event generator Pluto which is used by the HADES and CBM experiments to produce their hadronic freeze-out cocktails. With the help of an coarse-graining approach to model the fireball evolution and pertinent dilepton rates via a parametrization of the Rapp-Wambach in-medium ρ meson spectral function, the thermal contribution to the spectrum can be calculated. The results also enable us to get an estimate of the fireball lifetime at SIS18 energies.

  4. Systematic Analysis of Hollow Fiber Model of Tuberculosis Experiments.

    Science.gov (United States)

    Pasipanodya, Jotam G; Nuermberger, Eric; Romero, Klaus; Hanna, Debra; Gumbo, Tawanda

    2015-08-15

    The in vitro hollow fiber system model of tuberculosis (HFS-TB), in tandem with Monte Carlo experiments, was introduced more than a decade ago. Since then, it has been used to perform a large number of tuberculosis pharmacokinetics/pharmacodynamics (PK/PD) studies that have not been subjected to systematic analysis. We performed a literature search to identify all HFS-TB experiments published between 1 January 2000 and 31 December 2012. There was no exclusion of articles by language. Bias minimization was according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Steps for reporting systematic reviews were followed. There were 22 HFS-TB studies published, of which 12 were combination therapy studies and 10 were monotherapy studies. There were 4 stand-alone Monte Carlo experiments that utilized quantitative output from the HFS-TB. All experiments reported drug pharmacokinetics, which recapitulated those encountered in humans. HFS-TB studies included log-phase growth studies under ambient air, semidormant bacteria at pH 5.8, and nonreplicating persisters at low oxygen tension of ≤ 10 parts per billion. The studies identified antibiotic exposures associated with optimal kill of Mycobacterium tuberculosis and suppression of acquired drug resistance (ADR) and informed predictions about optimal clinical doses, expected performance of standard doses and regimens in patients, and expected rates of ADR, as well as a proposal of new susceptibility breakpoints. The HFS-TB model offers the ability to perform PK/PD studies including humanlike drug exposures, to identify bactericidal and sterilizing effect rates, and to identify exposures associated with suppression of drug resistance. Because of the ability to perform repetitive sampling from the same unit over time, the HFS-TB vastly improves statistical power and facilitates the execution of time-to-event analyses and repeated event analyses, as well as dynamic system pharmacology mathematical

  5. Toward a quantitative understanding of the Wnt/ β -catenin pathway through simulation and experiment

    KAUST Repository

    Lloyd-Lewis, Bethan

    2013-03-29

    Wnt signaling regulates cell survival, proliferation, and differentiation throughout development and is aberrantly regulated in cancer. The pathway is activated when Wnt ligands bind to specific receptors on the cell surface, resulting in the stabilization and nuclear accumulation of the transcriptional co-activator β-catenin. Mathematical and computational models have been used to study the spatial and temporal regulation of the Wnt/β-catenin pathway and to investigate the functional impact of mutations in key components. Such models range in complexity, from time-dependent, ordinary differential equations that describe the biochemical interactions between key pathway components within a single cell, to complex, multiscale models that incorporate the role of the Wnt/β-catenin pathway target genes in tissue homeostasis and carcinogenesis. This review aims to summarize recent progress in mathematical modeling of the Wnt pathway and to highlight new biological results that could form the basis for future theoretical investigations designed to increase the utility of theoretical models of Wnt signaling in the biomedical arena. © 2013 Wiley Periodicals, Inc.

  6. The Quantitative Evaluation of Functional Neuroimaging Experiments: Mutual Information Learning Curves

    DEFF Research Database (Denmark)

    Kjems, Ulrik; Hansen, Lars Kai; Anderson, Jon

    2002-01-01

    Learning curves are presented as an unbiased means for evaluating the performance of models for neuroimaging data analysis. The learning curve measures the predictive performance in terms of the generalization or prediction error as a function of the number of independent examples (e.g., subjects)......[¹⁵O]water data sets, although the framework is equally valid for multisubject fMRI studies. We demonstrate how the prediction error can be expressed as the mutual information between the scan and the scan label, measured in units of bits. The mutual information learning curve can be used...... to evaluate the impact of different methodological choices, e.g., classification label schemes, preprocessing choices. Another application for the learning curve is to examine the model performance using bias/variance considerations enabling the researcher to determine if the model performance is limited...
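
    The quantity at the heart of the learning curve is the mutual information, in bits, between scan and scan label; estimating it from a confusion matrix of held-out predictions at each training-set size traces out the curve. The plug-in estimator below is used for brevity; the paper's estimators are more careful.

```python
# Plug-in mutual information (in bits) from a confusion matrix of counts.
import numpy as np

def mutual_information_bits(confusion):
    """confusion: (n_true_labels, n_predicted_labels) count matrix."""
    p = confusion / confusion.sum()                    # joint distribution
    px = p.sum(axis=1, keepdims=True)                  # marginal over true labels
    py = p.sum(axis=0, keepdims=True)                  # marginal over predictions
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))
```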

  7. Fixing the cracks in the crystal ball: A maturity model for quantitative risk assessment

    International Nuclear Information System (INIS)

    Rae, Andrew; Alexander, Rob; McDermid, John

    2014-01-01

    Quantitative risk assessment (QRA) is widely practiced in system safety, but there is insufficient evidence that QRA in general is fit for purpose. Defenders of QRA draw a distinction between poor or misused QRA and correct, appropriately used QRA, but this distinction is only useful if we have robust ways to identify the flaws in an individual QRA. In this paper we present a comprehensive maturity model for QRA which covers all the potential flaws discussed in the risk assessment literature and in a collection of risk assessment peer reviews. We provide initial validation of the completeness and realism of the model. Our risk assessment maturity model provides a way to prioritise both process development within an organisation and empirical research within the QRA community.

  8. Comparison of semi-quantitative and quantitative dynamic contrast-enhanced MRI evaluations of vertebral marrow perfusion in a rat osteoporosis model.

    Science.gov (United States)

    Zhu, Jingqi; Xiong, Zuogang; Zhang, Jiulong; Qiu, Yuyou; Hua, Ting; Tang, Guangyu

    2017-11-14

    This study aims to investigate the technical feasibility of semi-quantitative and quantitative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in the assessment of longitudinal changes of marrow perfusion in a rat osteoporosis model, using bone mineral density (BMD) measured by micro-computed tomography (micro-CT) and histopathology as the gold standards. Fifty rats were randomly assigned to the control group (n=25) and ovariectomy (OVX) group whose bilateral ovaries were excised (n=25). Semi-quantitative and quantitative DCE-MRI, micro-CT, and histopathological examinations were performed on lumbar vertebrae at baseline and 3, 6, 9, and 12 weeks after operation. The differences between the two groups in terms of the semi-quantitative DCE-MRI parameter (maximum enhancement, Emax), quantitative DCE-MRI parameters (volume transfer constant, Ktrans; interstitial volume, Ve; and efflux rate constant, Kep), micro-CT parameter (BMD), and histopathological parameter (microvessel density, MVD) were compared at each of the time points using an independent-sample t test. The differences in these parameters between baseline and other time points in each group were assessed via Bonferroni's multiple comparison test. A Pearson correlation analysis was applied to assess the relationships between DCE-MRI, micro-CT, and histopathological parameters. In the OVX group, the Emax values decreased significantly compared with those of the control group at weeks 6 and 9 (p=0.003 and 0.004, respectively). The Ktrans values decreased significantly compared with those of the control group from week 3 onward (p < 0.05). Compared with semi-quantitative DCE-MRI, the quantitative DCE-MRI parameter Ktrans is a more sensitive and accurate index for detecting early reduced perfusion in osteoporotic bone.
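
    The quantitative parameters named above are usually obtained by fitting a pharmacokinetic model to the enhancement curves; the standard Tofts model, Ct(t) = Ktrans ∫₀ᵗ Cp(u) e^(−Kep(t−u)) du with Ve = Ktrans/Kep, is a common choice. Whether this exact variant matches the study's software is an assumption of the sketch below.

```python
# Standard Tofts model fit for Ktrans, Kep and Ve (assumes uniform time sampling).
import numpy as np
from scipy.optimize import curve_fit

def tofts(t, ktrans, kep, cp):
    # tissue concentration: plasma input convolved with an exponential kernel
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

def fit_tofts(t, ct, cp):
    model = lambda t_, ktrans, kep: tofts(t_, ktrans, kep, cp)
    (ktrans, kep), _ = curve_fit(model, t, ct, p0=[0.1, 0.5],
                                 bounds=([0, 0], [5, 10]))
    return ktrans, kep, ktrans / kep   # Ktrans, Kep, Ve
```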

  9. Atmospheric statistical dynamic models. Climate experiments: albedo experiments with a zonal atmospheric model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    The zonal model experiments with modified surface boundary conditions suggest an initial chain of feedback processes that is largest at the site of the perturbation: deforestation and/or desertification → increased surface albedo → reduced surface absorption of solar radiation → surface cooling and reduced evaporation → reduced convective activity → reduced precipitation and latent heat release → cooling of upper troposphere and increased tropospheric lapse rates → general global cooling and reduced precipitation. As indicated above, although the two experiments give similar overall global results, the location of the perturbation plays an important role in determining the response of the global circulation. These two-dimensional model results are also consistent with three-dimensional model experiments. These results have tempted us to consider the possibility that self-induced growth of the subtropical deserts could serve as a possible mechanism to cause the initial global cooling that then initiates a glacial advance thus activating the positive feedback loop involving ice-albedo feedback (also self-perpetuating). Reversal of the cycle sets in when the advancing ice cover forces the wave-cyclone tracks far enough equatorward to quench (revegetate) the subtropical deserts

  10. Curating and Preparing High-Throughput Screening Data for Quantitative Structure-Activity Relationship Modeling.

    Science.gov (United States)

    Kim, Marlene T; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2016-01-01

    Publicly available bioassay data often contains errors. Curating massive bioassay data, especially high-throughput screening (HTS) data, for Quantitative Structure-Activity Relationship (QSAR) modeling requires the assistance of automated data curation tools. Using automated data curation tools is beneficial to users, especially those without prior computer skills, because many platforms have been developed and optimized based on standardized requirements. As a result, users do not need to extensively configure the curation tool prior to applying it. In this chapter, a freely available automatic tool to curate and prepare HTS data for QSAR modeling purposes will be described.
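
    A few of the routine steps such curation tools automate can be sketched with pandas: normalizing activity labels, dropping exact duplicates, and discarding structures reported with conflicting activities. The column names are assumptions about the input table, not a real tool's schema.

```python
# Illustrative HTS curation steps on a table with 'smiles' and 'activity' columns.
import pandas as pd

def curate_hts(df):
    df = df.copy()
    df["activity"] = df["activity"].str.strip().str.lower()   # normalize labels
    df = df.drop_duplicates(subset=["smiles", "activity"])    # exact duplicates
    # structures reported both 'active' and 'inactive' are ambiguous: drop them
    n_labels = df.groupby("smiles")["activity"].transform("nunique")
    return df[n_labels == 1].reset_index(drop=True)
```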

  11. A quantitative assessment of the cultural knowledge, attitudes, and experiences of junior and senior dietetics students.

    Science.gov (United States)

    McArthur, Laura H; Greathouse, Karen R; Smith, Erskine R; Holbert, Donald

    2011-01-01

    To assess the cultural competence of dietetics majors. Self-administered questionnaire. Classrooms at 7 universities. Two hundred eighty-three students (98 juniors, 34.6%; 185 seniors, 65.4%) recruited during class time. Knowledge was measured using a multiple-choice test, attitudes were assessed using scales, and experiences were measured using a list of activities. Descriptive statistics were obtained on all variables. Correlation analyses identified associations between competencies. Statistical significance was set at P < .05. The intercultural activities engaged in most often were eating ethnic food and watching films about other cultures, whereas those undertaken least often were completing a study abroad program or an internship abroad. These students would benefit from more interactive intercultural learning opportunities to enhance their knowledge base and communication skills. Copyright © 2011 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  12. Creating Research-Rich Learning Experiences and Quantitative Skills in a 1st Year Earth Systems Course

    Science.gov (United States)

    King, P. L.; Eggins, S.; Jones, S.

    2014-12-01

    We are creating a 1st year Earth Systems course at the Australian National University that is built around research-rich learning experiences and quantitative skills. The course attracts top students, including ≤20% indigenous/foreign students; nonetheless, students' backgrounds in math and science vary considerably, posing challenges for learning. We are addressing this issue and aiming to improve knowledge retention and deep learning by changing our teaching approach. In 2013-2014, we modified the weekly course structure to a 1hr lecture; a 2hr workshop with hands-on activities; a 2hr lab; an assessment piece covering all face-to-face activities; and a 1hr tutorial. Our new approach was aimed at: 1) building student confidence with data analysis and quantitative skills through increasingly difficult tasks in science, math, physics, chemistry, climate science and biology; 2) creating effective learning groups using name tags and a classroom with 8-person tiered tables; 3) requiring students to apply new knowledge to new situations in group activities, two 1-day field trips and assessment items; 4) using pre-lab and pre-workshop exercises to promote prior engagement with key concepts; 5) adding open-ended experiments to foster structured 'scientific play' or enquiry and creativity; and 6) aligning the assessment with the learning outcomes and ensuring that it contains authentic and challenging southern hemisphere problems. Students were asked to design their own ocean current experiment in the lab and we were astounded by their ingenuity: they simulated the ocean currents off Antarctica; varied water density to verify an equation; and examined the effect of wind and seafloor topography on currents. To evaluate changes in student learning, we conducted surveys in 2013 and 2014. In 2014, we found higher levels of student engagement with the course: >~80% attendance rates and >~70% satisfaction (20% neutral). The 2014 cohort felt that they were more competent in writing

  13. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    Science.gov (United States)

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence Model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data, and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to environmental research centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article. © 2013 Elsevier B.V. All rights reserved.

  14. A Quantitative Risk Evaluation Model for Network Security Based on Body Temperature

    Directory of Open Access Journals (Sweden)

    Y. P. Jiang

    2016-01-01

    Full Text Available Traditional network security risk evaluation models have certain limitations in real-time performance, accuracy, and characterization. This paper proposes a quantitative risk evaluation model for network security based on body temperature (QREM-BT), which draws on the mechanism of the biological immune system, where an imbalance of the immune system results in body temperature changes. First, an r-contiguous-bits nonconstant matching rate algorithm is used to improve the detection quality of the detectors and to reduce the missing rate and false detection rate. The dynamic evolution process of the detector is then described in detail. The mechanism of increased antibody concentration, which comprises activating mature detectors and cloning memory detectors, is mainly used to assess network risk caused by various species of attacks. On this basis, the paper establishes both the equation for the antibody concentration increase factor and a quantitative model for calculating antibody concentration. Finally, because the mechanism of antibody concentration change is reasonable and effective in reflecting network risk, a body temperature evaluation model is established. The simulation results show that, according to the body temperature value, the proposed model assesses network security risk more effectively and in real time.
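
    The r-contiguous-bits rule invoked above is a standard matching rule from artificial-immune-system models. As a rough illustration only (the paper's nonconstant matching rate algorithm is a refinement not reproduced here), a minimal Python matcher might look like this; the bit strings and the value of r are invented:

```python
def r_contiguous_match(detector: str, antigen: str, r: int) -> bool:
    """True if two equal-length bit strings agree in at least r
    contiguous positions (the classic r-contiguous-bits rule)."""
    assert len(detector) == len(antigen)
    run = 0
    for d, a in zip(detector, antigen):
        run = run + 1 if d == a else 0
        if run >= r:
            return True
    return False

# Illustrative check: the first three positions match, so r = 3 succeeds.
print(r_contiguous_match("101100", "101010", r=3))  # True
print(r_contiguous_match("101100", "101010", r=4))  # False
```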

  15. Quantitative structure-activity relationship (QSAR) for insecticides: development of predictive in vivo insecticide activity models.

    Science.gov (United States)

    Naik, P K; Singh, T; Singh, H

    2009-07-01

    Quantitative structure-activity relationship (QSAR) analyses were performed independently on data sets belonging to two groups of insecticides, namely the organophosphates and carbamates. Several types of descriptors, including topological, spatial, thermodynamic, information-content, lead-likeness and E-state indices, were used to derive quantitative relationships between insecticide activities and structural properties of chemicals. A systematic search approach based on missing-value, zero-value, simple-correlation and multi-collinearity tests, as well as the use of a genetic algorithm, allowed the optimal selection of the descriptors used to generate the models. The QSAR models developed for both the organophosphate and carbamate groups revealed good predictability, with r(2) values of 0.949 and 0.838 as well as [image omitted] values of 0.890 and 0.765, respectively. In addition, a linear correlation was observed between the predicted and experimental LD(50) values for the test set data, with r(2) of 0.871 and 0.788 for the organophosphate and carbamate groups respectively, indicating that the prediction accuracy of the QSAR models was acceptable. The models were also tested successfully against external validation criteria. The QSAR models developed in this study should help the further design of novel potent insecticides.

  16. A Structured Review of Quantitative Models of the Pharmaceutical Supply Chain

    Directory of Open Access Journals (Sweden)

    Carlos Franco

    2017-01-01

    Full Text Available The aim of this review is to identify and provide a structured overview of quantitative models of the pharmaceutical supply chain, a subject not exhaustively covered in previous reviews on healthcare logistics, which deal mostly with quantitative models in healthcare generally or with logistics studies in hospitals. The models are classified into three categories: network design, inventory models, and optimization of the pharmaceutical supply chain. A taxonomy for each category is presented, describing the principal features of each echelon included in the review; this taxonomy allows readers to easily locate a paper based on the actors of the pharmaceutical supply chain it considers. The search process covered research articles published in the databases between 1984 and November 2016; in total, 46 studies were included. In the review process we found that, across the three fields, the most common source of uncertainty modeled is demand (56% of the cases). We conclude that most articles in the literature focus on optimization of the pharmaceutical supply chain and on inventory models, whereas supply chain network design remains less deeply studied.

  17. Quantitative immunohistochemical method for detection of wheat protein in model sausage

    Directory of Open Access Journals (Sweden)

    Zuzana Řezáčová Lukášková

    2014-01-01

    Full Text Available Since gluten can induce coeliac symptoms in hypersensitive consumers with coeliac disease, it is necessary to label foodstuffs containing it. Labelling, in turn, requires reliable methods to accurately determine the amount of wheat protein in food. The objective of this study was to compare the quantitative detection of wheat protein in model sausages by ELISA and by immunohistochemical methods, where immunohistochemistry was combined with stereology to achieve quantitative results. A high correlation between the amount of added wheat protein and the results of the compared methods was confirmed; for the ELISA method, the determined values were r = 0.98, P < 0.01. Although ELISA is an accredited method, it was not reliable, unlike the immunohistochemical methods (stereology, SD = 3.1).

  18. Quantitative experiments on thermal hydraulic characteristics of an annular tube with twisted fins

    International Nuclear Information System (INIS)

    Ezato, Koichiro; Dairaku, Masayuki; Taniguchi, Masaki; Sato, Kazuyoshi; Suzuki, Satoshi; Akiba, Masato

    2003-11-01

    Thermal hydraulic experiments measuring the critical heat flux (CHF) and pressure drop of an annular tube with twisted fins, an ''annular swirl tube'', have been performed to examine its applicability to the ITER divertor cooling structure. The annular swirl tube consists of two concentric circular tubes, the outer and inner tubes. The outer tube, with outer and inner diameters (OD and ID) of 21 mm and 15 mm, is made of a Cu-alloy, CuCrZr, one of the candidate materials for the ITER divertor cooling tube. The inner tube, with an OD of 11 mm and an ID of 9 mm, is made of stainless steel. It has an external swirl fin with a twist ratio (y) of three to enhance its heat transfer performance. In this tube, cooling water first flows inside the inner tube and then returns into the annulus between the outer and inner tubes as a swirl flow at an end-return of the cooling tube. The CHF experiments show no degradation of the CHF of the annular swirl tube in comparison with a conventional swirl tube whose dimensions are similar to those of the outer tube of the annular swirl tube. A minimum axial velocity of 7.1 m/s is required to remove the incident heat flux of 28 MW/m2, the ITER design value. The applicability of the JAERI correlation for heat transfer to the annular swirl tube is also demonstrated by comparing the experimental results with those of a numerical analysis. A friction factor correlation for the annular flow with twisted fins is also proposed for the hydrodynamic design of the ITER vertical target. The smallest pressure drop at the end-return is obtained by using a hemispherical end-plug whose radius is the same as the ID of the outer cooling tube. These results show that the thermal-hydraulic performance of the annular swirl tube is promising for application to the cooling structure of the ITER vertical target. (author)

  19. Quantitative and qualitative insights into the experiences of children with Rett syndrome and their families.

    Science.gov (United States)

    Downs, Jenny; Leonard, Helen

    2016-09-01

    Rett syndrome is a rare neurodevelopmental disorder caused by a mutation in the MECP2 gene. It is associated with severe functional impairments and medical comorbidities such as scoliosis and poor growth. The population-based and longitudinal Australian Rett Syndrome Database was established in 1993 and has supported investigations of the natural history of Rett syndrome and effectiveness of treatments, as well as a suite of qualitative studies to identify deeper meanings. This paper describes the early presentation of Rett syndrome, including regression and challenges for families seeking a diagnosis. We discuss the importance of implementing strategies to enhance daily communication and movement, describe difficulties interpreting the presence of pain and discomfort, and argue for a stronger evidence base in relation to management. Finally, we outline a framework for understanding quality of life in Rett syndrome and suggest areas of life to which we can direct efforts in order to improve quality of life. Each of these descriptions is illustrated with vignettes of child and family experiences. Clinicians and researchers must continue to build this framework of knowledge and understanding with efforts committed to providing more effective treatments and supporting the best quality of life for those affected.

  20. Bridging the qualitative-quantitative divide: Experiences from conducting a mixed methods evaluation in the RUCAS programme.

    Science.gov (United States)

    Makrakis, Vassilios; Kostoulas-Makrakis, Nelly

    2016-02-01

    Quantitative and qualitative approaches to planning and evaluation in education for sustainable development have often been treated by practitioners from a single research paradigm. This paper discusses the utility of mixed method evaluation designs which integrate qualitative and quantitative data through a sequential transformative process. Sequential mixed method data collection strategies involve collecting data in an iterative process whereby data collected in one phase contribute to data collected in the next. This is done through examples from a programme addressing the 'Reorientation of University Curricula to Address Sustainability (RUCAS): A European Commission Tempus-funded Programme'. It is argued that the two approaches are complementary and that there are significant gains from combining both. Using methods from both research paradigms does not, however, mean that the inherent differences among epistemologies and methodologies should be neglected. Based on this experience, it is recommended that using a sequential transformative mixed method evaluation can produce more robust results than could be accomplished using a single approach in programme planning and evaluation focussed on education for sustainable development. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Efficacy of the Frame and Hu mathematical model for the quantitative analysis of agents influencing growth of chick embryo fibroblasts

    International Nuclear Information System (INIS)

    Korohoda, K.; Czyz, J.

    1994-01-01

    The experiments on the effect of various sera and substratum surface area upon the growth of chick embryo fibroblast-like cells in secondary cultures are described and discussed on the grounds of a mathematical model for growth of anchorage-dependent cells proposed by Frame and Hu. The model and the presented results demonstrate the mutual independence of the effects of agents influencing the rate of cell proliferation (i.e. accelerating or retarding growth) and of agents that modify the limitation of cell proliferation (i.e. the maximum cell density at confluence). Due to its relative simplicity, the model proposed by Frame and Hu offers an easy mode of description and quantitative evaluation of experiments concerning cell growth regulation. It is shown that various sera added at constant concentration significantly modify the rate of cell proliferation with little effect upon the maximum attainable cell density. The cells grew much more slowly in the presence of calf serum than in the presence of chick serum, and the addition of iron and zinc complexes to calf serum significantly accelerated cell growth. An increase in the substratum surface area by the addition of glass wool to culture vessels significantly increased cell density per constant volume of medium even when retardation of growth was observed. The results presented point to the need for direct cell counting when estimating cell growth curves and discussing the effects of agents that influence the parameters characterizing cell proliferation. (author). 34 refs, 5 figs, 2 tabs
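
    The abstract does not reproduce the Frame and Hu equations, but the separation it describes, between a parameter governing the rate of proliferation and a parameter limiting the attainable density, can be illustrated with a generic logistic-type growth law. The sketch below is an illustration under stated assumptions, not the actual model; all parameter values are invented:

```python
import numpy as np
from scipy.integrate import solve_ivp

def growth(t, X, mu, X_max):
    # mu controls how fast cells proliferate; X_max (density at
    # confluence) limits proliferation. The two act independently.
    return mu * X * (1.0 - X / X_max)

# Hypothetical settings: serum type changes mu, while extra substratum
# area (glass wool) raises X_max. Values are invented for illustration.
cases = [("chick serum", 0.045, 1.0e6),
         ("calf serum", 0.020, 1.0e6),
         ("calf serum + glass wool", 0.020, 2.5e6)]
for label, mu, X_max in cases:
    sol = solve_ivp(growth, (0.0, 120.0), [5.0e4], args=(mu, X_max))
    print(f"{label:>24}: {sol.y[0, -1]:.3g} cells at 120 h")
```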

  2. Quantitative assessment of manual and robotic microcannulation for eye surgery using new eye model.

    Science.gov (United States)

    Tanaka, Shinichi; Harada, Kanako; Ida, Yoshiki; Tomita, Kyohei; Kato, Ippei; Arai, Fumihito; Ueta, Takashi; Noda, Yasuo; Sugita, Naohiko; Mitsuishi, Mamoru

    2015-06-01

    Microcannulation, a surgical procedure for the eye that requires drug injection into a 60-90 µm retinal vein, is difficult to perform manually. Robotic assistance has been proposed; however, its effectiveness in comparison to manual operation has not been quantified. An eye model has been developed to quantify the performance of manual and robotic microcannulation. The eye model, which is implemented with a force sensor and microchannels, also simulates the mechanical constraints of the instrument's movement. Ten subjects performed microcannulation using the model, with and without robotic assistance. The results showed that the robotic assistance was useful for motion stability when the drug was injected, whereas its positioning accuracy offered no advantage. An eye model was used to quantitatively assess the robotic microcannulation performance in comparison to manual operation. This approach could be valid for a better evaluation of surgical robotic assistance. Copyright © 2014 John Wiley & Sons, Ltd.

  3. SOME USES OF MODELS OF QUANTITATIVE GENETIC SELECTION IN SOCIAL SCIENCE.

    Science.gov (United States)

    Weight, Michael D; Harpending, Henry

    2017-01-01

    The theory of selection of quantitative traits is widely used in evolutionary biology, agriculture and other related fields. The fundamental model known as the breeder's equation is simple, robust over short time scales, and it is often possible to estimate plausible parameters. In this paper it is suggested that the results of this model provide useful yardsticks for the description of social traits and the evaluation of transmission models. The differences on a standard personality test between samples of Old Order Amish and Indiana rural young men from the same county and the decline of homicide in Medieval Europe are used as illustrative examples of the overall approach. It is shown that the decline of homicide is unremarkable under a threshold model while the differences between rural Amish and non-Amish young men are too large to be a plausible outcome of simple genetic selection in which assortative mating by affiliation is equivalent to truncation selection.
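
    The breeder's equation mentioned above is simple enough to compute directly: the response to selection is R = h²S, where h² is the narrow-sense heritability and S the selection differential. The numbers below are purely illustrative and are not taken from the paper:

```python
from scipy.stats import norm

# Breeder's equation: R = h^2 * S (expected response per generation).
h2 = 0.5   # assumed narrow-sense heritability
S = 1.0    # selected parents average 1 phenotypic SD above the mean
print(f"Expected response R = {h2 * S:.2f} SD per generation")

# Under truncation selection keeping the top fraction p, the selection
# differential is S = i * sigma_P, with intensity i = phi(z) / p.
p = 0.2
z = norm.ppf(1 - p)          # truncation point on the standard normal
i = norm.pdf(z) / p          # mean deviation of the selected tail
print(f"Top {p:.0%} truncation: i = {i:.2f}, so R = {h2 * i:.2f} SD")
```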

  4. Quantitative analysis of anaerobic oxidation of methane (AOM) in marine sediments: A modeling perspective

    Science.gov (United States)

    Regnier, P.; Dale, A. W.; Arndt, S.; LaRowe, D. E.; Mogollón, J.; Van Cappellen, P.

    2011-05-01

    Recent developments in the quantitative modeling of methane dynamics and anaerobic oxidation of methane (AOM) in marine sediments are critically reviewed. The first part of the review begins with a comparison of alternative kinetic models for AOM. The roles of bioenergetic limitations, intermediate compounds and biomass growth are highlighted. Next, the key transport mechanisms in multi-phase sedimentary environments affecting AOM and methane fluxes are briefly treated, while attention is also given to additional controls on methane and sulfate turnover, including organic matter mineralization, sulfur cycling and methane phase transitions. In the second part of the review, the structure, forcing functions and parameterization of published models of AOM in sediments are analyzed. The six-orders-of-magnitude range in rate constants reported for the widely used bimolecular rate law for AOM emphasizes the limited transferability of this simple kinetic model and, hence, the need for more comprehensive descriptions of the AOM reaction system. The derivation and implementation of more complete reaction models, however, are limited by the availability of observational data. In this context, we attempt to rank the relative benefits of potential experimental measurements that should help to better constrain AOM models. The last part of the review presents a compilation of reported depth-integrated AOM rates (ΣAOM). These rates reveal the extreme variability of ΣAOM in marine sediments. The model results are further used to derive quantitative relationships between ΣAOM and the magnitude of externally impressed fluid flow, as well as between ΣAOM and the depth of the sulfate-methane transition zone (SMTZ). This review contributes to an improved understanding of the global significance of the AOM process, and helps identify outstanding questions and future directions in the modeling of methane cycling and AOM in marine sediments.
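
    The bimolecular rate law discussed above is compact enough to simulate. The sketch below integrates R_AOM = k[CH4][SO4] in a well-mixed volume; the rate constant and concentrations are invented for illustration, in keeping with the review's point that published k values span six orders of magnitude:

```python
from scipy.integrate import solve_ivp

def aom(t, y, k):
    # Bimolecular rate law: R_AOM = k * [CH4] * [SO4^2-], with 1:1
    # stoichiometry (CH4 + SO4^2- -> HCO3- + HS- + H2O).
    ch4, so4 = y
    r = k * ch4 * so4
    return [-r, -r]

k = 1.0e-2                                   # mM^-1 yr^-1 (assumed)
sol = solve_ivp(aom, (0.0, 500.0), [2.0, 28.0], args=(k,), rtol=1e-8)
print(f"After 500 yr: CH4 = {sol.y[0, -1]:.3f} mM, "
      f"SO4 = {sol.y[1, -1]:.2f} mM (illustrative)")
```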

  5. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    Full Text Available The SDN-based controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on security mechanisms, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. From an analysis of the controller threat model we derive formal models of the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we can compare the security not only of different versions of the same controller but also of different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN-based controllers: POX, OpenDaylight, Floodlight, and Ryu. The test, whose outcomes are similar to those of traditional qualitative analysis, demonstrates that our approach yields specific security values for different controllers and produces more accurate results.

  6. Universally applicable model for the quantitative determination of lake sediment composition using fourier transform infrared spectroscopy.

    Science.gov (United States)

    Rosén, Peter; Vogel, Hendrik; Cunningham, Laura; Hahn, Annette; Hausmann, Sonja; Pienitz, Reinhard; Zolitschka, Bernd; Wagner, Bernd; Persson, Per

    2011-10-15

    Fourier transform infrared spectroscopy (FTIRS) can provide detailed information on organic and minerogenic constituents of sediment records. Based on a large number of sediment samples of varying age (0-340,000 yrs) and from very diverse lake settings in Antarctica, Argentina, Canada, Macedonia/Albania, Siberia, and Sweden, we have developed universally applicable calibration models for the quantitative determination of biogenic silica (BSi; n = 816), total inorganic carbon (TIC; n = 879), and total organic carbon (TOC; n = 3164) using FTIRS. These models are based on the differential absorbance of infrared radiation at specific wavelengths with varying concentrations of individual parameters, due to molecular vibrations associated with each parameter. The calibration models have low prediction errors and the predicted values are highly correlated with conventionally measured values (R = 0.94-0.99). Robustness tests indicate the accuracy of the newly developed FTIRS calibration models is similar to that of conventional geochemical analyses. Consequently FTIRS offers a useful and rapid alternative to conventional analyses for the quantitative determination of BSi, TIC, and TOC. The rapidity, cost-effectiveness, and small sample size required enables FTIRS determination of geochemical properties to be undertaken at higher resolutions than would otherwise be possible with the same resource allocation, thus providing crucial sedimentological information for climatic and environmental reconstructions.
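
    The abstract does not spell out the regression technique behind the calibration models; partial least squares is the usual choice for relating FTIR spectra to concentrations, so the following sketch uses it on synthetic spectra to show the calibration and cross-validation workflow. All data and parameters here are invented:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X holds absorbance spectra (samples x wavenumbers), y the conventionally
# measured property (e.g., TOC). A single synthetic absorption band whose
# height scales with y stands in for real spectra.
rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 200, 500
y = rng.uniform(0, 20, n_samples)                     # "TOC" in weight %
band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 230) / 12.0) ** 2)
X = np.outer(y, band) + 0.05 * rng.standard_normal((n_samples, n_wavenumbers))

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
r = np.corrcoef(y, y_cv)[0, 1]
print(f"Cross-validated R = {r:.3f}")   # high R, as reported in the paper
```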

  7. Quantitative 3D strain analysis in analogue experiments simulating tectonic deformation: Integration of X-ray computed tomography and digital volume correlation techniques

    Science.gov (United States)

    Adam, J.; Klinkmüller, M.; Schreurs, G.; Wieneke, B.

    2013-10-01

    The combination of scaled analogue experiments, material mechanics, X-ray computed tomography (XRCT) and digital volume correlation (DVC) techniques is a powerful new tool not only to examine the three-dimensional structure and kinematic evolution of complex deformation structures in scaled analogue experiments, but also to fully quantify their spatial strain distribution and complete strain history. Digital image correlation (DIC) is an important advance in quantitative physical modelling and helps to understand non-linear deformation processes. Optical non-intrusive DIC techniques enable the quantification of localised and distributed deformation in analogue experiments based either on images taken through transparent sidewalls (2D DIC) or on surface views (3D DIC). XRCT analysis permits the non-destructive visualisation of the internal structure and kinematic evolution of scaled analogue experiments simulating the tectonic evolution of complex geological structures. The combination of XRCT sectional image data of analogue experiments with 2D DIC only allows quantification of 2D displacement and strain components in the section direction. This completely omits the potential of CT experiments for full 3D strain analysis of complex, non-cylindrical deformation structures. In this study, we apply digital volume correlation (DVC) techniques to XRCT scan data of "solid" analogue experiments to fully quantify the internal displacement and strain in three dimensions over time. Our first results indicate that the application of DVC techniques to XRCT volume data can successfully be used to quantify the 3D spatial and temporal strain patterns inside analogue experiments. We demonstrate the potential of combining DVC techniques and XRCT volume imaging for 3D strain analysis of a contractional experiment simulating the development of a non-cylindrical pop-up structure. Furthermore, we discuss various options for optimisation of granular materials, pattern

  8. MSstats: an R package for statistical analysis of quantitative mass spectrometry-based proteomic experiments.

    Science.gov (United States)

    Choi, Meena; Chang, Ching-Yun; Clough, Timothy; Broudy, Daniel; Killeen, Trevor; MacLean, Brendan; Vitek, Olga

    2014-09-01

    MSstats is an R package for statistical relative quantification of proteins and peptides in mass spectrometry-based proteomics. Version 2.0 of MSstats supports label-free and label-based experimental workflows and data-dependent, targeted and data-independent spectral acquisition. It takes as input identified and quantified spectral peaks, and outputs a list of differentially abundant peptides or proteins, or summaries of peptide or protein relative abundance. MSstats relies on a flexible family of linear mixed models. The code, the documentation and example datasets are available open-source at www.msstats.org under the Artistic-2.0 license. The package can be downloaded from www.msstats.org or from Bioconductor www.bioconductor.org and used in an R command line workflow. The package can also be accessed as an external tool in Skyline (Broudy et al., 2014) and used via graphical user interface. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Nonparametric modeling of longitudinal covariance structure in functional mapping of quantitative trait loci.

    Science.gov (United States)

    Yap, John Stephen; Fan, Jianqing; Wu, Rongling

    2009-12-01

    Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L(2) penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
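
    A minimal sketch of the covariance strategy described above, under stated assumptions: the modified Cholesky decomposition turns covariance estimation into a sequence of regressions of each measurement on its predecessors, and plain ridge regression stands in here for the paper's L2-penalized normal likelihood:

```python
import numpy as np
from sklearn.linear_model import Ridge

def chol_covariance(Y, alpha=1.0):
    """Regularized covariance via the modified Cholesky route: regress each
    time point on its predecessors (ridge = L2 penalty), collect the
    coefficients in a unit lower-triangular T and the residual variances in
    a diagonal D, then Sigma = T^{-1} D T^{-T}. Y is (subjects x time
    points) and assumed column-centered."""
    n, T_pts = Y.shape
    T = np.eye(T_pts)
    d = np.empty(T_pts)
    d[0] = Y[:, 0].var()
    for t in range(1, T_pts):
        fit = Ridge(alpha=alpha, fit_intercept=False).fit(Y[:, :t], Y[:, t])
        T[t, :t] = -fit.coef_
        d[t] = (Y[:, t] - fit.predict(Y[:, :t])).var()
    Tinv = np.linalg.inv(T)
    return Tinv @ np.diag(d) @ Tinv.T

# Illustrative use on synthetic AR(1)-like longitudinal data.
rng = np.random.default_rng(1)
Y = rng.standard_normal((100, 8))
for t in range(1, 8):
    Y[:, t] = 0.7 * Y[:, t - 1] + 0.5 * rng.standard_normal(100)
print(np.round(chol_covariance(Y - Y.mean(axis=0)), 2))
```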

  10. Effects of Noninhibitory Serpin Maspin on the Actin Cytoskeleton: A Quantitative Image Modeling Approach.

    Science.gov (United States)

    Al-Mamun, Mohammed; Ravenhill, Lorna; Srisukkham, Worawut; Hossain, Alamgir; Fall, Charles; Ellis, Vincent; Bass, Rosemary

    2016-04-01

    Recent developments in quantitative image analysis allow us to interrogate confocal microscopy images to answer biological questions. Clumped and layered cell nuclei and cytoplasm in confocal images challenge the ability to identify subcellular compartments. To date, there is no perfect image analysis method to identify cytoskeletal changes in confocal images. Here, we present a multidisciplinary study in which an image analysis model was developed to allow quantitative measurement of changes in the cytoskeleton of cells with different maspin exposure. Maspin, a noninhibitory serpin, influences cell migration, adhesion, invasion, proliferation, and apoptosis in ways that are consistent with its identification as a tumor metastasis suppressor. Using different cell types, we tested the hypothesis that reduction in cell migration by maspin would be reflected in the architecture of the actin cytoskeleton. A hybrid marker-controlled watershed segmentation technique was used to segment the nuclei, cytoplasm, and ruffling regions before measuring cytoskeletal changes. This was informed by immunohistochemical staining of cells transfected stably or transiently with maspin proteins, or with added bioactive peptides or protein. Image analysis results showed that the effects of maspin were mirrored by effects on cell architecture, in a way that could be described quantitatively.
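
    For readers unfamiliar with marker-controlled watershed segmentation, the sketch below shows the basic form of the technique on which the paper's hybrid pipeline builds; it is a generic scikit-image recipe, not the authors' implementation, and the parameter values are assumptions:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import gaussian, threshold_otsu
from skimage.segmentation import watershed

def segment_nuclei(image):
    """Marker-controlled watershed on a 2D grayscale confocal slice:
    threshold the smoothed nuclear channel, place one marker per
    distance-map maximum, then flood to split touching nuclei."""
    smooth = gaussian(image, sigma=2)
    mask = smooth > threshold_otsu(smooth)
    distance = ndi.distance_transform_edt(mask)
    coords = peak_local_max(distance, min_distance=10, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=mask)  # one label per nucleus
```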

  11. Quantitative model for the generic 3D shape of ICMEs at 1 AU

    Science.gov (United States)

    Démoulin, P.; Janvier, M.; Masías-Meza, J. J.; Dasso, S.

    2016-10-01

    Context. Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs), while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, that is, along a 1D cut. The data therefore only give a partial view of the 3D structures of ICMEs. Aims: By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. Methods: In a first approach we theoretically obtained the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compared their compatibility with observed distributions. In a second approach we used the shock normal and the flux rope axis orientations together with the impact parameter to provide statistical information across the spacecraft trajectory. Results: The study of different 3D shock models shows that the observations are compatible with a shock that is symmetric around the Sun-apex line as well as with an asymmetry up to an aspect ratio of around 3. Moreover, flat or dipped shock surfaces near their apex can only be rare cases. Next, the sheath thickness and the ICME velocity have no global trend along the ICME front. Finally, regrouping all these new results and those of our previous articles, we provide a quantitative ICME generic 3D shape, including the global shape of the shock, the sheath, and the flux rope. Conclusions: The obtained quantitative generic ICME shape will have implications for several aims. For example, it constrains the output of typical ICME numerical simulations. It is also a base for studying the transport of high-energy solar and cosmic particles during an ICME propagation as well as for modeling and forecasting space weather conditions near Earth.

  12. Computational modeling in nanomedicine: prediction of multiple antibacterial profiles of nanoparticles using a quantitative structure-activity relationship perturbation model.

    Science.gov (United States)

    Speck-Planche, Alejandro; Kleandrova, Valeria V; Luan, Feng; Cordeiro, Maria Natália D S

    2015-01-01

    We introduce the first quantitative structure-activity relationship (QSAR) perturbation model for probing multiple antibacterial profiles of nanoparticles (NPs) under diverse experimental conditions. The dataset is based on 300 nanoparticles containing dissimilar chemical compositions, sizes, shapes and surface coatings. In general terms, the NPs were tested against different bacteria, by considering several measures of antibacterial activity and diverse assay times. The QSAR perturbation model was created from 69,231 nanoparticle-nanoparticle (NP-NP) pairs, which were randomly generated using a recently reported perturbation theory approach. The model displayed an accuracy rate of approximately 98% for classifying NPs as active or inactive, and a new copper-silver nanoalloy was correctly predicted by this model with consensus accuracy of 77.73%. Our QSAR perturbation model can be used as an efficacious tool for the virtual screening of antibacterial nanomaterials.

  13. Spatiotemporal microbiota dynamics from quantitative in vitro and in silico models of the gut

    Science.gov (United States)

    Hwa, Terence

    The human gut harbors a dynamic microbial community whose composition bears great importance for the health of the host. Here, we investigate how colonic physiology impacts bacterial growth behaviors, which ultimately dictate the gut microbiota composition. Combining measurements of bacterial growth physiology with analysis of published data on human physiology into a quantitative modeling framework, we show how hydrodynamic forces in the colon, in concert with other physiological factors, determine the abundances of the major bacterial phyla in the gut. Our model quantitatively explains the observed variation of microbiota composition among healthy adults, and predicts colonic water absorption (manifested as stool consistency) and nutrient intake to be two key factors determining this composition. The model further reveals that both factors, which have been identified in recent correlative studies, exert their effects through the same mechanism: changes in colonic pH that differentially affect the growth of different bacteria. Our findings show that a predictive and mechanistic understanding of microbial ecology in the human gut is possible, and offer the hope for the rational design of intervention strategies to actively control the microbiota. This work is supported by the Bill and Melinda Gates Foundation.

  14. Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao

    2015-08-01

    Meta-analysis of genetic data must account for differences among studies including study designs, markers genotyped, and covariates. The effects of genetic variants may differ from population to population, i.e., heterogeneity. Thus, meta-analysis of combining data of multiple studies is difficult. Novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies. Copyright © 2015 by the Genetics Society of America.

  15. Quantitative Analysis of Situation Awareness (QASA): modelling and measuring situation awareness using signal detection theory.

    Science.gov (United States)

    Edgar, Graham K; Catherwood, Di; Baker, Steven; Sallis, Geoff; Bertels, Michael; Edgar, Helen E; Nikolla, Dritan; Buckle, Susanna; Goodwin, Charlotte; Whelan, Allana

    2017-12-29

    This paper presents a model of situation awareness (SA) that emphasises that SA is necessarily built using a subset of available information. A technique (Quantitative Analysis of Situation Awareness - QASA), based around signal detection theory, has been developed from this model that provides separate measures of actual SA (ASA) and perceived SA (PSA), together with a feature unique to QASA, a measure of bias (information acceptance). These measures allow the exploration of the relationship between actual SA, perceived SA and information acceptance. QASA can also be used for the measurement of dynamic ASA, PSA and bias. Example studies are presented and full details of the implementation of the QASA technique are provided. Practitioner Summary: This paper presents a new model of situation awareness (SA) together with an associated tool (Quantitative Analysis of Situation Awareness - QASA) that employs signal detection theory to measure several aspects of SA, including actual and perceived SA and information acceptance. Full details are given of the implementation of the tool.
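
    QASA's exact scoring is defined in the paper itself; the sketch below computes only the two standard signal detection quantities it builds on, sensitivity (d', a proxy for actual SA) and bias (criterion c, information acceptance), from true/false-statement judgements. The counts are invented, and the log-linear correction is one common convention:

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """d' and criterion c from raw counts; the +0.5/+1 log-linear
    correction avoids infinite z-scores at rates of 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(h) - norm.ppf(f)
    c = -0.5 * (norm.ppf(h) + norm.ppf(f))
    return d_prime, c

# Illustrative scores for one participant judging scenario statements.
d, c = sdt_measures(hits=18, misses=2, false_alarms=6, correct_rejections=14)
print(f"d' = {d:.2f}, bias c = {c:.2f}")  # negative c = liberal acceptance
```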

  16. Model development for quantitative evaluation of proliferation resistance of nuclear fuel cycles

    International Nuclear Information System (INIS)

    Ko, Won Il; Kim, Ho Dong; Yang, Myung Seung

    2000-07-01

    This study addresses the quantitative evaluation of proliferation resistance, an important factor of alternative nuclear fuel cycle systems. A model was developed to quantitatively evaluate the proliferation resistance of nuclear fuel cycles. The proposed models were then applied to the Korean environment as a sample study, to provide better references for the determination of a future nuclear fuel cycle system in Korea. In order to quantify the proliferation resistance of a nuclear fuel cycle, a proliferation resistance index was defined in imitation of an electrical circuit with an electromotive force and various electrical resistance components. The analysis of the proliferation resistance of nuclear fuel cycles has shown that the resistance index as defined herein can be used as an international measure of the relative risk of nuclear proliferation, if the motivation index is appropriately defined. It has also shown that the proposed model can include political as well as technical issues relevant to proliferation resistance, and can consider all facilities and activities in a specific nuclear fuel cycle (from mining to disposal). In addition, sensitivity analyses in the sample study indicate that the direct disposal option in a country with high nuclear propensity may give rise to a higher risk of nuclear proliferation than the reprocessing option in a country with low nuclear propensity.

  17. Searching for recursive causal structures in multivariate quantitative genetics mixed models.

    Science.gov (United States)

    Valente, Bruno D; Rosa, Guilherme J M; de Los Campos, Gustavo; Gianola, Daniel; Silva, Martinho A

    2010-06-01

    Biology is characterized by complex interactions between phenotypes, such as recursive and simultaneous relationships between substrates and enzymes in biochemical systems. Structural equation models (SEMs) can be used to study such relationships in multivariate analyses, e.g., with multiple traits in a quantitative genetics context. Nonetheless, the number of different recursive causal structures that can be used for fitting a SEM to multivariate data can be huge, even when only a few traits are considered. In recent applications of SEMs in mixed-model quantitative genetics settings, causal structures were preselected on the basis of prior biological knowledge alone. Therefore, the wide range of possible causal structures has not been properly explored. Alternatively, causal structure spaces can be explored using algorithms that, using data-driven evidence, can search for structures that are compatible with the joint distribution of the variables under study. However, the search cannot be performed directly on the joint distribution of the phenotypes as it is possibly confounded by genetic covariance among traits. In this article we propose to search for recursive causal structures among phenotypes using the inductive causation (IC) algorithm after adjusting the data for genetic effects. A standard multiple-trait model is fitted using Bayesian methods to obtain a posterior covariance matrix of phenotypes conditional to unobservable additive genetic effects, which is then used as input for the IC algorithm. As an illustrative example, the proposed methodology was applied to simulated data related to multiple traits measured on a set of inbred lines.

  18. Development and Validation of Quantitative Structure-Activity Relationship Models for Compounds Acting on Serotoninergic Receptors

    Directory of Open Access Journals (Sweden)

    Grażyna Żydek

    2012-01-01

    Full Text Available A quantitative structure-activity relationship (QSAR) study has been made of 20 compounds with serotonin (5-HT) receptor affinity. Thin-layer chromatographic (TLC) data and physicochemical parameters were applied in this study. RP2 TLC 60F254 plates (silanized), impregnated with solutions of propionic acid, ethylbenzene, 4-ethylphenol, and propionamide (used as analogues of the key receptor amino acids) and their mixtures (denoted as S1–S7), were used in two developing phases as biochromatographic models of the drug-5-HT receptor interaction. The semiempirical method AM1 (HyperChem v. 7.0 program) and the ACD/Labs v. 8.0 program were employed to calculate a set of physicochemical parameters for the investigated compounds. Correlation and multiple linear regression analysis were used to search for the best QSAR equations. The correlations obtained for the compounds studied represent their interactions with the proposed biochromatographic models. The good multivariate relationships (R2 = 0.78–0.84) obtained by means of regression analysis can be used for predicting the quantitative effect of biological activity of different compounds with 5-HT receptor affinity. "Leave-one-out" (LOO) and "leave-N-out" (LNO) cross-validation methods were used to judge the predictive power of the final regression equations.
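
    As a generic illustration of the "leave-one-out" validation named above (synthetic data, ordinary least squares standing in for the paper's MLR equations), the cross-validated Q² can be computed as follows:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# X would hold chromatographic and physicochemical descriptors; here both
# X and the activity y are synthetic, for illustration only.
rng = np.random.default_rng(7)
X = rng.standard_normal((20, 4))            # 20 compounds, 4 descriptors
y = X @ np.array([1.2, -0.8, 0.5, 0.0]) + 0.3 * rng.standard_normal(20)

y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
press = np.sum((y - y_loo) ** 2)            # predictive residual sum of squares
q2 = 1 - press / np.sum((y - y.mean()) ** 2)
print(f"Q^2(LOO) = {q2:.3f}")               # >0.5 is a common acceptance bar
```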

  19. Model development for quantitative evaluation of proliferation resistance of nuclear fuel cycles

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Won Il; Kim, Ho Dong; Yang, Myung Seung

    2000-07-01

    This study addresses the quantitative evaluation of proliferation resistance, an important factor of alternative nuclear fuel cycle systems. A model was developed to quantitatively evaluate the proliferation resistance of nuclear fuel cycles. The proposed models were then applied to the Korean environment as a sample study, to provide better references for the determination of a future nuclear fuel cycle system in Korea. In order to quantify the proliferation resistance of a nuclear fuel cycle, a proliferation resistance index was defined in imitation of an electrical circuit with an electromotive force and various electrical resistance components. The analysis of the proliferation resistance of nuclear fuel cycles has shown that the resistance index as defined herein can be used as an international measure of the relative risk of nuclear proliferation, if the motivation index is appropriately defined. It has also shown that the proposed model can include political as well as technical issues relevant to proliferation resistance, and can consider all facilities and activities in a specific nuclear fuel cycle (from mining to disposal). In addition, sensitivity analyses in the sample study indicate that the direct disposal option in a country with high nuclear propensity may give rise to a higher risk of nuclear proliferation than the reprocessing option in a country with low nuclear propensity.

  20. Multi-factor models and signal processing techniques application to quantitative finance

    CERN Document Server

    Darolles, Serges; Jay, Emmanuelle

    2013-01-01

    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices. This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere

  1. Tumour-cell killing by X-rays and immunity quantitated in a mouse model system

    International Nuclear Information System (INIS)

    Porteous, D.D.; Porteous, K.M.; Hughes, M.J.

    1979-01-01

    As part of an investigation of the interaction of X-rays and immune cytotoxicity in tumour control, an experimental mouse model system has been used in which quantitative anti-tumour immunity was raised in prospective recipients of tumour-cell suspensions exposed to varying doses of X-rays in vitro before injection. Findings reported here indicate that, whilst X-rays kill a proportion of cells, induced immunity deals with a fixed number dependent upon the immune status of the host, and that X-rays and anti-tumour immunity do not act synergistically in tumour-cell killing. The tumour used was the ascites sarcoma BP8. (author)

  2. Modeling of microfluidic microbial fuel cells using quantitative bacterial transport parameters

    Science.gov (United States)

    Mardanpour, Mohammad Mahdi; Yaghmaei, Soheila; Kalantar, Mohammad

    2017-02-01

    The objective of the present study is to analyze the dynamic modeling of bioelectrochemical processes and to improve the performance of previous models using quantitative data on bacterial transport parameters. The main deficiency of previous MFC models concerning the spatial distribution of biocatalysts is the assumption of an initial distribution of attached/suspended bacteria on the electrode or in the anolyte bulk, which is the foundation for biofilm formation. To remedy this imperfection, chemotactic motility is quantified numerically in order to understand the mechanisms by which suspended microorganisms distribute themselves in the anolyte and/or attach to the anode surface to extend the biofilm. The spatial and temporal distributions of the bacteria, as well as the dynamic behavior of the anolyte and biofilm, are simulated. The performance of the microfluidic MFC as a chemotaxis assay is assessed by analyzing bacterial activity, substrate variation, the bioelectricity production rate, and the influence of external resistance on the features of the biofilm and anolyte.

  3. Web Applications Vulnerability Management using a Quantitative Stochastic Risk Modeling Method

    Directory of Open Access Journals (Sweden)

    Sergiu SECHEL

    2017-01-01

    Full Text Available The aim of this research is to propose a quantitative risk modeling method that reduces the guesswork and uncertainty in the vulnerability and risk assessment activities of web-based applications, while giving users the flexibility to assess risk according to their risk appetite and tolerance with a high degree of assurance. The research method builds on work done by the OWASP Foundation on this subject, but their risk rating methodology needed debugging and updates in key areas that are presented in this paper. The modified risk modeling method uses Monte Carlo simulations to model risk characteristics that cannot be determined without guesswork. It was tested in vulnerability assessment activities on real production systems and, in theory, by assigning discrete uniform assumptions to all risk characteristics (risk attributes) and evaluating the results after 1.5 million rounds of Monte Carlo simulations.
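
    A minimal sketch of the simulation idea, under stated assumptions: each OWASP risk-rating factor is scored 0-9, discrete uniform distributions replace guessed point values, and a scalar risk is taken as the product of the mean likelihood and mean impact factors (one common scalar variant; the OWASP methodology itself maps the two means onto a severity matrix). The factor counts follow OWASP; everything else is illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000            # simulation rounds (the paper ran 1.5 million)

# 8 likelihood factors (threat agent + vulnerability) and 8 impact
# factors (technical + business), each scored 0-9 and averaged.
likelihood = rng.integers(0, 10, size=(N, 8)).mean(axis=1)
impact = rng.integers(0, 10, size=(N, 8)).mean(axis=1)
risk = likelihood * impact          # scalar risk on a 0-81 scale

for q in (0.05, 0.50, 0.95):
    print(f"{q:.0%} quantile of simulated risk: {np.quantile(risk, q):.1f}")
```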

  4. Computer models experiences in radiological safety

    International Nuclear Information System (INIS)

    Ferreri, J.C.; Grandi, G.M.; Ventura, M.A.; Doval, A.S.

    1989-01-01

    A review of the formulation and use of numerical methods in fluid dynamics and heat and mass transfer in nuclear safety is presented. A wide range of applications is covered, namely: nuclear reactor thermohydraulics, natural circulation in closed loops, experiments for the validation of numerical methods, thermohydraulics of fractured-porous media, and radionuclide migration. The accumulated experience has resulted in a research line dealing at present with moving grids in computational fluid dynamics and the use of artificial intelligence techniques. As a consequence, some recent experience in the development of expert systems, and the considerations that should be taken into account for their use in radiological safety, is also reviewed. (author)

  5. Quantitative Hydraulic Models Of Early Land Plants Provide Insight Into Middle Paleozoic Terrestrial Paleoenvironmental Conditions

    Science.gov (United States)

    Wilson, J. P.; Fischer, W. W.

    2010-12-01

    Fossil plants provide useful proxies of Earth’s climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves--even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 / MPa * s, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptations for transport safety, placing the plant’s vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon’s tracheids suggests that environmental conditions of reduced relative
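
    The conductivity figures quoted above can be reproduced, to first order, from the Hagen-Poiseuille law for an ideal cylindrical conduit, the standard building block of such hydraulic models (real models add end-wall and pit resistances, which this sketch omits; the radii below are illustrative):

```python
ETA_WATER = 1.0e-3  # dynamic viscosity of water near 20 C, in Pa*s

def lumen_conductivity(radius_um: float) -> float:
    """Lumen-area-specific conductivity k = r^2 / (8*eta) of an ideal
    cylindrical conduit, converted to m^2 MPa^-1 s^-1."""
    r = radius_um * 1.0e-6               # micrometres -> metres
    return r ** 2 / (8.0 * ETA_WATER) * 1.0e6

for r_um in (5.0, 11.0, 20.0):           # illustrative tracheid radii
    print(f"r = {r_um:4.1f} um -> k = {lumen_conductivity(r_um):.4f} m^2/(MPa*s)")
# A radius of ~11 um already exceeds the 0.015 m^2/(MPa*s) figure quoted
# for the smallest Asteroxylon tracheids.
```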

  6. Quantitative study of Portland cement hydration by X-Ray diffraction/Rietveld analysis and geochemical modeling

    Science.gov (United States)

    Coutelot, F.; Seaman, J. C.; Simner, S.

    2017-12-01

    In this study the hydration of Portland cements containing blast-furnace slag and type V fly ash was investigated during cement curing using X-ray diffraction, with geochemical modeling used to calculate the total volume of hydrates. The goal was to evaluate the relationship between the starting component levels and the hydrate assemblages that develop during the curing process. Blast-furnace slag levels of 60, 45 and 30 wt.% were studied in blends containing fly ash and Portland cement. Geochemical modelling described the dissolution of the clinker and quantitatively predicted the amounts of hydrates. In all cases the experiments showed the presence of C-S-H, portlandite and ettringite. The quantities of ettringite, portlandite and the amorphous phases as determined by XRD agreed well with the calculated amounts of these phases after different periods of time. These findings show that changes in the bulk composition of hydrating cements can be described by geochemical models. Such a comparison between experimental and modelled data helps to understand in more detail the active processes occurring during cement hydration.

  7. Development of quantitative atomic modeling for tungsten transport study using LHD plasma with tungsten pellet injection

    Science.gov (United States)

    Murakami, I.; Sakaue, H. A.; Suzuki, C.; Kato, D.; Goto, M.; Tamura, N.; Sudo, S.; Morita, S.

    2015-09-01

    Quantitative tungsten study with reliable atomic modeling is important for the successful achievement of ITER and fusion reactors. We have developed tungsten atomic modeling for understanding tungsten behavior in fusion plasmas. The modeling is applied to the analysis of tungsten spectra observed from plasmas of the Large Helical Device (LHD) with tungsten pellet injection. We found that the extreme ultraviolet (EUV) emission of W24+ to W33+ ions at 1.5-3.5 nm is sensitive to electron temperature and useful for examining tungsten behavior in edge plasmas. We can reproduce the measured EUV spectra at 1.5-3.5 nm with spectra calculated using the tungsten atomic model, and we obtain charge state distributions of tungsten ions in LHD plasmas at different temperatures around 1 keV. Our model is applied to calculate the unresolved transition array (UTA) seen in tungsten spectra at 4.5-7 nm. We analyze in detail the effect of configuration interaction on the population kinetics related to the UTA structure and find the importance of two-electron-one-photon transitions between 4p54dn+1 and 4p64dn-14f. The radiation power rate of tungsten due to line emissions is also estimated with the model and is consistent with other models within a factor of 2.

  8. Quantitative Structure-activity Relationship (QSAR) Models for Docking Score Correction.

    Science.gov (United States)

    Fukunishi, Yoshifumi; Yamasaki, Satoshi; Yasumatsu, Isao; Takeuchi, Koh; Kurosawa, Takashi; Nakamura, Haruki

    2017-01-01

    In order to improve docking score correction, we developed several structure-based quantitative structure-activity relationship (QSAR) models from protein-drug docking simulations and applied these models to public affinity data. The prediction models used descriptor-based regression, with the compound descriptor being a set of docking scores against multiple (∼600) proteins, including nontargets. The binding free energy corresponding to the docking score was approximated by a weighted average of docking scores for multiple proteins, and we tried linear, weighted linear and polynomial regression models that take compound similarities into account. In addition, we tried a combination of these regression models for individual data sets such as IC50, Ki, and %inhibition values. The cross-validation results showed that the weighted linear model was more accurate than the simple linear regression model. Thus, QSAR approaches based on the affinity data of public databases should improve docking scores. © 2016 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
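
    A toy version of the descriptor idea, with synthetic data and ridge regression standing in for the paper's weighted linear model: each compound is represented by its vector of docking scores against many proteins, and affinity is modeled as a weighted combination of those scores:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_compounds, n_proteins = 150, 600
# Docking scores of each compound against ~600 proteins (synthetic).
scores = rng.normal(-7.0, 1.5, size=(n_compounds, n_proteins))
true_w = np.zeros(n_proteins)
true_w[:10] = rng.uniform(0.05, 0.2, size=10)   # few informative proteins
affinity = scores @ true_w + 0.3 * rng.standard_normal(n_compounds)

r2 = cross_val_score(Ridge(alpha=10.0), scores, affinity, cv=5, scoring="r2")
print(f"Cross-validated R^2: {r2.mean():.2f} +/- {r2.std():.2f}")
```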

  9. Quantitative computational models of molecular self-assembly in systems biology

    Science.gov (United States)

    Thomas, Marcus; Schwartz, Russell

    2017-06-01

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.

  11. Quantitative Microbial Risk Assessment Tutorial Installation of Software for Watershed Modeling in Support of QMRA - Updated 2017

    Science.gov (United States)

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • QMRA Installation • SDMProjectBuilder (which includes the Mi...

  12. Modelling of the simple pendulum Experiment

    Directory of Open Access Journals (Sweden)

    Palka L.

    2016-01-01

    Full Text Available This work focuses on the design of the simulation embedded in the remote experiment “Simple pendulum”, built on the Internet School Experimental System (ISES). The platform is intended for broad educational use at schools and universities, providing a suitable measuring environment for students using conventional computing resources.

  13. Some Experiences with Numerical Modelling of Overflows

    DEFF Research Database (Denmark)

    Larsen, Torben; Nielsen, L.; Jensen, B.

    2007-01-01

    …across the edge of the overflow. To ensure critical flow across the edge, the upstream flow must be subcritical, whereas the downstream flow is either supercritical or a free jet. Experimentally, overflows are well studied. Based on laboratory experiments and Froude number scaling, numerous accurate...

  14. Satellite Contributions to the Quantitative Characterization of Biomass Burning for Climate Modeling

    Science.gov (United States)

    Ichoku, Charles; Kahn, Ralph; Chin, Mian

    2012-01-01

    Characterization of biomass burning from space has been the subject of an extensive body of literature published over the last few decades. Given the importance of this topic, we review how satellite observations contribute toward improving the representation of biomass burning quantitatively in climate and air-quality modeling and assessment. Satellite observations related to biomass burning may be classified into five broad categories: (i) active fire location and energy release, (ii) burned areas and burn severity, (iii) smoke plume physical disposition, (iv) aerosol distribution and particle properties, and (v) trace gas concentrations. Each of these categories involves multiple parameters used in characterizing specific aspects of the biomass-burning phenomenon. Some of the parameters are merely qualitative, whereas others are quantitative, although all are essential for improving the scientific understanding of the overall distribution (both spatial and temporal) and impacts of biomass burning. Some of the qualitative satellite datasets, such as fire locations, aerosol index, and gas estimates, have fairly long-term records. They date back as far as the 1970s, following the launches of the DMSP, Landsat, NOAA, and Nimbus series of earth observation satellites. Although there were additional satellite launches in the 1980s and 1990s, space-based retrieval of quantitative biomass burning data products began in earnest following the launch of Terra in December 1999. Starting in 2000, fire radiative power, aerosol optical thickness and particle properties over land, smoke plume injection height and profile, and essential trace gas concentrations at improved resolutions became available. The 2000s also saw a large list of other new satellite launches, including Aqua, Aura, Envisat, Parasol, and CALIPSO, carrying a host of sophisticated instruments providing high quality measurements of parameters related to biomass burning and other phenomena. These improved data...

  15. Mathematical Modeling: Are Prior Experiences Important?

    Science.gov (United States)

    Czocher, Jennifer A.; Moss, Diana L.

    2017-01-01

    Why are math modeling problems the source of such frustration for students and teachers? The conceptual understanding that students have when engaging with a math modeling problem varies greatly. They need opportunities to make their own assumptions and design the mathematics to fit these assumptions (CCSSI 2010). Making these assumptions is part…

  16. Towards Generic Models of Player Experience

    DEFF Research Database (Denmark)

    Shaker, Noor; Shaker, Mohammad; Abou-Zleikha, Mohamed

    2015-01-01

    Context personalisation is a flourishing area of research with many applications. Context personalisation systems usually employ a user model to predict the appeal of the context to a particular user given a history of interactions. Most of the models used are context...

  17. Qualitative and quantitative examination of the performance of regional air quality models representing different modeling approaches

    International Nuclear Information System (INIS)

    Bhumralkar, C.M.; Ludwig, F.L.; Shannon, J.D.; McNaughton, D.

    1985-04-01

    The calculations of three different air quality models were compared with the best available observations. The comparisons were made without calibrating the models to improve agreement with the observations. Model performance was poor for short averaging times (less than 24 hours). Some of the poor performance can be traced to errors in the input meteorological fields, but errors exist at all levels. It should be noted that these models were not originally designed for treating short-term episodes. For short-term episodes, much of the variance in the data can arise from small spatial-scale features that tend to be averaged out over longer periods. These small spatial-scale features cannot be resolved with the coarse grids that are used for the meteorological and emissions inputs. Thus, it is not surprising that the models performed better for the longer averaging times. The models compared were RTM-II, ENAMAP-2 and ACID. (17 refs., 5 figs., 4 tabs.)

  18. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    Energy Technology Data Exchange (ETDEWEB)

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study comprised four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.

  19. Design of experiments an introduction based on linear models

    CERN Document Server

    Morris, Max D

    2011-01-01

    Contents: Introduction (Example: rainfall and grassland; Basic elements of an experiment; Experiments and experiment-like studies; Models and data analysis); Linear Statistical Models (Linear vector spaces; Basic linear model; The hat matrix, least-squares estimates, and design information matrix; The partitioned linear model; The reduced normal equations; Linear and quadratic forms; Estimation and information; Hypothesis testing and information; Blocking and information); Completely Randomized Designs (Introduction; Models; Matrix formulation; Influence of design on estimation; Influence of design on hypothesis testing); Randomized Com...
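
    As a minimal illustration of two objects named in the contents — the hat matrix and the design information matrix — the following sketch builds a tiny completely randomized design and verifies that the hat matrix reproduces the fitted values. It is an example in the spirit of the book, not taken from it; the data are invented.

```python
import numpy as np

# A tiny completely randomized design: 3 treatments, 2 replicates each,
# coded as a cell-means model.
X = np.kron(np.eye(3), np.ones((2, 1)))          # 6 runs x 3 treatment columns
y = np.array([4.1, 3.9, 6.2, 5.8, 7.0, 7.4])     # made-up responses

info = X.T @ X                                   # design information matrix
beta = np.linalg.solve(info, X.T @ y)            # least-squares treatment means
H = X @ np.linalg.solve(info, X.T)               # hat matrix H = X (X'X)^{-1} X'
print("treatment means:", beta)
print("fitted values:  ", H @ y)                 # equals X @ beta
print("trace(H) = model df:", np.trace(H))       # 3, the number of parameters
```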

  20. Quantitative evaluation and modeling of two-dimensional neovascular network complexity: the surface fractal dimension

    International Nuclear Information System (INIS)

    Grizzi, Fabio; Russo, Carlo; Colombo, Piergiuseppe; Franceschini, Barbara; Frezza, Eldo E; Cobos, Everardo; Chiriva-Internati, Maurizio

    2005-01-01

    Modeling the complex development and growth of tumor angiogenesis using mathematics and biological data is a burgeoning area of cancer research. Architectural complexity is the main feature of every anatomical system, including organs, tissues, cells and sub-cellular entities. The vascular system is a complex network whose geometrical characteristics cannot be properly defined using the principles of Euclidean geometry, which is only capable of interpreting regular and smooth objects that are almost impossible to find in Nature. However, fractal geometry is a more powerful means of quantifying the spatial complexity of real objects. This paper introduces the surface fractal dimension (Ds) as a numerical index of the two-dimensional (2-D) geometrical complexity of tumor vascular networks, and their behavior during computer-simulated changes in vessel density and distribution. We show that Ds significantly depends on the number of vessels and their pattern of distribution. This demonstrates that the quantitative evaluation of the 2-D geometrical complexity of tumor vascular systems can be useful not only to measure its complex architecture, but also to model its development and growth. Studying the fractal properties of neovascularity induces reflections upon the real significance of the complex form of branched anatomical structures, in an attempt to define more appropriate methods of describing them quantitatively. This knowledge can be used to predict the aggressiveness of malignant tumors and design compounds that can halt the process of angiogenesis and influence tumor growth.
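
    A generic box-counting estimate conveys the idea behind a fractal dimension such as Ds: count occupied boxes at several scales and fit the log-log slope. The sketch below uses a synthetic binary mask and is only illustrative; the paper's estimator for vascular networks may differ in detail.

```python
import numpy as np

def box_count_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Box-counting dimension of a binary 2-D mask."""
    n = min(mask.shape)
    counts = []
    for s in sizes:
        m = n - n % s                                  # trim to a multiple of s
        blocks = mask[:m, :m].reshape(m // s, s, m // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # boxes touching the set
    # slope of log N(s) against log(1/s) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Synthetic stand-in for a vessel mask: sparse random pixels on a 256x256 grid.
rng = np.random.default_rng(1)
mask = rng.random((256, 256)) < 0.02
print("estimated box-counting dimension:", box_count_dimension(mask))
```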

  1. A Cytomorphic Chip for Quantitative Modeling of Fundamental Bio-Molecular Circuits.

    Science.gov (United States)

    2015-08-01

    We describe a 0.35 μm BiCMOS silicon chip that quantitatively models fundamental molecular circuits via efficient log-domain cytomorphic transistor equivalents. These circuits include those for biochemical binding with automatic representation of non-modular and loading behavior, e.g., in cascade and fan-out topologies; for representing variable Hill-coefficient operation and cooperative binding; for representing inducer, transcription-factor, and DNA binding; for probabilistic gene transcription with analogic representations of log-linear and saturating operation; for gain, degradation, and dynamics of mRNA and protein variables in transcription and translation; and, for faithfully representing biological noise via tunable stochastic transistor circuits. The use of on-chip DACs and ADCs enables multiple chips to interact via incoming and outgoing molecular digital data packets and thus create scalable biochemical reaction networks. The use of off-chip digital processors and on-chip digital memory enables programmable connectivity and parameter storage. We show that published static and dynamic MATLAB models of synthetic biological circuits including repressilators, feed-forward loops, and feedback oscillators are in excellent quantitative agreement with those from transistor circuits on the chip. Computationally intensive stochastic Gillespie simulations of molecular production are also rapidly reproduced by the chip and can be reliably tuned over the range of signal-to-noise ratios observed in biological cells.
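
    The Gillespie simulations mentioned above can be illustrated with the textbook birth-death model of mRNA production and degradation; the rate constants below are illustrative, not values from the chip.

```python
import numpy as np

def gillespie_mrna(k_tx=2.0, k_deg=0.1, t_end=100.0, seed=0):
    """Exact stochastic simulation of mRNA birth (k_tx) and death (k_deg * m)."""
    rng = np.random.default_rng(seed)
    t, m = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        rates = np.array([k_tx, k_deg * m])     # production, degradation
        total = rates.sum()
        t += rng.exponential(1.0 / total)       # waiting time to next event
        if rng.random() < rates[0] / total:
            m += 1                              # transcription event
        else:
            m -= 1                              # degradation event
        times.append(t); counts.append(m)
    return np.array(times), np.array(counts)

t, m = gillespie_mrna()
print("mean mRNA copy number:", m.mean(), "(steady state ~", 2.0 / 0.1, ")")
```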

  2. Quantitative Outline-based Shape Analysis and Classification of Planetary Craterforms using Supervised Learning Models

    Science.gov (United States)

    Slezak, Thomas Joseph; Radebaugh, Jani; Christiansen, Eric

    2017-10-01

    The shapes of craterforms on planetary surfaces provide rich information about their origins and evolution. While morphology offers rich visual clues to geologic processes and properties, this information is harder to communicate quantitatively. This study examines the morphology of craterforms using the quantitative outline-based shape methods of geometric morphometrics, commonly used in biology and paleontology. We examine and compare landforms on planetary surfaces using shape, a property of morphology that is invariant to translation, rotation, and size. We quantify the shapes of paterae on Io, martian calderas, terrestrial basaltic shield calderas, terrestrial ash-flow calderas, and lunar impact craters using elliptic Fourier analysis (EFA) and the Zahn and Roskies (Z-R) shape function, or tangent angle approach, to produce multivariate shape descriptors. These shape descriptors are subjected to multivariate statistical analysis including canonical variate analysis (CVA), a multiple-comparison variant of discriminant analysis, to investigate the link between craterform shape and classification. Paterae on Io are most similar in shape to terrestrial ash-flow calderas, and the shapes of terrestrial basaltic shield volcanoes are most similar to martian calderas. The shapes of lunar impact craters, including simple, transitional, and complex morphologies, are classified with a 100% rate of success in all models. Multiple CVA models effectively predict and classify different craterforms using shape-based identification and demonstrate significant potential for use in the analysis of planetary surfaces.
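
    The Zahn and Roskies tangent-angle function mentioned above can be sketched directly: accumulate the tangent direction along the outline and subtract the linear trend of a circle, so that a perfect circle maps to zero. This toy version assumes equally spaced outline points and is not the authors' implementation.

```python
import numpy as np

def zr_shape_function(x, y):
    """Tangent angle along a closed outline, minus the circle's linear trend."""
    phi = np.arctan2(np.diff(y, append=y[0]), np.diff(x, append=x[0]))
    phi = np.unwrap(phi)                                  # remove 2*pi jumps
    t = np.linspace(0.0, 2 * np.pi, len(phi), endpoint=False)
    return phi - phi[0] - t

theta = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
circle = zr_shape_function(np.cos(theta), np.sin(theta))
print("max |Z-R| for a circle (should be ~0):", np.abs(circle).max())
```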

  3. Modeling the Formation of Language: Embodied Experiments

    Science.gov (United States)

    Steels, Luc

    This chapter gives an overview of different experiments that have been performed to demonstrate how a symbolic communication system, including its underlying ontology, can arise in situated embodied interactions between autonomous agents. It gives some details of the Grounded Naming Game, which focuses on the formation of a system of proper names; the Spatial Language Game, which focuses on the formation of a lexicon for expressing spatial relations as well as perspective reversal; and an Event Description Game, which concerns the expression of the roles of participants in events through an emergent case grammar. For each experiment, details are provided on how the symbolic system emerges, how the interaction is grounded in the world through the embodiment of the agent and its sensori-motor processing, and how concepts are formed in tight interaction with the emerging language.

  4. Quantitative Circulatory Physiology: an integrative mathematical model of human physiology for medical education.

    Science.gov (United States)

    Abram, Sean R; Hodnett, Benjamin L; Summers, Richard L; Coleman, Thomas G; Hester, Robert L

    2007-06-01

    We have developed Quantitative Circulatory Physiology (QCP), a mathematical model of integrative human physiology containing over 4,000 variables of biological interactions. This model provides a teaching environment that mimics clinical problems encountered in the practice of medicine. The model structure is based on documented physiological responses within peer-reviewed literature and serves as a dynamic compendium of physiological knowledge. The model is solved using a desktop, Windows-based program, allowing students to calculate time-dependent solutions and interactively alter over 750 parameters that modify physiological function. The model can be used to understand proposed mechanisms of physiological function and the interactions among physiological variables that may not be otherwise intuitively evident. In addition to open-ended or unstructured simulations, we have developed 30 physiological simulations, including heart failure, anemia, diabetes, and hemorrhage. Additional simulations include 29 patient cases in which students are challenged to diagnose the pathophysiology based on their understanding of integrative physiology. In summary, QCP allows students to examine, integrate, and understand a host of physiological factors without causing harm to patients. This model is available as a free download for Windows computers at http://physiology.umc.edu/themodelingworkshop.

  5. Modelling and Quantitative Analysis of LTRACK–A Novel Mobility Management Algorithm

    Directory of Open Access Journals (Sweden)

    Benedek Kovács

    2006-01-01

    Full Text Available This paper discusses the improvements and parameter optimization issues of LTRACK, a recently proposed mobility management algorithm. Mathematical modelling of the algorithm and of the behavior of the Mobile Node (MN) is used to optimize the parameters of LTRACK. A numerical method is given to determine the optimal values of the parameters. Markov chains are used to model both the base algorithm and the so-called loop removal effect. An extended qualitative and quantitative analysis is carried out to compare LTRACK to existing handover mechanisms such as MIP, Hierarchical Mobile IP (HMIP), Dynamic Hierarchical Mobility Management Strategy (DHMIP), Telecommunication Enhanced Mobile IP (TeleMIP), Cellular IP (CIP) and HAWAII. LTRACK is sensitive to network topology and MN behavior, so MN movement modelling is also introduced and discussed with different topologies. The techniques presented here can be used to model not only the LTRACK algorithm but other algorithms as well. Extensive discussion and calculations support the mathematical model and show that it is adequate in many cases. The model is valid at various network levels, is vertically scalable across the ISO-OSI layers, and also scales well with the number of network elements.
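
    One ingredient named above — a Markov chain over the Mobile Node's states — can be illustrated generically: compute the stationary distribution of a transition matrix and accumulate a long-run expected signalling cost, as a handover analysis might. The transition matrix and costs below are invented; this is a sketch of the technique, not LTRACK's actual model.

```python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],      # row-stochastic transition matrix
              [0.3, 0.5, 0.2],      # over three MN locations/states
              [0.2, 0.3, 0.5]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()                      # normalize (fixes the eigenvector's sign too)

cost_per_state = np.array([1.0, 2.5, 4.0])     # e.g. update-signalling costs
print("stationary distribution:", np.round(pi, 3))
print("long-run expected cost per move:", pi @ cost_per_state)
```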

  6. A pulsatile flow model for in vitro quantitative evaluation of prosthetic valve regurgitation

    Directory of Open Access Journals (Sweden)

    S. Giuliatti

    2000-03-01

    Full Text Available A pulsatile pressure-flow model was developed for in vitro quantitative color Doppler flow mapping studies of valvular regurgitation. The flow through the system was generated by a piston which was driven by stepper motors controlled by a computer. The piston was connected to acrylic chambers designed to simulate "ventricular" and "atrial" heart chambers. Inside the "ventricular" chamber, a prosthetic heart valve was placed at the inflow connection with the "atrial" chamber while another prosthetic valve was positioned at the outflow connection with flexible tubes, elastic balloons and a reservoir arranged to mimic the peripheral circulation. The flow model was filled with a 0.25% corn starch/water suspension to improve Doppler imaging. A continuous flow pump transferred the liquid from the peripheral reservoir to another one connected to the "atrial" chamber. The dimensions of the flow model were designed to permit adequate imaging by Doppler echocardiography. Acoustic windows allowed placement of transducers distal and perpendicular to the valves, so that the ultrasound beam could be positioned parallel to the valvular flow. Strain-gauge and electromagnetic transducers were used for measurements of pressure and flow in different segments of the system. The flow model was also designed to fit different sizes and types of prosthetic valves. This pulsatile flow model was able to generate pressure and flow in the physiological human range, with independent adjustment of pulse duration and rate as well as of stroke volume. This model mimics flow profiles observed in patients with regurgitant prosthetic valves.

  7. Use of a plant level logic model for quantitative assessment of systems interactions

    International Nuclear Information System (INIS)

    Chu, B.B.; Rees, D.C.; Kripps, L.P.; Hunt, R.N.; Bradley, M.

    1985-01-01

    The Electric Power Research Institute (EPRI) has sponsored a research program to investigate methods for identifying systems interactions (SIs) and for evaluating their importance. Phase 1 of the EPRI research project focused on the evaluation of methods for identification of SIs. Major results of the Phase 1 activities are the documentation of four different methodologies for identification of potential SIs and the development of guidelines for performing an effective plant walkdown in support of an SI analysis. Phase II of the project, currently being performed, is utilizing a plant-level logic model of a pressurized water reactor (PWR) to determine the quantitative importance of identified SIs. In Phase II, previously reported events involving interactions between systems were screened and selected on the basis of their relevance to the Baltimore Gas and Electric (BG&E) Calvert Cliffs Nuclear Power Plant design and their perceived potential safety significance. Selected events were then incorporated into the BG&E plant-level GO logic model. The model is being exercised to calculate the relative importance of these events. Five previously identified event scenarios, extracted from licensee event reports (LERs), are being evaluated during the course of the study. A key feature of the approach being used in Phase II is the use of a logic model to effectively evaluate the impact of events at the system level and the plant level for the mitigation of transients. Preliminary study results indicate that the developed methodology can be a viable and effective means for determining the quantitative significance of SIs.

  8. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    Science.gov (United States)

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits, adjusting for covariates, in a unified analysis. Three types of approximate F-distribution tests based on the Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than univariate F-tests and the optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power compared to individual tests of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than univariate F-tests and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models, which in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more associations than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.
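
    The Pillai-Bartlett trace underlying one of the approximate F-tests can be sketched as follows: fit a multivariate linear model with and without the genetic term, form the error (E) and hypothesis (H) sums-of-squares-and-cross-products matrices, and compute trace(H (H + E)⁻¹). The data below are simulated, and the sketch omits the functional (basis-expansion) machinery of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
g = rng.binomial(2, 0.3, n).astype(float)        # genotype dosage at one variant
X = np.column_stack([np.ones(n), g])
B_true = np.array([[0.0, 0.0, 0.0],
                   [0.4, 0.3, 0.0]])
Y = X @ B_true + rng.normal(size=(n, 3))         # three quantitative traits

B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)    # full model
E = (Y - X @ B_hat).T @ (Y - X @ B_hat)          # error SSCP

B0, *_ = np.linalg.lstsq(X[:, :1], Y, rcond=None)   # null model (intercept only)
E0 = (Y - X[:, :1] @ B0).T @ (Y - X[:, :1] @ B0)
H = E0 - E                                       # hypothesis SSCP for the variant

pillai = np.trace(H @ np.linalg.inv(H + E))
print("Pillai-Bartlett trace:", round(pillai, 4))
```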

  9. A quantitative quasispecies theory-based model of virus escape mutation under immune selection.

    Science.gov (United States)

    Woo, Hyung-June; Reifman, Jaques

    2012-08-07

    Viral infections involve a complex interplay of the immune response and escape mutation of the virus quasispecies inside a single host. Although fundamental aspects of such a balance of mutation and selection pressure have been established by the quasispecies theory decades ago, its implications have largely remained qualitative. Here, we present a quantitative approach to model the virus evolution under cytotoxic T-lymphocyte immune response. The virus quasispecies dynamics are explicitly represented by mutations in the combined sequence space of a set of epitopes within the viral genome. We stochastically simulated the growth of a viral population originating from a single wild-type founder virus and its recognition and clearance by the immune response, as well as the expansion of its genetic diversity. Applied to the immune escape of a simian immunodeficiency virus epitope, model predictions were quantitatively comparable to the experimental data. Within the model parameter space, we found two qualitatively different regimes of infectious disease pathogenesis, each representing alternative fates of the immune response: It can clear the infection in finite time or eventually be overwhelmed by viral growth and escape mutation. The latter regime exhibits the characteristic disease progression pattern of human immunodeficiency virus, while the former is bounded by maximum mutation rates that can be suppressed by the immune response. Our results demonstrate that, by explicitly representing epitope mutations and thus providing a genotype-phenotype map, the quasispecies theory can form the basis of a detailed sequence-specific model of real-world viral pathogens evolving under immune selection.
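
    A toy stochastic simulation conveys the two regimes described above — clearance versus escape. The sketch below tracks only a wild-type population and a single escape variant, with invented parameters; the paper's model works in the full epitope sequence space and is fitted to SIV data.

```python
import numpy as np

rng = np.random.default_rng(7)
wt, esc = 1, 0                          # single wild-type founder virus
r, mu, kill, cap = 2.0, 0.01, 0.8, 10**6

for gen in range(40):
    wt_births = rng.poisson(r * wt)
    esc_births = rng.poisson(r * esc)
    mutants = rng.binomial(wt_births, mu)       # offspring escaping recognition
    wt, esc = wt_births - mutants, esc_births + mutants
    if gen > 8:                                 # matured CTL response
        wt = rng.binomial(wt, 1.0 - kill)       # clears only the recognized form
    wt, esc = min(wt, cap), min(esc, cap)       # crude carrying capacity

print("wild-type:", wt, "escape variant:", esc,
      "->", "escape" if esc > 0 else "cleared")
```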

  10. Evaluating the Impacts of Spatial Uncertainties in Quantitative Precipitation Estimation (QPE) Products on Flood Modelling

    Science.gov (United States)

    Gao, Z.; Wu, H.; Li, J.; Hong, Y.; Huang, J.

    2017-12-01

    Precipitation is often the major uncertainty source in hydrologic modelling, e.g., for flood simulation. Quantitative precipitation estimation (QPE) products, when used as input for hydrologic modelling, can cause significant differences in model performance because of the large variations in their estimates of precipitation intensity, duration, and spatial distribution. Objectively evaluating QPE products and deriving the best estimate of precipitation at river-basin scale remain a bottleneck for the hydrometeorological community, even though they are needed by many applications, including flood simulation systems such as the Global Flood Monitoring System using the Dominant river tracing-Routing Integrated with VIC Environment (DRIVE) model (Wu et al., 2014). Recently we developed a Multiple-product-driven hydrological Modeling Framework (MMF) for objective evaluation of QPE products using the DRIVE model (Wu et al., 2017). In this study, based on the MMF, we (1) compare location, spatial characteristics, and geometric patterns of precipitation among QPE products at various temporal scales by adopting an object-oriented method; (2) demonstrate their effects on the simulation of flood magnitude and timing through the DRIVE model; and (3) further investigate how different precipitation spatial patterns evolve and produce differences in streamflow and flood peaks (magnitude and timing), through a linear routing scheme employed to decompose the contributions to the flood peak during rain-flood events. This study shows that there can be significant differences in the spatial patterns of accumulated precipitation at various temporal scales (from daily to hourly) among QPE products, which cause significant differences in flood simulation, particularly in peak-timing prediction. Therefore, the evaluation of the spatial pattern of precipitation should be considered an important part of the framework for objective evaluation of QPE and the derivation of the best...

  11. Neutron transport model for standard calculation experiment

    International Nuclear Information System (INIS)

    Lukhminskij, B.E.; Lyutostanskij, Yu.S.; Lyashchuk, V.I.; Panov, I.V.

    1989-01-01

    The MAMONT code implements multigroup Monte Carlo algorithms for neutron transport calculations in media of complex composition with a predetermined geometry. The code's accuracy was evaluated by comparison with benchmark experiments. Neutron leakage spectra were calculated in spherically symmetric geometry for iron and polyethylene. Use of the MAMONT code for metrological support of geophysical tasks is proposed. The code is oriented towards calculations of neutron transport and secondary-nuclide accumulation in blankets and geophysical media. 7 refs.; 2 figs

  12. Modeling of modification experiments involving neutral-gas release

    International Nuclear Information System (INIS)

    Bernhardt, P.A.

    1983-01-01

    Many experiments involve the injection of neutral gases into the upper atmosphere. Examples are critical velocity experiments, MHD wave generation, ionospheric hole production, plasma striation formation, and ion tracing. Many of these experiments are discussed in other sessions of the Active Experiments Conference. This paper limits its discussion to: (1) the modeling of the neutral gas dynamics after injection, (2) the subsequent formation of ionospheric holes, and (3) the use of such holes as experimental tools.

  13. Cohesive mixed mode fracture modelling and experiments

    DEFF Research Database (Denmark)

    Walter, Rasmus; Olesen, John Forbes

    2008-01-01

    A nonlinear mixed mode model originally developed by Wernersson [Wernersson H. Fracture characterization of wood adhesive joints. Report TVSM-1006, Lund University, Division of Structural Mechanics; 1994], based on nonlinear fracture mechanics, is discussed and applied to model interfacial cracking. An experimental set-up for the assessment of mixed mode interfacial fracture properties is presented, applying a bi-material specimen, half steel and half concrete, with an inclined interface and under uniaxial load. Loading the inclined steel–concrete interface under different angles produces load–crack opening curves, which may be interpreted using the nonlinear mixed mode model. The interpretation of test results is carried out in a two-step inverse analysis applying numerical optimization tools. It is demonstrated how to perform the inverse analysis, which couples the assumed individual experimental load...

  14. Silicon Carbide Derived Carbons: Experiments and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kertesz, Miklos [Georgetown University, Washington DC 20057

    2011-02-28

    The main results of the computational modeling were: 1. Development of a new genealogical algorithm to generate vacancy clusters in diamond, starting from monovacancies and combined with energy criteria based on TBDFT energetics. The method revealed that for smaller vacancy clusters the energetically optimal shapes are compact, but larger clusters tend to show graphitized regions; clusters with as few as 12 vacancies already show signatures of this graphitization. The modeling gives a firm basis for the slit-pore modeling of porous carbon materials and explains some of their properties. 2. We discovered small vacancy clusters and identified physical characteristics that can be used to identify them spectroscopically. 3. We found low-barrier pathways for vacancy migration in diamond-like materials by obtaining, for the first time, optimized reaction pathways.

  15. Need for collection of quantitative distribution data for dosimetry and metabolic modeling

    International Nuclear Information System (INIS)

    Lathrop, K.A.

    1976-01-01

    Problems in radiation dose distribution studies in humans are discussed. Data show that the effective half-times for ⁷Be and ⁷⁵Se in the mouse, rat, monkey, dog, and human have no correlation with weight, body surface, or any other readily apparent factor that could be used to equate nonhuman and human data. Another problem sometimes encountered in attempting to extrapolate animal data to humans involves equivalent doses of the radiopharmaceutical. A usual human dose for a radiopharmaceutical is 1 ml, or 0.017 mg/kg. The same solution injected into a mouse in a convenient volume of 0.1 ml results in a dose of 4 ml/kg, or 240 times that received by the human. The effect on whole-body retention produced by a dose difference of similar magnitude for selenium in the rat shows that retention is at least twice as great with the smaller amount. With the development of methods for collecting data throughout the body that represent the fractional distribution of radioactivity versus time, not only can more realistic dose estimates be made, but tools will also be provided for studying physiological and biochemical interrelationships in the intact subject, from which compartmental models of diagnostic significance may be built. The unique requirement for quantitative biologic data needed for calculation of radiation absorbed doses is the same as the unique scientific contribution that nuclear medicine can make, which is the quantitative in vivo study of physiologic and biochemical processes. The technique involved is not the same as quantitation of a radionuclide image, but is a step beyond.

  16. Tree Root System Characterization and Volume Estimation by Terrestrial Laser Scanning and Quantitative Structure Modeling

    Directory of Open Access Journals (Sweden)

    Aaron Smith

    2014-12-01

    Full Text Available The accurate characterization of three-dimensional (3D) root architecture, volume, and biomass is important for a wide variety of applications in forest ecology and for better understanding tree and soil stability. Technological advancements have led to increasingly digitized and automated procedures, which have been used to describe the 3D structure of root systems more accurately and quickly. Terrestrial laser scanners (TLS) have successfully been used to describe aboveground structures of individual trees and stand structure, but have only recently been applied to the 3D characterization of whole root systems. In this study, 13 recently harvested Norway spruce root systems were mechanically pulled from the soil, cleaned, and their volumes were measured by displacement. The root systems were suspended, scanned with TLS from three different angles, and the root surfaces from the co-registered point clouds were modeled with the 3D Quantitative Structure Model to determine root architecture and volume. The modeling procedure facilitated the rapid derivation of root volume, diameters, break-point diameters, linear root length, cumulative percentages, and root fraction counts. The modeled root systems underestimated root system volume by 4.4%. The modeling procedure is widely applicable and easily adapted to derive other important topological and volumetric root variables.

  17. Quantitative Agent Based Model of Opinion Dynamics: Polish Elections of 2015

    Science.gov (United States)

    Sobkowicz, Pawel

    2016-01-01

    We present results of an abstract, agent based model of opinion dynamics simulations based on the emotion/information/opinion (E/I/O) approach, applied to a strongly polarized society, corresponding to the Polish political scene between 2005 and 2015. Under certain conditions the model leads to metastable coexistence of two subcommunities of comparable size (supporting the corresponding opinions)—which corresponds to the bipartisan split found in Poland. Spurred by the recent breakdown of this political duopoly, which occurred in 2015, we present a model extension that describes both the long term coexistence of the two opposing opinions and a rapid, transitory change due to the appearance of a third party alternative. We provide quantitative comparison of the model with the results of polls and elections in Poland, testing the assumptions related to the modeled processes and the parameters used in the simulations. It is shown that, when the propaganda messages of the two incumbent parties differ in emotional tone, the political status quo may be unstable. The asymmetry of the emotions within the support bases of the two parties allows one of them to be ‘invaded’ by a newcomer third party very quickly, while the second remains immune to such invasion. PMID:27171226

  18. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    Directory of Open Access Journals (Sweden)

    Alexander Mitsos

    Full Text Available Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cells response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms.

  19. Antiproliferative Pt(IV) complexes: synthesis, biological activity, and quantitative structure-activity relationship modeling.

    Science.gov (United States)

    Gramatica, Paola; Papa, Ester; Luini, Mara; Monti, Elena; Gariboldi, Marzia B; Ravera, Mauro; Gabano, Elisabetta; Gaviglio, Luca; Osella, Domenico

    2010-09-01

    Several Pt(IV) complexes of the general formula [Pt(L)₂(L')₂(L'')₂] [axial ligands L are Cl⁻, RCOO⁻, or OH⁻; equatorial ligands L' are two am(m)ine or one diamine; and equatorial ligands L'' are Cl⁻ or glycolato] were rationally designed and synthesized in the attempt to develop a predictive quantitative structure-activity relationship (QSAR) model. Numerous theoretical molecular descriptors were used alongside physicochemical data (i.e., reduction peak potential, Ep, and partition coefficient, log Po/w) to obtain a validated QSAR between in vitro cytotoxicity (half maximal inhibitory concentrations, IC50, on A2780 ovarian and HCT116 colon carcinoma cell lines) and some features of Pt(IV) complexes. In the resulting best models, a lipophilic descriptor (log Po/w or the number of secondary sp3 carbon atoms) plus an electronic descriptor (Ep, the number of oxygen atoms, or the topological polar surface area expressed as the N,O polar contribution) is necessary for modeling, supporting the general finding that the biological behavior of Pt(IV) complexes can be rationalized on the basis of their cellular uptake, the Pt(IV)→Pt(II) reduction, and the structure of the corresponding Pt(II) metabolites. Novel compounds were synthesized on the basis of their predicted cytotoxicity in the preliminary QSAR model, and were experimentally tested. A final QSAR model, based solely on theoretical molecular descriptors to ensure its general applicability, is proposed.

  20. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    Science.gov (United States)

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cells response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms.
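
    The reformulation described above can be illustrated on a toy two-node pathway: represent each link by a normalized Hill-type transfer function and fit its parameters as an ordinary nonlinear program. The code below is a hedged sketch with made-up data and parameter names, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def hill(x, k, n):
    """Normalized Hill-type transfer function on [0, 1]."""
    return x**n / (k**n + x**n)

stimulus = np.array([0.0, 0.1, 0.25, 0.5, 1.0])
measured = np.array([0.0, 0.18, 0.45, 0.72, 0.90])   # made-up phosphoprotein readout

def loss(params):
    k1, n1, k2, n2 = np.abs(params) + 1e-9           # keep parameters positive
    a = hill(stimulus, k1, n1)                       # stimulus -> node A
    b = hill(a, k2, n2)                              # node A -> measured node B
    return np.sum((b - measured) ** 2)

fit = minimize(loss, x0=[0.5, 2.0, 0.5, 2.0], method="Nelder-Mead")
print("fitted (k1, n1, k2, n2):", np.abs(fit.x), " SSE:", fit.fun)
```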

  1. Daphnia and fish toxicity of (benzo)triazoles: validated QSAR models, and interspecies quantitative activity-activity modelling.

    Science.gov (United States)

    Cassani, Stefano; Kovarich, Simona; Papa, Ester; Roy, Partha Pratim; van der Wal, Leon; Gramatica, Paola

    2013-08-15

    Due to their chemical properties, synthetic triazoles and benzo-triazoles ((B)TAZs) are mainly distributed to the water compartments in the environment, and because of their wide use the potential effects on aquatic organisms are a cause of concern. Non-testing approaches like those based on quantitative structure-activity relationships (QSARs) are valuable tools to maximize the information contained in existing experimental data and predict missing information while minimizing animal testing. In the present study, externally validated QSAR models for the prediction of acute (B)TAZ toxicity in Daphnia magna and Oncorhynchus mykiss have been developed according to the principles for the validation of QSARs and their acceptability for regulatory purposes proposed by the Organisation for Economic Co-operation and Development (OECD). These models are based on theoretical molecular descriptors, and are statistically robust, externally predictive and characterized by a verifiable structural applicability domain. They have been applied to predict acute toxicity for over 300 (B)TAZs without experimental data, many of which are in the pre-registration list of the REACH regulation. Additionally, a model based on quantitative activity-activity relationships (QAAR) has been developed, which allows for interspecies extrapolation from daphnids to fish. The importance of QSAR/QAAR, especially when dealing with specific chemical classes like (B)TAZs, for screening and prioritization of pollutants under REACH, has been highlighted. Copyright © 2013 Elsevier B.V. All rights reserved.
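
    The interspecies QAAR step reduces, in its simplest form, to a regression of one species' log-toxicity on the other's, used to fill data gaps. A minimal sketch with invented values:

```python
import numpy as np

# Invented paired toxicity data, both species as log(1/LC50-type values).
log_tox_daphnia = np.array([3.1, 3.8, 4.2, 4.9, 5.5, 6.0])
log_tox_fish    = np.array([2.9, 3.6, 4.3, 4.7, 5.6, 5.9])

slope, intercept = np.polyfit(log_tox_daphnia, log_tox_fish, 1)
predicted = slope * log_tox_daphnia + intercept
r2 = 1 - np.sum((log_tox_fish - predicted) ** 2) / np.sum(
    (log_tox_fish - log_tox_fish.mean()) ** 2)
print(f"fish ~ {slope:.2f} * daphnia + {intercept:.2f}, R^2 = {r2:.3f}")

# Extrapolate fish toxicity for a new compound with only daphnid data:
print("predicted fish log(1/LC50):", slope * 4.5 + intercept)
```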

  2. Benchmarking the Sandbox: Quantitative Comparisons of Numerical and Analogue Models of Brittle Wedge Dynamics (Invited)

    Science.gov (United States)

    Buiter, S.; Schreurs, G.; Geomod2008 Team

    2010-12-01

    When numerical and analogue models are used to investigate the evolution of deformation processes in crust and lithosphere, they face specific challenges related to, among others, large contrasts in material properties, the heterogeneous character of continental lithosphere, the presence of a free surface, the occurrence of large deformations including viscous flow and offset on shear zones, and the observation that several deformation mechanisms may be active simultaneously. These pose specific demands on numerical software and laboratory models. By combining the two techniques, we can utilize the strengths of each individual method and test the model-independence of our results. We can perhaps even consider our findings to be more robust if we find similar results irrespective of the modeling method that was used. To assess the role of modeling method and to quantify the variability among models with identical setups, we have performed a direct comparison of the results of 11 numerical codes and 15 analogue experiments. We present three experiments that describe shortening of brittle wedges and that resemble setups frequently used by analogue modelers in particular. Our first experiment translates a non-accreting wedge with a stable surface slope. In agreement with critical wedge theory, all models maintain their surface slope and do not show internal deformation. This experiment serves as a reference that allows for testing against analytical solutions for taper angle, root-mean-square velocity and gravitational rate of work. The next two experiments investigate an unstable wedge, which deforms by inward translation of a mobile wall. The models accommodate shortening by formation of forward and backward shear zones. We compare surface slope, rate of dissipation of energy, root-mean-square velocity, and the location, dip angle and spacing of shear zones. All models show similar cross-sectional evolutions that demonstrate reproducibility to first order. However...

  3. Evaporation experiments and modelling for glass melts

    NARCIS (Netherlands)

    Limpt, J.A.C. van; Beerkens, R.G.C.

    2007-01-01

    A laboratory test facility has been developed to measure evaporation rates of different volatile components from commercial and model glass compositions. In the set-up the furnace atmosphere, temperature level, gas velocity and batch composition are controlled. Evaporation rates have been measured

  4. [Experience of implementing a primary health care model].

    Science.gov (United States)

    Ruiz-Rodríguez, Myriam; Acosta-Ramírez, Naydú; Rodríguez Villamizar, Laura A; Uribe, Luz M; León-Franco, Martha

    2011-12-01

    The aim was to identify barriers and driving factors in setting up a primary health care (PHC) model in the Santander department during the last decade. This was a qualitative study, focusing on pluralism and triangulating sources and actors, with a critical analysis of limits and value judgments (boundary critique). Philosophical/conceptual and operational management problems emerged from the categories related to the appropriation of PHC attributes. The theoretical model, as designed, was in fact not implemented in practice. The PHC strategy is selective and state-led (at department level), focusing on rural interventions delivered by nursing assistants and oriented towards fulfilling public health goals at the first level of care. Difficulties at national, state and local levels were identified which could be instructive in other national and international contexts. Structural healthcare-system market barriers were the most important constraints, since the model operates through the contractual logic of institutional segmentation and operational fragmentation. Skills-focused human resource management, suitable local health management and systematic evaluation studies are thus suggested as essential operational elements for addressing the aforementioned problems and encouraging an integral PHC model in Colombia.

  5. Model of an Evaporating Drop Experiment

    Science.gov (United States)

    Rodriguez, Nicolas

    2017-11-01

    A computational model of an experimental procedure for measuring vapor distributions surrounding sessile drops is developed to evaluate the uncertainty in the experimental results. Methanol, which is expected to have predominantly diffusive vapor transport, is chosen as a validation test for our model. The experimental process first uses a Fourier transform infrared spectrometer to measure the absorbance along lines passing through the vapor cloud. Since the measurement contains some errors, our model allows random noise to be added to the computed integrated absorbance to mimic this. The resulting data are then interpolated before passing through a computed tomography routine to generate the vapor distribution. Next, the gradients of the vapor distribution are computed along a given control volume surrounding the drop so that the diffusive flux can be evaluated as the net rate of diffusion out of the control volume. Our model of methanol evaporation shows that the accumulated errors of the whole experimental procedure affect the diffusive fluxes at different control volumes, and that the results are sensitive to how the noisy integrated-absorbance data are interpolated. This indicates the importance of investigating a variety of data-fitting methods in order to choose the one that best represents the data. Trinity University Mach Fellowship.
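
    The final step described above — evaluating the diffusive flux as the net rate of diffusion out of a control volume — can be sketched with Fick's law on a gridded concentration field. Everything below (grid, diffusivity, synthetic vapor cloud) is invented for illustration; it is not the authors' code.

```python
import numpy as np

D = 1.5e-5                                  # m^2/s, rough methanol-in-air diffusivity
x = y = np.linspace(-0.05, 0.05, 201)       # 10 cm domain, 0.5 mm grid
X, Y = np.meshgrid(x, y, indexing="ij")
C = 1e-3 * np.exp(-(X**2 + Y**2) / (2 * 0.01**2))   # synthetic vapor cloud, kg/m^3

dCdx, dCdy = np.gradient(C, x, y)           # central-difference gradients
i0, i1 = np.searchsorted(x, -0.02), np.searchsorted(x, 0.02)

# Net outflow through the box |x|,|y| <= 2 cm, summing Fick's law
# j = -D grad(C) dotted with the outward normal over the four faces.
h = x[1] - x[0]
net_out = D * h * (
    -dCdx[i1, i0:i1].sum()    # right face, outward normal +x
    + dCdx[i0, i0:i1].sum()   # left face, outward normal -x
    - dCdy[i0:i1, i1].sum()   # top face, outward normal +y
    + dCdy[i0:i1, i0].sum()   # bottom face, outward normal -y
)
print("net diffusive outflow (kg/s per metre of depth):", net_out)
```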

  6. Quantitative utilization of prior biological knowledge in the Bayesian network modeling of gene expression data

    Directory of Open Access Journals (Sweden)

    Gao Shouguo

    2011-08-01

    Full Text Available Background: Bayesian Networks (BN) are a powerful approach to reconstructing genetic regulatory networks from gene expression data. However, expression data by itself suffers from high noise and lack of power. Incorporating prior biological knowledge can improve the performance. As each type of prior knowledge on its own may be incomplete or limited by quality issues, integrating multiple sources of prior knowledge to utilize their consensus is desirable. Results: We introduce a new method to incorporate the quantitative information from multiple sources of prior knowledge. It first uses a Naïve Bayesian classifier to assess the likelihood of functional linkage between gene pairs based on prior knowledge; in this study we included co-citation in PubMed and semantic similarity in Gene Ontology annotation. A candidate network-edge reservoir is then created in which the copy number of each edge is proportional to the estimated likelihood of linkage between the two corresponding genes. In the network simulation the Markov Chain Monte Carlo sampling algorithm is adopted, and it samples from this reservoir at each iteration to generate new candidate networks. We evaluated the new algorithm using both simulated and real gene expression data, including data from a yeast cell cycle and a mouse pancreas development/growth study. Incorporating prior knowledge led to a roughly two-fold increase in the number of known transcription regulations recovered, without significant change in the false positive rate. In contrast, without the prior knowledge BN modeling is not always better than random selection, demonstrating the necessity in network modeling of supplementing the gene expression data with additional information. Conclusions: Our new development provides a statistical means to utilize the quantitative information in prior biological knowledge in the BN modeling of gene expression data, which significantly improves performance.
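
    The edge-reservoir idea is easy to sketch: each candidate edge is replicated in proportion to its prior likelihood, so uniform draws from the reservoir propose well-supported edges more often. The likelihood values and gene names below are invented for illustration.

```python
import random

prior_likelihood = {            # e.g. from a naive Bayes over co-citation + GO
    ("geneA", "geneB"): 0.80,
    ("geneA", "geneC"): 0.30,
    ("geneB", "geneD"): 0.55,
    ("geneC", "geneD"): 0.10,
}

reservoir = []
for edge, p in prior_likelihood.items():
    # copy number proportional to the estimated likelihood of linkage
    reservoir.extend([edge] * max(1, round(10 * p)))

random.seed(0)
proposal = random.choice(reservoir)   # edge proposed for the next candidate BN
print("proposed edge:", proposal)
```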

  7. Quantitative resonant soft x-ray reflectivity of ultrathin anisotropic organic layers: Simulation and experiment of PTCDA on Au

    Energy Technology Data Exchange (ETDEWEB)

    Capelli, R.; Koshmak, K.; Giglia, A.; Mukherjee, S.; Nannarone, S. [IOM-CNR, s.s. 14, Km. 163.5 in AREA Science Park, Basovizza, 34149 Trieste (Italy); Mahne, N. [Elettra, s.s. 14, km 163.5 in AREA Science Park, Basovizza, 34149 Trieste (Italy); Doyle, B. P. [Department of Physics, University of Johannesburg, P.O. Box 524, Auckland Park 2006 (South Africa); Pasquali, L., E-mail: luca.pasquali@unimore.it [IOM-CNR, s.s. 14, Km. 163.5 in AREA Science Park, Basovizza, 34149 Trieste (Italy); Department of Physics, University of Johannesburg, P.O. Box 524, Auckland Park 2006 (South Africa); Dipartimento di Ingegneria “Enzo Ferrari,” Università di Modena e Reggio Emilia, Via Vignolese 905, 41125 Modena (Italy)

    2016-07-14

    Resonant soft X-ray reflectivity at the carbon K edge, with linearly polarized light, was used to derive quantitative information on film morphology, molecular arrangement, and electronic orbital anisotropies of an ultrathin 3,4,9,10-perylene tetracarboxylic dianhydride (PTCDA) film on Au(111). The experimental spectra were simulated by computing the propagation of the electromagnetic field in a trilayer system (vacuum/PTCDA/Au), where the organic film was treated as an anisotropic medium. Optical constants were derived from the absorption cross sections of the single molecule along the three principal molecular axes, calculated through density functional theory. These were used to construct the dielectric tensor of the film, assuming the molecules to be lying flat with respect to the substrate and with a herringbone arrangement parallel to the substrate plane. Resonant soft X-ray reflectivity proved to be extremely sensitive to film thickness, down to a single molecular layer. The best agreement between simulation and experiment was found for a film of 1.6 nm, with a flat-lying configuration of the molecules. The high sensitivity to the experimental geometry, in terms of beam incidence and light polarization, was also clarified through simulations. The optical anisotropies of the organic film were experimentally determined and, through comparison with calculations, it was possible to relate them to the orbital symmetry of the empty electronic states.
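
    The field-propagation calculation can be illustrated, in a simplified scalar form, with the standard trilayer Fresnel recursion r = (r01 + r12 e^{2i kz1 d}) / (1 + r01 r12 e^{2i kz1 d}). The paper's anisotropic tensor treatment and real optical constants are beyond this hedged sketch; the complex refractive indices below are invented.

```python
import numpy as np

def trilayer_R(wl_nm, d_nm, n_film, n_sub, theta_deg):
    """s-polarized reflectivity of vacuum/film/substrate; theta from the normal."""
    k0 = 2 * np.pi / wl_nm
    s2 = np.sin(np.radians(theta_deg)) ** 2
    kz0 = k0 * np.sqrt(1 - s2 + 0j)            # normal wavevector components
    kz1 = k0 * np.sqrt(n_film**2 - s2 + 0j)
    kz2 = k0 * np.sqrt(n_sub**2 - s2 + 0j)
    r01 = (kz0 - kz1) / (kz0 + kz1)            # vacuum/film Fresnel coefficient
    r12 = (kz1 - kz2) / (kz1 + kz2)            # film/substrate Fresnel coefficient
    phase = np.exp(2j * kz1 * d_nm)            # round-trip phase in the film
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return abs(r) ** 2

# Wavelength ~4.4 nm (carbon K edge); scan the film thickness.
for d in (0.8, 1.6, 3.2):                      # nm; 1.6 nm is the best fit reported
    R = trilayer_R(4.4, d, n_film=0.995 + 0.004j, n_sub=0.98 + 0.01j, theta_deg=60)
    print(f"d = {d} nm -> R = {R:.5f}")
```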

  8. Absolute copy number from the statistics of the quantification cycle in replicate quantitative polymerase chain reaction experiments.

    Science.gov (United States)

    Tellinghuisen, Joel; Spiess, Andrej-Nikolai

    2015-02-03

    The quantification cycle (Cq) is widely used for calibration in real-time quantitative polymerase chain reaction (qPCR), to estimate the initial amount, or copy number (N0), of the target DNA. Cq may be defined several ways, including the cycle where the detected fluorescence achieves a prescribed threshold level. For all methods of defining Cq, the standard deviation from replicate experiments is typically much greater than the estimated standard errors from the least-squares fits used to obtain Cq. For moderate-to-large copy number (N0 > 10²), pipet volume uncertainty and variability in the amplification efficiency (E) likely account for most of the excess variance in Cq. For small N0, the dispersion of Cq is determined by the Poisson statistics of N0, which means that N0 can be estimated directly from the variance of Cq. The estimation precision is determined by the statistical properties of χ², giving a relative standard deviation of ∼(2/n)^(1/2), where n is the number of replicates; for example, a 20% standard deviation in N0 from 50 replicates.
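
    The closing claim — that at small copy number the Poisson scatter of N0 sets the spread of Cq, so N0 can be read off the variance of Cq — is easy to check numerically. The threshold copy number and efficiency below are assumptions of the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
E, Nq, true_N0, n_reps = 0.95, 1e10, 20, 10000   # assumed efficiency and threshold

# Model: amplification reaches the threshold Nq after Cq cycles,
# so Cq = [ln(Nq) - ln(N0)] / ln(1 + E), with N0 Poisson-distributed.
N0 = rng.poisson(true_N0, n_reps)
N0 = N0[N0 > 0]                                  # wells with zero copies never amplify
Cq = (np.log(Nq) - np.log(N0)) / np.log(1 + E)

# Delta-method inversion: Var(Cq) ~ 1 / (N0 * ln^2(1+E))
N0_hat = 1.0 / (Cq.var(ddof=1) * np.log(1 + E) ** 2)
print("true N0:", true_N0, "estimated from Var(Cq):", round(N0_hat, 1))
```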

  9. A Quantitative, Time-Dependent Model of Oxygen Isotopes in the Solar Nebula: Step one

    Science.gov (United States)

    Nuth, J. A.; Paquette, J. A.; Farquhar, A.; Johnson, N. M.

    2011-01-01

    The remarkable discovery that oxygen isotopes in primitive meteorites were fractionated along a line of slope 1, rather than along the typical slope 0.52 terrestrial fractionation line, occurred almost 40 years ago. However, a satisfactory, quantitative explanation for this observation has yet to be found, though many different explanations have been proposed. The first of these explanations proposed that the observed line represented the final product produced by mixing molecular cloud dust with a nucleosynthetic component, rich in O-16, possibly resulting from a nearby supernova explosion. Donald Clayton suggested that Galactic Chemical Evolution would gradually change the oxygen isotopic composition of the interstellar grain population by steadily producing O-16 in supernovae, then producing the heavier isotopes as secondary products in lower mass stars. Thiemens and collaborators proposed a chemical mechanism that relied on the availability of additional active rotational and vibrational states in otherwise-symmetric molecules, such as CO2, O3 or SiO2, containing two different oxygen isotopes, and a second, photochemical process that suggested that differential photochemical dissociation processes could fractionate oxygen. This second line of research has been pursued by several groups, though none of the current models is quantitative.

  10. Thermal Properties of Metallic Nanowires: Modeling & Experiment

    Science.gov (United States)

    Stojanovic, Nenad; Berg, Jordan; Maithripala, Sanjeeva; Holtz, Mark

    2009-10-01

    Effects such as surface and grain boundary scattering significantly influence electrical and thermal properties of nanoscale materials with important practical implications for current and future electronics and photonics. Conventional wisdom for metals holds that thermal transport is predominantly by electrons and transport by phonons is negligible. This assumption is used to justify the use of the Wiedemann-Franz law to infer thermal conductivity based on measurements of electrical resistivity. Recent experiments suggest a breakdown of the Wiedemann-Franz law at the nanoscale. This talk will examine the assumption that thermal transport by phonons can be neglected. The electrical resistivities and thermal conductivities of aluminum nanowires of various sizes are directly measured. These values are used in conjunction with the Boltzmann transport equation to conclude that the Wiedemann-Franz law describes the electronic component of thermal conductivity, but that the phonon term must also be considered. A novel experimental device is described for direct thermal conductivity measurements.
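
    The Wiedemann-Franz inference mentioned above amounts to a one-line calculation: κ_e = L0·T/ρ with the Sommerfeld Lorenz number L0, and any measured excess over κ_e is then attributed to phonons. A minimal sketch with illustrative (not measured) nanowire values:

      # Electronic thermal conductivity from the Wiedemann-Franz law; any measured
      # excess over this value is attributed to the phonon contribution.
      L0 = 2.44e-8          # Sommerfeld value of the Lorenz number, W·Ω/K²

      def kappa_electronic(resistivity, T):
          """kappa_e = L0*T/rho (W/m/K); resistivity in Ohm·m, T in K."""
          return L0 * T / resistivity

      rho_wire = 8.0e-8     # illustrative Al nanowire resistivity (bulk Al: ~2.7e-8)
      kappa_measured = 90.0 # illustrative measured thermal conductivity, W/m/K
      kappa_e = kappa_electronic(rho_wire, 300.0)
      print(f"kappa_e = {kappa_e:.1f}, phonon part ~ {kappa_measured - kappa_e:.1f} W/m/K")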

  11. Adhesive behaviour of gecko-inspired nanofibrillar arrays: combination of experiments and finite element modelling

    International Nuclear Information System (INIS)

    Wang Zhengzhi; Xu Yun; Gu Ping

    2012-01-01

    A polypropylene nanofibrillar array was successfully fabricated by a template-assisted nanofabrication strategy. The adhesion properties of this gecko-inspired structure were studied through two parallel and independent approaches: experiments and finite element simulations. Experimental results show relatively good normal adhesion, but accompanied by high preloads. The interfacial adhesion was modelled by effective spring elements with a piecewise-linear constitutive law. The effective elasticity of the fibre-array system was calculated from our measured elasticity of a single nanowire. Comparison of the experimental and simulation results reveals quantitative agreement except for some explainable deviations, which suggests the potential applicability of the present models and applied theories.

  12. Micro- and nanoflows modeling and experiments

    CERN Document Server

    Rudyak, Valery Ya; Maslov, Anatoly A; Minakov, Andrey V; Mironov, Sergey G

    2018-01-01

    This book describes physical, mathematical and experimental methods to model flows in micro- and nanofluidic devices. It considers flows in channels with characteristic sizes ranging from several hundred micrometers down to several nanometers. Methods based on solving kinetic equations, on a coupled kinetic-hydrodynamic description, and on molecular dynamics are used. Based on detailed measurements of pressure distributions along straight and bent microchannels, the hydraulic resistance coefficients are refined. Flows of disperse fluids (including disperse nanofluids) are considered in detail. Results of hydrodynamic modeling of the simplest micromixers are reported, and mixing of fluids in Y-type and T-type micromixers is considered. The authors present a systematic study of jet flows, jet structure and the laminar-turbulent transition. The influence of sound on the microjet structure is considered, as are new phenomena associated with turbulization and relaminarization of the mixing layer of microjets.

  13. The JBEI quantitative metabolic modeling library (jQMM): a python library for modeling microbial metabolism

    DEFF Research Database (Denmark)

    Birkel, Garrett W.; Ghosh, Amit; Kumar, Vinay S.

    2017-01-01

    … analysis, new methods for the effective use of the ever more readily available and abundant -omics data (i.e. transcriptomics, proteomics and metabolomics) are urgently needed. Results: The jQMM library presented here provides an open-source, Python-based framework for modeling internal metabolic fluxes …
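
    Internal flux modeling of this kind is commonly formulated as flux balance analysis (FBA): maximize a target flux (e.g. biomass) subject to the steady-state constraint S·v = 0 and bounds on each flux. The toy network below is a generic sketch of the idea, not an example from the jQMM documentation:

      import numpy as np
      from scipy.optimize import linprog

      # Toy stoichiometric matrix (rows: metabolites A, B; columns: v1 uptake->A,
      # v2 A->B, v3 B->biomass). Steady state requires S @ v = 0.
      S = np.array([[1.0, -1.0, 0.0],
                    [0.0, 1.0, -1.0]])
      c = np.array([0.0, 0.0, -1.0])           # linprog minimizes, so negate biomass flux
      bounds = [(0, 10), (0, 1000), (0, 1000)] # uptake capped at 10 units
      res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
      print(res.x)  # -> [10, 10, 10]: all uptake is routed to biomass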

  14. Turbulent Boundary Layers - Experiments, Theory and Modelling

    Science.gov (United States)

    1980-01-01

    1979 "Calcul des transferts thermiques entre film chaud et substrat par un modele ä deux dimensions", Int. J. Heat Mass Transfer ^2, p. 111-119...surface heat transfer a to the surface shear Cu/ ; here, corrections are compulsory because the wall shear,stress fluctuations are large (the r.m.s...technique is the mass transfer analogue of the constant temperature anemometer when the chemical reaction at the electrode embedded in the wall is

  15. Previous Experience a Model of Practice UNAE

    OpenAIRE

    Ruiz, Ormary Barberi; Pesántez Palacios, María Dolores

    2017-01-01

    The statements presented in this article represent a preliminary version of the proposed model of pre-professional practices (PPP) of the National University of Education (UNAE) of Ecuador; an urgent institutional necessity revealed by the descriptive analyses conducted on technical-administrative support (reports, interviews, testimonials) and on the pedagogical foundations of UNAE (curricular directionality, transverse axes in practice, career plan, approach and diagnostic examination as subj...

  16. Optimal Experiment Design for Monoexponential Model Fitting: Application to Apparent Diffusion Coefficient Imaging.

    Science.gov (United States)

    Alipoor, Mohammad; Maier, Stephan E; Gu, Irene Yu-Hua; Mehnert, Andrew; Kahl, Fredrik

    2015-01-01

    The monoexponential model is widely used in quantitative biomedical imaging. Notable applications include apparent diffusion coefficient (ADC) imaging and pharmacokinetics. The application of ADC imaging to the detection of malignant tissue has in turn prompted several studies concerning optimal experiment design for monoexponential model fitting. In this paper, we propose a new experiment design method that is based on minimizing the determinant of the covariance matrix of the estimated parameters (D-optimal design). In contrast to previous methods, D-optimal design is independent of the imaged quantities. Applying this method to ADC imaging, we demonstrate its steady performance for the whole range of input variables (imaged parameters, number of measurements, and range of b-values). Using Monte Carlo simulations we show that the D-optimal design outperforms existing experiment design methods in terms of accuracy and precision of the estimated parameters.
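
    For the monoexponential signal model S(b) = S0·exp(−b·ADC), a D-optimal design maximizes det(JᵀJ), where J is the Jacobian of the model with respect to (S0, ADC). The brute-force sketch below, under unit-noise assumptions and with an illustrative grid and parameter values (not the authors' algorithm), recovers a two-point design with b = {0, 1/ADC}:

      import numpy as np
      from itertools import combinations

      def det_fisher(b, S0, D):
          """det(J^T J) for the model S(b) = S0*exp(-b*D), unit noise."""
          e = np.exp(-D * np.asarray(b))
          J = np.column_stack([e, -S0 * np.asarray(b) * e])
          return np.linalg.det(J.T @ J)

      # Brute-force D-optimal pair of b-values on a grid (true ADC assumed ~1e-3)
      S0, D = 1.0, 1.0e-3                      # mm^2/s-style units
      grid = np.linspace(0, 3000, 301)         # candidate b-values, s/mm^2
      best = max(combinations(grid, 2), key=lambda b: det_fisher(b, S0, D))
      print(best)                              # -> (0.0, 1000.0), i.e. b2 = 1/ADC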

  17. Nonlinear quantitative radiation sensitivity prediction model based on NCI-60 cancer cell lines.

    Science.gov (United States)

    Zhang, Chunying; Girard, Luc; Das, Amit; Chen, Sun; Zheng, Guangqiang; Song, Kai

    2014-01-01

    We proposed a nonlinear model to perform a novel quantitative radiation sensitivity prediction. We used the NCI-60 panel, which consists of nine different cancer types, as the platform to train our model. Important radiation therapy (RT) related genes were selected by significance analysis of microarrays (SAM). Orthogonal latent variables (LVs) were then extracted by the partial least squares (PLS) method as the new compressive input variables. Finally, a support vector machine (SVM) regression model was trained with these LVs to predict the SF2 (the surviving fraction of cells after a radiation dose of 2 Gy γ-ray) values of the cell lines. Comparison with the published results showed significant improvement of the new method in various ways: (a) reducing the root mean square error (RMSE) of the radiation sensitivity prediction model from 0.20 to 0.011; and (b) improving prediction accuracy from 62% to 91%. To test the predictive performance of the gene signature, three different types of cancer patient datasets were used. Survival analysis across these different types of cancer patients strongly confirmed the potential clinical utility of the signature genes as a general prognosis platform. The gene regulatory network analysis identified six hub genes that are involved in canonical cancer pathways.
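
    The modeling pipeline described (gene selection, PLS compression to latent variables, then SVM regression on the LVs) can be sketched with scikit-learn. The synthetic expression matrix and SF2 values below are placeholders standing in for the NCI-60 data, and the SAM gene-selection step is omitted:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.pipeline import Pipeline
      from sklearn.svm import SVR

      rng = np.random.default_rng(1)
      X = rng.normal(size=(60, 500))                # expression of pre-selected RT genes
      sf2 = 0.4 + 0.1 * X[:, :5].sum(axis=1) / 5 + 0.02 * rng.normal(size=60)

      model = Pipeline([
          ("pls", PLSRegression(n_components=5)),   # compress genes into latent variables
          ("svr", SVR(kernel="rbf", C=10.0)),       # nonlinear regression on the LVs
      ])
      model.fit(X, sf2)
      print(model.predict(X[:3]))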

  18. Nonlinear Quantitative Radiation Sensitivity Prediction Model Based on NCI-60 Cancer Cell Lines

    Directory of Open Access Journals (Sweden)

    Chunying Zhang

    2014-01-01

    We proposed a nonlinear model to perform a novel quantitative radiation sensitivity prediction. We used the NCI-60 panel, which consists of nine different cancer types, as the platform to train our model. Important radiation therapy (RT) related genes were selected by significance analysis of microarrays (SAM). Orthogonal latent variables (LVs) were then extracted by the partial least squares (PLS) method as the new compressive input variables. Finally, a support vector machine (SVM) regression model was trained with these LVs to predict the SF2 (the surviving fraction of cells after a radiation dose of 2 Gy γ-ray) values of the cell lines. Comparison with the published results showed significant improvement of the new method in various ways: (a) reducing the root mean square error (RMSE) of the radiation sensitivity prediction model from 0.20 to 0.011; and (b) improving prediction accuracy from 62% to 91%. To test the predictive performance of the gene signature, three different types of cancer patient datasets were used. Survival analysis across these different types of cancer patients strongly confirmed the potential clinical utility of the signature genes as a general prognosis platform. The gene regulatory network analysis identified six hub genes that are involved in canonical cancer pathways.

  19. Quantitative structure-activity relationship models of chemical transformations from matched pairs analyses.

    Science.gov (United States)

    Beck, Jeremy M; Springer, Clayton

    2014-04-28

    The concepts of activity cliffs and matched molecular pairs (MMP) are recent paradigms for analysis of data sets to identify structural changes that may be used to modify the potency of lead molecules in drug discovery projects. Analysis of MMPs was recently demonstrated as a feasible technique for quantitative structure-activity relationship (QSAR) modeling of prospective compounds. However, within a small data set, the lack of matched pairs and the lack of knowledge about specific chemical transformations limit prospective applications. Here we present an alternative technique that determines pairwise descriptors for each matched pair and then uses a QSAR model to estimate the activity change associated with a chemical transformation. The descriptors effectively group similar transformations and incorporate information about the transformation and its local environment. Use of a transformation QSAR model allows one to estimate the activity change for novel transformations and therefore returns predictions for a larger fraction of test set compounds. Application of the proposed methodology to four public data sets results in increased model performance over a benchmark random forest and direct application of chemical transformations using QSAR-by-matched molecular pairs analysis (QSAR-by-MMPA).

  20. Data-driven interdisciplinary mathematical modelling quantitatively unveils competition dynamics of co-circulating influenza strains.

    Science.gov (United States)

    Ho, Bin-Shenq; Chao, Kun-Mao

    2017-07-28

    Co-circulation of influenza strains is common to seasonal epidemics and pandemic emergence. Competition has been considered a factor in the vicissitudes of co-circulating influenza strains but has never been quantitatively studied at the human population level. The main purpose of the study was to explore the competition dynamics of co-circulating influenza strains in a quantitative way. We constructed a heterogeneous dynamic transmission model and ran the model to fit the weekly A/H1N1 influenza virus isolation rate through an influenza season. The construction process started with the 2007-2008 single-clade influenza season and, with the contribution from the clade-based A/H1N1 epidemiological curves, advanced to the 2008-2009 two-clade influenza season. The Pearson method was used to estimate the correlation coefficient between the simulated epidemic curve and the observed weekly A/H1N1 influenza virus isolation rate curve. The model found the potentially best-fit simulation, with a correlation coefficient of up to 96%, and all successful simulations converged to the best fit. The annual effective reproductive number of each co-circulating influenza strain was estimated. We found that, during the 2008-2009 influenza season, the annual effective reproductive number of the succeeding A/H1N1 clade 2B-2, carrying the H275Y mutation in the neuraminidase, was estimated at around 1.65. As for the preceding A/H1N1 clade 2C-2, the annual effective reproductive number would originally have been equivalent to 1.65 but finally settled at around 0.75 after the emergence of clade 2B-2. The model reported that clade 2B-2 outcompeted clade 2C-2 during the 2008-2009 influenza season mainly because the latter suffered a reduction in transmission fitness of around 71% on encountering the former. We conclude that interdisciplinary data-driven mathematical modelling could bring to light the transmission dynamics of the A/H1N1 H275Y strains during the 2007-2009 influenza seasons worldwide and may inspire us to tackle the
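
    The competition mechanism can be illustrated with a stylized two-strain SIR model in which both clades draw on the same susceptible pool. The published model is heterogeneous and far more detailed; the rates below are illustrative, chosen only so that the second strain has a reproductive number near 1.65:

      import numpy as np
      from scipy.integrate import solve_ivp

      def two_strain_sir(t, y, beta1, beta2, gamma):
          """SIR with two co-circulating strains competing for susceptibles."""
          S, I1, I2, R = y
          new1, new2 = beta1 * S * I1, beta2 * S * I2
          return [-new1 - new2, new1 - gamma * I1, new2 - gamma * I2,
                  gamma * (I1 + I2)]

      # Strain 2 (e.g. the H275Y clade, R0 = 0.33/0.2 = 1.65) outcompetes strain 1 (R0 = 1.5)
      sol = solve_ivp(two_strain_sir, (0, 200), [0.999, 5e-4, 5e-4, 0],
                      args=(0.30, 0.33, 0.2), dense_output=True)
      t = np.linspace(0, 200, 5)
      print(sol.sol(t)[1:3])   # strain 2's epidemic curve dominates strain 1's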

  1. Integrated modeling of tokamak experiments with OMFIT

    International Nuclear Information System (INIS)

    Meneghini, Orso; Lao, Lang

    2013-01-01

    One Modeling Framework for Integrated Tasks (OMFIT) is a framework that allows data to be easily exchanged among different codes by providing a unifying data structure. The main idea at the base of OMFIT is to treat files, data and scripts as a uniform collection of objects organized into a tree structure, which provides a consistent way to access and manipulate this collection of heterogeneous objects, independent of their origin. Within the OMFIT tree, data can be copied or referenced from one node to another, and tasks can call each other, allowing complex compound tasks to be built. A top-level Graphical User Interface (GUI) allows users to manage tree objects, carry out simulations and analyze the data either interactively or in batch. OMFIT supports many scientific data formats, and when a file is loaded into the framework, its data populates the tree structure, automatically endowing it with many potential uses. Furthermore, seamless integration with experimental management systems allows direct manipulation of their data. In OMFIT, modeling tasks are organized into modules, which can be easily combined to create arbitrarily large multi-physics simulations. Inter-module dependencies are seamlessly defined by variables referencing tree locations. Creation of new modules and customization of existing ones is encouraged by graphical tools for their management and an online repository. High-level Application Programmer Interfaces (APIs) enable users to execute their codes on remote servers and to create application-specific GUIs. Finally, within OMFIT it is possible to visualize experimental and modeling data for both quick analysis and publication purposes. Examples of application to the DIII-D tokamak are presented. (author)
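
    The unifying-tree idea can be illustrated with a toy path-addressable container. This is a generic sketch of the concept, not OMFIT's actual API; the paths and values are invented:

      class Node(dict):
          """Toy version of a unifying tree: every object (file, dataset, script)
          lives at a path and can be fetched from anywhere else in the tree."""
          def set_path(self, path, value):
              *parents, leaf = path.split("/")
              node = self
              for p in parents:
                  node = node.setdefault(p, Node())  # create intermediate nodes on demand
              node[leaf] = value

          def get_path(self, path):
              node = self
              for p in path.split("/"):
                  node = node[p]
              return node

      root = Node()
      root.set_path("experiment/shot_153523/eq_file", "g153523.02000")
      # a module input populated with a value fetched from elsewhere in the tree
      root.set_path("modules/transport/input",
                    root.get_path("experiment/shot_153523/eq_file"))
      print(root.get_path("modules/transport/input"))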

  2. Diffusion-weighted MRI and quantitative biophysical modeling of hippocampal neurite loss in chronic stress.

    Directory of Open Access Journals (Sweden)

    Peter Vestergaard-Poulsen

    Chronic stress has detrimental effects on physiology, learning and memory and is involved in the development of anxiety and depressive disorders. Besides changes in synaptic formation and neurogenesis, chronic stress also induces dendritic remodeling in the hippocampus, amygdala and the prefrontal cortex. Investigations of dendritic remodeling during development and treatment of stress are currently limited by the invasive nature of histological and stereological methods. Here we show that high-field diffusion-weighted MRI combined with quantitative biophysical modeling of the hippocampal dendritic loss in 21-day restraint-stressed rats correlates strongly with previous histological findings. Our study strongly indicates that diffusion-weighted MRI is sensitive to regional dendritic loss and is thus a promising candidate for non-invasive studies of dendritic plasticity in chronic stress and stress-related disorders.

  3. Indian Consortia Models: FORSA Libraries' Experiences

    Science.gov (United States)

    Patil, Y. M.; Birdie, C.; Bawdekar, N.; Barve, S.; Anilkumar, N.

    2007-10-01

    With rising journal prices, shrinking library budgets and cuts in journal subscriptions over the years, Indian library professionals have faced a major challenge in coping with the proliferation of electronic information resources. There have been sporadic efforts by different groups of libraries to form consortia at different levels. The types of consortia identified are generally based on the various models that have evolved in India, in a variety of forms depending upon the participants' affiliations and funding sources. Indian astronomy library professionals have formed a group called the Forum for Resource Sharing in Astronomy and Astrophysics (FORSA), which falls under 'Open Consortia', wherein participants are affiliated to different government departments. This is a model where professionals willingly come forward and actively support consortia formation, so that everyone benefits. As such, FORSA has realized four consortia, viz. the Nature Online Consortium; the Indian Astrophysics Consortium for physics/astronomy journals of Springer/Kluwer; the Consortium for Scientific American Online Archive (EBSCO); and the Open Consortium for Lecture Notes in Physics (Springer), which are discussed briefly.

  4. Image guided interstitial laser thermotherapy: a canine model evaluated by magnetic resonance imaging and quantitative autoradiography.

    Science.gov (United States)

    Muacevic, A; Peller, M; Ruprecht, L; Berg, D; Fend, L; Sroka, R; Reulen, H J; Reiser, M; Tonn, J Ch; Kreth, F W

    2005-02-01

    To determine the applicability and safety of a new canine model suitable for correlative magnetic resonance imaging (MRI) studies and morphological/pathophysiological examination over time after interstitial laser thermotherapy (ILTT) in brain tissue. A laser fibre (diode laser, 830 nm) with an integrated temperature feedback system was inserted into the right frontal white matter in 18 dogs using a frameless navigation technique. MRI thermometry (phase mapping, i.e. chemical shift of the proton resonance frequency) during interstitial heating was compared to simultaneously recorded interstitial fiberoptic temperature readings on the border of the lesion. To study brain capillary function in response to ILTT over time, quantitative autoradiography was performed to investigate the unidirectional blood-to-tissue transport of carbon-14-labelled alpha-aminoisobutyric acid (transfer constant K of AIB) at 12 and 36 hours, 7 and 14 days, 4 weeks and 3 months after ILTT. All laser procedures were well tolerated; laser and temperature fibres could be adequately placed in the right frontal lobe in all animals. In 5 animals, MRI-based temperature quantification correlated strongly with invasive temperature measurements. In the remaining animals the temperature fibre was located in the area of susceptibility artifacts; therefore, no temperature correlation was possible. The laser lesions consisted of a central area of calcified necrosis surrounded by an area of reactive brain tissue with increased permeability. Quantitative autoradiography indicated a thin and spherical blood-brain barrier lesion. The magnitude of K of AIB increased from 12 hours to 14 days after ILTT and decreased thereafter. The mean value of K of AIB was 19 times that of normal white matter and 2 times that of cortex. ILTT causes transient, highly localised areas of increased capillary permeability surrounding the laser lesion. Phase contrast imaging for MRI thermomonitoring can currently not be used for

  5. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches.

    Science.gov (United States)

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of the thiM operon and transcription and translation of the thiC operon in E. coli, as well as that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by the A. thaliana riboswitch is governed by the mass-action law, whereas it is of a kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulation, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the value of kinetic regulation and confirms empirical observations previously obtained by numerical simulations.
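
    The core of the kinetic argument, that the polymerase pause must be long enough for TPP to bind when its concentration is sufficient, can be caricatured with a first-order binding probability. This sketch ignores dissociation and the subsequent conformational step, and the rate constant and pause duration are illustrative, not the measured values:

      import numpy as np

      def p_off(tpp, k_on, pause):
          """Probability that TPP binds the nascent aptamer during the polymerase
          pause, committing the riboswitch to the OFF (regulated) fold."""
          return 1.0 - np.exp(-k_on * tpp * pause)

      tpp = np.logspace(-7, -4, 4)          # TPP concentration, M
      print(p_off(tpp, k_on=1e5, pause=10)) # k_on in 1/(M*s), pause in s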

  6. Rapid Method Development in Hydrophilic Interaction Liquid Chromatography for Pharmaceutical Analysis Using a Combination of Quantitative Structure-Retention Relationships and Design of Experiments.

    Science.gov (United States)

    Taraji, Maryam; Haddad, Paul R; Amos, Ruth I J; Talebi, Mohammad; Szucs, Roman; Dolan, John W; Pohl, Chris A

    2017-02-07

    A design-of-experiment (DoE) model was developed, able to describe the retention times of a mixture of pharmaceutical compounds in hydrophilic interaction liquid chromatography (HILIC) under all possible combinations of acetonitrile content, salt concentration, and mobile-phase pH with R² > 0.95. Further, a quantitative structure-retention relationship (QSRR) model was developed to predict retention times for new analytes, based only on their chemical structures, with a root-mean-square error of prediction (RMSEP) as low as 0.81%. A compound classification based on the concept of similarity was applied prior to QSRR modeling. Finally, we utilized a combined QSRR-DoE approach to propose an optimal design space in a quality-by-design (QbD) workflow to facilitate the HILIC method development. The mathematical QSRR-DoE model was shown to be highly predictive when applied to an independent test set of unseen compounds in unseen conditions with a RMSEP value of 5.83%. The QSRR-DoE computed retention time of pharmaceutical test analytes and subsequently calculated separation selectivity was used to optimize the chromatographic conditions for efficient separation of targets. A Monte Carlo simulation was performed to evaluate the risk of uncertainty in the model's prediction, and to define the design space where the desired quality criterion was met. Experimental realization of peak selectivity between targets under the selected optimal working conditions confirmed the theoretical predictions. These results demonstrate how discovery of optimal conditions for the separation of new analytes can be accelerated by the use of appropriate theoretical tools.

  7. Quantitative assessments of mantle flow models against seismic observations: Influence of uncertainties in mineralogical parameters

    Science.gov (United States)

    Schuberth, Bernhard S. A.

    2017-04-01

    One of the major challenges in studies of Earth's deep mantle is to bridge the gap between geophysical hypotheses and observations. The biggest dataset available to investigate the nature of mantle flow is recordings of seismic waveforms. On the other hand, numerical models of mantle convection can nowadays be simulated on a routine basis for Earth-like parameters, and modern thermodynamic mineralogical models allow us to translate the predicted temperature field to seismic structures. The great benefit of the mineralogical models is that they provide the full non-linear relation between temperature and seismic velocities and thus ensure a consistent conversion in terms of magnitudes. This opens the possibility for quantitative assessments of the theoretical predictions. The often-adopted comparison between geodynamic and seismic models is unsuitable in this respect owing to the effects of damping, limited resolving power and non-uniqueness inherent to tomographic inversions. The most relevant issue, however, is related to wavefield effects that reduce the magnitude of seismic signals (e.g., traveltimes of waves), a phenomenon called wavefront healing. Over the past couple of years, we have developed an approach that takes the next step towards a quantitative assessment of geodynamic models and that enables us to test the underlying geophysical hypotheses directly against seismic observations. It is based solely on forward modelling and warrants a physically correct treatment of the seismic wave equation without theoretical approximations. Fully synthetic 3-D seismic wavefields are computed using a spectral element method for 3-D seismic structures derived from mantle flow models. This way, synthetic seismograms are generated independent of any seismic observations. Furthermore, through the wavefield simulations, it is possible to relate the magnitude of lateral temperature variations in the dynamic flow simulations directly to body-wave traveltime residuals. The

  8. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper

    2003-01-01

    Experimental modelling is an important tool for the study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models, and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied for improvement of the reliability of physical model results. This paper demonstrates by examples that numerical modelling benefits in various ways from experimental studies (in large and small laboratory facilities). The examples range from very general hydrodynamic descriptions of wave phenomena to specific hydrodynamic interaction with structures. The examples also show that numerical model development benefits from international co-operation and sharing of high quality results.

  9. A quantitative model for dermal infection and oedema in BALB/c mice pinna.

    Science.gov (United States)

    Marino-Marmolejo, Erika Nahomy; Flores-Hernández, Flor Yohana; Flores-Valdez, Mario Alberto; García-Morales, Luis Felipe; González-Villegas, Ana Cecilia; Bravo-Madrigal, Jorge

    2016-12-12

    The pharmaceutical industry demands innovation for developing new molecules to improve the effectiveness and safety of therapeutic medicines. Preclinical assays are the first tests performed to evaluate new therapeutic molecules using animal models. Currently, there are several models for the evaluation of treatments for dermal oedema or infection; however, the most common approach is to induce inflammation with chemical substances instead of infectious agents. Moreover, these kinds of models require the implementation of histological techniques and the interpretation of pathologies to verify the effectiveness of the therapy under assessment. This work focused on developing a quantitative model of infection and oedema in the mouse pinna. The infection was achieved with a strain of Streptococcus pyogenes inoculated into an injury induced on the auricle of BALB/c mice; the induced oedema was recorded by measuring ear thickness with a digital micrometer, and histopathological analysis was performed to verify the damage. The presence of S. pyogenes at the infection site was determined every day by culture. Our results showed that S. pyogenes can infect the mouse pinna and can be recovered from the infected site for at least 4 days; we also found that S. pyogenes induces a larger oedema than the PBS-treated control for at least 7 days. Our results were validated with an antibacterial and anti-inflammatory formulation made with ciprofloxacin and hydrocortisone. The model we developed emulates a dermal infection and allowed us to objectively evaluate the increase or decrease of the oedema by measuring the thickness of the ear pinna, and to determine the presence of the pathogen at the infection site. We consider that the model could be useful for the assessment of new anti-inflammatory or antibacterial therapies for dermal infections.

  10. Validation of quantitative structure-activity relationship (QSAR) model for photosensitizer activity prediction.

    Science.gov (United States)

    Frimayanti, Neni; Yam, Mun Li; Lee, Hong Boon; Othman, Rozana; Zain, Sharifuddin M; Rahman, Noorsaadah Abd

    2011-01-01

    Photodynamic therapy is a relatively new treatment method for cancer which utilizes a combination of oxygen, a photosensitizer and light to generate reactive singlet oxygen that eradicates tumors via direct cell-killing, vasculature damage and engagement of the immune system. Most of the photosensitizers that are in clinical and pre-clinical assessments, or those that are already approved for clinical use, are mainly based on cyclic tetrapyrroles. In an attempt to discover new effective photosensitizers, we report the use of the quantitative structure-activity relationship (QSAR) method to develop a model that could correlate the structural features of cyclic tetrapyrrole-based compounds with their photodynamic therapy (PDT) activity. In this study, a set of 36 porphyrin derivatives was used in the model development, where 24 of these compounds were in the training set and the remaining 12 compounds were in the test set. The development of the QSAR model involved the use of the multiple linear regression analysis (MLRA) method. Based on the method, an r² value, r²(CV) value and r² prediction value of 0.87, 0.71 and 0.70 were obtained. The QSAR model was also employed to predict the experimental compounds in an external test set. This external test set comprises 20 porphyrin-based compounds with experimental IC50 values ranging from 0.39 μM to 7.04 μM. Thus the model showed good correlative and predictive ability, with a predictive correlation coefficient (r² prediction for the external test set) of 0.52. The developed QSAR model was used to discover some compounds as new lead photosensitizers from this external test set.
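
    A multiple-linear-regression QSAR of this shape is easy to sketch with scikit-learn. The descriptor matrix and activities below are synthetic stand-ins (the study's porphyrin descriptors are not reproduced here), with a cross-validated r² playing the role of r²(CV):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      X_train = rng.normal(size=(24, 4))     # descriptors of the 24 training compounds
      y_train = X_train @ [0.8, -0.5, 0.3, 0.1] + 0.2 * rng.normal(size=24)

      qsar = LinearRegression().fit(X_train, y_train)
      r2 = qsar.score(X_train, y_train)                       # fitted r^2
      r2_cv = cross_val_score(qsar, X_train, y_train,
                              cv=5, scoring="r2").mean()      # cross-validated r^2(CV)
      print(f"r2 = {r2:.2f}, r2(CV) = {r2_cv:.2f}")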

  11. Validation of Quantitative Structure-Activity Relationship (QSAR) Model for Photosensitizer Activity Prediction

    Science.gov (United States)

    Frimayanti, Neni; Yam, Mun Li; Lee, Hong Boon; Othman, Rozana; Zain, Sharifuddin M.; Rahman, Noorsaadah Abd.

    2011-01-01

    Photodynamic therapy is a relatively new treatment method for cancer which utilizes a combination of oxygen, a photosensitizer and light to generate reactive singlet oxygen that eradicates tumors via direct cell-killing, vasculature damage and engagement of the immune system. Most of the photosensitizers that are in clinical and pre-clinical assessments, or those that are already approved for clinical use, are mainly based on cyclic tetrapyrroles. In an attempt to discover new effective photosensitizers, we report the use of the quantitative structure-activity relationship (QSAR) method to develop a model that could correlate the structural features of cyclic tetrapyrrole-based compounds with their photodynamic therapy (PDT) activity. In this study, a set of 36 porphyrin derivatives was used in the model development, where 24 of these compounds were in the training set and the remaining 12 compounds were in the test set. The development of the QSAR model involved the use of the multiple linear regression analysis (MLRA) method. Based on the method, an r² value, r²(CV) value and r² prediction value of 0.87, 0.71 and 0.70 were obtained. The QSAR model was also employed to predict the experimental compounds in an external test set. This external test set comprises 20 porphyrin-based compounds with experimental IC50 values ranging from 0.39 μM to 7.04 μM. Thus the model showed good correlative and predictive ability, with a predictive correlation coefficient (r² prediction for the external test set) of 0.52. The developed QSAR model was used to discover some compounds as new lead photosensitizers from this external test set. PMID:22272096

  12. Hydrodynamics of Explosion Experiments and Models

    CERN Document Server

    Kedrinskii, Valery K

    2005-01-01

    Hydrodynamics of Explosion presents the research results for the problems of underwater explosions and contains a detailed analysis of the structure and the parameters of the wave fields generated by explosions of cord and spiral charges, a description of the formation mechanisms for a wide range of cumulative flows at underwater explosions near the free surface, and the relevant mathematical models. Shock-wave transformation in bubbly liquids, shock-wave amplification due to collision and focusing, and the formation of bubble detonation waves in reactive bubbly liquids are studied in detail. Particular emphasis is placed on the investigation of wave processes in cavitating liquids, which incorporates the concepts of the strength of real liquids containing natural microinhomogeneities, the relaxation of tensile stress, and the cavitation fracture of a liquid as the inversion of its two-phase state under impulsive (explosive) loading. The problems are classed among essentially nonlinear processes that occur unde...

  13. The role of quantitative systems pharmacology modeling in the prediction and explanation of idiosyncratic drug-induced liver injury.

    Science.gov (United States)

    Woodhead, Jeffrey L; Watkins, Paul B; Howell, Brett A; Siler, Scott Q; Shoda, Lisl K M

    2017-02-01

    Idiosyncratic drug-induced liver injury (iDILI) is a serious concern in drug development. The rarity and multifactorial nature of iDILI makes it difficult to predict and explain. Recently, human leukocyte antigen (HLA) allele associations have provided strong support for a role of an adaptive immune response in the pathogenesis of many iDILI cases; however, it is likely that an adaptive immune attack requires several preceding events. Quantitative systems pharmacology (QSP), an in silico modeling technique that leverages known physiology and the results of in vitro experiments in order to make predictions about how drugs affect biological processes, is proposed as a potentially useful tool for predicting and explaining critical events that likely precede immune-mediated iDILI, as well as the immune attack itself. DILIsym, a QSP platform for drug-induced liver injury, has demonstrated success in predicting the presence of delayed hepatocellular stress events that likely precede the iDILI cascade, and has successfully predicted hepatocellular stress likely underlying iDILI attributed to troglitazone and tolvaptan. The incorporation of a model of the adaptive immune system into DILIsym would represent an important advance. In summary, QSP methods can play a key role in the future prediction and understanding of both immune-mediated and non-immune-mediated iDILI.

  14. Modified Ashworth Scale (MAS) Model based on Clinical Data Measurement towards Quantitative Evaluation of Upper Limb Spasticity

    Science.gov (United States)

    Puzi, A. Ahmad; Sidek, S. N.; Mat Rosly, H.; Daud, N.; Yusof, H. Md

    2017-11-01

    Spasticity is a common symptom presented amongst people with sensorimotor disabilities. Imbalanced signals from the central nervous system (CNS), which is composed of the brain and spinal cord, to the muscles ultimately lead to injury and death of motor neurons. In clinical practice, the therapist assesses muscle spasticity using a standard assessment tool like the Modified Ashworth Scale (MAS), Modified Tardieu Scale (MTS) or Fugl-Meyer Assessment (FMA). This is done subjectively, based on the experience and perception of the therapist and subject to the patient's fatigue level and body posture. Inconsistency in the assessment is therefore prevalent and could affect the efficacy of the rehabilitation process. Thus, the aim of this paper is to describe the methodology of data collection and the quantitative model of MAS developed to satisfy its description. Two subjects with MAS spasticity levels of 2 and 3 were involved in the clinical data measurement. Their level of spasticity was verified by an expert therapist using current practice. Data collection was established using a mechanical system equipped with a data acquisition system and LABVIEW software. The procedure involved repeated series of flexion of the affected arm, which was moved against the platform using a lever mechanism operated by the therapist. The data were then analyzed to investigate the characteristics of the spasticity signal in correspondence with the MAS description. Experimental results revealed that the methodology used to quantify spasticity satisfied the MAS tool requirements according to its description. The result is therefore crucial and useful towards the development of a formal spasticity quantification model.

  15. "ABC's Earthquake" (Experiments and models in seismology)

    Science.gov (United States)

    Almeida, Ana

    2017-04-01

    The purpose of this presentation, in poster format, is to describe an activity that I planned and carried out in a school in the north of Portugal, using a kit of simple, easy-to-use materials - the sismo-box. The activity "ABC's Earthquake" was developed within the discipline of Natural Sciences, with students from the 7th grade, geoscience teachers and teachers from other areas. The possibility of working with the sismo-box was seen as an exciting and promising opportunity to promote science, and seismology more specifically; to do science, by using the models in the box to apply the scientific method; to consolidate content and skills in the area of Natural Sciences; and to share these materials with classmates as well as with teachers from other areas. Throughout the activity, with both students and teachers, it was possible to see admiration for the models presented in the sismo-box, as well as interest and enthusiasm in wanting to explore and understand the results of the procedure proposed in the script. With this activity we managed to promote: educational success in this subject; a "school culture" of active participation, with quality, rules, discipline and citizenship values; full integration of students with special educational needs; a stronger performance of the school as a cultural, informational and formative institution; up-to-date and innovative activities; and knowledge of "being and doing" - contributing to a moment of joy and discovery. Learn by doing!

  16. Model building experiences using Garp3: problems, patterns and debugging

    NARCIS (Netherlands)

    Liem, J.; Linnebank, F.E.; Bredeweg, B.; Žabkar, J.; Bratko, I.

    2009-01-01

    Capturing conceptual knowledge in QR models is becoming of interest to a larger audience of domain experts. Consequently, we have been training several groups to effectively create QR models during the last few years. In this paper we describe our teaching experiences, the issues the modellers

  17. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions

    Directory of Open Access Journals (Sweden)

    Teresa eLehnert

    2015-06-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e. least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment.
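
    The SBM calibration step, fitting a priori unknown transition rates by simulated annealing, can be sketched with scipy's dual_annealing. The kill-rate model and the "observed" survival fractions below are invented for illustration; note that in this toy version only the sum of the two rates is identifiable:

      import numpy as np
      from scipy.optimize import dual_annealing

      t_obs = np.array([0.0, 0.5, 1.0, 2.0, 4.0])         # h post-infection
      alive_obs = np.array([1.0, 0.62, 0.40, 0.18, 0.05])  # fraction of fungal cells alive

      def alive(t, k_pmn, k_mono):
          """State-based sketch: killing by neutrophils and monocytes in parallel."""
          return np.exp(-(k_pmn + k_mono) * t)

      def cost(rates):
          return np.sum((alive(t_obs, *rates) - alive_obs) ** 2)

      res = dual_annealing(cost, bounds=[(0.0, 5.0)] * 2, seed=3)
      print(res.x, res.fun)  # estimated transition rates (only their sum is identifiable)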

  18. MIQE précis: Practical implementation of minimum standard guidelines for fluorescence-based quantitative real-time PCR experiments

    NARCIS (Netherlands)

    Bustin, S.A.; Beaulieu, J.F.; Huggett, J.; Jaggi, R.; Kibenge, F.S.; Olsvik, P.A.; Penning, L.C.|info:eu-repo/dai/nl/110369181; Toegel, S.

    2010-01-01

  19. Current Challenges in the First Principle Quantitative Modelling of the Lower Hybrid Current Drive in Tokamaks

    Science.gov (United States)

    Peysson, Y.; Bonoli, P. T.; Chen, J.; Garofalo, A.; Hillairet, J.; Li, M.; Qian, J.; Shiraiwa, S.; Decker, J.; Ding, B. J.; Ekedahl, A.; Goniche, M.; Zhai, X.

    2017-10-01

    The Lower Hybrid (LH) wave is widely used in existing tokamaks for tailoring the current density profile or extending pulse duration to steady-state regimes. Its high efficiency makes it particularly attractive for a fusion reactor, leading to its consideration for this purpose in the ITER tokamak. Nevertheless, while the basics of the LH wave in tokamak plasmas are well known, quantitative modeling of experimental observations based on first principles remains a highly challenging exercise, despite the considerable numerical efforts achieved so far. In this context, a rigorous methodology must be carried out in the simulations to identify the minimum number of physical mechanisms that must be considered to reproduce experimental shot-to-shot observations and also scalings (density, power spectrum). Based on recent simulations carried out for the EAST, Alcator C-Mod and Tore Supra tokamaks, the state of the art in LH modeling is reviewed. The capability of fast electron bremsstrahlung, internal inductance li and LH-driven current at zero loop voltage to constrain LH simulations all together is discussed, as well as the need for further improvements (diagnostics, codes, LH model) for robust interpretative and predictive simulations.

  20. Using Modified Contour Deformable Model to Quantitatively Estimate Ultrasound Parameters for Osteoporosis Assessment

    Science.gov (United States)

    Chen, Yung-Fu; Du, Yi-Chun; Tsai, Yi-Ting; Chen, Tainsong

    Osteoporosis is a systemic skeletal disease characterized by low bone mass and micro-architectural deterioration of bone tissue, leading to bone fragility. Finding an effective method for prevention and early diagnosis of the disease is very important. Several parameters, including broadband ultrasound attenuation (BUA), speed of sound (SOS), and stiffness index (STI), have been used to measure the characteristics of bone tissues. In this paper, we propose a method, namely the modified contour deformable model (MCDM), based on the active contour model (ACM) and the active shape model (ASM), for automatically detecting the calcaneus contour from quantitative ultrasound (QUS) parametric images. The results show that the difference between the contours detected by the MCDM and the true boundary for the phantom is less than one pixel. By comparing the phantom ROIs, a significant relationship was found between the contour mean and bone mineral density (BMD), with R = 0.99. The influence of selecting different ROI diameters (12, 14, 16 and 18 mm) and different region-selecting methods, including the fixed region (ROI_fix), automatic circular region (ROI_cir) and calcaneal contour region (ROI_anat), was evaluated for the human subjects tested. Measurements with large ROI diameters, especially using the fixed region, result in high position errors (10-45%). The precision errors of the measured ultrasonic parameters for ROI_anat are smaller than for ROI_fix and ROI_cir. In conclusion, ROI_anat provides more accurate measurement of ultrasonic parameters for the evaluation of osteoporosis and is useful for clinical application.

  1. Effect of arterial deprivation on growing femoral epiphysis: Quantitative magnetic resonance imaging using a piglet model

    Energy Technology Data Exchange (ETDEWEB)

    Cheon, Jung Eun; Yoo, Won Joon; Kim, In One; Kim, Woo Sun; Choi, Young Hun [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2015-06-15

    To investigate the usefulness of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and diffusion MRI for the evaluation of femoral head ischemia. Unilateral femoral head ischemia was induced by selective embolization of the medial circumflex femoral artery in 10 piglets. All MRIs were performed immediately (1 hour) after embolization and at 1, 2, and 4 weeks thereafter. Apparent diffusion coefficients (ADCs) were calculated for the femoral head. The estimated pharmacokinetic parameters (Kep and Ve from a two-compartment model) and semi-quantitative parameters including peak enhancement, time-to-peak (TTP), and contrast washout were evaluated. The epiphyseal ADC values of the ischemic hip decreased immediately (1 hour) after embolization. However, they increased rapidly at 1 week after embolization and remained elevated until 4 weeks after embolization. Perfusion MRI of ischemic hips showed decreased epiphyseal perfusion with decreased Kep immediately after embolization. Signal intensity-time curves showed delayed TTP with limited contrast washout immediately post-embolization. At 1-2 weeks after embolization, spontaneous reperfusion was observed in ischemic epiphyses. The changes in ADC (p = 0.043) and Kep (p = 0.043) were significant between immediately (1 hour) after embolization and 1 week post-embolization. Diffusion MRI and the pharmacokinetic model obtained from DCE-MRI are useful in depicting early changes of perfusion and tissue damage using the model of femoral head ischemia in skeletally immature piglets.
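
    Two-compartment DCE-MRI parameters of this kind are commonly derived from a Tofts-type model, C_t(t) = K_trans ∫ Cp(u)·exp(−kep·(t−u)) du with Ve = K_trans/kep; the paper does not spell out its exact equations, so that form is assumed here. A discrete-convolution sketch with an invented arterial input function and illustrative rates:

      import numpy as np

      def tofts(t, cp, k_trans, k_ep):
          """Tissue concentration from a two-compartment (Tofts-type) model:
          C_t(t) = K_trans * int_0^t Cp(u) * exp(-k_ep*(t-u)) du,  v_e = K_trans/k_ep."""
          dt = t[1] - t[0]
          kernel = np.exp(-k_ep * t)
          return k_trans * np.convolve(cp, kernel)[: len(t)] * dt

      t = np.arange(0, 300, 1.0)                 # s
      cp = 5.0 * t * np.exp(-t / 60.0) / 60.0    # illustrative arterial input function
      print(tofts(t, cp, k_trans=0.05 / 60, k_ep=0.4 / 60)[:5])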

  2. A functional-structural model of rice linking quantitative genetic information with morphological development and physiological processes

    NARCIS (Netherlands)

    Xu, L.F.; Henke, M.; Zhu, J.; Kurth, W.; Buck-Sorlin, G.H.

    2011-01-01

    Background and Aims Although quantitative trait loci (QTL) analysis of yield-related traits for rice has developed rapidly, crop models using genotype information have been proposed only relatively recently. As a first step towards a generic genotype-phenotype model, we present here a

  3. Quantitative structure-activity relationship modeling of the toxicity of organothiophosphate pesticides to Daphnia magna and Cyprinus carpio

    NARCIS (Netherlands)

    Zvinavashe, E.; Du, T.; Griff, T.; Berg, van den J.H.J.; Soffers, A.E.M.F.; Vervoort, J.J.M.; Murk, A.J.; Rietjens, I.

    2009-01-01

    Within the REACH regulatory framework in the EU, quantitative structure-activity relationship (QSAR) models are expected to help reduce the number of animals used for experimental testing. The objective of this study was to develop QSAR models to describe the acute toxicity of organothiophosphate

  4. Quantitative analysis of surface deformation and ductile flow in complex analogue geodynamic models based on PIV method.

    Science.gov (United States)

    Krýza, Ondřej; Lexa, Ondrej; Závada, Prokop; Schulmann, Karel; Gapais, Denis; Cosgrove, John

    2017-04-01

    PIV (particle image velocimetry) is an optical analysis method widely used in many technical branches where visualization and quantification of material flow are important. Typical examples are studies of liquid flow through complex channel systems, gas spreading, or combustion problems. In our current research we used this method for the investigation of two types of complex analogue geodynamic and tectonic experiments. The first class of experiments aims to model large-scale oroclinal buckling as an analogue of the late Paleozoic to early Mesozoic evolution of the Central Asian Orogenic Belt (CAOB), resulting from northward drift of the North-China craton towards the Siberian craton. Here we studied the relationship between lower crustal and lithospheric mantle flows and upper crustal deformation, respectively. The second class of experiments is focused on a more general study of lower crustal flow in indentation systems, which represent a major component of some large hot orogens (e.g. the Bohemian massif). In both cases, most simulations show a strong dependency of the shape of the brittle structures in the upper crust on the folding style of the middle and lower ductile layers, which is influenced by the rheological, geometrical and thermal conditions of different parts of the shortened domain. The purpose of applying PIV is to quantify material redistribution in critical domains of the model. The derivation of flow direction and the calculation of the strain-rate and total displacement fields in analogue experiments are generally difficult and time-consuming, and are often performed only on the basis of visual evaluation. The PIV method operates on a set of images in which small tracer particles, seeded within the modeled domain, are assumed to faithfully follow the material flow. From the estimated pixel coordinates, the material displacement field, velocity field, strain rate, vorticity, tortuosity, etc. are calculated. In our experiments we used velocity field divergence to
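
    At its core, PIV recovers the displacement field by cross-correlating small interrogation windows between successive frames; the peak of the correlation map gives the local shift, from which velocity and derived fields (strain rate, vorticity, divergence) follow. A minimal FFT-based sketch on synthetic data (the window size and shift are invented):

      import numpy as np

      def piv_displacement(win_a, win_b):
          """Displacement of the tracer pattern between two interrogation windows,
          from the peak of their FFT-based cross-correlation."""
          a = win_a - win_a.mean()
          b = win_b - win_b.mean()
          corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          n, m = corr.shape
          # map peak indices to signed shifts
          return (dy if dy <= n // 2 else dy - n, dx if dx <= m // 2 else dx - m)

      rng = np.random.default_rng(4)
      frame = rng.random((64, 64))
      shifted = np.roll(frame, (3, -5), axis=(0, 1))   # known motion of the pattern
      print(piv_displacement(shifted, frame))          # -> (3, -5)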

  5. Optimal experiment design for model selection in biochemical networks.

    Science.gov (United States)

    Vanlier, Joep; Tiemann, Christian A; Hilbers, Peter A J; van Riel, Natal A W

    2014-02-20

    Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen Shannon divergence between the multivariate predictive densities of competing models. We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors.
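
    The design criterion, the Jensen-Shannon divergence between the predictive densities of competing models, can be illustrated in one dimension with a histogram-based estimate (the paper uses a k-nearest-neighbour estimator for multivariate densities; the two decay models and the noise level below are invented):

      import numpy as np
      from scipy.spatial.distance import jensenshannon

      rng = np.random.default_rng(5)

      def jsd(samples_a, samples_b, bins=50):
          """Histogram-based Jensen-Shannon divergence between two 1-D predictive
          densities; scipy's jensenshannon returns the distance, so we square it."""
          lo = min(samples_a.min(), samples_b.min())
          hi = max(samples_a.max(), samples_b.max())
          p, _ = np.histogram(samples_a, bins=bins, range=(lo, hi))
          q, _ = np.histogram(samples_b, bins=bins, range=(lo, hi))
          return jensenshannon(p, q) ** 2

      # Candidate experiments = measurement times; pick the one that best separates
      # the posterior-predictive distributions of two competing models.
      for t in [1.0, 5.0, 10.0]:
          pred_m1 = np.exp(-0.3 * t) + 0.05 * rng.normal(size=2000)  # model 1 draws
          pred_m2 = np.exp(-0.5 * t) + 0.05 * rng.normal(size=2000)  # model 2 draws
          print(t, jsd(pred_m1, pred_m2))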

  6. Assessing the impact of natural policy experiments on socioeconomic inequalities in health: how to apply commonly used quantitative analytical methods?

    Science.gov (United States)

    Hu, Yannan; van Lenthe, Frank J; Hoffmann, Rasmus; van Hedel, Karen; Mackenbach, Johan P

    2017-04-20

    The scientific evidence-base for policies to tackle health inequalities is limited. Natural policy experiments (NPE) have drawn increasing attention as a means to evaluating the effects of policies on health. Several analytical methods can be used to evaluate the outcomes of NPEs in terms of average population health, but it is unclear whether they can also be used to assess the outcomes of NPEs in terms of health inequalities. The aim of this study therefore was to assess whether, and to demonstrate how, a number of commonly used analytical methods for the evaluation of NPEs can be applied to quantify the effect of policies on health inequalities. We identified seven quantitative analytical methods for the evaluation of NPEs: regression adjustment, propensity score matching, difference-in-differences analysis, fixed effects analysis, instrumental variable analysis, regression discontinuity and interrupted time-series. We assessed whether these methods can be used to quantify the effect of policies on the magnitude of health inequalities either by conducting a stratified analysis or by including an interaction term, and illustrated both approaches in a fictitious numerical example. All seven methods can be used to quantify the equity impact of policies on absolute and relative inequalities in health by conducting an analysis stratified by socioeconomic position, and all but one (propensity score matching) can be used to quantify equity impacts by inclusion of an interaction term between socioeconomic position and policy exposure. Methods commonly used in economics and econometrics for the evaluation of NPEs can also be applied to assess the equity impact of policies, and our illustrations provide guidance on how to do this appropriately. The low external validity of results from instrumental variable analysis and regression discontinuity makes these methods less desirable for assessing policy effects on population-level health inequalities. Increased use of the
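
    As an illustration of the interaction-term approach described above, here is a difference-in-differences regression with a three-way interaction between policy exposure and socioeconomic position, on simulated data (all variable names and effect sizes are invented):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(6)
      n = 4000
      df = pd.DataFrame({
          "treated": rng.integers(0, 2, n),   # exposed to the policy (region/group)
          "post": rng.integers(0, 2, n),      # observed after implementation
          "low_ses": rng.integers(0, 2, n),   # socioeconomic position
      })
      # Simulated truth: low SES lowers health by 3; the policy raises health by 2,
      # plus an extra 1.5 in the low-SES group (i.e. it narrows the inequality).
      df["health"] = (70 - 3 * df.low_ses
                      + 2.0 * df.treated * df.post
                      + 1.5 * df.treated * df.post * df.low_ses
                      + rng.normal(0, 5, n))

      # Difference-in-differences with a three-way interaction: the coefficient on
      # treated:post:low_ses estimates the policy's differential effect by SES,
      # i.e. its impact on the magnitude of health inequalities.
      model = smf.ols("health ~ treated * post * low_ses", data=df).fit()
      print(model.params[["treated:post", "treated:post:low_ses"]])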

  8. Model and Computing Experiment for Research and Aerosols Usage Management

    Directory of Open Access Journals (Sweden)

    Daler K. Sharipov

    2012-09-01

    The article presents a mathematical model for the study and management of aerosols released into the atmosphere, together with a numerical algorithm implemented in hardware and software systems for conducting computing experiments.

  9. Analysis of Water Conflicts across Natural and Societal Boundaries: Integration of Quantitative Modeling and Qualitative Reasoning

    Science.gov (United States)

    Gao, Y.; Balaram, P.; Islam, S.

    2009-12-01

    …the knowledge generated from these studies cannot be easily generalized or transferred to other basins. Here, we present an approach to integrating quantitative and qualitative methods to study water issues and capture the contextual knowledge of water management, by combining the NSSs framework with an area of artificial intelligence called qualitative reasoning. Using the Apalachicola-Chattahoochee-Flint (ACF) River Basin dispute as an example, we demonstrate how quantitative modeling and qualitative reasoning can be integrated to examine the impact of over-abstraction of water from the river on the ecosystem and the role of governance in shaping the evolution of the ACF water dispute.

  10. A user experience model for tangible interfaces for children

    NARCIS (Netherlands)

    Reidsma, Dennis; van Dijk, Elisabeth M.A.G.; van der Sluis, Frans; Volpe, G; Camurri, A.; Perloy, L.M.; Nijholt, Antinus

    2015-01-01

    Tangible user interfaces allow children to take advantage of their experience in the real world when interacting with digital information. In this paper we describe a model for tangible user interfaces specifically for children that focuses mainly on the user experience during interaction.

  11. New Quantitative Structure-Activity Relationship Model for Angiotensin-Converting Enzyme Inhibitory Dipeptides Based on Integrated Descriptors.

    Science.gov (United States)

    Deng, Baichuan; Ni, Xiaojun; Zhai, Zhenya; Tang, Tianyue; Tan, Chengquan; Yan, Yijing; Deng, Jinping; Yin, Yulong

    2017-11-08

    Angiotensin-converting enzyme (ACE) inhibitory peptides derived from food proteins have been widely reported for hypertension treatment. In this paper, a benchmark data set containing 141 unique ACE inhibitory dipeptides was constructed through database mining, and a quantitative structure-activity relationship (QSAR) study was carried out to predict the half-inhibitory concentration (IC50) of ACE activity. Sixteen descriptors were tested, and the model generated by the G-scale descriptor showed the best predictive performance, with a coefficient of determination (R²) and cross-validated R² (Q²) of 0.6692 and 0.6220, respectively. For most other descriptors, R² ranged from 0.52 to 0.68 and Q² from 0.48 to 0.61. A complex model combining all 16 descriptors was built, and variable selection was performed in order to further improve the prediction performance. The quality of the model using integrated descriptors (R² 0.7340 ± 0.0038, Q² 0.7151 ± 0.0019) was better than that of the G-scale model. An in-depth study of variable importance showed that the properties most correlated with ACE inhibitory activity were hydrophobicity, steric, and electronic properties, and that C-terminal amino acids contribute more than N-terminal amino acids. Five novel predicted ACE-inhibitory peptides were synthesized, and their IC50 values were validated through in vitro experiments. The results indicated that the constructed model could give a reliable prediction of the ACE-inhibitory activity of peptides, and it may be useful in the design of novel ACE-inhibitory peptides.
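
    For readers unfamiliar with the reported statistics, the sketch below shows one conventional way to compute a training R² and a leave-one-out cross-validated Q² from a descriptor matrix X and an activity vector y. It is a hypothetical illustration using ridge regression; the paper's actual modelling workflow is not reproduced here.

```python
# Hypothetical sketch of the reported fit statistics: R^2 on the training
# data and a PRESS-based, leave-one-out cross-validated Q^2.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def r2_q2(X, y):
    model = Ridge(alpha=1.0)
    model.fit(X, y)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - np.sum((y - model.predict(X)) ** 2) / ss_tot
    # Q^2: every sample is predicted by a model that never saw it
    y_loo = cross_val_predict(Ridge(alpha=1.0), X, y, cv=LeaveOneOut())
    q2 = 1.0 - np.sum((y - y_loo) ** 2) / ss_tot
    return r2, q2

# Toy usage with random descriptors and activities.
rng = np.random.default_rng(0)
X = rng.normal(size=(141, 16))               # 141 dipeptides, 16 descriptors
y = X @ rng.normal(size=16) + rng.normal(scale=0.5, size=141)
print(r2_q2(X, y))
```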

  12. A new perceptual difference model for diagnostically relevant quantitative image quality evaluation: a preliminary study.

    Science.gov (United States)

    Miao, Jun; Huang, Feng; Narayan, Sreenath; Wilson, David L

    2013-05-01

    Most objective image quality metrics average over a wide range of image degradations. However, human clinicians demonstrate bias toward different types of artifacts. Here, we aim to create a perceptual difference model, based on Case-PDM, that mimics the preferences of human observers toward different artifacts. We measured how disturbing each artifact was to observers and calibrated the novel perceptual difference model (PDM) accordingly. To tune the new model, which we call Artifact-PDM, degradations were synthetically added to three healthy brain MR data sets. Four types of artifacts of standard compressed sensing (CS) reconstruction were considered (noise, blur, aliasing, and "oil painting", which shows up as flattened, over-smoothed regions), within a reasonable range of artifact severity as measured by both PDM and visual inspection. After the model parameters were tuned for each synthetic image, we used a functional measurement theory pair-comparison experiment to measure the disturbance of each artifact to human observers and determine the weights of each artifact's PDM score. To validate Artifact-PDM, human ratings obtained from a Double Stimulus Continuous Quality Scale experiment were compared to the model for noise, blur, aliasing, oil painting and overall quality using a large set of CS-reconstructed MR images of varying quality. Finally, we used this new approach to compare CS to GRAPPA, a parallel MRI reconstruction algorithm. We found that, for the same Artifact-PDM score, human observers found incoherent aliasing the most disturbing and noise the least. Artifact-PDM results were highly correlated with human observers in both experiments. Optimized CS reconstruction quality compared favorably to GRAPPA's for the same sampling ratio. We conclude that our novel metric can faithfully represent human observer artifact evaluation and can be useful in evaluating CS and GRAPPA reconstruction algorithms, especially in studying artifact trade-offs. Copyright © 2013 Elsevier Inc.

  13. Invasive growth of Saccharomyces cerevisiae depends on environmental triggers: a quantitative model.

    Science.gov (United States)

    Zupan, Jure; Raspor, Peter

    2010-04-01

    In this contribution, the influence of various physicochemical factors on Saccharomyces cerevisiae invasive growth is examined quantitatively. Agar-invasion assays are generally applied for in vitro studies of S. cerevisiae invasiveness, a phenomenon observed as a putative virulence trait in this yeast of growing clinical concern. However, the qualitative agar-invasion assays used until now strongly limit the feasibility and interpretation of analyses and therefore needed to be improved. Moreover, knowledge of the physiology of invasive growth under stress conditions related to the human alimentary tract and food is poor and should be expanded. For this purpose, the quantitative agar-invasion assay presented in our previous work was applied here to clarify in greater detail the significance of the stress factors controlling the adhesion and invasion of the yeast. Ten virulent and non-virulent S. cerevisiae strains were assayed at various temperatures, pH values, nutrient starvation, modified atmosphere, and different concentrations of NaCl, CaCl2 and preservatives. Using specific parameters, such as relative invasion, eight invasive-growth models were hypothesized, which enabled an intelligible interpretation of the results. A strong preference for invasive growth (meaning high relative invasion) was observed when the strains were grown on nitrogen- and glucose-depleted media. A significant increase in the invasion of the strains was also determined at temperatures typical of human fever (37-39 degrees C). On the other hand, a strong repressive effect on invasion was found in the presence of salts, anoxia and some preservatives. Copyright 2010 John Wiley & Sons, Ltd.

  14. Fire Response of Loaded Composite Structures - Experiments and Modeling

    OpenAIRE

    Burdette, Jason A.

    2001-01-01

    In this work, the thermo-mechanical response and failure of loaded, fire-exposed composite structures was studied. Unique experimental equipment and procedures were developed and experiments were performed to assess the effects of mechanical loading and fire exposure on the service life of composite beams. A series of analytical models was assembled to describe the fire growth and structural response processes for the system used in the experiments. This series of models consists of a fire...

  15. Engineering teacher training models and experiences

    Science.gov (United States)

    González-Tirados, R. M.

    2009-04-01

    …Education Area, we renewed the programme, content and methodology, teaching the course under the name "Initial Teacher Training Course within the framework of the European Higher Education Area". Continuous training means learning throughout one's life as an engineering teacher: actions designed to update and improve teaching staff, offered systematically on current issues such as teaching strategies, training for research, training for personal development, and classroom innovations. These activities aim at conceptual change, changing the way of teaching and bringing teaching staff up to date. At the same time, the institution is at the disposal of all teaching staff as a meeting point to discuss common issues, attend conferences, hold department meetings, etc. In this congress we present a justification of both training models and their design, together with some results obtained on training needs, participation, how the training is developing and to what extent students are profiting from it.

  16. Effects of early and late adverse experiences on morpho-quantitative characteristics of Sprague-Dawley rat spleen subjected to stress during adulthood.

    Science.gov (United States)

    Vásquez, Bélgica; Sandoval, Cristian; Smith, Ricardo Luiz; del Sol, Mariano

    2015-01-01

    Morpho-quantitative studies of the spleen indicate that the proportions of its compartments and sub-compartments are stable under normal conditions. However, stress-related disorders can influence the number and function of the immune cells in this organ. The aim of this study was to determine, using a model that alters the early mother-infant bond and the late social bond through isolation, the effect on the morpho-quantitative characteristics of the spleen in adult Sprague-Dawley rats subjected to intermittent chronic stress in adulthood. Twenty-five newborn female rats were used, kept under standardized lactation and feeding conditions. The rats were assigned randomly to 2 control groups (C1 and C2) and 3 experimental groups exposed to early (E1), late (E2) or early-late (E3) adverse experiences and then subjected to intermittent chronic stress in adulthood (C2, E1, E2 and E3). The spleen of each animal was isolated and its morphometric characteristics were determined: volume density (Vv) of the red pulp, white pulp, marginal zone, splenic lymph nodule, periarterial lymphatic sheath and germinal center; areal number density (Na), surface density (Sv), number density (Nv), diameter (D) and total number of splenic lymph nodules. The mass of each compartment was also determined. A one-way analysis of variance (ANOVA) and Scheffé's post hoc test were used for the statistical analysis. P values were considered significant when less than 0.05 (*) and very significant when less than 0.025 (**). There were significant differences in the Vv of the red pulp, white pulp and their sub-compartments between the control and experimental groups. The white pulp increased significantly (P = 0.000) in E1, E2 and E3 compared to C1 and C2. The average Na and D values of the splenic lymph nodules were also higher in the experimental groups. The ANOVA for the mass of the spleen and the red pulp revealed no differences between the groups.

  17. Quantitative research.

    Science.gov (United States)

    Watson, Roger

    2015-04-01

    This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.

  18. Lower-order effects adjustment in quantitative traits model-based multifactor dimensionality reduction.

    Science.gov (United States)

    Mahachie John, Jestinah M; Cattaert, Tom; Lishout, François Van; Gusareva, Elena S; Steen, Kristel Van

    2012-01-01

    Identifying gene-gene interactions or gene-environment interactions in studies of human complex diseases remains a big challenge in genetic epidemiology. An additional challenge, often forgotten, is to account for important lower-order genetic effects. These may hamper the identification of genuine epistasis. If lower-order genetic effects contribute to the genetic variance of a trait, identified statistical interactions may simply be due to a signal boost of these effects. In this study, we restrict attention to quantitative traits and bi-allelic SNPs as genetic markers. Moreover, our interaction study focuses on 2-way SNP-SNP interactions. Via simulations, we assess the performance of different corrective measures for lower-order genetic effects in Model-Based Multifactor Dimensionality Reduction epistasis detection, using additive and co-dominant coding schemes. Performance is evaluated in terms of power and familywise error rate. Our simulations indicate that empirical power estimates are reduced with correction for lower-order effects, as are familywise error rates. Easy-to-use automatic SNP selection procedures, SNP selection based on "top" findings, or SNP selection based on a p-value criterion for interesting main effects result in reduced power but also almost zero false positive rates. Always accounting for main effects in the SNP-SNP pair under investigation during Model-Based Multifactor Dimensionality Reduction analysis adequately controls false positive epistasis findings. This is particularly true when adopting a co-dominant corrective coding scheme. In conclusion, automatic search procedures that identify lower-order effects to correct for during epistasis screening should be avoided. The same is true for procedures that adjust for lower-order effects prior to Model-Based Multifactor Dimensionality Reduction and involve using residuals as the new trait. We advocate "on-the-fly" adjustment for lower-order effects when screening for SNP-SNP interactions.

  19. A probabilistic quantitative risk assessment model for the long-term work zone crashes.

    Science.gov (United States)

    Meng, Qiang; Weng, Jinxian; Qu, Xiaobo

    2010-11-01

    Work zones, especially long-term work zones, increase traffic conflicts and cause safety problems. Proper casualty risk assessment for a work zone is important for both traffic safety engineers and travelers. This paper develops a novel probabilistic quantitative risk assessment (QRA) model to evaluate the casualty risk, combining the frequency and consequence of all accident scenarios triggered by long-term work zone crashes. The casualty risk is measured by individual risk and societal risk. The individual risk can be interpreted as the frequency of a driver or passenger being killed or injured, while the societal risk describes the relation between frequency and the number of casualties. The proposed probabilistic QRA model consists of the estimation of work zone crash frequency, an event tree, and consequence estimation models. The event tree includes seven intermediate events: age (A), crash unit (CU), vehicle type (VT), alcohol (AL), light condition (LC), crash type (CT) and severity (S). Since the estimated probability of some intermediate events may carry large uncertainty, that uncertainty is characterized by treating the probability as a random variable. The consequence estimation model takes into account the combined effects of speed and emergency medical service response time (ERT) on the consequence of a work zone crash. Finally, a numerical example based on the Southeast Michigan work zone crash data is carried out. The numerical results show that there will be a 62% decrease in individual fatality risk and a 44% reduction in individual injury risk if the mean travel speed is slowed down by 20%. In addition, there will be a 5% reduction in individual fatality risk and a 0.05% reduction in individual injury risk if ERT is reduced by 20%. In other words, slowing down speed is more effective than reducing ERT in mitigating casualty risk. 2010 Elsevier Ltd. All rights reserved.
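
    As a hedged illustration of how such an event tree is evaluated, the toy Monte Carlo below propagates an uncertain crash frequency through uncertain branch probabilities to an individual-risk estimate. All distributions and values are placeholders, and the paper's seven intermediate events are collapsed to severity alone for brevity.

```python
# Illustrative Monte Carlo over a pruned event tree (severity branch only);
# every frequency and branch probability below is made up.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Work zone crash frequency (crashes/year) with uncertainty.
freq = rng.lognormal(mean=np.log(12.0), sigma=0.3, size=n)

# Uncertain branch probabilities, characterized as random variables
# as in the QRA model (Beta distributions are a common choice).
p_fatal = rng.beta(2, 98, size=n)    # ~2% fatal given a crash
p_injury = rng.beta(20, 80, size=n)  # ~20% injury given a crash

individual_fatality_risk = freq * p_fatal   # expected fatalities/year
individual_injury_risk = freq * p_injury

print("mean fatality risk:", individual_fatality_risk.mean())
print("95th percentile:   ", np.percentile(individual_fatality_risk, 95))
```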

  20. Generating quantitative models describing the sequence specificity of biological processes with the stabilized matrix method

    Directory of Open Access Journals (Sweden)

    Sette Alessandro

    2005-05-01

    Background: Many processes in molecular biology involve the recognition of short sequences of nucleic or amino acids, such as the binding of immunogenic peptides to major histocompatibility complex (MHC) molecules. From experimental data, a model of the sequence specificity of these processes can be constructed, such as a sequence motif, a scoring matrix or an artificial neural network. The purpose of these models is two-fold. First, they can provide a summary of experimental results, allowing for a deeper understanding of the mechanisms involved in sequence recognition. Second, such models can be used to predict the experimental outcome for yet untested sequences. In the past we reported the development of a method to generate such models called the Stabilized Matrix Method (SMM). This method has been successfully applied to predicting peptide binding to MHC molecules, peptide transport by the transporter associated with antigen presentation (TAP) and proteasomal cleavage of protein sequences. Results: Herein we report the implementation of the SMM algorithm as a publicly available software package. Specific features determining the type of problems the method is most appropriate for are discussed. Advantageous features of the package are: (1) the output generated is easy to interpret, (2) input and output are both quantitative, (3) specific computational strategies to handle experimental noise are built in, (4) the algorithm is designed to effectively handle bounded experimental data, (5) experimental data from randomized peptide libraries and conventional peptides can easily be combined, and (6) it is possible to incorporate pair interactions between positions of a sequence. Conclusion: Making the SMM method publicly available enables bioinformaticians and experimental biologists to easily access it, to compare its performance to other prediction methods, and to extend it to other applications.
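
    The record does not include code, but the core prediction step of a scoring-matrix model of this kind is easy to sketch: the predicted quantity for a peptide is an offset plus a sum of position-specific amino-acid contributions. The matrix below is random placeholder data, not SMM output.

```python
# Minimal sketch of scoring-matrix prediction (not the SMM package itself):
# predicted value = offset + sum over positions of matrix[amino acid, position].
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
L = 9                                    # 9-mer peptides, e.g. MHC binders
rng = np.random.default_rng(1)
matrix = rng.normal(size=(len(AA), L))   # trained entries would go here
offset = 0.5

def predict(peptide):
    """Predicted quantity (e.g. log affinity) as a sum over positions."""
    idx = [AA.index(a) for a in peptide]
    return offset + sum(matrix[i, pos] for pos, i in enumerate(idx))

print(predict("SIINFEKLM"))
```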

  1. Currency risk and prices of oil and petroleum products: a simulation with a quantitative model

    International Nuclear Information System (INIS)

    Aniasi, L.; Ottavi, D.; Rubino, E.; Saracino, A.

    1992-01-01

    This paper analyzes the relationship between the exchange rates of the US dollar against the four major European currencies and the prices of oil and its main products in those countries. The sensitivity of prices to exchange rate movements is of fundamental importance for the refining and distribution industries of importing countries. The analysis shows that neither in free-market conditions, such as those in Great Britain, France and Germany, nor in regulated markets such as the Italian one, do variations in petroleum product prices fully absorb variations in the exchange rates. To assess the above relationship, we first tested the order of co-integration of the time series of exchange rates of EMS currencies with those of international prices of oil and its derivative products; we then used a transfer-function model to reproduce the quantitative relationships between those variables. Using these results, we reproduced domestic price functions with partial adjustment mechanisms. Finally, we used the above model to run a simulation of the deviation from the steady-state pattern caused by exogenous exchange-rate shocks. 21 refs., 5 figs., 3 tabs

  2. An ex vivo model to quantitatively analyze cell migration in tissue.

    Science.gov (United States)

    O'Leary, Conor J; Weston, Mikail; McDermott, Kieran W

    2018-01-01

    Within the developing central nervous system, the ability of cells to migrate through the tissue parenchyma to reach their target destination and undergo terminal differentiation is vital to normal central nervous system (CNS) development. To develop novel therapies to treat the injured CNS, it is essential that the migratory behavior of cell populations is understood. Many studies have examined the ability of individual neurons to migrate through the developing CNS, describing specific modes of migration including locomotion and somal translocation. Few studies have investigated the mass migration of large populations of neural progenitors, particularly in the developing spinal cord. Here, we describe a method to robustly analyze large numbers of migrating cells using a co-culture assay. The ex vivo tissue model promotes the survival and differentiation of co-cultured progenitor cells. Using this assay, we demonstrate that migrating neuroepithelial progenitor cells display region-specific migration patterns within the dorsal and ventral spinal cord at defined developmental time points. The technique described here is a viable ex vivo model for quantitatively analyzing cell migration and differentiation, and we demonstrate the ability to detect changes in cell migration within distinct tissue regions across tissue samples. Developmental Dynamics 247:201-211, 2018. © 2017 Wiley Periodicals, Inc.

  3. Overcoming pain thresholds with multilevel models-an example using quantitative sensory testing (QST) data.

    Science.gov (United States)

    Hirschfeld, Gerrit; Blankenburg, Markus R; Süß, Moritz; Zernikow, Boris

    2015-01-01

    The assessment of somatosensory function is a cornerstone of research and clinical practice in neurology. Recent initiatives have developed novel protocols for quantitative sensory testing (QST). Application of these methods has led to intriguing findings, such as the presence of lower pain thresholds in healthy children compared to healthy adolescents. In this article, we (re-)introduce the basic concepts of signal detection theory (SDT) as a method to investigate such differences in somatosensory function in detail. SDT describes participants' responses according to two parameters: sensitivity and response bias. Sensitivity refers to individuals' ability to discriminate between painful and non-painful stimulations. Response bias refers to individuals' criterion for giving a "painful" response. We describe how multilevel models can be used to estimate these parameters and to overcome central critiques of these methods. As an example, we apply these methods to data from the mechanical pain sensitivity test of the QST protocol. The results show that adolescents are more sensitive to mechanical pain and contradict the idea that younger children simply use more lenient criteria to report pain. Overall, we hope that wider use of multilevel modeling to describe somatosensory functioning may advance neurology research and practice.
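
    As a reference point for the two SDT parameters named above, a minimal per-participant computation might look like the sketch below; the article's multilevel models estimate the same quantities jointly across participants, and the counts in the usage line are invented.

```python
# Sketch of the two SDT parameters: sensitivity (d') and response bias
# (criterion c), computed from hit and false-alarm counts.
from scipy.stats import norm

def sdt_parameters(hits, misses, false_alarms, correct_rejections):
    # log-linear correction keeps the rates away from 0 and 1
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(h) - norm.ppf(f)             # sensitivity
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))  # response bias
    return d_prime, criterion

print(sdt_parameters(hits=40, misses=10, false_alarms=5, correct_rejections=45))
```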

  4. DEVELOPMENT OF MODEL FOR QUANTITATIVE EVALUATION OF DYNAMICALLY STABLE FORMS OF RIVER CHANNELS

    Directory of Open Access Journals (Sweden)

    O. V. Zenkin

    2017-01-01

    …The determination of regularities in the development of bed forms, and of quantitative relations between their parameters, is based on modeling the "right" forms of the riverbed. The research resulted in the establishment and testing of a simulation modeling methodology that allows one to identify dynamically stable forms of the riverbed.

  5. Quantitative structure activity relationship model for predicting the depletion percentage of skin allergic chemical substances of glutathione

    International Nuclear Information System (INIS)

    Si Hongzong; Wang Tao; Zhang Kejun; Duan Yunbo; Yuan Shuping; Fu Aiping; Hu Zhide

    2007-01-01

    A quantitative model was developed by gene expression programming (GEP) to predict the depletion percentage of glutathione (DPG) for a set of compounds. Each compound was represented by several calculated structural descriptors covering constitutional, topological, geometrical, electrostatic and quantum-chemical features. The GEP method produced a nonlinear, five-descriptor quantitative model with a mean error and correlation coefficient of 10.52 and 0.94 for the training set, and 22.80 and 0.85 for the test set, respectively. The GEP predictions are in good agreement with experimental values, and better than those of the heuristic method.

  6. Automated model-based quantitative analysis of phantoms with spherical inserts in FDG PET scans.

    Science.gov (United States)

    Ulrich, Ethan J; Sunderland, John J; Smith, Brian J; Mohiuddin, Imran; Parkhurst, Jessica; Plichta, Kristin A; Buatti, John M; Beichel, Reinhard R

    2018-01-01

    Quality control plays an increasingly important role in quantitative PET imaging and is typically performed using phantoms. The purpose of this work was to develop and validate a fully automated analysis method for two common PET/CT quality assurance phantoms: the NEMA NU-2 IQ and the SNMMI/CTN oncology phantom. The algorithm was designed to utilize only the PET scan, to enable the analysis of phantoms with thin-walled inserts. We introduce a model-based method for automated analysis of phantoms with spherical inserts. Models are first constructed for each type of phantom to be analyzed. A robust insert detection algorithm uses the model to locate all inserts inside the phantom. First, candidates for inserts are detected using a scale-space detection approach. Second, candidates are given an initial label using a score-based optimization algorithm. Third, a robust model fitting step aligns the phantom model to the initial labeling and fixes incorrect labels. Finally, the detected insert locations are refined and measurements are taken for each insert and several background regions. In addition, an approach for automated selection of NEMA and CTN phantom models is presented. The method was evaluated on a diverse set of 15 NEMA and 20 CTN phantom PET/CT scans. NEMA phantoms were filled with radioactive tracer solution at a 9.7:1 activity ratio over background, and CTN phantoms were filled at 4:1 and 2:1 activity ratios over background. For quantitative evaluation, an independent reference standard was generated by two experts using PET/CT scans of the phantoms. In addition, the automated approach was compared against manual analysis of the PET phantom scans by four experts, which represents the current clinical standard approach. The automated analysis method successfully detected and measured all inserts in all test phantom scans. It is a deterministic algorithm (zero variability), and the insert detection RMS error (i.e., bias) was 0.97, 1.12, and 1.48 mm.

  7. A quantitative model of intracellular growth of Legionella pneumophila in Acanthamoeba castellanii.

    Science.gov (United States)

    Moffat, J F; Tompkins, L S

    1992-01-01

    A model of intracellular growth for Legionella pneumophila in Acanthamoeba castellanii has been developed and provides a quantitative measure of survival and replication after entry. In this model, Acanthamoeba monolayers were incubated with bacteria in tissue culture plates under nutrient-limiting conditions. Gentamicin was used to kill extracellular bacteria following the period of incubation, and the number of intracellular bacteria was determined following lysis of amebae. Intracellular growth of virulent L. pneumophila and other wild-type Legionella species was observed when the assay was performed at 37°C. At room temperature, none of the Legionella strains tested grew intracellularly, while an avirulent L. pneumophila strain was unable to replicate in this assay at either temperature. The effect of nutrient limitation on A. castellanii during the assay prevented multiplication of the amebae and increased the level of infection by Legionella spp. The level of infection of the amebae was directly proportional to the multiplicity of infection with bacteria; at an inoculum of 1.03 × 10⁷ bacteria added to wells containing 1.10 × 10⁵ amebae (multiplicity of infection of 100), approximately 4.4% of A. castellanii cells became infected. Cytochalasin D reduced the uptake of bacteria by the amebae primarily by causing amebae to lift off the culture dish, reducing the number of target hosts; methylamine also reduced the level of initial infection, yet neither inhibitor was able to prevent intracellular replication of Legionella spp. Consequently, once the bacteria entered the cell, only lowered temperature could restrict replication. This model of intracellular growth provides a one-step growth curve and should be useful for studying the molecular basis of the host-parasite interaction. PMID:1729191

  8. Quantitative vertebral morphometry based on parametric modeling of vertebral bodies in 3D.

    Science.gov (United States)

    Stern, D; Njagulj, V; Likar, B; Pernuš, F; Vrtovec, T

    2013-04-01

    Quantitative vertebral morphometry (QVM) was performed by parametric modeling of vertebral bodies in three dimensions (3D). Identification of vertebral fractures in two dimensions is a challenging task due to the projective nature of radiographic images and variability in vertebral shape. By generating detailed 3D anatomical images, computed tomography (CT) enables accurate measurement of vertebral deformations and fractures. A detailed 3D representation of the vertebral body shape is obtained by automatically aligning a parametric 3D model to vertebral bodies in CT images. The parameters of the 3D model describe clinically meaningful morphometric vertebral body features, and QVM in 3D is performed by comparing the parameters to their statistical values. Thresholds and parameters that best discriminate between normal and fractured vertebral bodies are determined by applying statistical classification analysis. The proposed QVM in 3D was applied to 454 normal and 228 fractured vertebral bodies, yielding classification sensitivity of 92.5% at 7.5% specificity, with corresponding accuracy of 92.5% and precision of 86.1%. The 3D shape parameters that provided the best separation between normal and fractured vertebral bodies were the vertebral body height and the inclination and concavity of both vertebral endplates. The described QVM in 3D is able to efficiently and objectively discriminate between normal and fractured vertebral bodies and identify morphological cases (wedge, (bi)concavity, or crush) and grades (1, 2, or 3) of vertebral body fractures. It may therefore be valuable for diagnosing and predicting vertebral fractures in patients who are at risk of osteoporosis.

  9. A rodent model of traumatic stress induces lasting sleep and quantitative electroencephalographic disturbances.

    Science.gov (United States)

    Nedelcovych, Michael T; Gould, Robert W; Zhan, Xiaoyan; Bubser, Michael; Gong, Xuewen; Grannan, Michael; Thompson, Analisa T; Ivarsson, Magnus; Lindsley, Craig W; Conn, P Jeffrey; Jones, Carrie K

    2015-03-18

    Hyperarousal and sleep disturbances are common, debilitating symptoms of post-traumatic stress disorder (PTSD). PTSD patients also exhibit abnormalities in quantitative electroencephalography (qEEG) power spectra during wake as well as rapid eye movement (REM) and non-REM (NREM) sleep. Selective serotonin reuptake inhibitors (SSRIs), the first-line pharmacological treatment for PTSD, provide modest remediation of the hyperarousal symptoms in PTSD patients, but have little to no effect on the sleep-wake architecture deficits. Development of novel therapeutics for these sleep-wake architecture deficits is limited by a lack of relevant animal models. Thus, the present study investigated whether single prolonged stress (SPS), a rodent model of traumatic stress, induces PTSD-like sleep-wake and qEEG spectral power abnormalities that correlate with changes in central serotonin (5-HT) and neuropeptide Y (NPY) signaling in rats. Rats were implanted with telemetric recording devices to continuously measure EEG before and after SPS treatment. A second cohort of rats was used to measure SPS-induced changes in plasma corticosterone, 5-HT utilization, and NPY expression in brain regions that comprise the neural fear circuitry. SPS caused sustained dysregulation of NREM and REM sleep, accompanied by state-dependent alterations in qEEG power spectra indicative of cortical hyperarousal. These changes corresponded with acute induction of the corticosterone receptor co-chaperone FK506-binding protein 51 and delayed reductions in 5-HT utilization and NPY expression in the amygdala. SPS represents a preclinical model of PTSD-related sleep-wake and qEEG disturbances with underlying alterations in neurotransmitter systems known to modulate both sleep-wake architecture and the neural fear circuitry.

  10. Quantitative microleakage analysis of endodontic temporary filling materials using a glucose penetration model.

    Science.gov (United States)

    Kim, Sin-Young; Ahn, Jin-Soo; Yi, Young-Ah; Lee, Yoon; Hwang, Ji-Yun; Seo, Deog-Gyu

    2015-02-01

    The purpose of this study was to analyze the sealing ability of different temporary endodontic materials over a 6-week period using a glucose penetration model. Standardized holes were formed in 48 dentin discs from human premolars. The thicknesses of the specimens were distributed evenly at 2 mm, 3 mm and 4 mm. Prepared dentin specimens were randomly assigned to six groups (n = 7) and the holes were filled with two temporary filling materials according to the manufacturers' instructions, as follows: Caviton (GC Corporation, Tokyo, Japan) 2 mm, 3 mm, 4 mm and IRM (Dentsply International Inc., Milford, DE) 2 mm, 3 mm, 4 mm. The remaining specimens were used as positive and negative controls, and all specimens underwent thermocycling (1000 cycles; 5-55°C). The sealing ability of all samples was evaluated using the leakage model for glucose, and the samples were analyzed by spectrophotometer in a quantitative glucose microleakage test over a period of 6 weeks. As statistical inference, a mixed effect analysis was applied to analyze serial measurements over time. The Caviton groups showed less glucose penetration than the IRM groups. The Caviton 4 mm group demonstrated relatively low glucose leakage over the test period. High glucose leakage was detected throughout the test period in all IRM groups. The glucose leakage level increased after 1 week in the Caviton 2 mm group and after 4 weeks in the Caviton 3 mm and 4 mm groups. Overall, Caviton showed better sealing ability than IRM in the glucose penetration model over the 6 weeks. Temporary filling with Caviton to at least 3 mm in thickness is necessary, and temporary filling periods should not exceed 4 weeks.

  11. Model developments for quantitative estimates of the benefits of the signals on nuclear power plant availability and economics

    International Nuclear Information System (INIS)

    Seong, Poong Hyun

    1993-01-01

    A novel framework for quantitative estimates of the benefits of signals on nuclear power plant availability and economics has been developed in this work. The models developed in this work quantify how the perfect signals affect the human operator's success in restoring the power plant to the desired state when it enters undesirable transients. Also, the models quantify the economic benefits of these perfect signals. The models have been applied to the condensate feedwater system of the nuclear power plant for demonstration. (Author)

  12. A quantitative exposure model simulating human norovirus transmission during preparation of deli sandwiches.

    Science.gov (United States)

    Stals, Ambroos; Jacxsens, Liesbeth; Baert, Leen; Van Coillie, Els; Uyttendaele, Mieke

    2015-03-02

    Human noroviruses (HuNoVs) are a major cause of foodborne gastroenteritis worldwide. They are often transmitted via infected and shedding food handlers manipulating foods such as deli sandwiches. The presented study aimed to simulate HuNoV transmission during the preparation of deli sandwiches in a sandwich bar. A quantitative exposure model was developed by combining the GoldSim® and @Risk® software packages. Input data were collected from the scientific literature and from a two-week observational study performed at two sandwich bars. The model included three food handlers working during a three-hour shift on a shared working surface where deli sandwiches are prepared. The model consisted of three components. The first component simulated the preparation of the deli sandwiches and contained the HuNoV reservoirs, locations within the model allowing the accumulation of HuNoV, and the workings of intervention measures. The second component covered the contamination sources: (1) the initial HuNoV-contaminated lettuce used on the sandwiches and (2) HuNoV originating from a shedding food handler. The third component included four possible intervention measures to reduce HuNoV transmission: hand and surface disinfection during preparation of the sandwiches, hand gloving, and hand washing after a restroom visit. A single HuNoV-shedding food handler could cause mean levels of 43±18, 81±37 and 18±7 HuNoV particles on the deli sandwiches, hands and working surfaces, respectively. Introduction of contaminated lettuce as the only source of HuNoV resulted in the presence of 6.4±0.8 and 4.3±0.4 HuNoV particles on the food and hand reservoirs. The inclusion of hand and surface disinfection and hand gloving as single intervention measures was not effective in the model, as only marginal reductions of HuNoV levels were noticeable in the different reservoirs. High compliance with hand washing after a restroom visit did, however, reduce HuNoV presence substantially in all reservoirs.
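
    A toy version of one route in such an exposure model can be sketched as a Monte Carlo over hand-to-food transfer; every parameter below (shedding load, transfer fraction, number of touches) is a placeholder, not a value from the study.

```python
# Toy Monte Carlo of one contamination route: virus on a food handler's
# hands transferring to sandwiches. All parameter values are placeholders.
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

hand_load = rng.lognormal(np.log(100), 1.0, size=n)  # HuNoV on hands
transfer = rng.uniform(0.01, 0.10, size=n)           # fraction moved per touch
touches = rng.integers(1, 6, size=n)                 # touches per sandwich

# virus remaining on hands decays geometrically with each touch,
# so the total transferred after t touches is load * (1 - (1 - f)^t)
on_sandwich = hand_load * (1 - (1 - transfer) ** touches)

print("mean HuNoV per sandwich:", on_sandwich.mean())
print("P(>10 particles):", (on_sandwich > 10).mean())
```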

  13. Absolute quantitation of myocardial blood flow with {sup 201}Tl and dynamic SPECT in canine: optimisation and validation of kinetic modelling

    Energy Technology Data Exchange (ETDEWEB)

    Iida, Hidehiro; Kim, Kyeong-Min; Nakazawa, Mayumi; Sohlberg, Antti; Zeniya, Tsutomu; Hayashi, Takuya; Watabe, Hiroshi [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita City, Osaka (Japan); Eberl, Stefan [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita City, Osaka (Japan); Royal Prince Alfred Hospital, PET and Nuclear Medicine Department, Camperdown, NSW (Australia); Tamura, Yoshikazu [Akita Kumiai General Hospital, Department of Cardiology, Akita City (Japan); Ono, Yukihiko [Akita Research Institute of Brain, Akita City (Japan)

    2008-05-15

    ²⁰¹Tl has been extensively used for myocardial perfusion and viability assessment. Unlike ⁹⁹ᵐTc-labelled agents, such as ⁹⁹ᵐTc-sestamibi and ⁹⁹ᵐTc-tetrofosmin, the regional concentration of ²⁰¹Tl varies with time. This study is intended to validate a kinetic modelling approach for in vivo quantitative estimation of regional myocardial blood flow (MBF) and volume of distribution of ²⁰¹Tl using dynamic SPECT. Dynamic SPECT was carried out on 20 normal canines after the intravenous administration of ²⁰¹Tl using a commercial SPECT system. Seven animals were studied at rest, nine during adenosine infusion, and four after beta-blocker administration. Quantitative images were reconstructed with a previously validated technique employing OS-EM with attenuation correction and transmission-dependent convolution subtraction scatter correction. Measured regional time-activity curves in myocardial segments were fitted to two- and three-compartment models. Regional MBF was defined as the influx rate constant (K₁) with corrections for the partial volume effect, haematocrit and limited first-pass extraction fraction, and was compared with that determined from radio-labelled microsphere experiments. Regional time-activity curves responded well to pharmacological stress. Quantitative MBF values were higher with adenosine and decreased after beta-blocker administration compared to the resting condition. MBF values obtained with SPECT (MBF_SPECT) correlated well with those obtained from the radio-labelled microspheres (MBF_MS) (MBF_SPECT = -0.067 + 1.042 × MBF_MS, p < 0.001). The three-compartment model provided a better fit than the two-compartment model, but the difference in MBF values between the two methods was small and could be accounted for with a simple linear regression. Absolute quantitation of regional MBF, for a wide physiological flow range, appears to be feasible using ²⁰¹Tl and dynamic SPECT. (orig.)
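
    The kinetic modelling step can be illustrated with a one-tissue compartment model, in which the tissue curve is the arterial input function convolved with an exponential kernel and K₁ is the fitted influx rate constant. The sketch below uses a synthetic input function and omits the study's corrections (partial volume, haematocrit, first-pass extraction); it is a generic illustration, not the authors' code.

```python
# Sketch of one-tissue compartment fitting: dCt/dt = K1*Cp(t) - k2*Ct(t),
# i.e. Ct(t) = K1 * integral of Cp(s) * exp(-k2*(t-s)) ds.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 60, 121)          # minutes
cp = 10 * t * np.exp(-t / 3.0)       # synthetic arterial input function

def one_tissue(t, K1, k2):
    dt = t[1] - t[0]
    kernel = np.exp(-k2 * t)
    # discrete convolution approximates the convolution integral
    return K1 * np.convolve(cp, kernel)[: t.size] * dt

truth = one_tissue(t, K1=0.8, k2=0.1)
noisy = truth + np.random.default_rng(3).normal(0, 0.5, t.size)

(K1, k2), _ = curve_fit(one_tissue, t, noisy, p0=(0.5, 0.05))
print(f"K1 = {K1:.3f} (proportional to MBF), k2 = {k2:.3f}")
```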

  14. Spectral evaluation of Earth geopotential models and an experiment ...

    Indian Academy of Sciences (India)

    and an experiment on its regional improvement for geoid modelling. B Erol. Department of Geomatics Engineering, Civil Engineering Faculty,. Istanbul Technical University, Maslak 34469, Istanbul, Turkey. e-mail: bihter@itu.edu.tr. As the number of Earth geopotential models (EGM) grows with the increasing number of data ...

  15. Historical and idealized climate model experiments: an EMIC intercomparison

    DEFF Research Database (Denmark)

    Eby, M.; Weaver, A. J.; Alexander, K.

    2012-01-01

    Both historical and idealized climate model experiments are performed with a variety of Earth System Models of Intermediate Complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and c...

  16. Model experiments related to outdoor propagation over an earth berm

    DEFF Research Database (Denmark)

    Rasmussen, Karsten Bo

    1994-01-01

    A series of scale model experiments related to outdoor propagation over an earth berm is described. The measurements are performed with a triggered spark source. The results are compared with data from an existing calculation model based upon uniform diffraction theory. Comparisons are made...

  17. Teaching Structures with Models : Experiences from Chile and the Netherlands

    NARCIS (Netherlands)

    Morales Beltran, M.G.; Borgart, A.

    2012-01-01

    This paper states the importance of using scaled models for the teaching of structures in the curricula of Architecture and Structural Engineering studies, based on 10 years' experience working with models for different purposes and with a variety of materials and construction methods.

  18. Human strategic reasoning in dynamic games: Experiments, logics, cognitive models

    NARCIS (Netherlands)

    Ghosh, Sujata; Halder, Tamoghna; Sharma, Khyati; Verbrugge, Rineke

    2015-01-01

    © Springer-Verlag Berlin Heidelberg 2015. This article provides a three-way interaction between experiments, logic and cognitive modelling so as to bring out a shared perspective among these diverse areas, aiming towards better understanding and better modelling of human strategic reasoning in dynamic games.

  19. Effect of platform, reference material, and quantification model on enumeration of Enterococcus by quantitative PCR methods

    Science.gov (United States)

    Quantitative polymerase chain reaction (qPCR) is increasingly being used for the quantitative detection of fecal indicator bacteria in beach water. qPCR allows for same-day health warnings, and its application is being considered as an option for recreational water quality testing...

  20. Multiple-Strain Approach and Probabilistic Modeling of Consumer Habits in Quantitative Microbial Risk Assessment: A Quantitative Assessment of Exposure to Staphylococcal Enterotoxin A in Raw Milk.

    Science.gov (United States)

    Crotta, Matteo; Rizzi, Rita; Varisco, Giorgio; Daminelli, Paolo; Cunico, Elena Cosciani; Luini, Mario; Graber, Hans Ulrich; Paterlini, Franco; Guitian, Javier

    2016-03-01

    Quantitative microbial risk assessment (QMRA) models are extensively applied to inform management of a broad range of food safety risks. Inevitably, QMRA modeling involves an element of simplification of the biological process of interest. Two features that are frequently simplified or disregarded are the pathogenicity of multiple strains of a single pathogen and consumer behavior at the household level. In this study, we developed a QMRA model with a multiple-strain approach and a consumer phase module (CPM) based on uncertainty distributions fitted from field data. We modeled exposure to staphylococcal enterotoxin A in raw milk in Lombardy; a specific enterotoxin production module was thus included. The model is adaptable and could be used to assess the risk related to other pathogens in raw milk, as well as other staphylococcal enterotoxins. The multiple-strain approach, implemented as a multinomial process, allowed the inclusion of variability and uncertainty with regard to pathogenicity at the bacterial level. Data from 301 questionnaires submitted to raw-milk consumers were used to obtain uncertainty distributions for the CPM. The distributions were modeled to be easily updatable with further data or evidence. The sources of uncertainty due to the multiple-strain approach and the CPM were identified, and their impact on the output was assessed by comparing specific scenarios to the baseline. When the distributions reflecting the uncertainty in consumer behavior were fixed to the 95th percentile, the risk of exposure increased up to 160-fold. This reflects the importance of taking into consideration the diversity of consumers' habits at the household level and the impact that the lack of knowledge about variables in the CPM can have on the final QMRA estimates. The multiple-strain approach lends itself to use in other food matrices besides raw milk and allows the model to better capture the complexity of the real world.
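
    The multinomial implementation of the multiple-strain approach can be sketched as follows; the cell counts and strain prevalences are illustrative placeholders, not the study's estimates.

```python
# Sketch of the multiple-strain step: in each Monte Carlo iteration the
# cells in a serving are allocated to strain types (here, enterotoxin-A
# producers vs. non-producers) by a multinomial draw.
import numpy as np

rng = np.random.default_rng(11)
n_iter = 10_000

cells = rng.poisson(lam=500, size=n_iter)   # cells per serving
p_strain = np.array([0.25, 0.75])           # [SEA producer, other]

producers = np.array([rng.multinomial(c, p_strain)[0] for c in cells])
print("servings with any SEA-producing cells:", (producers > 0).mean())
```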

  1. Assessing the toxic effects of ethylene glycol ethers using Quantitative Structure Toxicity Relationship models

    International Nuclear Information System (INIS)

    Ruiz, Patricia; Mumtaz, Moiz; Gombar, Vijay

    2011-01-01

    Experimental determination of toxicity profiles consumes a great deal of time, money, and other resources. Consequently, businesses, societies, and regulators strive for reliable alternatives such as Quantitative Structure Toxicity Relationship (QSTR) models to fill gaps in toxicity profiles of compounds of concern to human health. The use of glycol ethers and their health effects have recently attracted the attention of international organizations such as the World Health Organization (WHO). The board members of Concise International Chemical Assessment Documents (CICAD) recently identified inadequate testing as well as gaps in toxicity profiles of ethylene glycol mono-n-alkyl ethers (EGEs). The CICAD board requested the ATSDR Computational Toxicology and Methods Development Laboratory to conduct QSTR assessments of certain specific toxicity endpoints for these chemicals. In order to evaluate the potential health effects of EGEs, CICAD proposed a critical QSTR analysis of the mutagenicity, carcinogenicity, and developmental effects of EGEs and other selected chemicals. We report here results of the application of QSTRs to assess rodent carcinogenicity, mutagenicity, and developmental toxicity of four EGEs: 2-methoxyethanol, 2-ethoxyethanol, 2-propoxyethanol, and 2-butoxyethanol and their metabolites. Neither mutagenicity nor carcinogenicity is indicated for the parent compounds, but these compounds are predicted to be developmental toxicants. The predicted toxicity effects were subjected to reverse QSTR (rQSTR) analysis to identify structural attributes that may be the main drivers of the developmental toxicity potential of these compounds.

  2. Quantitative assessment of bone defect healing by multidetector CT in a pig model

    International Nuclear Information System (INIS)

    Riegger, Carolin; Kroepil, Patric; Lanzman, Rotem S.; Miese, Falk R.; Antoch, Gerald; Scherer, Axel; Jungbluth, Pascal; Hakimi, Mohssen; Wild, Michael; Hakimi, Ahmad R.

    2012-01-01

    To evaluate multidetector CT volumetry in the assessment of bone defect healing in comparison to histopathological findings in an animal model. In 16 mini-pigs, a circumscribed tibial bone defect was created. Multidetector CT (MDCT) of the tibia was performed on a 64-row scanner 42 days after the operation. The extent of bone healing was estimated quantitatively by MDCT volumetry using a commercially available software programme (syngo Volume, Siemens, Germany). The volume of the entire defect (including all pixels from -100 to 3,000 HU), the nonconsolidated areas (-100 to 500 HU), and areas of osseous consolidation (500 to 3,000 HU) were assessed and the extent of consolidation was calculated. Histomorphometry served as the reference standard. The extent of osseous consolidation in MDCT volumetry ranged from 19 to 92% (mean 65.4 ± 18.5%). There was a significant correlation between histologically visible newly formed bone and the extent of osseous consolidation on MDCT volumetry (r = 0.82, P < 0.0001). A significant negative correlation was detected between osseous consolidation on MDCT and histological areas of persisting defect (r = -0.9, P < 0.0001). MDCT volumetry is a promising tool for noninvasive monitoring of bone healing, showing excellent correlation with histomorphometry. (orig.)
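
    The volumetric measurement itself reduces to counting voxels in the stated HU ranges. The sketch below is a hypothetical stand-in for the commercial software, applied to a defect-masked HU array; the voxel size and synthetic data are placeholders.

```python
# Sketch of HU-threshold volumetry: classify voxels of the defect region
# by the HU ranges given in the abstract and report the consolidated fraction.
import numpy as np

def consolidation_fraction(ct, voxel_volume_mm3=0.5**3):
    defect = (ct >= -100) & (ct <= 3000)        # entire defect
    consolidated = (ct > 500) & (ct <= 3000)    # osseous consolidation
    total_vol = defect.sum() * voxel_volume_mm3
    cons_vol = consolidated.sum() * voxel_volume_mm3
    return cons_vol / total_vol

rng = np.random.default_rng(5)
ct = rng.normal(400, 300, size=(64, 64, 32))    # synthetic HU values
print(f"consolidated: {100 * consolidation_fraction(ct):.1f}%")
```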

  3. Advances in the molecular modeling and quantitative structure-activity relationship-based design for antihistamines.

    Science.gov (United States)

    Galvez, Jorge; Galvez-Llompart, Maria; Zanni, Riccardo; Garcia-Domenech, Ramon

    2013-03-01

    Nowadays the use of antihistamines (AH) is increasing steadily. These drugs act on a variety of pathological conditions of the organism. A number of computer-aided (in silico) approaches have been developed to discover and develop novel AH drugs. Among these methods are those based on drug-receptor docking, thermodynamics, and quantitative structure-activity relationships (QSAR). This review collates the most recent advances in the use of computational approaches for the search and characterization of novel AH drugs. Within the QSAR methods, particular attention is paid to those based on molecular topology (MT) because of their demonstrated efficacy in discovering new drugs. Related topics, including docking studies, thermodynamic aspects and molecular modeling, are treated to the extent that they complement QSAR-MT. Given the importance of AHs, the search for new drugs in this field has become imperative. In this regard, the use of QSAR methods based on MT, namely QSAR-MT, has proven to be a powerful tool when the goal is discovering new hit or lead structures. It has been shown that antihistaminic activity is complex and differs across the four known receptor types (H1 to H4), and that electronic, steric and physicochemical issues determine drug activity. These factors, along with purely structural ones, can be deduced from topological and topochemical information.

  5. Quantitative profiling of brain lipid raft proteome in a mouse model of fragile X syndrome.

    Science.gov (United States)

    Kalinowska, Magdalena; Castillo, Catherine; Francesconi, Anna

    2015-01-01

    Fragile X Syndrome, a leading cause of inherited intellectual disability and autism, arises from transcriptional silencing of the FMR1 gene encoding an RNA-binding protein, Fragile X Mental Retardation Protein (FMRP). FMRP can regulate the expression of approximately 4% of brain transcripts through its role in regulation of mRNA transport, stability and translation, thus providing a molecular rationale for its potential pleiotropic effects on neuronal and brain circuitry function. Several intracellular signaling pathways are dysregulated in the absence of FMRP suggesting that cellular deficits may be broad and could result in homeostatic changes. Lipid rafts are specialized regions of the plasma membrane, enriched in cholesterol and glycosphingolipids, involved in regulation of intracellular signaling. Among transcripts targeted by FMRP, a subset encodes proteins involved in lipid biosynthesis and homeostasis, dysregulation of which could affect the integrity and function of lipid rafts. Using a quantitative mass spectrometry-based approach we analyzed the lipid raft proteome of Fmr1 knockout mice, an animal model of Fragile X syndrome, and identified candidate proteins that are differentially represented in Fmr1 knockout mice lipid rafts. Furthermore, network analysis of these candidate proteins reveals connectivity between them and predicts functional connectivity with genes encoding components of myelin sheath, axonal processes and growth cones. Our findings provide insight to aid identification of molecular and cellular dysfunctions arising from Fmr1 silencing and for uncovering shared pathologies between Fragile X syndrome and other autism spectrum disorders.

  6. Evaluation of tongue motor biomechanics during swallowing—From oral feeding models to quantitative sensing methods

    Directory of Open Access Journals (Sweden)

    Takahiro Ono

    2009-09-01

    In today's aging society, dentists are more likely to treat patients with dysphagia and are required to select an optimal treatment option based on a complete understanding of the swallowing function. Although the tongue plays an important role in mastication and swallowing, as described in the human oral feeding models developed in the 1990s, the physiological significance of tongue function has been poorly understood because of the difficulty in monitoring and analyzing it. This review summarizes recent approaches used to quantitatively evaluate tongue function during swallowing, focusing mainly on modern sensing methods such as manofluorography, sensing probes, pressure sensors installed in palatal plates, and ultrasound imaging of tongue movement. These studies provided a basic understanding of the kinematics and biomechanics of tongue movement during swallowing in normal subjects. There have been few studies, however, on pathological changes in tongue function in dysphagic patients. Further improvements in measurement devices and technologies, together with additional multidisciplinary studies, are therefore needed to establish therapeutic evidence regarding tongue movement, as well as the best prosthodontic approach for dysphagia rehabilitation.

  7. Quantitative studies of animal colour constancy: using the chicken as model

    Science.gov (United States)

    2016-01-01

    Colour constancy is the capacity of visual systems to keep colour perception constant despite changes in the illumination spectrum. Colour constancy has been tested extensively in humans and has also been described in many animals. In humans, colour constancy is often studied quantitatively, but besides humans, this has only been done for the goldfish and the honeybee. In this study, we quantified colour constancy in the chicken by training the birds in a colour discrimination task and testing them in changed illumination spectra to find the largest illumination change in which they were able to remain colour-constant. We used the receptor noise limited model for animal colour vision to quantify the illumination changes, and found that colour constancy performance depended on the difference between the colours used in the discrimination task, the training procedure and the time the chickens were allowed to adapt to a new illumination before making a choice. We analysed literature data on goldfish and honeybee colour constancy with the same method and found that chickens can compensate for larger illumination changes than both. We suggest that future studies on colour constancy in non-human animals could use a similar approach to allow for comparison between species and populations. PMID:27170714
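
    For context, the receptor-noise-limited (RNL) model referenced above scores a colour pair by a noise-weighted distance in receptor-contrast space (Vorobyev and Osorio's formulation for a trichromat). A minimal sketch, with made-up quantum catches and Weber fractions:

```python
import numpy as np

def rnl_distance(q_a, q_b, weber):
    """Receptor-noise-limited chromatic distance for a trichromat.
    q_a, q_b: receptor quantum catches for the two stimuli;
    weber: per-receptor Weber fractions (noise terms e1, e2, e3)."""
    df = np.log(np.asarray(q_a, float) / np.asarray(q_b, float))  # receptor contrasts
    e1, e2, e3 = weber
    num = (e1 * (df[2] - df[1]))**2 + (e2 * (df[2] - df[0]))**2 \
        + (e3 * (df[0] - df[1]))**2
    den = (e1 * e2)**2 + (e1 * e3)**2 + (e2 * e3)**2
    return np.sqrt(num / den)   # in just-noticeable-difference (JND) units

# illustrative catches under two illuminants, not chicken photoreceptor data
print(rnl_distance([0.8, 1.0, 1.2], [1.0, 1.0, 1.0], weber=(0.1, 0.07, 0.05)))
```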

  8. Synthesis, photodynamic activity, and quantitative structure-activity relationship modelling of a series of BODIPYs.

    Science.gov (United States)

    Caruso, Enrico; Gariboldi, Marzia; Sangion, Alessandro; Gramatica, Paola; Banfi, Stefano

    2017-02-01

    Here we report the synthesis of eleven new BODIPYs (14-24) characterized by the presence of an aromatic ring at the 8 (meso) position and of iodine atoms at the pyrrolic 2,6 positions. These molecules, together with twelve BODIPYs previously reported by us (1-12), represent a large panel of BODIPYs bearing different atoms or groups as substituents on the aromatic moiety. Two physico-chemical features (the ¹O₂ generation rate and lipophilicity), which can play a fundamental role in the outcome as photosensitizers, have been studied. The in vitro photo-induced cell-killing efficacy of 23 PSs was studied on the SKOV3 cell line, treating the cells for 24 h in the dark and then irradiating for 2 h with a green LED device (fluence 25.2 J/cm²). The cell-killing efficacy was assessed with the MTT test and compared with that of the meso-unsubstituted compound (13). In order to understand the possible effect of the substituents, a predictive quantitative structure-activity relationship (QSAR) regression model, based on theoretical holistic molecular descriptors, was developed. The results clearly indicate that the presence of an aromatic ring is fundamental for an excellent photodynamic response, whereas the electronic effects and the position of the substituents on the aromatic ring do not influence the photodynamic efficacy. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Quantitative models of persistence and relapse from the perspective of behavioral momentum theory: Fits and misfits.

    Science.gov (United States)

    Nevin, John A; Craig, Andrew R; Cunningham, Paul J; Podlesnik, Christopher A; Shahan, Timothy A; Sweeney, Mary M

    2017-08-01

    We review quantitative accounts of behavioral momentum theory (BMT), its application to clinical treatment, and its extension to post-intervention relapse of target behavior. We suggest that its extension can account for relapse using reinstatement and renewal models, but that its application to resurgence is flawed both conceptually and in its failure to account for recent data. We propose that the enhanced persistence of target behavior engendered by alternative reinforcers is limited to their concurrent availability within a distinctive stimulus context. However, a failure to find effects of stimulus-correlated reinforcer rates in a Pavlovian-to-Instrumental Transfer (PIT) paradigm challenges even a straightforward Pavlovian account of alternative reinforcer effects. BMT has been valuable in understanding basic research findings and in guiding clinical applications and accounting for their data, but alternatives are needed that can account more effectively for resurgence while encompassing basic data on resistance to change as well as other forms of relapse. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Quantitative three-dimensional modeling of zeotile through discrete electron tomography.

    Science.gov (United States)

    Bals, Sara; Batenburg, K Joost; Liang, Duoduo; Lebedev, Oleg; Van Tendeloo, Gustaaf; Aerts, Alexander; Martens, Johan A; Kirschhock, Christine E A

    2009-04-08

    Discrete electron tomography is a new approach for three-dimensional reconstruction of nanoscale objects. The technique exploits prior knowledge of the object to be reconstructed, which results in an improvement of the quality of the reconstructions. Through the combination of conventional transmission electron microscopy and discrete electron tomography with a model-based approach, quantitative structure determination becomes possible. In the present work, this approach is used to unravel the building scheme of Zeotile-4, a silica material with two levels of structural order. The layer sequence of slab-shaped building units could be identified. Successive layers were found to be related by a rotation of 120 degrees, resulting in a hexagonal space group. The Zeotile-4 material is a demonstration of the concept of successive structuring of silica at two levels. At the first level, the colloid chemical properties of Silicalite-1 precursors are exploited to create building units with a slablike geometry. At the second level, the slablike units are tiled using a triblock copolymer to serve as a mesoscale structuring agent.

  11. Measurement Error in Designed Experiments for Second Order Models

    OpenAIRE

    McMahan, Angela Renee

    1997-01-01

    Measurement error (ME) in the factor levels of designed experiments is often overlooked in the planning and analysis of experimental designs. A familiar model for this type of ME, called the Berkson error model, is discussed at length. Previous research has examined the effect of Berkson error on two-level factorial and fractional factorial designs. This dissertation extends the examination to designs for second order models. The results are used to suggest ...
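
    A brief sketch of the distinction at issue, in the usual notation (w a fixed nominal setting, x a true factor level, U a zero-mean error independent of the conditioning variable):

```latex
% Berkson vs. classical measurement error:
% under Berkson error the true level X varies around the dial setting w;
% under classical error the observation W varies around the true level x.
\[
  \text{Berkson:}\quad X_i = w_i + U_i,
  \qquad
  \text{classical:}\quad W_i = x_i + U_i,
  \qquad
  \mathbb{E}[U_i] = 0 .
\]
```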

  12. Fechner’s law in metacognition: a quantitative model of visual working memory confidence

    Science.gov (United States)

    van den Berg, Ronald; Yoo, Aspen H.; Ma, Wei Ji

    2016-01-01

    Although visual working memory (VWM) has been studied extensively, it is unknown how people form confidence judgments about their memories. Peirce (1878) speculated that Fechner’s law – which states that sensation is proportional to the logarithm of stimulus intensity – might apply to confidence reports. Based on this idea, we hypothesize that humans map the precision of their VWM contents to a confidence rating through Fechner’s law. We incorporate this hypothesis into the best available model of VWM encoding and fit it to data from a delayed-estimation experiment. The model provides an excellent account of human confidence rating distributions as well as the relation between performance and confidence. Moreover, the best-fitting mapping in a model with a highly flexible mapping closely resembles the logarithmic mapping, suggesting that no alternative mapping exists that accounts better for the data than Fechner's law. We propose a neural implementation of the model and find that this model also fits the behavioral data well. Furthermore, we find that jointly fitting memory errors and confidence ratings boosts the power to distinguish previously proposed VWM encoding models by a factor of 5.99 compared to fitting only memory errors. Finally, we show that Fechner's law also accounts for metacognitive judgments in a word recognition memory task, which is a first indication that it may be a general law in metacognition. Our work presents the first model to jointly account for errors and confidence ratings in VWM and could lay the groundwork for understanding the computational mechanisms of metacognition. PMID:28221087

  13. Fechner's law in metacognition: A quantitative model of visual working memory confidence.

    Science.gov (United States)

    van den Berg, Ronald; Yoo, Aspen H; Ma, Wei Ji

    2017-03-01

    Although visual working memory (VWM) has been studied extensively, it is unknown how people form confidence judgments about their memories. Peirce (1878) speculated that Fechner's law-which states that sensation is proportional to the logarithm of stimulus intensity-might apply to confidence reports. Based on this idea, we hypothesize that humans map the precision of their VWM contents to a confidence rating through Fechner's law. We incorporate this hypothesis into the best available model of VWM encoding and fit it to data from a delayed-estimation experiment. The model provides an excellent account of human confidence rating distributions as well as the relation between performance and confidence. Moreover, the best-fitting mapping in a model with a highly flexible mapping closely resembles the logarithmic mapping, suggesting that no alternative mapping exists that accounts better for the data than Fechner's law. We propose a neural implementation of the model and find that this model also fits the behavioral data well. Furthermore, we find that jointly fitting memory errors and confidence ratings boosts the power to distinguish previously proposed VWM encoding models by a factor of 5.99 compared to fitting only memory errors. Finally, we show that Fechner's law also accounts for metacognitive judgments in a word recognition memory task, which is a first indication that it may be a general law in metacognition. Our work presents the first model to jointly account for errors and confidence ratings in VWM and could lay the groundwork for understanding the computational mechanisms of metacognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
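
    The hypothesized Fechnerian mapping is straightforward to state in code. A minimal sketch, with illustrative coefficients and rating scale rather than the paper's fitted values:

```python
import numpy as np

def confidence_rating(precision_j, a=1.0, b=3.0, n_levels=5):
    """Map VWM precision J to a discrete confidence rating via a
    logarithmic (Fechner-like) transform.  a, b and the rating range
    are illustrative assumptions, not the paper's estimates."""
    gamma = a * np.log(precision_j) + b          # Fechner's law
    return np.clip(np.rint(gamma), 1, n_levels).astype(int)

# higher-precision memories earn higher confidence, saturating at the scale top
print(confidence_rating(np.array([0.5, 2.0, 10.0, 50.0])))
```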

  14. Quantitative Validation of the Integrated Medical Model (IMM) for ISS Missions

    Science.gov (United States)

    Young, Millennia; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Goodenow, D. A.; Myers, J. G.

    2016-01-01

    Lifetime Surveillance of Astronaut Health (LSAH) provided observed medical event data on 33 ISS and 111 STS person-missions for use in further improving and validating the Integrated Medical Model (IMM). Using only the crew characteristics from these observed missions, the newest development version, IMM v4.0, will simulate these missions to predict medical events and outcomes. Comparing IMM predictions to the actual observed medical event counts will provide external validation and identify areas of possible improvement. In an effort to improve the power of detecting differences in this validation study, the totals over each program, ISS and STS, will serve as the main quantitative comparison objective, specifically the following parameters: total medical events (TME), probability of loss of crew life (LOCL), and probability of evacuation (EVAC). Scatter plots of observed versus median predicted TMEs (with error bars reflecting the simulation intervals) will graphically display comparisons while linear regression will serve as the statistical test of agreement. Two scatter plots will be analyzed: 1) where each point reflects a mission and 2) where each point reflects a condition-specific total number of occurrences. The coefficient of determination (R2) resulting from a linear regression with no intercept bias (intercept fixed at zero) will serve as an overall metric of agreement between IMM and the real world system (RWS). In an effort to identify as many discrepancies as possible for further inspection, the α-level for all statistical tests comparing IMM predictions to observed data will be set to 0.1. This less stringent criterion, along with the multiple testing being conducted, should detect all perceived differences including many false positive signals resulting from random variation. The results of these analyses will reveal areas of the model requiring adjustment to improve overall IMM output, which will thereby provide better decision support for
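
    The agreement metric described (linear regression with the intercept fixed at zero) is easy to compute directly; note that the no-intercept R² uses the uncentered total sum of squares. A sketch with invented event counts, not LSAH data:

```python
import numpy as np

def r2_no_intercept(observed, predicted):
    """Slope and R^2 for regressing observed on predicted with the
    intercept fixed at zero (uncentered total sum of squares)."""
    x = np.asarray(predicted, float)
    y = np.asarray(observed, float)
    slope = (x @ y) / (x @ x)          # least-squares slope through the origin
    resid = y - slope * x
    return slope, 1.0 - (resid @ resid) / (y @ y)

obs = np.array([3, 7, 12, 20, 31])     # illustrative observed event counts
pred = np.array([4, 6, 13, 18, 33])    # illustrative model medians
print(r2_no_intercept(obs, pred))
```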

  15. MODIS volcanic ash retrievals vs FALL3D transport model: a quantitative comparison

    Science.gov (United States)

    Corradini, S.; Merucci, L.; Folch, A.

    2010-12-01

    Satellite retrievals and transport models represent the key tools to monitor the evolution of volcanic clouds. Because of the harmful effects of fine ash particles on aircraft, real-time tracking and forecasting of volcanic clouds is key for aviation safety. Beyond safety, the economic consequences of airport disruption must also be taken into account. The airport closures due to the recent Icelandic Eyjafjöll eruption caused millions of passengers to be stranded not only in Europe, but across the world. IATA (the International Air Transport Association) estimates that the worldwide airline industry lost a total of about 2.5 billion euros during the disruption. Both safety and economic issues require reliable and robust ash cloud retrievals and trajectory forecasting. Intercomparison between remote sensing and modeling is required to assure precise and reliable volcanic ash products. In this work we perform a quantitative comparison between Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of volcanic ash cloud mass and Aerosol Optical Depth (AOD) with the FALL3D ash dispersal model. MODIS, aboard the NASA-Terra and NASA-Aqua polar satellites, is a multispectral instrument with 36 spectral bands operating in the VIS-TIR spectral range and spatial resolution varying between 250 and 1000 m at nadir. The MODIS channels centered around 11 and 12 micron have been used for the ash retrievals through the Brightness Temperature Difference algorithm and MODTRAN simulations. FALL3D is a 3-D time-dependent Eulerian model for the transport and deposition of volcanic particles that outputs, among other variables, cloud column mass and AOD. Three MODIS images collected on October 28, 29 and 30 over Mt. Etna volcano during the 2002 eruption have been considered as test cases. The results show a general good agreement between the retrieved and the modeled volcanic clouds in the first 300 km from the vents. Even if the
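
    The Brightness Temperature Difference algorithm mentioned above rests on a simple split-window test: silicate ash tends to give a negative 11 µm minus 12 µm difference, water and ice clouds a positive one. A schematic version with an illustrative threshold (operational retrievals add water-vapour and viewing-geometry corrections):

```python
import numpy as np

def ash_mask(bt11, bt12, threshold_k=-0.5):
    """Flag pixels whose split-window brightness temperature difference
    BT(11 um) - BT(12 um) falls below a (here illustrative) threshold."""
    return (np.asarray(bt11) - np.asarray(bt12)) < threshold_k

bt11 = np.array([265.0, 272.3, 280.1])   # kelvin, invented pixels
bt12 = np.array([267.5, 271.8, 279.6])
print(ash_mask(bt11, bt12))              # -> [ True False False ]
```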

  16. Quantitative Modeling of Microbial Population Responses to Chronic Irradiation Combined with Other Stressors.

    Directory of Open Access Journals (Sweden)

    Igor Shuryak

    Microbial population responses to the combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil contaminated by nuclear waste at the Hanford site (USA); (2) fungi isolated from the Chernobyl nuclear power plant (Ukraine) buildings after the accident; (3) yeast subjected to continuous γ-irradiation in the laboratory, where radiation dose rate and cell removal rate were independently varied. We applied generalized linear mixed-effects models to describe the first two data sets, whereas the third data set was amenable to mechanistic modeling using differential equations. Machine learning and information-theoretic approaches were used to select the best-supported formalism(s) among biologically plausible alternatives. Our analysis suggests the following: (1) Both radionuclides and co-occurring chemical contaminants (e.g. NO2) are important for explaining microbial responses to radioactive contamination. (2) Radionuclides may produce non-monotonic dose responses: stimulation of microbial growth at low concentrations vs. inhibition at higher ones. (3) The extinction-defining critical radiation dose rate is dramatically lowered by additional stressors. (4) Reproduction suppression by radiation can be more important for determining the critical dose rate than radiation-induced cell mortality. In conclusion, the modeling approaches used here on three diverse data sets provide insight into explaining and predicting multi-stressor effects on microbial communities: (1) the most severe effects (e.g. extinction) on microbial populations may occur when unfavorable environmental

  17. A Quantitative Human Spacecraft Design Evaluation Model for Assessing Crew Accommodation and Utilization

    Science.gov (United States)

    Fanchiang, Christine

    Crew performance, including both accommodation and utilization factors, is an integral part of every human spaceflight mission, from commercial space tourism to the demanding journey to Mars and beyond. Spacecraft were historically built by engineers and technologists trying to adapt the vehicle into cutting-edge rocketry on the assumption that the astronauts could be trained and would adapt to the design. By and large, that is still the current state of the art. It is recognized, however, that poor human-machine design integration can lead to catastrophic and deadly mishaps. The premise of this work relies on the idea that if an accurate predictive model exists to forecast crew performance issues as a result of spacecraft design and operations, it can help designers and managers make better decisions throughout the design process, and ensure that the crewmembers are well-integrated with the system from the very start. The result should be a high-quality, user-friendly spacecraft that optimizes the utilization of the crew while keeping them alive, healthy, and happy during the course of the mission. Therefore, the goal of this work was to develop an integrative framework to quantitatively evaluate a spacecraft design from the crew performance perspective. The approach presented here is done at a very fundamental level, starting with identifying and defining basic terminology, and then builds up important axioms of human spaceflight that lay the foundation for how such a framework can be developed. With the framework established, a methodology for characterizing the outcome using a mathematical model was developed by pulling from existing metrics and data collected on human performance in space. Representative test scenarios were run to show what information could be garnered and how it could be applied as a useful, understandable metric for future spacecraft design. While the model is the primary tangible product from this research, the more interesting outcome of

  18. Parasite to patient: A quantitative risk model for Trichinella spp. in pork and wild boar meat.

    Science.gov (United States)

    Franssen, Frits; Swart, Arno; van der Giessen, Joke; Havelaar, Arie; Takumi, Katsuhisa

    2017-01-16

    Consumption of raw or inadequately cooked pork meat may result in trichinellosis, a human disease due to nematodes of the genus Trichinella. In many countries worldwide, individual control of pig carcasses at meat inspection is mandatory but incurs high costs in relation to the absence of positive carcasses from pigs reared under controlled housing. EU regulation 2015/1375 implements an alternative risk-based approach, in view of the absence of positive findings in pigs under controlled housing conditions. Moreover, Codex Alimentarius guidelines for the control of Trichinella spp. in meat of suidae have been published (CAC, 2015) and used in conjunction with the OIE Terrestrial Animal Health Code, to provide guidance to governments and industry on risk-based control measures to prevent human exposure to Trichinella spp. and to facilitate international pork trade. To further support such a risk-based approach, we model the risk of human trichinellosis due to consumption of meat from infected pigs raised under non-controlled housing, and from wild boar, using Quantitative Microbial Risk Assessment (QMRA) methods. Our model quantifies the distribution of Trichinella muscle larvae (ML) in swine, test sensitivity at carcass control, partitioning of edible pork parts, Trichinella ML distribution in edible muscle types, heat inactivation by cooking and portion sizes. The resulting exposure estimate is combined with a dose-response model for Trichinella species to estimate the incidence of human illness after consumption of infected meat. Parameter estimation is based on experimental and observational datasets. In Poland, which served as an example, we estimated an average incidence of 0.90 (95%CI: 0.00-3.68) trichinellosis cases per million persons per year (Mpy) due to consumption of pork from pigs that were reared under non-controlled housing, and 1.97 (95%CI: 0.82-4.00) cases per Mpy due to consumption of wild boar. The total estimated incidence of human trichinellosis attributed to
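
    The QMRA chain described (larval load, portioning, cooking, dose-response) lends itself to a Monte Carlo sketch. All distributions and parameters below are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000                          # simulated servings from infected animals

larvae_per_g = rng.lognormal(mean=-2.0, sigma=1.5, size=N)   # ML per gram of muscle
portion_g    = rng.normal(100.0, 25.0, size=N).clip(10, 300) # serving size, grams
# 80% of servings assumed well cooked (strong inactivation), 20% raw/rare
p_survive    = np.where(rng.random(N) < 0.8, 1e-3, 1.0)

dose  = larvae_per_g * portion_g * p_survive   # surviving larvae ingested
r     = 0.05                                   # exponential dose-response parameter
p_ill = 1.0 - np.exp(-r * dose)                # probability of illness per serving

print("mean P(illness) per serving:", p_ill.mean())
```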

  19. Quantitative modeling of electron spectroscopy intensities for supported nanoparticles: The hemispherical cap model for non-normal detection

    Science.gov (United States)

    Sharp, James C.; Campbell, Charles T.

    2015-02-01

    Nanoparticles of one element or compound dispersed across the surface of another substrate element or compound form the basis for many materials of great technological importance, such as heterogeneous catalysts, fuel cells and other electrocatalysts, photocatalysts, chemical sensors and biomaterials. They also form during film growth by deposition in many fabrication processes. The average size and number density of such nanoparticles are often very important, and these can be estimated with electron microscopy or scanning tunneling microscopy. However, this is very time consuming and often unavailable with sufficient resolution when the particle size is ~ 1 nm. Because the probe depth of electron spectroscopies like X-Ray Photoelectron Spectroscopy (XPS) or Auger Electron Spectroscopy (AES) is ~ 1 nm, these provide quantitative information on both the total amount of adsorbed material when it is in the form of such small nanoparticles, and the particle thickness. For electron spectroscopy conducted with electron detection normal to the surface, Diebold et al. (1993) derived analytical relationships between the signal intensities for the adsorbate and substrate and the particles' average size and number density, under the assumption that all the particles have hemispherical shape and the same radius. In this paper, we report a simple angle- and particle-size-dependent correction factor that can be applied to these analytical expressions so that they can also be extended to measurements made at other detection angles away from the surface normal. This correction factor is computed using numerical integration and presented for use in future modeling. This correction factor is large (> 2) for angles beyond 60°, so comparing model predictions to measurements at both 0° and ≥ 60° will also provide a new means for testing the model's assumptions (hemispherical shape and fixed size particles). The ability to compare the hemispherical cap model at several angles
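
    One ingredient of the hemispherical cap model is the area-averaged attenuation of substrate signal under a cap. For normal detection, the vertical chord through a hemisphere of radius R at radial offset r is sqrt(R^2 - r^2), which gives the integral sketched below; the off-normal correction factor discussed above requires further numerical work and is not reproduced here. A sketch under those stated assumptions:

```python
import numpy as np
from scipy.integrate import quad

def substrate_attenuation_normal(R, lam, coverage):
    """Fraction of substrate XPS/AES signal surviving attenuation by
    hemispherical caps of radius R (same units as the inelastic mean
    free path lam) covering an area fraction `coverage`, for detection
    along the surface normal."""
    integrand = lambda r: np.exp(-np.sqrt(R**2 - r**2) / lam) * r
    covered, _ = quad(integrand, 0.0, R)
    covered *= 2.0 / R**2                  # area-weighted mean transmission under a cap
    return (1.0 - coverage) + coverage * covered

# particle radius equal to the mean free path, 30% coverage (invented numbers)
print(substrate_attenuation_normal(R=1.0, lam=1.0, coverage=0.3))
```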

  20. Using ISOS consensus test protocols for development of quantitative life test models in ageing of organic solar cells

    DEFF Research Database (Denmark)

    Kettle, J.; Stoichkov, V.; Kumar, D.

    2017-01-01

    As Organic Photovoltaic (OPV) development matures, the demand grows for rapid characterisation of degradation and application of Quantitative Accelerated Life Tests (QALT) models to predict and improve reliability. To date, most accelerated testing on OPVs has been conducted using ISOS consensus ...

  1. Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships (MCBIOS)

    Science.gov (United States)

    Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships Jie Liu1,2, Richard Judson1, Matthew T. Martin1, Huixiao Hong3, Imran Shah1 1National Center for Computational Toxicology (NCCT), US EPA, RTP, NC...

  2. Physically based dynamic run-out modelling for quantitative debris flow risk assessment: a case study in Tresenda, northern Italy

    Czech Academy of Sciences Publication Activity Database

    Quan Luna, B.; Blahůt, Jan; Camera, C.; Van Westen, C.; Apuani, T.; Jetten, V.; Sterlacchini, S.

    2014-01-01

    Roč. 72, č. 3 (2014), s. 645-661 ISSN 1866-6280 Institutional support: RVO:67985891 Keywords : debris flow * FLO-2D * run-out * quantitative hazard and risk assessment * vulnerability * numerical modelling Subject RIV: DB - Geology ; Mineralogy Impact factor: 1.765, year: 2014

  3. A Quantitative Study of Faculty Perceptions and Attitudes on Asynchronous Virtual Teamwork Using the Technology Acceptance Model

    Science.gov (United States)

    Wolusky, G. Anthony

    2016-01-01

    This quantitative study used a web-based questionnaire to assess the attitudes and perceptions of online and hybrid faculty towards student-centered asynchronous virtual teamwork (AVT) using the technology acceptance model (TAM) of Davis (1989). AVT is online student participation in a team approach to problem-solving culminating in a written…

  4. Quantitative acid-base physiology using the Stewart model. Does it improve our understanding of what is really wrong?

    NARCIS (Netherlands)

    Derksen, R.; Scheffer, G.J.; Hoeven, J.G. van der

    2006-01-01

    Traditional theories of acid-base balance are based on the Henderson-Hasselbalch equation to calculate proton concentration. The recent revival of quantitative acid-base physiology using the Stewart model has increased our understanding of complicated acid-base disorders, but has also led to several
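
    The quantitative core of the Stewart approach is charge accounting over strong ions and weak acids. A minimal bedside-style sketch using the apparent strong ion difference and the commonly quoted Figge coefficients for albumin and phosphate charge (concentrations in mmol/L, albumin in g/L; this is a simplification, not the full Stewart-Figge system):

```python
def stewart_summary(na, k, ca, mg, cl, lactate, albumin_g_l, phosphate_mmol_l, ph):
    """Apparent strong ion difference (SIDa, mEq/L) and Figge estimates
    of the weak-acid charges carried by albumin and phosphate."""
    sid_apparent = na + k + 2 * ca + 2 * mg - cl - lactate
    a_charge = albumin_g_l * (0.123 * ph - 0.631)        # albumin charge, mEq/L
    p_charge = phosphate_mmol_l * (0.309 * ph - 0.469)   # phosphate charge, mEq/L
    return sid_apparent, a_charge, p_charge

# illustrative normal values
print(stewart_summary(na=140, k=4.0, ca=1.2, mg=0.6, cl=104,
                      lactate=1.0, albumin_g_l=40, phosphate_mmol_l=1.2, ph=7.40))
```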

  5. Model development for quantitative evaluation of nuclear fuel cycle alternatives and its application

    International Nuclear Information System (INIS)

    Ko, Won Il

    2000-02-01

    This study addresses the quantitative evaluation of the proliferation resistance and the economics, which are important factors of alternative nuclear fuel cycle systems. In this study, a model was developed to quantitatively evaluate the proliferation resistance of nuclear fuel cycles, and a fuel cycle cost analysis model was suggested to incorporate various uncertainties in the fuel cycle cost calculation. The proposed models were then applied to the Korean environment as a sample study to provide better references for the determination of a future nuclear fuel cycle system in Korea. In order to quantify the proliferation resistance of the nuclear fuel cycle, the proliferation resistance index was defined in imitation of an electrical circuit with an electromotive force and various electrical resistance components. In this model, the proliferation resistance was described as the relative size of the barrier that must be overcome in order to acquire nuclear weapons. A larger barrier therefore means that the risk of failure is great, the expenditure of resources is large and the time scale for implementation is long. The electromotive force was expressed as the political motivation of potential proliferators, such as an unauthorized party or a national group, to acquire nuclear weapons. The electrical current was then defined as the proliferation resistance index. There are two electrical circuit models used in the evaluation of the proliferation resistance: the series and the parallel circuits. In the series circuit model of the proliferation resistance, a potential proliferator has to overcome all resistance barriers to achieve the manufacturing of nuclear weapons. This corresponds to the IAEA (International Atomic Energy Agency) safeguards philosophy, which relies on the defense-in-depth principle against nuclear proliferation at a specific facility. The parallel circuit model was also used to imitate the risk of proliferation for
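
    The circuit analogy translates directly into arithmetic: barriers add in series (defense in depth), while alternative acquisition routes combine like parallel resistors, so the weakest route dominates. A toy sketch with invented numbers, not the thesis's calibrated index:

```python
def series_resistance(barriers):
    """Series circuit: every barrier must be overcome; resistances add."""
    return sum(barriers)

def parallel_resistance(routes):
    """Parallel circuit: alternative routes; the weakest dominates."""
    return 1.0 / sum(1.0 / r for r in routes)

def proliferation_index(motivation_emf, resistance):
    """'Current' analogue: higher motivation or lower barriers raise the index."""
    return motivation_emf / resistance

barriers = [5.0, 3.0, 8.0]   # e.g. safeguards, technical difficulty, cost (arbitrary units)
print(proliferation_index(1.0, series_resistance(barriers)))         # single route
print(proliferation_index(1.0, parallel_resistance([16.0, 10.0, 24.0])))  # competing routes
```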

  6. Quantitative analysis of impact-induced seismic signals by numerical modeling

    Science.gov (United States)

    Güldemeister, Nicole; Wünnemann, Kai

    2017-11-01

    We quantify the seismicity of impact events using a combined numerical and experimental approach. The objectives of this work are (1) the calibration of the numerical model by utilizing real-time measurements of the elastic wave velocity and pressure amplitudes in laboratory impact experiments; (2) the determination of seismic parameters, such as quality factor Q and seismic efficiency k, for materials of different porosity and water saturation by a systematic parameter study employing the calibrated numerical model. By means of "numerical experiments" we found that the seismic efficiency k decreases slightly with porosity from k = 3.4 × 10⁻³ for nonporous quartzite to k = 2.6 × 10⁻³ for 25% porous sandstone. If pores are completely or partly filled with water, we determined a seismic efficiency of k = 8.2 × 10⁻⁵, which is approximately two orders of magnitude lower than in the nonporous case. By measuring the attenuation of the seismic wave with distance in our numerical experiments we determined the seismic quality factor Q to range between ∼35 for the solid quartzite and 80 for the porous dry targets. For water saturated target materials, Q is much lower, <10. The obtained values are in the range of literature values. Translating the seismic efficiency into seismic magnitudes we show that the seismic magnitude of an impact event is about one order of magnitude smaller considering a water saturated target in comparison to a solid or porous target. Obtained seismic magnitudes decrease linearly with distance to the point of impact and are consistent with empirical data for distances closer to the point of impact. The seismic magnitude decreases more rapidly with distance for a water saturated material compared to a dry material.
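
    To illustrate the last step (seismic efficiency to magnitude), one can route the coupled energy through the Gutenberg-Richter energy relation log10(Es) = 1.5 M + 4.8 (Es in joules). The impact kinetic energy below is hypothetical; the k values are the ones quoted above:

```python
import numpy as np

def impact_magnitude(impactor_ke_joules, seismic_efficiency):
    """Equivalent seismic magnitude from impact kinetic energy, assuming
    Es = k * KE and the Gutenberg-Richter energy relation."""
    es = seismic_efficiency * impactor_ke_joules
    return (np.log10(es) - 4.8) / 1.5

ke = 1e12  # J, hypothetical small impact
for k in (3.4e-3, 2.6e-3, 8.2e-5):   # nonporous, porous dry, water-saturated
    print(f"k = {k:.1e} -> M = {impact_magnitude(ke, k):.2f}")
```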

  7. Cognitive Modeling of Video Game Player User Experience

    Science.gov (United States)

    Bohil, Corey J.; Biocca, Frank A.

    2010-01-01

    This paper argues for the use of cognitive modeling to gain a detailed and dynamic look into user experience during game play. Applying cognitive models to game play data can help researchers understand a player's attentional focus, memory status, learning state, and decision strategies (among other things) as these cognitive processes occur throughout game play. This is a stark contrast to the common approach of trying to assess the long-term impact of games on cognitive functioning after game play has ended. We describe what cognitive models are, what they can be used for and how game researchers could benefit by adopting these methods. We also provide details of a single model - based on decision field theory - that has been successfully applied to data sets from memory, perception, and decision making experiments, and has recently found application in real world scenarios. We examine possibilities for applying this model to game-play data.

  8. SSC superconducting dipole magnet cryostat model style B construction experience

    International Nuclear Information System (INIS)

    Engler, N.H.; Bossert, R.C.; Carson, J.A.; Gonczy, J.D.; Larson, E.T.; Nicol, T.H.; Niemann, R.C.; Sorensen, D.; Zink, R.

    1989-03-01

    A program to upgrade the full scale SSC dipole magnet cryostat model function and assembly methods has resulted in a series of dipole magnets designated as style B construction. New design features and assembly techniques have produced a magnet and cryostat assembly that is the basis for Phase 1 of the SSC dipole magnet industrialization program. Details of the assembly program, assembly experience, and comparison to previous assembly experiences are presented. Improvements in magnet assembly techniques are also evaluated. 6 refs., 5 figs

  9. Quantitative analysis of porcine reproductive and respiratory syndrome (PRRS) viremia profiles from experimental infection: a statistical modelling approach.

    Science.gov (United States)

    Islam, Zeenath U; Bishop, Stephen C; Savill, Nicholas J; Rowland, Raymond R R; Lunney, Joan K; Trible, Benjamin; Doeschl-Wilson, Andrea B

    2013-01-01

    Porcine reproductive and respiratory syndrome (PRRS) is one of the most economically significant viral diseases facing the global swine industry. Viremia profiles of PRRS virus challenged pigs reflect the severity and progression of infection within the host and provide crucial information for subsequent control measures. In this study we analyse the largest longitudinal PRRS viremia dataset from an in-vivo experiment. The primary objective was to provide a suitable mathematical description of all viremia profiles with biologically meaningful parameters for quantitative analysis of profile characteristics. The Wood's function, a gamma-type function, and a biphasic extended Wood's function were fit to the individual profiles using Bayesian inference with a likelihood framework. Using maximum likelihood inference and numerous fit criteria, we established that the broad spectrum of viremia trends could be adequately represented by either uni- or biphasic Wood's functions. Three viremic categories emerged: cleared (uni-modal and below detection within 42 days post infection (dpi)), persistent (transient experimental persistence over 42 dpi) and rebound (biphasic within 42 dpi). The convenient biological interpretation of the model parameter estimates allowed us not only to quantify inter-host variation, but also to establish common viremia curve characteristics and their predictability. Statistical analysis of the profile characteristics revealed that persistent profiles were distinguishable already within the first 21 dpi, whereas it is not possible to predict the onset of viremia rebound. Analysis of the neutralizing antibody (nAb) data indicated that there was a ubiquitous strong response to the homologous PRRSV challenge, but high variability in the range of cross-protection of the nAbs. Persistent pigs were found to have a significantly higher nAb cross-protectivity than pigs that either cleared viremia or experienced rebound within 42 dpi. Our study provides
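
    The uniphasic Wood's function used above is V(t) = a·t^b·exp(-c·t), whose peak falls at t = b/c. A minimal fitting sketch on invented viremia data, not the study's dataset:

```python
import numpy as np
from scipy.optimize import curve_fit

def woods(t, a, b, c):
    """Wood's (gamma-type) function: V(t) = a * t**b * exp(-c * t)."""
    return a * t**b * np.exp(-c * t)

# illustrative viremia measurements (log10 virus per mL) at sampling days
t = np.array([4, 7, 11, 14, 21, 28, 35, 42], float)
v = np.array([3.1, 4.6, 5.2, 5.0, 4.1, 3.0, 1.9, 1.1])

(a, b, c), _ = curve_fit(woods, t, v, p0=(2.0, 0.8, 0.08), maxfev=10_000)
t_peak = b / c                                   # peak time from dV/dt = 0
print(f"a={a:.2f} b={b:.2f} c={c:.3f}; "
      f"peak {woods(t_peak, a, b, c):.1f} at {t_peak:.0f} dpi")
```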

  10. Modeling a ponded infiltration experiment at Yucca Mountain, NV

    International Nuclear Information System (INIS)

    Hudson, D.B.; Guertal, W.R.; Flint, A.L.

    1994-01-01

    Yucca Mountain, Nevada is being evaluated as a potential site for a geologic repository for high level radioactive waste. As part of the site characterization activities at Yucca Mountain, a field-scale ponded infiltration experiment was done to help characterize the hydraulic and infiltration properties of a layered desert alluvium deposit. Calcium carbonate accumulation and cementation, heterogeneous layered profiles, high evapotranspiration, low precipitation, and rocky soil make the surface difficult to characterize. The effects of the strong morphological horizonation on the infiltration processes, the suitability of measured hydraulic properties, and the usefulness of ponded infiltration experiments in site characterization work were of interest. One-dimensional and two-dimensional radial flow numerical models were used to help interpret the results of the ponding experiment. The objective of this study was to evaluate the results of a ponded infiltration experiment done around borehole UE25 UZN #85 (N85) at Yucca Mountain, NV. The effects of morphological horizons on the infiltration processes, lateral flow, and measured soil hydraulic properties were studied. The evaluation was done by numerically modeling the results of a field ponded infiltration experiment. A comparison of the experimental results and the modeled results was used to qualitatively indicate the degree to which the infiltration processes and the hydraulic properties are understood. Results of the field characterization, soil characterization, borehole geophysics, and the ponding experiment are presented in a companion paper

  11. Neutral null models for diversity in serial transfer evolution experiments.

    Science.gov (United States)

    Harpak, Arbel; Sella, Guy

    2014-09-01

    Evolution experiments with microorganisms coupled with genome-wide sequencing now allow for the systematic study of population genetic processes under a wide range of conditions. In learning about these processes in natural, sexual populations, neutral models that describe the behavior of diversity and divergence summaries have played a pivotal role. It is therefore natural to ask whether neutral models, suitably modified, could be useful in the context of evolution experiments. Here, we introduce coalescent models for polymorphism and divergence under the most common experimental evolution assay, a serial transfer experiment. This relatively simple setting allows us to address several issues that could affect diversity patterns in evolution experiments, whether selection is operating or not: the transient behavior of neutral polymorphism in an experiment beginning from a single clone, the effects of randomness in the timing of cell division and noisiness in population size in the dilution stage. In our analyses and discussion, we emphasize the implications for experiments aimed at measuring diversity patterns and making inferences about population genetic processes based on these measurements. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
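
    The bottleneck-dominated drift such models describe is easy to see in a toy simulation: a neutral two-type population passed through repeated dilution bottlenecks with stochastic regrowth. Everything below is illustrative, not the paper's coalescent machinery:

```python
import numpy as np

rng = np.random.default_rng(7)

def serial_transfer_heterozygosity(n_big=1_000_000, dilution=100,
                                   n_transfers=30, growth_cycles=7):
    """Track heterozygosity 2p(1-p) of a neutral variant through a serial
    transfer experiment: binomial sampling at each dilution bottleneck,
    then stochastic regrowth by successive doublings."""
    p, h = 0.5, []
    for _ in range(n_transfers):
        n = n_big // dilution                 # dilution bottleneck
        count = rng.binomial(n, p)
        for _ in range(growth_cycles):        # noisy regrowth, population doubles
            count = rng.binomial(2 * n, count / n)
            n *= 2
        p = count / n
        h.append(2 * p * (1 - p))
    return np.array(h)

print(serial_transfer_heterozygosity()[-5:])  # diversity erodes over transfers
```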

  12. Detecting physics beyond the Standard Model with the REDTOP experiment

    Science.gov (United States)

    González, D.; León, D.; Fabela, B.; Pedraza, M. I.

    2017-10-01

    REDTOP is an experiment at its proposal stage. It belongs to the High Intensity class of experiments. REDTOP will use a 1.8 GeV continuous proton beam impinging on a fixed target. It is expected to produce about 10¹³ η mesons per year. The main goal of REDTOP is to look for physics beyond the Standard Model by detecting rare η decays. The detector is designed with innovative technologies based on the detection of prompt Cherenkov light, such that interesting events can be observed and background events are efficiently rejected. The experimental design, the physics program and the running plan of the experiment are presented.

  13. Identities and Transformational Experiences for Quantitative Problem Solving: Gender Comparisons of First-Year University Science Students

    Science.gov (United States)

    Hudson, Peter; Matthews, Kelly

    2012-01-01

    Women are underrepresented in science, technology, engineering and mathematics (STEM) areas in university settings; however this may be the result of attitude rather than aptitude. There is widespread agreement that quantitative problem-solving is essential for graduate competence and preparedness in science and other STEM subjects. The research…

  14. qHNMR Analysis of Purity of Common Organic Solvents--An Undergraduate Quantitative Analysis Laboratory Experiment

    Science.gov (United States)

    Bell, Peter T.; Whaley, W. Lance; Tochterman, Alyssa D.; Mueller, Karl S.; Schultz, Linda D.

    2017-01-01

    NMR spectroscopy is currently a premier technique for structural elucidation of organic molecules. Quantitative NMR (qNMR) methodology has developed more slowly but is now widely accepted, especially in the areas of natural product and medicinal chemistry. However, many undergraduate students are not routinely exposed to this important concept.…
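
    The quantitation underlying qHNMR is a single ratio against an internal standard, which is the standard relation sketched here with invented integrals and masses:

```python
def qhnmr_purity(i_x, n_x, m_x, w_x, i_std, n_std, m_std, w_std, p_std):
    """Internal-standard qHNMR purity:
      P_x = (I_x/I_std) * (N_std/N_x) * (M_x/M_std) * (w_std/w_x) * P_std
    I = signal integral, N = protons in the integrated signal,
    M = molar mass (g/mol), w = weighed mass (mg), P = purity."""
    return (i_x / i_std) * (n_std / n_x) * (m_x / m_std) * (w_std / w_x) * p_std

# illustrative numbers only (maleic acid standard, one-proton analyte signal)
print(qhnmr_purity(i_x=0.70, n_x=1, m_x=78.13, w_x=25.0,
                   i_std=1.00, n_std=2, m_std=116.07, w_std=20.0, p_std=0.999))
```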

  15. A bibliography of terrain modeling (geomorphometry), the quantitative representation of topography: supplement 4.0

    Science.gov (United States)

    Pike, Richard J.

    2002-01-01

    Terrain modeling, the practice of ground-surface quantification, is an amalgam of Earth science, mathematics, engineering, and computer science. The discipline is known variously as geomorphometry (or simply morphometry), terrain analysis, and quantitative geomorphology. It continues to grow through myriad applications to hydrology, geohazards mapping, tectonics, sea-floor and planetary exploration, and other fields. Dating nominally to the co-founders of academic geography, Alexander von Humboldt (1808, 1817) and Carl Ritter (1826, 1828), the field was revolutionized late in the 20th Century by the computer manipulation of spatial arrays of terrain heights, or digital elevation models (DEMs), which can quantify and portray ground-surface form over large areas (Maune, 2001). Morphometric procedures are implemented routinely by commercial geographic information systems (GIS) as well as specialized software (Harvey and Eash, 1996; Köthe and others, 1996; ESRI, 1997; Drzewiecki et al., 1999; Dikau and Saurer, 1999; Djokic and Maidment, 2000; Wilson and Gallant, 2000; Breuer, 2001; Guth, 2001; Eastman, 2002). The new Earth Surface edition of the Journal of Geophysical Research, specializing in surficial processes, is the latest of many publication venues for terrain modeling. This is the fourth update of a bibliography and introduction to terrain modeling (Pike, 1993, 1995, 1996, 1999) designed to collect the diverse, scattered literature on surface measurement as a resource for the research community. The use of DEMs in science and technology continues to accelerate and diversify (Pike, 2000a). New work appears so frequently that a sampling must suffice to represent the vast literature. This report adds 1636 entries to the 4374 in the four earlier publications¹. Forty-eight additional entries correct dead Internet links and other errors found in the prior listings. Chronicling the history of terrain modeling, many entries in this report predate the 1999 supplement

  16. Statistical Modeling Approach to Quantitative Analysis of Interobserver Variability in Breast Contouring

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jinzhong, E-mail: jyang4@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Woodward, Wendy A.; Reed, Valerie K.; Strom, Eric A.; Perkins, George H.; Tereffe, Welela; Buchholz, Thomas A. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Zhang, Lifei; Balter, Peter; Court, Laurence E. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Li, X. Allen [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin (United States); Dong, Lei [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Scripps Proton Therapy Center, San Diego, California (United States)

    2014-05-01

    Purpose: To develop a new approach for interobserver variability analysis. Methods and Materials: Eight radiation oncologists specializing in breast cancer radiation therapy delineated a patient's left breast “from scratch” and from a template that was generated using deformable image registration. Three of the radiation oncologists had previously received training in the Radiation Therapy Oncology Group consensus contouring atlas for breast cancer. The simultaneous truth and performance level estimation algorithm was applied to the 8 contours delineated “from scratch” to produce a group consensus contour. Individual Jaccard scores were fitted to a beta distribution model. We also applied this analysis to 2 additional patients, whose contours were delineated by 9 breast radiation oncologists from 8 institutions. Results: The beta distribution model had a mean of 86.2%, standard deviation (SD) of ±5.9%, a skewness of −0.7, and excess kurtosis of 0.55, exemplifying broad interobserver variability. The 3 RTOG-trained physicians had higher agreement scores than average, indicating that their contours were close to the group consensus contour. One physician had high sensitivity but lower specificity than the others, which implies that this physician tended to contour a structure larger than those of the others. Two other physicians had low sensitivity but specificity similar to the others, which implies that they tended to contour a structure smaller than the others. With this information, they could adjust their contouring practice to be more consistent with others if desired. When contouring from the template, the beta distribution model had a mean of 92.3%, SD ± 3.4%, skewness of −0.79, and excess kurtosis of 0.83, which indicated a much better consistency among individual contours. Similar results were obtained for the analysis of 2 additional patients. Conclusions: The proposed statistical approach was able to measure interobserver variability quantitatively
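
    Fitting a beta distribution to agreement scores, as described above, can be done by simple moment matching. A sketch with invented Jaccard scores (the reported parameters come from the paper's own fit):

```python
import numpy as np

def beta_method_of_moments(scores):
    """Method-of-moments beta fit for scores in (0,1):
    alpha = m*c, beta = (1-m)*c with c = m(1-m)/v - 1."""
    x = np.asarray(scores, float)
    m, v = x.mean(), x.var(ddof=1)
    common = m * (1 - m) / v - 1.0
    return m * common, (1 - m) * common   # (alpha, beta)

# illustrative Jaccard scores for 8 observers vs. the consensus contour
jaccard = np.array([0.91, 0.88, 0.79, 0.86, 0.92, 0.83, 0.87, 0.84])
a, b = beta_method_of_moments(jaccard)
print(f"alpha={a:.1f}, beta={b:.1f}, fitted mean={a / (a + b):.3f}")
```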

  17. Radiative and non-radiative recombinations in tensile strained Ge microstrips: Photoluminescence experiments and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Virgilio, M., E-mail: virgilio@df.unipi.it [Dip. di Fisica “E. Fermi,” Università di Pisa, Largo Pontecorvo 3, 56127 Pisa (Italy); NEST, Istituto Nanoscienze-CNR, P.za San Silvestro 12, 56127 Pisa (Italy); Schroeder, T.; Yamamoto, Y. [IHP, Im Technologiepark 25, 15236 Frankfurt (Oder) (Germany); Capellini, G. [IHP, Im Technologiepark 25, 15236 Frankfurt (Oder) (Germany); Dip. di scienze, Università Roma Tre, viale G. Marconi 446, 00146 Roma (Italy)

    2015-12-21

    Tensile germanium microstrips are candidates as gain materials in Si-based light-emitting devices due to the beneficial effect of the strain field on the radiative recombination rate. In this work, we thoroughly investigate their radiative recombination spectra by means of micro-photoluminescence experiments at different temperatures and excitation powers carried out on samples featuring different tensile strain values. For the sake of comparison, bulk Ge(001) photoluminescence is also discussed. The experimental findings are interpreted in light of a numerical model based on a multi-valley effective mass approach, taking into account the depth dependence of the photo-induced carrier density and of the self-absorption effect. The theoretical modeling allowed us to quantitatively describe the observed increase of the photoluminescence intensity for increasing values of strain, excitation power, and temperature. The temperature dependence of the non-radiative recombination time in this material has been inferred thanks to the model calibration procedure.

  18. SME International Business Models: The Role of Context and Experience

    DEFF Research Database (Denmark)

    Child, John; Hsieh, Linda; Elbanna, Said

    2017-01-01

    This paper addresses two questions through a study of 180 SMEs located in contrasting industry and home country contexts. First, which business models for international markets prevail among SMEs and do they configure into different types? Second, which factors predict the international business models that SMEs follow? Three distinct international business models (traditional market-adaptive, technology exploiter, and ambidextrous explorer) are found among the SMEs studied. The likelihood of SMEs adopting one business model rather than another is to a high degree predictable with reference to a small set of factors: industry, level of home economy development, and decision-maker international experience.

  19. Modeling a set of heavy oil aqueous pyrolysis experiments

    Energy Technology Data Exchange (ETDEWEB)

    Thorsness, C.B.; Reynolds, J.G.

    1996-11-01

    Aqueous pyrolysis experiments, aimed at mild upgrading of heavy oil, were analyzed using various computer models. The primary focus of the analysis was the pressure history of the closed autoclave reactors obtained during the heating of the autoclave to the desired reaction temperatures. The models used included a means of estimating the nonideal behavior of primary components with regard to vapor-liquid equilibrium. The modeling indicated that to match measured autoclave pressures, which often were well below the vapor pressure of water at a given temperature, it was necessary to incorporate water solubility in the oil phase and an activity model for the water in the oil phase which reduced its fugacity below that of pure water. Analysis also indicated that the mild to moderate upgrading of the oil which occurred in experiments that reached 400°C or more using an Fe(III) 2-ethylhexanoate catalyst could be reasonably well characterized by a simple first-order rate constant of 1.7 × 10⁸ exp(−20000/T) s⁻¹. Both gas production and API gravity increase were characterized by this rate constant. Models were able to match the complete pressure history of the autoclave experiments fairly well with relatively simple equilibrium models. However, a consistently lower-than-measured buildup in pressure at peak temperatures was noted in the model calculations. This phenomenon was tentatively attributed to an increase in the amount of water entering the vapor phase caused by a change in its activity in the oil phase.
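
    The reported rate constant makes the upgrading kinetics easy to explore: first-order conversion is X = 1 - exp(-k·t) with k = 1.7 × 10⁸ exp(−20000/T) s⁻¹ (T in kelvin). A sketch at a few reactor temperatures, with an invented hold time:

```python
import numpy as np

def upgrading_conversion(T_kelvin, t_seconds):
    """First-order conversion using the abstract's fitted rate constant."""
    k = 1.7e8 * np.exp(-20000.0 / T_kelvin)   # s^-1
    return 1.0 - np.exp(-k * t_seconds)

for T_c in (350.0, 400.0, 425.0):             # reactor temperatures, deg C
    X = upgrading_conversion(T_c + 273.15, t_seconds=3600.0)
    print(f"{T_c:.0f} C: {100 * X:.1f}% conversion after 1 h")
```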

  20. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures.

    Science.gov (United States)

    Rettmann, Maryam E; Holmes, David R; Kwartowitz, David M; Gunawan, Mia; Johnson, Susan B; Camp, Jon J; Cameron, Bruce M; Dalegrave, Charles; Kolasa, Mark W; Packer, Douglas L; Robb, Richard A

    2014-02-01

    In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. The phantom simulation studies demonstrated that combined landmark and surface-based registration improved landmark-only registration
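
    The Monte Carlo registration studies described above have a simple skeleton: perturb the fiducials, re-fit the rigid transform, and record the target registration error (TRE). A sketch using the Kabsch/Procrustes solution, with invented geometry and an assumed fiducial localization error:

```python
import numpy as np

rng = np.random.default_rng(42)

def rigid_register(src, dst):
    """Least-squares rigid (rotation + translation) fit of src onto dst
    via the Kabsch/Procrustes SVD solution."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

# four landmark fiducials and one target inside a ~60 mm box (arbitrary)
landmarks = rng.uniform(-30, 30, (4, 3))
target = np.array([5.0, -10.0, 8.0])

fle_sigma = 2.0                                   # mm, assumed localization error
tre = []
for _ in range(5000):
    noisy = landmarks + rng.normal(0, fle_sigma, landmarks.shape)
    R, t = rigid_register(noisy, landmarks)
    tre.append(np.linalg.norm((R @ target + t) - target))
print(f"mean TRE = {np.mean(tre):.2f} mm at FLE sigma = {fle_sigma} mm")
```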

  1. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    Energy Technology Data Exchange (ETDEWEB)

    Rettmann, Maryam E., E-mail: rettmann.maryam@mayo.edu; Holmes, David R.; Camp, Jon J.; Cameron, Bruce M.; Robb, Richard A. [Biomedical Imaging Resource, Mayo Clinic College of Medicine, Rochester, Minnesota 55905 (United States); Kwartowitz, David M. [Department of Bioengineering, Clemson University, Clemson, South Carolina 29634 (United States); Gunawan, Mia [Department of Biochemistry and Molecular and Cellular Biology, Georgetown University, Washington D.C. 20057 (United States); Johnson, Susan B.; Packer, Douglas L. [Division of Cardiovascular Diseases, Mayo Clinic, Rochester, Minnesota 55905 (United States); Dalegrave, Charles [Clinical Cardiac Electrophysiology, Cardiology Division Hospital Sao Paulo, Federal University of Sao Paulo, 04024-002 Brazil (Brazil); Kolasa, Mark W. [David Grant Medical Center, Fairfield, California 94535 (United States)

    2014-02-15

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved

  2. Numerical modeling of the 2017 active seismic infrasound balloon experiment

    Science.gov (United States)

    Brissaud, Q.; Komjathy, A.; Garcia, R.; Cutts, J. A.; Pauken, M.; Krishnamoorthy, S.; Mimoun, D.; Jackson, J. M.; Lai, V. H.; Kedar, S.; Levillain, E.

    2017-12-01

    We have developed a numerical tool to propagate acoustic and gravity waves in a coupled solid-fluid medium with topography. It is a hybrid method between a continuous Galerkin and a discontinuous Galerkin method that accounts for non-linear atmospheric waves, visco-elastic waves and topography. We apply this method to a recent experiment that took place in the Nevada desert to study acoustic waves from seismic events. This experiment, developed by JPL and its partners, aims to demonstrate the viability of a new approach to probe seismic-induced acoustic waves from a balloon platform. To the best of our knowledge, this could be the only way, for planetary missions, to perform tomography when one faces challenging surface conditions with high pressure and temperature (e.g. Venus), and thus when it is impossible to use the conventional electronics routinely employed on Earth. To fully demonstrate the effectiveness of such a technique, one should also be able to reconstruct the observed signals from numerical modeling. To model the seismic hammer experiment and the subsequent acoustic wave propagation, we rely on a subsurface seismic model constructed from the seismometer measurements during the 2017 Nevada experiment and an atmospheric model built from meteorological data. The source is treated as a Gaussian point source located at the surface. Comparison between the numerical modeling and the experimental data could help future mission designs and provide great insights into the planet's interior structure.

  3. A study on the quantitative model of human response time using the amount and the similarity of information

    International Nuclear Information System (INIS)

    Lee, Sung Jin

    2006-02-01

    The mental capacity to retain or recall information, or memory, is related to human performance during processing of information. Although a large number of studies have been carried out on human performance, little is known about the similarity effect. The purpose of this study was to propose and validate a quantitative and predictive model of human response time in the user interface, based on the concepts of information amount, similarity and degree of practice. It was difficult to explain human performance by similarity or information amount alone. There were two difficulties: constructing a quantitative model of human response time, and validating the proposed model by experimental work. A quantitative model based on Hick's law, the law of practice and similarity theory was developed. The model was validated under various experimental conditions by measuring the participants' response time in the environment of a computer-based display. Human performance was improved by degree of similarity and practice in the user interface. We also found an age-related effect: human performance degraded with increasing age. The proposed model may be useful for training operators who will handle some interfaces and for predicting human performance under changes in system design.
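
    To illustrate the two classical ingredients named above, here is a minimal sketch combining the Hick-Hyman law with the power law of practice. The multiplicative combination rule, the parameter values, and the omission of the similarity term are all assumptions for demonstration; the thesis derives its own formulation.

    ```python
    import numpy as np

    def response_time(n_alternatives, n_trials, a=0.2, b=0.15, beta=0.3):
        """RT (s) = [a + b*log2(n+1)] * trials^(-beta).

        a, b  -- Hick-Hyman intercept/slope (assumed values)
        beta  -- learning-rate exponent from the power law of practice
        """
        hick = a + b * np.log2(n_alternatives + 1)   # information amount (bits)
        practice = n_trials ** (-beta)               # speed-up with practice
        return hick * practice

    for n in (2, 4, 8):
        print(f"{n} alternatives:",
              [round(response_time(n, t), 3) for t in (1, 10, 100)])
    ```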

  4. Quantitative myocardial perfusion imaging in a porcine ischemia model using a prototype spectral detector CT system

    Science.gov (United States)

    Fahmi, Rachid; Eck, Brendan L.; Levi, Jacob; Fares, Anas; Dhanantwari, Amar; Vembar, Mani; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    We optimized and evaluated dynamic myocardial CT perfusion (CTP) imaging on a prototype spectral detector CT (SDCT) scanner. Simultaneous acquisition of energy sensitive projections on the SDCT system enabled projection-based material decomposition, which typically performs better than the image-based decomposition required by some other system designs. In addition to virtual monoenergetic, or keV images, the SDCT provided conventional (kVp) images, allowing us to compare and contrast results. Physical phantom measurements demonstrated linearity of keV images, a requirement for quantitative perfusion. Comparisons of kVp to keV images demonstrated very significant reductions in tell-tale beam hardening (BH) artifacts in both phantom and pig images. In phantom images, consideration of iodine contrast to noise ratio and small residual BH artifacts suggested optimum processing at 70 keV. The processing pipeline for dynamic CTP measurements included 4D image registration, spatio-temporal noise filtering, and model-independent singular value decomposition deconvolution, automatically regularized using the L-curve criterion. In normal pig CTP, 70 keV perfusion estimates were homogeneous throughout the myocardium. At 120 kVp, flow was reduced by more than 20% in the BH-hypo-enhanced myocardium, a range that might falsely indicate actionable ischemia, considering the 0.8 threshold for actionable FFR. With partial occlusion of the left anterior descending (LAD) artery (FFR < 0.8), perfusion defects at 70 keV were correctly identified in the LAD territory. At 120 kVp, BH affected the size and flow in the ischemic area; e.g. with FFR ≈ 0.65, the anterior-to-lateral flow ratio was 0.29 ± 0.01, over-estimating stenosis severity as compared to 0.42 ± 0.01 (p < 0.05) at 70 keV, demonstrating the advantage of keV imaging for quantitative myocardial perfusion CT.
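
    The deconvolution step named in the pipeline can be sketched in a few lines. Below, a synthetic tissue curve is deconvolved by the arterial input function (AIF) via truncated SVD; for brevity a fixed singular-value threshold stands in for the L-curve regularization used in the paper, and all curves and constants are made up.

    ```python
    import numpy as np

    dt = 1.0                                   # frame interval (s), assumed
    t = np.arange(0, 40, dt)
    aif = (t / 4.0) ** 2 * np.exp(-t / 4.0)    # synthetic gamma-variate AIF
    flow_true = 0.8                            # synthetic "ground truth" flow
    resid = np.exp(-t / 6.0)                   # true impulse-residue function
    tissue = flow_true * dt * np.convolve(aif, resid)[: len(t)]

    # Lower-triangular (Toeplitz) convolution matrix built from the AIF.
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(len(t))]
                       for i in range(len(t))])
    U, s, Vt = np.linalg.svd(A)
    keep = s > 0.1 * s[0]                      # truncation threshold (assumed 10%)
    s_inv = np.where(keep, 1.0 / s, 0.0)
    flow_resid = Vt.T @ (s_inv * (U.T @ tissue))   # regularized F * R(t)
    print(f"recovered flow ~ {flow_resid.max():.2f} (true {flow_true})")
    ```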

  5. 3D vs 2D laparoscopic systems: Development of a performance quantitative validation model.

    Science.gov (United States)

    Ghedi, Andrea; Donarini, Erica; Lamera, Roberta; Sgroi, Giovanni; Turati, Luca; Ercole, Cesare

    2015-01-01

    The new technology ensures 3D laparoscopic vision by adding depth to the traditional two dimensions. This realistic vision gives the surgeon the feeling of operating in real space. The Hospital of Treviglio-Caravaggio is not a university or scientific institution; when a new 3D laparoscopic technology was acquired in 2014, this led to an evaluation of its appropriateness in terms of patient outcome and safety. The project aims at developing a quantitative validation model that would ensure low cost and a reliable measure of the performance of 3D technology versus 2D mode. In addition, it aims at demonstrating how new technologies, such as open source hardware and software and 3D printing, could help research with no significant cost increase. For these reasons, in order to define criteria of appropriateness in the use of 3D technologies, it was decided to perform a study to technically validate the use of the best technology in terms of effectiveness, efficiency and safety of a system for laparoscopic vision in 3D versus the traditional 2D. 30 surgeons were enrolled to perform an exercise using laparoscopic forceps inside a trainer. In the exercise, surgeons with different levels of seniority, grouped by type of specialization (e.g. surgery, urology, gynecology), performed videolaparoscopy with the two technologies (2D and 3D) on an anthropometric phantom. The task assigned to the surgeon was to pass "needle and thread" through a series of rings without touching the metal part, in the shortest time possible. Each ring selected for the exercise had a difficulty coefficient determined by its depth, diameter, and angle relative to the positioning and the point of view. The analysis of the data collected from the above exercise mathematically confirmed that the 3D technique ensures a shorter learning curve in novices and greater accuracy in the performance of the task with respect to 2D.

  6. Quantitative Evaluation of Pain during Electrocutaneous Stimulation using a Log-Linearized Peripheral Arterial Viscoelastic Model.

    Science.gov (United States)

    Matsubara, Hiroki; Hirano, Hiroki; Hirano, Harutoyo; Soh, Zu; Nakamura, Ryuji; Saeki, Noboru; Kawamoto, Masashi; Yoshizumi, Masao; Yoshino, Atsuo; Sasaoka, Takafumi; Yamawaki, Shigeto; Tsuji, Toshio

    2018-02-15

    In clinical practice, subjective pain evaluations, e.g., the visual analogue scale and the numeric rating scale, are generally employed, but these are limited in terms of their ability to detect inaccurate reports, and are unsuitable for use in anesthetized patients or those with dementia. We focused on the peripheral sympathetic nerve activity that responds to pain, and propose a method for evaluating pain sensation, including intensity, sharpness, and dullness, using an arterial stiffness index. In the experiment, electrocardiograms, blood pressure, and photoplethysmograms were obtained, and an arterial viscoelastic model was applied to estimate arterial stiffness. The relationships among the stiffness index, self-reported pain sensation, and electrocutaneous stimuli were examined and modelled. The relationship between the stiffness index and pain sensation could be modelled using a sigmoid function with high determination coefficients: R² ≥ 0.88 (p < 0.01) for intensity, R² ≥ 0.89 (p < 0.01) for sharpness, and R² ≥ 0.84 (p < 0.01) for dullness, when the stimuli could appropriately evoke dull pain.
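
    A sigmoid fit of this kind is straightforward to sketch. The data below are synthetic, and the 4-parameter logistic parameterization is an assumption; the point is only to show the fitting-and-R² workflow described in the abstract.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(x, lo, hi, x50, slope):
        return lo + (hi - lo) / (1.0 + np.exp(-(x - x50) / slope))

    stiffness = np.linspace(0, 10, 25)            # synthetic stiffness-index values
    rng = np.random.default_rng(1)
    pain = sigmoid(stiffness, 0, 10, 5, 1) + rng.normal(0, 0.4, stiffness.size)

    popt, _ = curve_fit(sigmoid, stiffness, pain, p0=[0, 10, 5, 1])
    pred = sigmoid(stiffness, *popt)
    r2 = 1 - np.sum((pain - pred) ** 2) / np.sum((pain - pain.mean()) ** 2)
    print(f"fitted x50={popt[2]:.2f}, slope={popt[3]:.2f}, R^2={r2:.3f}")
    ```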

  7. Neural network models of learning and categorization in multigame experiments

    Directory of Open Access Journals (Sweden)

    Davide Marchiori

    2011-12-01

    Previous research has shown that regret-driven neural networks predict behavior in repeated completely mixed games remarkably well, substantially matching the performance of the most accurate established models of learning. This result prompts the question of what the added value of modeling learning through neural networks is. We submit that this modeling approach allows for models that are able to distinguish among, and respond differently to, different payoff structures. Moreover, the process of categorization of a game is implicitly carried out by these models, without the need for any external explicit theory of similarity between games. To validate our claims, we designed and ran two multigame experiments in which subjects faced, in random sequence, different instances of two completely mixed 2x2 games. We then tested two regret-driven neural network models on our experimental data, and compared their performance with that of other established models of learning and with Nash equilibrium.
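
    The paper's models are regret-driven neural networks; as a simpler, well-known relative of that idea, the sketch below implements classic regret matching (Hart and Mas-Colell) in a completely mixed 2x2 game. The payoff matrices are hypothetical, and this is offered only to make the notion of regret-driven play concrete, not as a reproduction of the authors' models.

    ```python
    import numpy as np

    A = np.array([[2.0, 0.0], [0.0, 1.0]])   # row player's payoffs (hypothetical)
    B = np.array([[0.0, 1.0], [2.0, 0.0]])   # column player's payoffs (hypothetical)

    rng = np.random.default_rng(0)
    regret = [np.zeros(2), np.zeros(2)]
    counts = np.zeros((2, 2))

    def choose(r):
        """Play actions with probability proportional to positive regret."""
        pos = np.maximum(r, 0.0)
        return rng.choice(2, p=pos / pos.sum()) if pos.sum() > 0 else rng.integers(2)

    for _ in range(20000):
        i, j = choose(regret[0]), choose(regret[1])
        counts[i, j] += 1
        # regret update: payoff of each action vs the opponent's realized move,
        # minus the payoff actually obtained
        regret[0] += A[:, j] - A[i, j]
        regret[1] += B[i, :] - B[i, j]

    print("empirical row-action frequencies:", counts.sum(axis=1) / counts.sum())
    ```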

  8. The dynamics of protein hydration water: a quantitative comparison of molecular dynamics simulations and neutron-scattering experiments.

    Science.gov (United States)

    Tarek, M; Tobias, D J

    2000-12-01

    We present results from an extensive molecular dynamics simulation study of water hydrating the protein Ribonuclease A, at a series of temperatures in cluster, crystal, and powder environments. The dynamics of protein hydration water appear to be very similar in crystal and powder environments at moderate to high hydration levels. Thus, we contend that experiments performed on powder samples are appropriate for discussing hydration water dynamics in native protein environments. Our analysis reveals that simulations performed on cluster models consisting of proteins surrounded by a finite water shell with free boundaries are not appropriate for the study of the solvent dynamics. Detailed comparison to available x-ray diffraction and inelastic neutron-scattering data shows that current generation force fields are capable of accurately reproducing the structural and dynamical observables. On the time scale of tens of picoseconds, at room temperature and high hydration, significant water translational diffusion and rotational motion occur. At low hydration, the water molecules are translationally confined but display appreciable rotational motion. Below the protein dynamical transition temperature, both translational and rotational motions of the water molecules are essentially arrested. Taken together, these results suggest that water translational motion is necessary for the structural relaxation that permits anharmonic and diffusive motions in proteins. Furthermore, it appears that the exchange of protein-water hydrogen bonds by water rotational/librational motion is not sufficient to permit protein structural relaxation. Rather, the complete exchange of protein-bound water molecules by translational displacement seems to be required.

  9. Experiences in applying Bayesian integrative models in interdisciplinary modeling: the computational and human challenges

    DEFF Research Database (Denmark)

    Kuikka, Sakari; Haapasaari, Päivi Elisabet; Helle, Inari

    2011-01-01

    [Bayesian] networks are flexible tools that can take into account the different research traditions and the various types of information sources. We present two types of cases. With the Baltic salmon stocks modeled with Bayesian techniques, the existing data sets are rich and the estimation of the parameters [...] components, which favors the use of quantitative risk analysis. However, the traditions and quality criteria of these scientific fields are in many respects different. This creates both technical and human challenges to the modeling tasks.

  10. Dynamic Experiments and Constitutive Model Performance for Polycarbonate

    Science.gov (United States)

    2014-07-01

    Storage and loss tangent moduli for PC; DMA experiments performed at 1 Hz and shifted to 100 Hz, showing the α and β transition regions [...] The author would also like to thank Dr. Adam D. Mulliken for courteously providing the experimental results and the Abaqus version of the model [...] exponential factor. In 1955, the Ree-Eyring model further accounted for microstructural mechanisms by relating molecular motions to yield behavior.

  11. Mechanical Interaction in Pressurized Pipe Systems: Experiments and Numerical Models

    OpenAIRE

    Simão, Mariana; Mora-Rodriguez, Jesus; Ramos, Helena

    2015-01-01

    The dynamic interaction between unsteady flow occurrence and the resulting vibration of the pipe is analyzed based on experiments and numerical models. Water hammer, structural dynamics and fluid–structure interaction (FSI) are the main subjects dealt with in this study. Firstly, a 1D model is developed based on the method of characteristics (MOC), using specific damping coefficients for initial components associated with rheological pipe material behavior, structural and fluid deformation...
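
    For readers unfamiliar with the MOC formulation mentioned above, here is a bare-bones water-hammer solver for a single pipe with an upstream reservoir and a downstream valve closed instantaneously. It is a didactic stand-in, not the authors' model (which adds damping terms for pipe rheology and FSI coupling); all pipe parameters are assumed.

    ```python
    import numpy as np

    L, a, D, f = 100.0, 1000.0, 0.05, 0.02  # length (m), wave speed (m/s), diameter (m), friction
    n = 20                                   # computational reaches
    dx = L / n
    dt = dx / a                              # Courant number = 1
    g, H0, V0 = 9.81, 30.0, 1.0              # gravity, reservoir head (m), steady velocity (m/s)
    B = a / g
    R = f * dx / (2 * g * D)

    H = np.full(n + 1, H0)                   # head (friction in the steady profile ignored)
    V = np.full(n + 1, V0)
    peak = H0
    for _ in range(400):
        Hn, Vn = H.copy(), V.copy()
        for i in range(1, n):                # interior nodes: intersect C+ and C- characteristics
            cp = H[i - 1] + B * V[i - 1] - R * V[i - 1] * abs(V[i - 1])
            cm = H[i + 1] - B * V[i + 1] + R * V[i + 1] * abs(V[i + 1])
            Hn[i], Vn[i] = 0.5 * (cp + cm), (cp - cm) / (2 * B)
        Hn[0] = H0                           # upstream reservoir: fixed head
        Vn[0] = (H0 - H[1] + B * V[1] - R * V[1] * abs(V[1])) / B
        Vn[n] = 0.0                          # downstream valve: instantaneous closure
        Hn[n] = H[n - 1] + B * V[n - 1] - R * V[n - 1] * abs(V[n - 1])
        H, V = Hn, Vn
        peak = max(peak, H[n])

    print(f"simulated peak head {peak:.1f} m vs Joukowsky estimate {H0 + a * V0 / g:.1f} m")
    ```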

  12. The Comparison of Distributed P2P Trust Models Based on Quantitative Parameters in the File Downloading Scenarios

    Directory of Open Access Journals (Sweden)

    Jingpei Wang

    2016-01-01

    Various P2P trust models have been proposed recently; it is necessary to develop an effective method to evaluate these trust models in order to address both commonality issues (guiding newly generated trust models in theory) and individuality issues (assisting a decision maker in choosing an optimal trust model to implement in a specific context). A new method for analyzing and comparing P2P trust models, based on hierarchical parameter quantization in file downloading scenarios, is proposed in this paper. Several parameters are extracted from the functional attributes and quality features of the trust relationship, as well as from the requirements of the specific network context and the evaluators. Several distributed P2P trust models are analyzed quantitatively, with the extracted parameters arranged into a hierarchical model. A fuzzy inference method is applied to the hierarchical parameter model to fuse the evaluated values of the candidate trust models, and the relatively optimal one is then selected based on the sorted overall quantitative values. Finally, analyses and simulations are performed. The results show that the proposed method is reasonable and effective compared with previous algorithms.

  13. Thermal-hydraulic Experiments for Advanced Physical Model Development

    International Nuclear Information System (INIS)

    Song, Chulhwa

    2012-04-01

    The improvement of prediction models is needed to enhance safety analysis capability through an experimental database of local phenomena. To improve the two-phase interfacial area transport model, various experiments were carried out with local two-phase interfacial structure test facilities. 2 × 2 and 6 × 6 rod bundle test facilities were used for experiments on droplet behavior inside a heated rod bundle geometry. Experiments using the GIRLS and JICO facilities, together with CFD analyses, were carried out to understand the local condensation of a steam jet, the turbulent jet induced by condensation, and thermal mixing in a pool. In order to develop a model for the key phenomena of the newly adapted safety system, experiments on boiling inside a pool and condensation in a horizontal channel have been performed. An experimental database of CHF (Critical Heat Flux) and PDO (Post-dryout) was constructed. The mechanism of heat transfer enhancement by surface modifications in nano-fluid was investigated in boiling mode and rapid quenching mode. Special measurement techniques were developed: the double-sensor optical void probe, the Optic Rod, the PIV technique and the UBIM system.

  14. Experiments and Modeling of G-Jitter Fluid Mechanics

    Science.gov (United States)

    Leslie, F. W.; Ramachandran, N.; Whitaker, Ann F. (Technical Monitor)

    2002-01-01

    While there is a general understanding of the acceleration environment onboard an orbiting spacecraft, past research efforts in the modeling and analysis area have still not produced a general theory that predicts the effects of multi-spectral periodic accelerations on a general class of experiments, nor have they produced scaling laws that a prospective experimenter can use to assess how an experiment might be affected by this acceleration environment. Furthermore, there are no actual flight experimental data that correlate heat or mass transport with measurements of the periodic acceleration environment. The present investigation approaches this problem with carefully conducted terrestrial experiments and rigorous numerical modeling for better understanding the effects of residual gravity and g-jitter on experiments. The approach is to use magnetic fluids that respond to an imposed magnetic field gradient in much the same way as fluid density responds to a gravitational field. By utilizing a programmable power source in conjunction with an electromagnet, both static and dynamic body forces can be simulated in lab experiments. The paper provides an overview of the technique and includes recent results from the experiments.

  15. Radon transport in fractured soil. Laboratory experiments and modelling

    International Nuclear Information System (INIS)

    Hoff, A.

    1997-10-01

    Radon (Rn-222) transport in fractured soil has been investigated by laboratory experiments and by modelling. Radon transport experiments have been performed with two sand columns (homogeneous and inhomogeneous) and one undisturbed clayey till column containing a net of preferential flow paths (root holes). A numerical model (the finite-element model FRACTRAN) and an analytic model (a pinhole model) have been applied in simulations of soil gas and radon transport in fractured soil. Experiments and model calculations are included in a discussion of radon entry rates into houses placed on fractured soil. The main conclusion is that fractures do not in general alter the transport of internally generated radon out of soil when the pressure and flow conditions in the soil are comparable to the conditions prevailing under a house. This indicates the important result that fractures in soil have no impact on radon entry into a house beyond that of an increased gas permeability, although a more thorough investigation of this subject is needed. Only in the case where the soil is exposed to large pressure gradients, relative to gradients induced by a house, may it be possible to observe effects of radon exchange between fractures and matrix. (au) 52 tabs., 60 ill., 5 refs

  16. First experiments results about the engineering model of Rapsodie

    International Nuclear Information System (INIS)

    Chalot, A.; Ginier, R.; Sauvage, M.

    1964-01-01

    This report deals with the first series of experiments carried out on the engineering model of Rapsodie and on an associated sodium facility set up in a laboratory hall at Cadarache. More precisely, it covers: 1/ the difficulties encountered during the erection and assembly of the engineering model, and a compilation of the results of the first series of experiments and tests carried out on this installation (loading of the subassemblies, preheating, thermal shocks...); 2/ the experiments and tests carried out on the two prototype control rod drive mechanisms, which led to the choice of the design for the definitive drive mechanism. As a whole, the results proved the validity of the general design principles adopted for Rapsodie. (authors) [fr

  17. Design of spatial experiments: Model fitting and prediction

    Energy Technology Data Exchange (ETDEWEB)

    Fedorov, V.V.

    1996-03-01

    The main objective of the paper is to describe and develop model-oriented methods and algorithms for the design of spatial experiments. Unlike many other publications in this area, the approach proposed here is essentially based on the ideas of convex design theory.

  18. The LHCf experiment modelling cosmic rays at LHC

    CERN Document Server

    Tricomi, A; Bonechi, L; Bongi, M; Castellini, G; D'Alessandro, R; Faus, A; Fukui, K; Haguenauer, M; Itow, Y; Kasahara, K; Macina, D; Mase, T; Masuda, K; Matsubara, Y; Mizuishi, M; Menjo, H; Muraki, Y; Papini, P; Perrot, A L; Ricciarini, S B; Sako, T; Shimizu, Y; Tamura, T; Taki, K; Torii, S; Tricomi, A; Turner, W C; Velasco, J; Watanabe, H; Yoshida, K

    2008-01-01

    The LHCf experiment at the LHC has been designed to provide a calibration of the nuclear interaction models used in cosmic ray physics, up to energies relevant to testing the region between the knee and the GZK cut-off. Details of the detector and its performance are discussed.

  19. Developing a new transformatory cultural tourism experience model / Milena Ivanovic

    OpenAIRE

    Ivanovic, Milena

    2014-01-01

    The research question addressed by this thesis is: to what degree do the results of the statistical analysis corroborate the main theoretical assumptions of the proposed theoretical model of a new, authentic, transformatory cultural tourism experience, understood as a transmodern phenomenon in which two Cartesian levels of reality, the material (objective authenticity) and the experiential (constructive authenticity), equally inform intrapersonal existential authenticity as the outcome transformato...

  20. Design of Experiments, Model Calibration and Data Assimilation

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-07-30

    This presentation provides an overview of emulation, calibration and experiment design for computer experiments. Emulation refers to building a statistical surrogate from a carefully selected and limited set of model runs to predict unsampled outputs. The standard kriging approach to emulation of complex computer models is presented. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Markov chain Monte Carlo (MCMC) algorithms are often used to sample the calibrated parameter distribution. Several MCMC algorithms commonly employed in practice are presented, along with a popular diagnostic for evaluating chain behavior. Space-filling approaches to experiment design for selecting model runs to build effective emulators are discussed, including Latin Hypercube Design and extensions based on orthogonal array skeleton designs and imposed symmetry requirements. Optimization criteria that further enforce space-filling, possibly in projections of the input space, are mentioned. Designs to screen for important input variations are summarized and used for variable selection in a nuclear fuels performance application. This is followed by illustration of sequential experiment design strategies for optimization, global prediction, and rare event inference.
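
    As a concrete illustration of the space-filling designs discussed, the sketch below draws random Latin hypercube designs and keeps the one with the best maximin (largest minimum pairwise distance) score. This brute-force selection is one simple option, not the specific construction used in the presentation; sizes and counts are arbitrary.

    ```python
    import numpy as np

    def latin_hypercube(n_runs, n_dims, rng):
        """One random LHD on [0,1]^d: exactly one sample per stratum per dimension."""
        u = rng.random((n_runs, n_dims))
        perms = np.column_stack([rng.permutation(n_runs) for _ in range(n_dims)])
        return (perms + u) / n_runs

    def min_pairwise_distance(X):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        return d[np.triu_indices(len(X), k=1)].min()

    rng = np.random.default_rng(7)
    designs = [latin_hypercube(20, 3, rng) for _ in range(500)]
    best = max(designs, key=min_pairwise_distance)   # maximin criterion
    print("best design min distance:", round(min_pairwise_distance(best), 3))
    ```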

  1. Wind Tunnel Experiments with Active Control of Bridge Section Model

    DEFF Research Database (Denmark)

    Hansen, Henriette I.; Thoft-Christensen, Palle

    This paper describes results of wind tunnel experiments with a bridge section model in which movable flaps are integrated in the bridge girder so that each flap forms the streamlined part of the edge of the girder. This active control flap system is patented by COWIconsult and may be used to increase the flutter wind velocity of long-span suspension bridges.

  2. Women's experiences of infertility - towards a relational model of care.

    Science.gov (United States)

    Cunningham, Nicola; Cunningham, Tom

    2013-12-01

    To consider the effectiveness of current models of patient-centred infertility care. Patient centredness is defined as one of six key dimensions of quality of care. In the field of infertility, a new interaction model of patient-centred infertility care has been proposed. Despite positive moves, this model reveals shortcomings in knowledge about the lived experience of infertility and lacks the shift in attitudes and approach that effective patient-centred care requires. The study has a qualitative research design. Nine women living with and through infertility participated in online life-story interviews. Data were analysed using a layered strategy influenced by the voice-centred relational method, emphasising narrative content, form and function. Women reveal a complex experience. Three key themes were found: 'Approaching the clinic', in which narratives are infused with personal expectations while deeply reflective of cultural expectations and social norms; 'Relatedness', which recognises that women's experiences cannot be neatly separated into distinct domains; and 'Liminality and infertility', which describes women's experiences of being lost in transition through and beyond infertility treatment. The current model of patient-centred infertility care requires further development. Women in this study found themselves lost in transition, irrespective of treatment failure or success. Conceptual development must embrace a relational understanding of patients' experience to ensure that patient-centred infertility care is realistic and relevant to patients, clinical staff and the system as a whole. Psychosocial skills are recognised as core competences for fertility nurses. A relational conceptualisation of patients' experiences, living with and through infertility, provides further information for the development of staff and enhanced knowledge and practice skills. © 2013 John Wiley & Sons Ltd.

  3. Study on the quantitative relationship between Agricultural water and fertilization process and non-point source pollution based on field experiments

    Science.gov (United States)

    Wang, H.; Chen, K.; Wu, Z.; Guan, X.

    2017-12-01

    In recent years, with water environment problems becoming prominent and point source pollution being relatively well governed, agricultural non-point source pollution caused by the extensive use of fertilizers and pesticides has increasingly aroused people's concern and attention. In order to reveal the quantitative relationship between agricultural water and fertilizer use and non-point source pollution, the agricultural irrigation water and the relationships among fertilizer use, fertilization scheme and non-point source pollution were analyzed and calculated on the basis of a field experiment combined with an agricultural drainage-irrigation model, using a field emission (load) intensity index. The results show that the variation of field displacement differs greatly under different irrigation conditions. When irrigation water increased from 22 cm to 42 cm, i.e. by 20 cm, the field displacement increased by 11.92 cm, about 66.22% of the added irrigation water. When irrigation water then increased from 42 cm to 68 cm, i.e. by 26 cm, the field displacement increased by 22.48 cm, accounting for 86.46% of the added irrigation water. There is thus an "inflection point" between the irrigation water amount and the field displacement amount. The load intensity increases with irrigation water and shows a significant power-law correlation. Under different irrigation conditions, the increase in load intensity with irrigation water differs: when the irrigation water is smaller, the load intensity increases relatively little, and when the irrigation water increases to about 42 cm, the load intensity increases considerably. In addition, there was a positive correlation between fertilization and load intensity, and the per-unit-area load intensity differed markedly between fertilization modes even at the same fertilization level.

  4. Women's opinion on the justification of physical spousal violence: A quantitative approach to model the most vulnerable households in Bangladesh.

    Directory of Open Access Journals (Sweden)

    Raaj Kishore Biswas

    Bangladesh is a culturally conservative nation with limited freedom for women. A number of studies have evaluated intimate partner violence (IPV) and spousal physical violence in Bangladesh; however, the views of women have rarely been discussed in a quantitative manner. Three nationwide surveys in Bangladesh (2007, 2011, and 2014) were analyzed in this study to characterize the most vulnerable households, where women themselves accepted spousal physical violence as a general norm. 31.3%, 31.9% and 28.7% of women in the surveys found justification for physical violence in the household in 2007, 2011 and 2014, respectively. The binary logistic model showed wealth index, education of both women and their partners, religion, geographical division, decision-making freedom and marital age to be significant household contributors to women's perspective in all three years. Women in rich households and the highly educated were found to be 40% and 50% less likely, respectively, to accept domestic physical violence compared to the poorest and illiterate women. Similarly, women who got married before 18 years of age were 20% more likely to accept physical violence in the family as a norm. Apart from these particular groups (richest, highly educated and married after 18 years), other groups had around a 30% acceptance rate of household violence. For any successful attempt to reduce spousal physical violence in the traditional patriarchal society of Bangladesh, interventions must target the most vulnerable households and the geographical areas where women experience spousal violence. Although this paper focuses on women's attitudes, it is important that any intervention scheme be devised to target both men and women.

  5. Women's opinion on the justification of physical spousal violence: A quantitative approach to model the most vulnerable households in Bangladesh.

    Science.gov (United States)

    Biswas, Raaj Kishore; Rahman, Nusma; Kabir, Enamul; Raihan, Farabi

    2017-01-01

    Bangladesh is a culturally conservative nation with limited freedom for women. A number of studies have evaluated intimate partner violence (IPV) and spousal physical violence in Bangladesh; however, the views of women have rarely been discussed in a quantitative manner. Three nationwide surveys in Bangladesh (2007, 2011, and 2014) were analyzed in this study to characterize the most vulnerable households, where women themselves accepted spousal physical violence as a general norm. 31.3%, 31.9% and 28.7% of women in the surveys found justification for physical violence in the household in 2007, 2011 and 2014, respectively. The binary logistic model showed wealth index, education of both women and their partners, religion, geographical division, decision-making freedom and marital age to be significant household contributors to women's perspective in all three years. Women in rich households and the highly educated were found to be 40% and 50% less likely, respectively, to accept domestic physical violence compared to the poorest and illiterate women. Similarly, women who got married before 18 years of age were 20% more likely to accept physical violence in the family as a norm. Apart from these particular groups (richest, highly educated and married after 18 years), other groups had around a 30% acceptance rate of household violence. For any successful attempt to reduce spousal physical violence in the traditional patriarchal society of Bangladesh, interventions must target the most vulnerable households and the geographical areas where women experience spousal violence. Although this paper focuses on women's attitudes, it is important that any intervention scheme be devised to target both men and women.
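
    A minimal sketch of the kind of binary logistic model reported above: acceptance of spousal violence (0/1) is regressed on a few household covariates and the coefficients are exponentiated into odds ratios (an OR of 0.6, for instance, corresponds to roughly "40% less likely" in the odds sense used here). The data are simulated, and the variable codings and effect sizes are placeholders, not the survey values.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 5000
    wealth_rich = rng.integers(0, 2, n)       # 1 = richest quintile (placeholder coding)
    edu_high = rng.integers(0, 2, n)          # 1 = higher education
    married_before18 = rng.integers(0, 2, n)

    # Simulated "truth": richer/educated less accepting, early marriage more accepting.
    logit = -0.8 - 0.5 * wealth_rich - 0.7 * edu_high + 0.2 * married_before18
    accepts = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = sm.add_constant(np.column_stack([wealth_rich, edu_high, married_before18]))
    fit = sm.Logit(accepts.astype(float), X).fit(disp=0)
    names = ["const", "wealth_rich", "edu_high", "married_before18"]
    for name, beta in zip(names, fit.params):
        print(f"{name:18s} OR = {np.exp(beta):.2f}")
    ```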

  6. Turkish experience with the use of IAEA planning models

    International Nuclear Information System (INIS)

    Fikret, H.

    1997-01-01

    Most of the IAEA planning methodologies for energy and electricity planning have been transferred to Turkey as part of Technical Co-operation projects on the subject matter. The transfer has been supplemented by adequate training of national experts through their participation in the above projects and in the training courses on these models organized by the IAEA. The experience gathered in the use of these models in Turkey is described in this paper, highlighting how the models are embedded in the country's planning procedure for energy and electricity matters. (author). 7 figs, 6 tabs

  7. PARAMETRIC MODELING, CREATIVITY, AND DESIGN: TWO EXPERIENCES WITH ARCHITECTURE’ STUDENTS

    Directory of Open Access Journals (Sweden)

    Wilson Florio

    2012-02-01

    The aim of this article is to reflect on the use of parametric modeling in two didactic experiences. The first experiment involved resources of the Paracloud program and its relation to the Rhinoceros program, which resulted in the production of physical models with the aid of laser cutting. In the second experiment, the students produced algorithms in Grasshopper, resulting in families of structures and coverings. The objects of study are both the physical models and the digital algorithms resulting from this experimentation. For the analysis and synthesis of the results, we adopted four important assumptions: 1. the value of attitudes and the work environment; 2. the importance of experimentation and improvisation; 3. understanding of the design process as a situated act and as an ill-defined problem; 4. the inclusion of creative and critical thought in the disciplines. The results allow us to affirm that parametric modeling stimulates creativity by allowing the combination of different parameters, which results in unexpected discoveries. Keywords: Teaching-Learning, Parametric Modeling, Laser Cutter, Grasshopper, Design Process, Creativity.

  8. Modelling the Grimsel migration field experiments at PSI

    International Nuclear Information System (INIS)

    Heer, W.

    1997-01-01

    For several years tracer migration experiments have been performed at Nagra's Grimsel Test Site as a joint undertaking of Nagra, PNC and PSI. The aims of modelling the migration experiments are (1) to better understand the nuclide transport through crystalline rock; (2) to gain information on validity of methods and correlating parameters; (3) to improve models for safety assessments. The PSI modelling results, presented here, show a consistent picture for the investigated tracers (the non-sorbing uranine, the weakly sorbing sodium, the moderately sorbing strontium and the more strongly sorbing cesium). They represent an important step in building up confidence in safety assessments for radioactive waste repositories. (author) 5 figs., 1 tab., 12 refs

  9. Quantitative rainfall metrics for comparing volumetric rainfall retrievals to fine scale models

    Science.gov (United States)

    Collis, Scott; Tao, Wei-Kuo; Giangrande, Scott; Fridlind, Ann; Theisen, Adam; Jensen, Michael

    2013-04-01

    Precipitation processes play a significant role in the energy balance of convective systems, for example through latent heating and evaporative cooling. Heavy precipitation "cores" can also be a proxy for vigorous convection and vertical motions. However, comparisons between rainfall rate retrievals from volumetric remote sensors and forecast rain fields from high-resolution numerical weather prediction simulations are complicated by differences in the location and timing of storm morphological features. This presentation will outline a series of metrics for diagnosing the spatial variability and statistical properties of precipitation maps produced both from models and from retrievals. We include existing metrics such as Contoured Frequency by Altitude Diagrams (Yuter and Houze 1995) and Statistical Coverage Products (May and Lane 2009) and propose new metrics based on morphology and on cell- and feature-based statistics. The work presented focuses on observations from the ARM Southern Great Plains radar network, consisting of three agile X-band radar systems with a very dense coverage pattern and a C-band system providing site-wide coverage. By combining multiple sensors, resolutions of 250 m² can be achieved, allowing improved characterization of fine-scale features. Analyses compare data collected during the Midlatitude Continental Convective Clouds Experiment (MC3E) with simulations of the observed systems using the NASA Unified Weather Research and Forecasting model. May, P. T., and T. P. Lane, 2009: A method for using weather radar data to test cloud resolving models. Meteorological Applications, 16, 425-432, doi:10.1002/met.150. Yuter, S. E., and R. A. Houze, 1995: Three-Dimensional Kinematic and Microphysical Evolution of Florida Cumulonimbus. Part II: Frequency Distributions of Vertical Velocity, Reflectivity, and Differential Reflectivity. Mon. Wea. Rev., 123, 1941-1963.
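
    One of the cited metrics is easy to sketch: a Contoured Frequency by Altitude Diagram (CFAD) is a normalized histogram of a field (here reflectivity) computed independently at each altitude level. Input array shapes and bin choices below are assumptions on synthetic data.

    ```python
    import numpy as np

    def cfad(field, bins):
        """field: (n_levels, n_profiles) array; returns (n_levels, n_bins) frequencies."""
        out = np.empty((field.shape[0], len(bins) - 1))
        for k, level in enumerate(field):
            h, _ = np.histogram(level[np.isfinite(level)], bins=bins)
            out[k] = h / max(h.sum(), 1)       # normalize per altitude level
        return out

    rng = np.random.default_rng(0)
    refl = rng.normal(20, 8, size=(30, 5000))  # synthetic dBZ, 30 height levels
    refl -= np.linspace(0, 15, 30)[:, None]    # weaker echoes aloft
    freq = cfad(refl, bins=np.arange(-10, 61, 2.5))
    print(freq.shape, freq.sum(axis=1)[:3])    # each level's frequencies sum to 1
    ```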

  10. Models of water imbibition in untreated and treated porous media validated by quantitative magnetic resonance imaging

    Science.gov (United States)

    Gombia, M.; Bortolotti, V.; Brown, R. J. S.; Camaiti, M.; Fantazzini, P.

    2008-05-01

    Fluid imbibition affects almost every activity that directly or indirectly involves porous media, including oil reservoir rocks, soils, building materials, and countless others, including biological materials. In this paper, magnetic resonance imaging (MRI) has been applied to study water imbibition in a porous medium in which capillary properties are artificially changed. As a model system, samples of Lecce stone, a material of cultural heritage interest, were analyzed before and after treatment with a protective polymer (Silirain-50 or Paraloid PB72). By using MRI, we can visualize the presence of water inside each sample and measure the height z(t) reached by the wetting front as a function of time during experiments of capillary absorption before and after treatment. The sorptivity S, defined as the initial slope of z versus t^(1/2), has been determined before treatment and through both treated and untreated faces after treatment. Very good fits to the data were obtained with theoretical and empirical models of absorption kinetics, starting from the Washburn model for capillary rise, adapted by others to homogeneous porous media, and modified by us for application to a sample having a thin low-permeability layer on either surface as a result of a treatment process. This gives us parameters to quantify the effects on imbibition of the changes in the capillary properties. It is known that the Paraloid treatment preferentially affects the larger pore channels and the Silirain the smaller, and our results show this and illustrate the roles played by the different classes of pore sizes.
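
    The sorptivity estimate described above reduces to a linear fit of wetting-front height against the square root of time. A minimal sketch with synthetic data (all values assumed):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(1.0, 3600.0, 60.0)   # time (s)
    S_true = 0.9                       # sorptivity, mm s^(-1/2) (assumed)
    z = S_true * np.sqrt(t) + rng.normal(0.0, 0.5, t.size)  # noisy wetting-front height

    early = t < 900                    # "initial slope": first 15 minutes only
    S_est = np.polyfit(np.sqrt(t[early]), z[early], 1)[0]   # slope of z vs t^(1/2)
    print(f"estimated sorptivity S = {S_est:.2f} mm s^-1/2 (true {S_true})")
    ```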

  11. Discrete fracture modelling for the Stripa tracer validation experiment predictions

    International Nuclear Information System (INIS)

    Dershowitz, W.; Wallmann, P.

    1992-02-01

    Groundwater flow and transport through three-dimensional networks of discrete fractures was modeled to predict the recovery of tracer from tracer injection experiments conducted during phase 3 of the Stripa site characterization and validation project. Predictions were made on the basis of an updated version of the site-scale discrete fracture conceptual model used for flow predictions and preliminary transport modelling. In this model, individual fractures were treated as stochastic features described by probability distributions of geometric and hydrologic properties. Fractures were divided into three populations: fractures in fracture zones near the drift, non-fracture-zone fractures within 31 m of the drift, and fractures in fracture zones over 31 m from the drift axis. Fractures outside fracture zones are not modelled beyond 31 m from the drift axis. Transport predictions were produced using the FracMan discrete fracture modelling package for each of five tracer experiments. Output was produced in the seven formats specified by the Stripa task force on fracture flow modelling. (au)

  12. Study on quantitative risk assessment model of the third party damage for natural gas pipelines based on fuzzy comprehensive assessment

    International Nuclear Information System (INIS)

    Qiu, Zeyang; Liang, Wei; Lin, Yang; Zhang, Meng; Wang, Xue

    2017-01-01

    As an important part of the national energy supply system, transmission pipelines for natural gas can cause serious environmental pollution and loss of life and property in the event of an accident. Third party damage is one of the most significant causes of natural gas pipeline system accidents, and it is very important to establish an effective quantitative risk assessment model of third party damage in order to reduce the number of gas pipeline operation accidents. Since third party damage accidents have characteristics such as diversity, complexity and uncertainty, this paper establishes a quantitative risk assessment model of third party damage based on the Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE). First, the risk sources of third party damage are identified; then the weights of the factors are determined via an improved AHP; finally, the importance of each factor is calculated via the fuzzy comprehensive evaluation model. The results show that the quantitative risk assessment model is suitable for third party damage to natural gas pipelines, and improvement measures can be put forward to avoid accidents based on the importance of each factor. (paper)
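
    The AHP weighting step can be sketched compactly: the priority weights are the normalized principal eigenvector of a pairwise comparison matrix, and a consistency ratio (CR) checks the coherence of the judgments. The 3x3 comparison matrix and factor names below are hypothetical, not the paper's actual hierarchy.

    ```python
    import numpy as np

    # e.g. excavation activity vs. pipe burial depth vs. patrol frequency (hypothetical)
    C = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    vals, vecs = np.linalg.eig(C)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                   # priority weights

    lam = vals[k].real
    CI = (lam - len(C)) / (len(C) - 1)             # consistency index
    RI = 0.58                                      # Saaty's random index for n=3
    print("weights:", np.round(w, 3), " CR =", round(CI / RI, 3))  # CR < 0.1 acceptable
    ```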

  13. Social Experiences of Beginning Braille Readers in Literacy Activities: Qualitative and Quantitative Findings of the ABC Braille Study

    Science.gov (United States)

    Sacks, Sharon Z.; Kamei-Hannan, Cheryl; Erin, Jane N.; Barclay, Lizbeth; Sitar, Debbie

    2009-01-01

    This mixed-design investigation examined the social experiences of beginning braille readers who were initially taught contracted or alphabetic braille in literacy activities as part of the ABC Braille Study. No differences in the quality or quantity of social experiences were found between the two groups over time. (Contains 4 tables.)

  14. Quantitative structure-activity relationship modeling of polycyclic aromatic hydrocarbon mutagenicity by classification methods based on holistic theoretical molecular descriptors.

    Science.gov (United States)

    Gramatica, Paola; Papa, Ester; Marrocchi, Assunta; Minuti, Lucio; Taticchi, Aldo

    2007-03-01

    Various polycyclic aromatic hydrocarbons (PAHs), ubiquitous environmental pollutants, are recognized mutagens and carcinogens. A homogeneous set of mutagenicity data (TA98 and TA100, +S9) for 32 benzocyclopentaphenanthrenes/chrysenes was modeled by the quantitative structure-activity relationship classification methods k-nearest neighbor and classification and regression tree, using theoretical holistic molecular descriptors. A genetic algorithm provided the selection of the best subset of variables for modeling mutagenicity. The models were validated by leave-one-out and leave-50%-out approaches and have good performance, with sensitivity and specificity ranges of 90-100%.
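
    To make the classification-and-validation step concrete, here is a minimal k-nearest-neighbor classifier with leave-one-out validation. The descriptor matrix and labels are random placeholders (the paper's genetic-algorithm descriptor selection is omitted), so this is a sketch of the method, not a reproduction of the results.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.normal(size=(32, 4))                   # 32 PAHs x 4 selected descriptors
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic mutagenic / non-mutagenic

    def knn_loo_accuracy(X, y, k=3):
        hits = 0
        for i in range(len(X)):
            d = np.linalg.norm(X - X[i], axis=1)
            d[i] = np.inf                          # leave sample i out
            votes = y[np.argsort(d)[:k]]
            hits += (votes.sum() * 2 > k) == y[i]  # majority vote vs true label
        return hits / len(X)

    print("LOO accuracy (k=3):", knn_loo_accuracy(X, y))
    ```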

  15. Quantitative global sensitivity analysis of a biologically based dose-response pregnancy model for the thyroid endocrine system.

    Science.gov (United States)

    Lumen, Annie; McNally, Kevin; George, Nysia; Fisher, Jeffrey W; Loizou, George D

    2015-01-01

    A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to evaluate quantitatively thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels were identified in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local sensitivity analysis.
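
    Of the methods named, Morris screening is the easiest to sketch: one-at-a-time trajectories yield "elementary effects" per input, summarized as mu* (mean absolute effect, overall influence) and sigma (a signature of nonlinearity or interaction). A cheap test function stands in for the expensive thyroid model; the trajectory count and step size are arbitrary choices.

    ```python
    import numpy as np

    def model(x):                                # cheap placeholder for the real model
        return x[0] + 2 * x[1] ** 2 + x[0] * x[2]

    def morris(model, n_dims, n_traj=50, delta=0.1, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        ee = [[] for _ in range(n_dims)]
        for _ in range(n_traj):
            x = rng.random(n_dims) * (1 - delta) # keep x + delta inside [0, 1]
            y = model(x)
            for i in rng.permutation(n_dims):    # perturb one input at a time
                x2 = x.copy()
                x2[i] += delta
                y2 = model(x2)
                ee[i].append((y2 - y) / delta)   # elementary effect of input i
                x, y = x2, y2
        return ([np.mean(np.abs(e)) for e in ee],   # mu*
                [np.std(e) for e in ee])            # sigma

    mu_star, sigma = morris(model, 3)
    print("mu*  :", np.round(mu_star, 2))
    print("sigma:", np.round(sigma, 2))
    ```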

  16. Dynamically Scaled Model Experiment of a Mooring Cable

    Directory of Open Access Journals (Sweden)

    Lars Bergdahl

    2016-01-01

    The dynamic response of mooring cables for marine structures is scale-dependent, and perfect dynamic similitude between full-scale prototypes and small-scale physical model tests is difficult to achieve. The best possible scaling is here sought by means of a specific set of dimensionless parameters, and the model accuracy is also evaluated by two alternative sets of dimensionless parameters. A special feature of the presented experiment is that a chain was scaled to have the correct propagation celerity for longitudinal elastic waves, thus providing perfect geometrical and dynamic scaling in vacuum, which is unique. The scaling error due to the incorrect Reynolds number seemed to be of minor importance. The 33 m experimental chain could then be considered a scaled 76 mm stud chain with a length of 1240 m, i.e. at a length scale of 1:37.6. Due to the correct elastic scale, the physical model was able to reproduce the effect of snatch loads giving rise to tensional shock waves propagating along the cable. The results from the experiment were used to validate the newly developed cable-dynamics code, MooDy, which utilises a discontinuous Galerkin FEM formulation. The validation of MooDy proved to be successful for the presented experiments. The experimental data is made available here for validation of other numerical codes by publishing digitised time series of two of the experiments.
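
    The point about correct elastic celerity can be checked with standard Froude-similitude factors. In the sketch below, only the 1:37.6 ratio comes from the paper; the derived factors are the textbook relations (equal fluid densities assumed), showing that if axial stiffness EA scales like force, the longitudinal wave celerity scales like velocity, preserving dynamic similitude for the elastic waves.

    ```python
    lam = 37.6                      # prototype-to-model length ratio (from the paper)
    time_scale = lam ** 0.5         # Froude scaling of time
    velocity_scale = lam ** 0.5
    force_scale = lam ** 3          # same fluid density assumed
    EA_scale = lam ** 3             # axial stiffness must scale like force
    mass_per_len_scale = lam ** 2
    celerity_scale = (EA_scale / mass_per_len_scale) ** 0.5  # c = sqrt(EA/m')

    print(f"time x{time_scale:.2f}, force x{force_scale:.0f}")
    print(f"elastic celerity x{celerity_scale:.2f} equals velocity x{velocity_scale:.2f}:"
          " longitudinal waves stay in dynamic similitude")
    ```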

  17. Quantitative modelling of the degradation processes of cement grout. Project CEMMOD

    Energy Technology Data Exchange (ETDEWEB)

    Grandia, Fidel; Galindez, Juan-Manuel; Arcos, David; Molinero, Jorge (Amphos21 Consulting S.L., Barcelona (Spain))

    2010-05-15

    Grout cement is planned to be used in the sealing of water-conducting fractures in the deep geological storage of spent nuclear fuel waste. The integrity of such cementitious materials should be ensured over a time frame of decades to a hundred years as a minimum. However, their durability must be quantified, since grout degradation may jeopardize the stability of other components in the repository due to the potential release of hyperalkaline plumes. Model prediction of cement alteration has been challenging in recent years, mainly due to the difficulty of reproducing the progressive change in composition of the calcium-silicate-hydrate (CSH) compounds as the alteration proceeds. In general, the data obtained from laboratory experiments show a rather similar dependence between the pH of the pore water and the Ca-Si ratio of the CSH phases. The Ca-Si ratio decreases as the CSH is progressively replaced by Si-enriched phases. An elegant and reasonable approach is the use of solid solution models, even keeping in mind that CSH phases are not crystalline solids but gels. An additional obstacle is the uncertainty in the initial composition of the grout to be considered in the calculations, because only the recipe of the low-pH clinker is commonly provided by the manufacturer. The hydration process leads to the formation of new phases and, importantly, creates porosity. A number of solid solution models have been reported in the literature. Most of them assume a strongly non-ideal binary solid solution series to account for the observed changes in the Ca-Si ratios in CSH. However, it is very difficult to reproduce the degradation of the CSH over the whole Ca-Si range of compositions (commonly Ca/Si = 0.5-2.5) by considering only two end-members and fixed non-ideality parameters. Models with multiple non-ideal end-members, with interaction parameters as a function of the solid composition, can solve the problem, but these cannot be managed in the existing codes of reactive transport.

  18. Thermal experiments in the model of ADS target

    International Nuclear Information System (INIS)

    Alexander, Efanov; Yuri, Orlov; Alexander, Sorokin; Eugeni, Ivanov; Galina, Bogoslovskaia; Ning, Li

    2002-01-01

    The paper presents thermal experiments performed at the SSC RF IPPE on the ADS window target model. A brief description of the model, specific features of the structure, the measurement system and some methodological approaches are presented. Eutectic lead-bismuth alloy is modeled here by eutectic sodium-potassium alloy. The following characteristics of the target model were measured directly and estimated by processing: coolant flow rate, model power, absolute temperature of the coolant as a function of distance from the membrane of the target, absolute temperature of the membrane surface, and the mean square value and pulsating component of the coolant temperature, as well as the membrane temperature. Measurements have shown that large temperature pulsations exist at the membrane surface, which must be taken into account in analyses of the strength of a real target system. The experimental temperature fields (present work) and the velocity fields measured earlier make up a complete database for verification of 2D and 3D thermohydraulic codes. (author)

  19. E-health stakeholders experiences with clinical modelling and standardizations.

    Science.gov (United States)

    Gøeg, Kirstine Rosenbeck; Elberg, Pia Britt; Højen, Anne Randorff

    2015-01-01

    Stakeholders in e-health such as governance officials, health IT implementers and vendors have to co-operate to achieve the goal of a future-proof interoperable e-health infrastructure. Co-operation requires knowledge of the responsibilities and competences of the stakeholder groups. To increase awareness of clinical modeling and standardization, we conducted a workshop for Danish and a few Norwegian e-health stakeholders and had them discuss their views on different aspects of clinical modeling, using a theoretical model as a point of departure. Based on the model, we traced stakeholders' experiences. Our results showed a tendency for stakeholders to be more familiar with e-health requirements than with design methods, clinical information models and clinical terminology as they are described in the scientific literature. The workshop made it possible for stakeholders to discuss their roles and expectations of each other.

  20. The Nucleation and Propagation of Thrust Ramps: Insights from Quantitative Analysis of Frictional Analog (Sandbox) Models

    Science.gov (United States)

    Sen, P.; Haq, S. S.; Marshak, S.

    2012-12-01

    Particle Image Velocimetry (PIV) provides a unique opportunity to analyze deformation in sandbox analog models at a scale that allows documentation of movement within and around individual shear structures. We employed PIV analysis to quantify deformation in sandbox experiments designed to simulate the initiation of thrust ramps developed during crustal shortening (i.e., contractional deformation). Our intent was to answer a long-standing question: do ramps initiate at the tip of a detachment, or do they initiate in the interior of a deforming layer and propagate up-dip and down-dip until they link to the detachment at a location to the hinterland of the detachment's tip line? Most geometric studies of ramp-flat geometries in fold-thrust belts assume that ramps propagate up-dip from the tip of the detachment and grow only in one direction. Field studies, in contrast, reveal that layer-parallel shortening structures develop to the foreland of the last ramp to form, suggesting that ramps initiate in a thrust sheet that has already undergone displacement above a detachment. Published sandbox models, using color-sand marker layers, support this idea. To test this idea further, we set up a model using a 3 m-long by 0.31 m-wide glass-walled sandbox with a rigid backstop. The sand layer was sifted onto a sheet of mylar that could be pulled beneath the rigid backstop. Sand used in our experiments consisted of <250 μm-diameter grains. We carried out multiple runs using 4 cm, 5 cm and 6 cm-thick layers. Images were acquired over 1 mm displacement intervals using an 18 megapixel camera. By moving the camera at specific steps during the experiment, we sampled the development of several thrust ramps. The images taken during the experimental runs were analyzed with a MATLAB-based program called PIVlab that utilizes an image cross-correlation subroutine to determine displacement fields of the sand particles. Our results demonstrate that: (1) thrust ramps initiate within the interior of the deforming sand layer...
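
    The core PIV operation referred to above, locating the displacement of a texture patch between two frames as the peak of a 2D cross-correlation, can be sketched as follows. PIVlab does this per interrogation window with subpixel refinement, which is omitted here; the frames are synthetic.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(5)
    frame1 = rng.random((64, 64))                 # synthetic sand texture
    true_shift = (3, -2)                          # pixels (rows, cols)
    frame2 = np.roll(frame1, true_shift, axis=(0, 1))

    a = frame1 - frame1.mean()
    b = frame2 - frame2.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="same")   # 2D cross-correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = (peak[0] - corr.shape[0] // 2, peak[1] - corr.shape[1] // 2)
    print("recovered displacement:", shift, "true:", true_shift)
    ```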