WorldWideScience

Sample records for modeling quantitative experiments

  1. Quantitative modelling of HDPE spurt experiments using wall slip and generalised Newtonian flow

    NARCIS (Netherlands)

    Doelder, den C.F.J.; Koopmans, R.J.; Molenaar, J.

    1998-01-01

    A quantitative model to describe capillary rheometer experiments is presented. The model can generate 'two-branched' discontinuous flow curves and the associated pressure oscillations. Polymer compressibility in the barrel, incompressible axisymmetric generalised Newtonian flow in the die, and a …

  2. Quantitative nature of overexpression experiments

    Science.gov (United States)

    Moriya, Hisao

    2015-01-01

    Overexpression experiments are sometimes considered as qualitative experiments designed to identify novel proteins and study their function. However, in order to draw conclusions regarding protein overexpression through association analyses using large-scale biological data sets, we need to recognize the quantitative nature of overexpression experiments. Here I discuss the quantitative features of two different types of overexpression experiment: absolute and relative. I also introduce the four primary mechanisms involved in growth defects caused by protein overexpression: resource overload, stoichiometric imbalance, promiscuous interactions, and pathway modulation associated with the degree of overexpression. PMID:26543202

  3. Quantitative reactive modeling and verification.

    Science.gov (United States)

    Henzinger, Thomas A

    Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.

  4. Testing process predictions of models of risky choice: a quantitative model comparison approach

    Science.gov (United States)

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472
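
    The priority heuristic lends itself to a compact illustration. Below is a minimal Python sketch for two-outcome gain gambles, following the published description (examine minimum gains, then the probabilities of the minimum gains, then maximum gains, with an aspiration level of one tenth of the largest maximum gain). The gamble encoding and example values are assumptions for illustration, and the published rounding of the aspiration level to a prominent number is omitted.

    ```python
    # Minimal sketch of the priority heuristic (Brandstätter et al., 2006)
    # for two-outcome gain gambles. Encoding and example are illustrative.

    def priority_heuristic(a, b):
        """Each gamble is (min_gain, p_min, max_gain). Returns the chosen index."""
        aspiration = 0.1 * max(a[2], b[2])        # 1/10 of the largest maximum gain
        if abs(a[0] - b[0]) >= aspiration:        # Reason 1: minimum gains
            return 0 if a[0] > b[0] else 1
        if abs(a[1] - b[1]) >= 0.1:               # Reason 2: probability of minimum gain
            return 0 if a[1] < b[1] else 1        # prefer the lower chance of the minimum
        return 0 if a[2] > b[2] else 1            # Reason 3: maximum gains

    g1 = (0.0, 0.20, 4000.0)       # 80% chance of 4000, else 0
    g2 = (3000.0, 1.00, 3000.0)    # 3000 for sure
    print(priority_heuristic(g1, g2))  # -> 1: the sure 3000 wins on minimum gains
    ```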

  5. Testing Process Predictions of Models of Risky Choice: A Quantitative Model Comparison Approach

    Directory of Open Access Journals (Sweden)

    Thorsten Pachur

    2013-09-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or nonlinear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter, Gigerenzer, & Hertwig, 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called "similarity." In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies.

  6. Quantitative Modeling of Earth Surface Processes

    Science.gov (United States)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.

  7. Comparison of conventional, model-based quantitative planar, and quantitative SPECT image processing methods for organ activity estimation using In-111 agents

    International Nuclear Information System (INIS)

    He, Bin; Frey, Eric C

    2006-01-01

    Accurate quantification of organ radionuclide uptake is important for patient-specific dosimetry. The quantitative accuracy from conventional conjugate view methods is limited by overlap of projections from different organs and background activity, and by attenuation and scatter. In this work, we propose and validate a quantitative planar (QPlanar) processing method based on maximum likelihood (ML) estimation of organ activities using 3D organ VOIs and a projector that models the image-degrading effects. Both a physical phantom experiment and Monte Carlo simulation (MCS) studies were used to evaluate the new method. In these studies, the accuracies and precisions of organ activity estimates for the QPlanar method were compared with those from conventional planar (CPlanar) processing methods with various corrections for scatter, attenuation and organ overlap, and with a quantitative SPECT (QSPECT) processing method. Experimental planar and SPECT projections and registered CT data from an RSD Torso phantom were obtained using a GE Millenium VH/Hawkeye system. The MCS data were obtained from the 3D NCAT phantom with organ activity distributions that modelled the uptake of 111In ibritumomab tiuxetan. The simulations were performed using parameters appropriate for the same system used in the RSD torso phantom experiment. The organ activity estimates obtained from the CPlanar, QPlanar and QSPECT methods from both experiments were compared. From the results of the MCS experiment, even with ideal organ overlap correction and background subtraction, CPlanar methods provided limited quantitative accuracy. The QPlanar method with accurate modelling of the physical factors increased the quantitative accuracy, at the cost of requiring estimates of the organ VOIs in 3D. The accuracy of QPlanar approached that of QSPECT, but required much less acquisition and computation time. Similar results were obtained from the physical phantom experiment. We conclude that the QPlanar method, based …
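
    The core of the QPlanar idea, maximum-likelihood estimation of a handful of organ activities through a projector, can be illustrated in a few lines. The 3×3 system matrix below is a hypothetical stand-in for the paper's physics model (which includes attenuation, scatter and the collimator-detector response), and the MLEM update is the standard one; none of this is the authors' code.

    ```python
    # Toy MLEM estimate of organ activities from overlapping planar projections.
    # P maps organ activities to pixel-region counts; all values are illustrative.
    import numpy as np

    P = np.array([[0.8, 0.3, 0.0],
                  [0.2, 0.6, 0.1],
                  [0.0, 0.1, 0.9]])          # P[region, organ]
    true_a = np.array([100.0, 50.0, 80.0])   # true organ activities
    rng = np.random.default_rng(1)
    counts = rng.poisson(P @ true_a * 50) / 50.0   # noisy planar measurement

    a = np.full(3, counts.sum() / 3)         # flat initial estimate
    for _ in range(200):                     # MLEM: a_j *= sum_i P_ij y_i/(Pa)_i / s_j
        a *= (P.T @ (counts / np.maximum(P @ a, 1e-12))) / P.sum(axis=0)
    print(a.round(1))                        # approaches [100, 50, 80] up to noise
    ```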

  8. Experiment selection for the discrimination of semi-quantitative models of dynamical systems

    NARCIS (Netherlands)

    Vatcheva, I.; de Jong, H.; Bernard, O.; Mars, N.J.I.

    Modeling an experimental system often results in a number of alternative models that are all justified by the available experimental data. To discriminate among these models, additional experiments are needed. Existing methods for the selection of discriminatory experiments in statistics and in …

  9. A quantitative experiment on the fountain effect in superfluid helium

    Science.gov (United States)

    Amigó, M. L.; Herrera, T.; Neñer, L.; Peralta Gavensky, L.; Turco, F.; Luzuriaga, J.

    2017-09-01

    Superfluid helium, a state of matter existing at low temperatures, shows many remarkable properties. One example is the so called fountain effect, where a heater can produce a jet of helium. This converts heat into mechanical motion; a machine with no moving parts, but working only below 2 K. Allen and Jones first demonstrated the effect in 1938, but their work was basically qualitative. We now present data of a quantitative version of the experiment. We have measured the heat supplied, the temperature and the height of the jet produced. We also develop equations, based on the two-fluid model of superfluid helium, that give a satisfactory fit to the data. The experiment has been performed by advanced undergraduate students in our home institution, and illustrates in a vivid way some of the striking properties of the superfluid state.
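
    The two-fluid relation behind such an analysis is compact enough to state here: H. London's result gives the fountain pressure Δp = ρ s ΔT for a small temperature difference, so an ideal, loss-free jet rises to h = Δp/(ρ g) = s ΔT/g. The sketch below uses illustrative values, not the paper's measured data.

    ```python
    # Ideal fountain-effect height from the two-fluid model: h = s * dT / g.
    # Both numbers below are assumptions for illustration, not values from
    # the paper.
    g = 9.81      # m/s^2
    s = 200.0     # J/(kg K): assumed specific entropy of He II near 1.9 K
    dT = 0.01     # K: assumed heater-induced temperature difference

    h = s * dT / g
    print(f"ideal jet height: {100 * h:.1f} cm")   # ~20 cm for these values
    ```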

  10. Modeling Users' Experiences with Interactive Systems

    CERN Document Server

    Karapanos, Evangelos

    2013-01-01

    Over the past decade the field of Human-Computer Interaction has evolved from the study of the usability of interactive products towards a more holistic understanding of how they may mediate desired human experiences. This book identifies the notion of diversity in users' experiences with interactive products and proposes methods and tools for modeling this along two levels: (a) interpersonal diversity in users' responses to early conceptual designs, and (b) the dynamics of users' experiences over time. The Repertory Grid Technique is proposed as an alternative to standardized psychometric scales for modeling interpersonal diversity in users' responses to early concepts in the design process, and new Multi-Dimensional Scaling procedures are introduced for modeling such complex quantitative data. iScale, a tool for the retrospective assessment of users' experiences over time, is proposed as an alternative to longitudinal field studies, and a semi-automated technique for the analysis of the elicited exper...

  11. Quantitative Modeling of Landscape Evolution, Treatise on Geomorphology

    NARCIS (Netherlands)

    Temme, A.J.A.M.; Schoorl, J.M.; Claessens, L.F.G.; Veldkamp, A.; Shroder, F.S.

    2013-01-01

    This chapter reviews quantitative modeling of landscape evolution – which means that not just model studies but also modeling concepts are discussed. Quantitative modeling is contrasted with conceptual or physical modeling, and four categories of model studies are presented. Procedural studies focus …

  12. Operations management research methodologies using quantitative modeling

    NARCIS (Netherlands)

    Bertrand, J.W.M.; Fransoo, J.C.

    2002-01-01

    Gives an overview of quantitative model-based research in operations management, focusing on research methodology. Distinguishes between empirical and axiomatic research, and furthermore between descriptive and normative research. Presents guidelines for doing quantitative model-based research in …

  13. Design and optimization of reverse-transcription quantitative PCR experiments.

    Science.gov (United States)

    Tichopad, Ales; Kitchen, Rob; Riedmaier, Irmgard; Becker, Christiane; Ståhlberg, Anders; Kubista, Mikael

    2009-10-01

    Quantitative PCR (qPCR) is a valuable technique for accurately and reliably profiling and quantifying gene expression. Typically, samples obtained from the organism of study have to be processed via several preparative steps before qPCR. We estimated the errors of sample withdrawal and extraction, reverse transcription (RT), and qPCR that are introduced into measurements of mRNA concentrations. We performed hierarchically arranged experiments with 3 animals, 3 samples, 3 RT reactions, and 3 qPCRs, and quantified the expression of several genes in solid tissue, blood, cell culture, and single cells. A nested ANOVA design was used to model the experiments, and relative and absolute errors were calculated with this model for each processing level in the hierarchical design. We found that intersubject differences became easily confounded by sample heterogeneity for single cells and solid tissue. In cell cultures and blood, the noise from the RT and qPCR steps contributed substantially to the overall error because the sampling noise was less pronounced. We recommend the use of sample replicates preferentially to any other replicates when working with solid tissue, cell cultures, and single cells, and we recommend the use of RT replicates when working with blood. We show how an optimal sampling plan can be calculated for a limited budget.
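
    The logic of the replicate recommendations can be made concrete with the nested-ANOVA variance decomposition: in a fully nested subject/sample/RT/qPCR design, each component of the variance of the grand mean is divided by the number of replicates beneath its level. The variance components below are hypothetical, chosen only to show why sample replicates pay off when sampling noise dominates.

    ```python
    # Minimal sketch of replicate allocation in a nested RT-qPCR design.
    # Variance components are hypothetical, not the paper's estimates.

    def var_of_mean(v_subj, v_sample, v_rt, v_qpcr, n_subj, n_s, n_rt, n_q):
        """Variance of the grand mean for a nested subject/sample/RT/qPCR design."""
        return (v_subj / n_subj
                + v_sample / (n_subj * n_s)
                + v_rt / (n_subj * n_s * n_rt)
                + v_qpcr / (n_subj * n_s * n_rt * n_q))

    # Hypothetical components (squared SDs on the log-concentration scale):
    v = dict(v_subj=0.30, v_sample=0.20, v_rt=0.05, v_qpcr=0.02)

    # Same total of 27 reactions per subject, allocated two different ways:
    print(var_of_mean(**v, n_subj=3, n_s=9, n_rt=1, n_q=3))  # favour sample replicates
    print(var_of_mean(**v, n_subj=3, n_s=3, n_rt=3, n_q=3))  # favour RT replicates
    ```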

  14. Discrimination of Semi-Quantitative Models by Experiment Selection: Method Application in Population Biology

    NARCIS (Netherlands)

    Vatcheva, Ivayla; Bernard, Olivier; de Jong, Hidde; Gouze, Jean-Luc; Mars, Nicolaas; Nebel, B.

    2001-01-01

    Modeling an experimental system often results in a number of alternative models that are justified equally well by the experimental data. In order to discriminate between these models, additional experiments are needed. We present a method for the discrimination of models in the form of …

  15. General Quantitative Evaluation Model of Operational Experiments Based on AHP

    Institute of Scientific and Technical Information of China (English)

    顾亚; 吴从晖; 陆志斌; 李路遥; 盖森

    2017-01-01

    Operational experiments are an important method for researching warfare, and their evaluation is an important step in ensuring that experimental conclusions are scientific and effective. Building on related theories such as AHP, operational-experiment evaluation and data analysis, and addressing the problems of heterogeneous evaluation-index results and scattered evaluation conclusions, this paper proposes a general quantitative evaluation model for operational experiments. The model uses AHP to assign weights across the evaluation index system, applies normalization functions to render heterogeneous index results dimensionless and normalized, and derives an overall evaluation result through a quantitative formula. A concluding case study demonstrates the effectiveness and practicality of the model.
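
    The AHP step of such a model follows the standard recipe: derive index weights as the principal eigenvector of a pairwise-comparison matrix, then combine the normalized index scores with a weighted sum. The comparison matrix and scores below are hypothetical; the paper's index system is not reproduced here.

    ```python
    # Minimal AHP sketch: weights from the principal eigenvector of a
    # pairwise-comparison matrix, then a weighted sum of normalized scores.
    import numpy as np

    A = np.array([[1.0, 3.0, 5.0],     # hypothetical pairwise judgments (Saaty scale)
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    w = w / w.sum()                    # principal eigenvector -> index weights

    scores = np.array([0.8, 0.6, 0.9]) # index results already normalized to [0, 1]
    print("weights:", w.round(3), "overall:", float(w @ scores))
    ```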

  16. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, environmental magnetic fields, and measurement noise make MMM data dispersive and difficult to evaluate quantitatively. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens were tested along longitudinal and horizontal lines with a TSC-2M-8 instrument during tensile fatigue experiments. X-ray testing was carried out synchronously to verify the MMM results. It was found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs was investigated, showing that K_vs obeys a Gaussian distribution. K_vs is therefore a suitable MMM parameter for establishing a reliability model of welded joints. Finally, an original quantitative MMM reliability model is presented, based on the improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases as the residual life ratio T decreases, and the maximal error between the predicted reliability degree R1 and the verified reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
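
    The classical stress-strength interference result that this kind of model builds on is easy to state: when stress and strength are independent Gaussians, reliability is a closed-form normal probability. The numbers below are hypothetical, not the paper's fitted K_vs statistics.

    ```python
    # Gaussian stress-strength interference: reliability = P(strength > stress)
    # = Phi((mu_strength - mu_stress) / sqrt(sd_strength^2 + sd_stress^2)).
    # All numbers are hypothetical illustrations.
    from math import erf, sqrt

    def reliability(mu_strength, sd_strength, mu_stress, sd_stress):
        z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal CDF at z

    # Hypothetical MMM-derived state: the K_vs "stress" grows as residual life shrinks.
    print(reliability(mu_strength=100.0, sd_strength=10.0,
                      mu_stress=60.0, sd_stress=15.0))   # ~0.987
    ```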

  17. Compositional and Quantitative Model Checking

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2010-01-01

    This paper gives a survey of a compositional model checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based…

  18. [Teaching quantitative methods in public health: the EHESP experience].

    Science.gov (United States)

    Grimaud, Olivier; Astagneau, Pascal; Desvarieux, Moïse; Chambaud, Laurent

    2014-01-01

    Many scientific disciplines, including epidemiology and biostatistics, are used in the field of public health. These quantitative sciences are fundamental tools necessary for the practice of future professionals. What then should be the minimum quantitative sciences training, common to all future public health professionals? By comparing the teaching models developed in Columbia University and those in the National School of Public Health in France, the authors recognize the need to adapt teaching to the specific competencies required for each profession. They insist that all public health professionals, whatever their future career, should be familiar with quantitative methods in order to ensure that decision-making is based on a reflective and critical use of quantitative analysis.

  19. Undergraduate surgical nursing preparation and guided operating room experience: A quantitative analysis.

    Science.gov (United States)

    Foran, Paula

    2016-01-01

    The aim of this research was to determine whether guided operating theatre experience in undergraduate nursing curricula enhanced surgical knowledge and understanding of the nursing care provided outside this specialist area, in the pre- and post-operative surgical wards. Using quantitative analyses, undergraduate nurses were tested on areas of pre- and post-operative surgical nursing in their final semester of study. As much learning occurs in nurses' first year of practice, participants were re-tested after their Graduate Nurse Program/Preceptorship year. Participants' results were compared with the model of operating room education they had received, to determine whether there was a relationship between the type of theatre education experienced (if any) and knowledge of surgical ward nursing. Findings revealed that undergraduate nurses receiving guided operating theatre experience had a 76% pass rate, compared with 56% for those with non-guided or no experience, a statistically significant difference; after the graduate year, these nurses achieved a 100% pass rate, compared with 53% for those with non-guided or no experience. This research indicates that undergraduate nurses learn more about surgical ward nursing via guided operating room experience than via surgical ward experience alone. Copyright © 2015 Elsevier Ltd. All rights reserved.
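
    For readers wanting to reproduce this kind of comparison, a two-proportion z-test is the standard tool for pass-rate differences such as 76% vs. 56%. The group sizes below are hypothetical, since the study's sample sizes and exact p-values are not given in this abstract.

    ```python
    # Two-proportion z-test sketch; group sizes are assumed, not the study's.
    from math import erf, sqrt

    def two_prop_z(p1, n1, p2, n2):
        p = (p1 * n1 + p2 * n2) / (n1 + n2)            # pooled proportion
        z = (p1 - p2) / sqrt(p * (1 - p) * (1/n1 + 1/n2))
        return z, 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p

    z, p = two_prop_z(0.76, 50, 0.56, 50)              # 76% vs 56%, assumed n=50 each
    print(f"z = {z:.2f}, p = {p:.3f}")                 # z = 2.11, p = 0.035
    ```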

  20. Scientific aspects of urolithiasis: quantitative stone analysis and crystallization experiments

    International Nuclear Information System (INIS)

    Wandt, M.A.E.

    1986-03-01

    The theory, development and results of three quantitative analytical procedures are described, and crystallization experiments in a rotary evaporator are presented. Of the different methods of quantitative X-ray powder diffraction analysis, the 'internal standard method' and a microanalytical technique were identified as the two most useful procedures for the quantitative analysis of urinary calculi. 'Reference intensity ratios' for 6 major stone phases were determined and were used in the analysis of 20 calculi by the 'internal standard method'. Inductively coupled plasma atomic emission spectroscopic (ICP-AES) methods were also investigated, developed and used in this study. Various procedures for the digestion of calculi were tested, and a mixture of HNO3 and HClO4 was eventually found to be the most successful. The major elements Ca, Mg, and P in 41 calculi were determined. For the determination of trace elements, a new microwave-assisted digestion procedure was developed and used for the digestion of 100 calculi. Fluoride concentrations in two stone collections were determined using a fluoride-ion sensitive electrode and the HNO3/HClO4 digestion procedure used for the ICP study. A series of crystallization experiments involving a standard reference artificial urine was carried out in a rotary evaporator. The effect of pH and urine composition was studied by varying the former and by including uric acid, urea, creatinine, MgO, methylene blue, chondroitin sulphate A, and fluoride in the reference solution. Crystals formed in these experiments were subjected to qualitative and semi-quantitative X-ray powder diffraction analyses. Scanning electron microscopy of several deposits was also carried out. Deposits similar to those observed in calculi were obtained with the fast evaporator. The results presented suggest that this system provides a simple, yet very useful means for studying the crystallization characteristics of urine solutions.
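
    In its normalized form, the reference-intensity-ratio (RIR) approach behind the 'internal standard method' reduces to scaling each phase's diffraction intensity by its RIR and renormalizing. A minimal sketch, with hypothetical intensities and RIR values, assuming all phases are crystalline and identified:

    ```python
    # Normalized RIR quantification for X-ray powder diffraction:
    # w_i is proportional to I_i / RIR_i, renormalized over all phases.

    def rir_weight_fractions(intensities, rirs):
        scaled = {ph: intensities[ph] / rirs[ph] for ph in intensities}
        total = sum(scaled.values())
        return {ph: v / total for ph, v in scaled.items()}

    I   = {"whewellite": 1200.0, "apatite": 300.0, "struvite": 150.0}  # peak areas
    RIR = {"whewellite": 1.1,    "apatite": 0.9,   "struvite": 0.8}    # vs corundum
    print(rir_weight_fractions(I, RIR))   # ~{0.68, 0.21, 0.12}
    ```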

  1. Generalized PSF modeling for optimized quantitation in PET imaging.

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution-degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that an exactly matched PSF …
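
    The effect of an under- or over-estimated PSF can be demonstrated with a 1D toy version of the reconstruction: Richardson-Lucy iterations (the multiplicative EM update that OS-EM accelerates) run with a deliberately mismatched Gaussian kernel. Sizes, kernels and counts below are illustrative, not the paper's scanner model.

    ```python
    # 1D Richardson-Lucy reconstruction with matched vs. mismatched PSF widths.
    import numpy as np

    def gauss(sigma, n=21):
        x = np.arange(n) - n // 2
        k = np.exp(-x**2 / (2 * sigma**2))
        return k / k.sum()

    def rl(blurred, kernel, iters=50):
        est = np.full_like(blurred, blurred.mean())
        for _ in range(iters):
            conv = np.convolve(est, kernel, mode="same")
            est *= np.convolve(blurred / np.maximum(conv, 1e-12),
                               kernel[::-1], mode="same")
        return est

    truth = np.zeros(200); truth[90:110] = 1.0               # a 20-bin "tumour"
    measured = np.convolve(truth, gauss(3.0), mode="same")   # true PSF: sigma = 3

    for sigma in (2.0, 3.0, 4.0):                            # under-, matched, over-estimate
        recon = rl(measured, gauss(sigma))
        print(f"sigma={sigma}: max={recon.max():.3f}, "
              f"roi_mean={recon[90:110].mean():.3f}")        # over-estimate overshoots edges
    ```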

  2. Quantitative Analogue Experimental Sequence Stratigraphy : Modelling landscape evolution and sequence stratigraphy of river-shelf sedimentary systems by quantitative analogue experiments

    NARCIS (Netherlands)

    Heijst, Maximiliaan Wilhelmus Ignatius Maria van

    2000-01-01

    This thesis reports a series of flume tank experiments that were conducted to model the stratigraphic evolution of river-delta systems. Chapter 1 introduces the river-delta sedimentary system that is the subject of modelling. The chapter also includes an overview of previous research and the summary and …

  3. A quantitative speciation model for the adsorption of organic pollutants on activated carbon.

    Science.gov (United States)

    Grivé, M; García, D; Domènech, C; Richard, L; Rojo, I; Martínez, X; Rovira, M

    2013-01-01

    Granular activated carbon (GAC) is commonly used as an adsorbent in water treatment plants given its high capacity for retaining organic pollutants in the aqueous phase. The current knowledge of GAC behaviour is essentially empirical, and no quantitative description of the chemical relationships between GAC surface groups and pollutants has been proposed. In this paper, we describe a quantitative model for the adsorption of atrazine onto the GAC surface. The model is based on results of potentiometric titrations and three types of adsorption experiments which were carried out in order to determine the nature and distribution of the functional groups on the GAC surface, and to evaluate the adsorption characteristics of GAC towards atrazine. Potentiometric titrations have indicated the existence of at least two different families of chemical groups on the GAC surface, including phenolic- and benzoic-type surface groups. Adsorption experiments with atrazine have been satisfactorily modelled with the geochemical code PhreeqC, assuming that atrazine is sorbed onto the GAC surface in equilibrium (log Ks = 5.1 ± 0.5). Independent thermodynamic calculations suggest a possible adsorption of atrazine on a benzoic derivative. The present work opens a new approach for improving the adsorption capabilities of GAC towards organic pollutants by modifying its chemical properties.
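
    The fitted equilibrium can be checked by hand: for a single surface reaction S + atrazine ⇌ S-atrazine with the reported log Ks ≈ 5.1, the bound concentration solves a quadratic mass-action balance. The site and atrazine totals below are assumed for illustration; the paper's calculations used PhreeqC.

    ```python
    # Single-site surface-complexation balance: Ks = [SA] / ([S][A]).
    # Site and atrazine totals are assumed; Ks is the abstract's fitted value.
    from math import sqrt

    Ks = 10**5.1            # L/mol
    S_tot = 1e-4            # mol/L of reactive surface sites (assumed)
    A_tot = 1e-5            # mol/L total atrazine (assumed)

    # [SA] solves Ks*(S_tot-SA)*(A_tot-SA) = SA, a quadratic in SA:
    a, b, c = Ks, -(Ks * (S_tot + A_tot) + 1), Ks * S_tot * A_tot
    SA = (-b - sqrt(b * b - 4 * a * c)) / (2 * a)   # physical (smaller) root
    print(f"fraction of atrazine adsorbed: {SA / A_tot:.2%}")   # ~92%
    ```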

  4. An Inside View: The Utility of Quantitative Observation in Understanding College Educational Experiences

    Science.gov (United States)

    Campbell, Corbin M.

    2017-01-01

    This article describes quantitative observation as a method for understanding college educational experiences. Quantitative observation has been used widely in several fields and in K-12 education, but has had limited application to research in higher education and student affairs to date. The article describes the central tenets of quantitative…

  5. Mode I Failure of Armor Ceramics: Experiments and Modeling

    Science.gov (United States)

    Meredith, Christopher; Leavy, Brian

    2017-06-01

    The pre-notched edge-on impact (EOI) experiment is a technique for benchmarking the damage and fracture of ceramics subjected to projectile impact. A cylindrical projectile impacts the edge of a thin rectangular plate with a pre-notch on the opposite edge. Tension is generated at the notch tip, resulting in the initiation and propagation of a mode I crack back toward the impact edge. The crack can be quantitatively measured using an optical method called Digital Gradient Sensing, which quantifies two orthogonal surface slopes around the crack tip simultaneously by measuring small deflections of light rays from a specularly reflective surface. The deflections in ceramics are small, so the high-speed camera needs a very high pixel count. This work reports results from pre-crack EOI experiments on SiC and B4C plates. The experimental data are quantitatively compared to impact simulations using an advanced continuum damage model: the Kayenta ceramic model in Alegra will be used to compare fracture propagation speeds; bifurcations and inhomogeneous initiation of failure will also be compared. This will provide insight into the driving mechanisms required for macroscale failure modeling of ceramics.

  6. Interpretation of experiments and modeling of internal strains in Beryllium using a polycrystal model

    International Nuclear Information System (INIS)

    Tome, C.; Bourke, M.A.M.; Daymond, M.R.

    2000-01-01

    The elastic and plastic anisotropy of Be have been examined during a uniaxial compression test, by in-situ monitoring in a pulsed neutron beam. Comparisons between the measured (hkil) strains and the predictions from an elasto-plastic self-consistent (EPSC) model are made. Agreement is qualitatively correct for most planes in the elasto-plastic regime. Possible mechanisms responsible for the quantitative discrepancies between model and experiment are discussed.

  7. Quantitative and Functional Requirements for Bioluminescent Cancer Models.

    Science.gov (United States)

    Feys, Lynn; Descamps, Benedicte; Vanhove, Christian; Vermeulen, Stefan; Vandesompele, J O; Vanderheyden, Katrien; Messens, Kathy; Bracke, Marc; De Wever, Olivier

    2016-01-01

    Bioluminescent cancer models are widely used but detailed quantification of the luciferase signal and functional comparison with a non-transfected control cell line are generally lacking. In the present study, we provide quantitative and functional tests for luciferase-transfected cells. We quantified the luciferase expression in BLM and HCT8/E11 transfected cancer cells, and examined the effect of long-term luciferin exposure. The present study also investigated functional differences between parental and transfected cancer cells. Our results showed that quantification of different single-cell-derived populations are superior with droplet digital polymerase chain reaction. Quantification of luciferase protein level and luciferase bioluminescent activity is only useful when there is a significant difference in copy number. Continuous exposure of cell cultures to luciferin leads to inhibitory effects on mitochondrial activity, cell growth and bioluminescence. These inhibitory effects correlate with luciferase copy number. Cell culture and mouse xenograft assays showed no significant functional differences between luciferase-transfected and parental cells. Luciferase-transfected cells should be validated by quantitative and functional assays before starting large-scale experiments. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  8. Quantitative modelling of the biomechanics of the avian syrinx

    DEFF Research Database (Denmark)

    Elemans, Coen P. H.; Larsen, Ole Næsbye; Hoffmann, Marc R.

    2003-01-01

    We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e. the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts...

  9. Quantitative predictions from competition theory with incomplete information on model parameters tested against experiments across diverse taxa

    OpenAIRE

    Fort, Hugo

    2017-01-01

    We derive an analytical approximation for making quantitative predictions for ecological communities as a function of the mean intensity of the inter-specific competition and the species richness. This method, with only a fraction of the model parameters (carrying capacities and competition coefficients), is able to predict accurately empirical measurements covering a wide variety of taxa (algae, plants, protozoa).
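
    The flavour of such a prediction can be sketched with the underlying Lotka-Volterra competition equilibrium: for richness S, carrying capacities K_i and a mean inter-specific competition intensity α, equilibrium abundances solve a linear system. All values below are illustrative, not the paper's fitted data.

    ```python
    # Mean-field Lotka-Volterra competition equilibrium: solve A N* = K,
    # where A has 1 on the diagonal and the mean intensity alpha elsewhere.
    import numpy as np

    S, alpha = 5, 0.3
    K = np.array([1.0, 0.9, 1.1, 0.8, 1.0])   # carrying capacities (assumed)

    A = np.full((S, S), alpha)
    np.fill_diagonal(A, 1.0)
    N_star = np.linalg.solve(A, K)            # equilibrium abundances
    print(N_star.round(3))
    ```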

  10. Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization

    Directory of Open Access Journals (Sweden)

    Xuefeng Yan

    2013-01-01

    The simulation and optimization of an actual physical system are usually constructed based on stochastic models, which inherently have both qualitative and quantitative characteristics. Most modeling specifications and frameworks find it difficult to describe the qualitative model directly. In order to deal with expert knowledge, uncertain reasoning, and other qualitative information, a combined qualitative and quantitative modeling specification is proposed based on a hierarchical model structure framework. The new modeling approach is based on a hierarchical model structure which includes the meta-meta model, the meta-model and the high-level model. A description logic system is defined for the formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The result shows that the proposed method can describe the complex system more comprehensively, and the survival probability of the target is higher when qualitative models are introduced into quantitative simulation.

  11. Modeling with Young Students--Quantitative and Qualitative.

    Science.gov (United States)

    Bliss, Joan; Ogborn, Jon; Boohan, Richard; Brosnan, Tim; Mellar, Harvey; Sakonidis, Babis

    1999-01-01

    A project created tasks and tools to investigate quality and nature of 11- to 14-year-old pupils' reasoning with quantitative and qualitative computer-based modeling tools. Tasks and tools were used in two innovative modes of learning: expressive, where pupils created their own models, and exploratory, where pupils investigated an expert's model.…

  12. A website evaluation model by integration of previous evaluation models using a quantitative approach

    Directory of Open Access Journals (Sweden)

    Ali Moeini

    2015-01-01

    Given the growth of e-commerce, websites play an essential role in business success, and many authors have offered website evaluation models since 1995. However, the multiplicity and diversity of these models makes it difficult to integrate them into a single comprehensive model. In this paper a quantitative method has been used to integrate previous models into a comprehensive model that is compatible with them. In this approach the researcher's judgment plays no role in the integration of models, and the new model takes its validity from 93 previous models and a systematic quantitative approach.

  13. Attenuation of surface waves in porous media: Shock wave experiments and modelling

    NARCIS (Netherlands)

    Chao, G.E; Smeulders, D.M.J.; Dongen, van M.E.H.

    2005-01-01

    In this project we conduct experimental and numerical investigations of the attenuation mechanisms of surface waves in poroelastic materials. Viscous dissipation effects are modelled in the framework of Biot's theory. The experiments are performed using a shock tube technique. Quantitative agreement …

  14. Quantitative stress measurement of elastic deformation using mechanoluminescent sensor: An intensity ratio model

    Science.gov (United States)

    Cai, Tao; Guo, Songtao; Li, Yongzeng; Peng, Di; Zhao, Xiaofeng; Liu, Yingzheng

    2018-04-01

    The mechanoluminescent (ML) sensor is a newly developed non-invasive technique for stress/strain measurement. However, its application has been mostly restricted to qualitative measurement due to the lack of a well-defined relationship between ML intensity and stress. To achieve accurate stress measurement, an intensity ratio model was proposed in this study to establish a quantitative relationship between the stress condition and its ML intensity in elastic deformation. To verify the proposed model, experiments were carried out on a ML measurement system using resin samples mixed with the sensor material SrAl2O4:Eu2+, Dy3+. The ML intensity ratio was found to be dependent on the applied stress and strain rate, and the relationship acquired from the experimental results agreed well with the proposed model. The current study provides a physical explanation for the relationship between ML intensity and the stress condition. The proposed model is applicable to various SrAl2O4:Eu2+, Dy3+-based ML measurements in elastic deformation, and could provide a useful reference for quantitative stress measurement using ML sensors in general.

  15. Polymorphic ethyl alcohol as a model system for the quantitative study of glassy behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, H E; Schober, H; Gonzalez, M A [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France); Bermejo, F J; Fayos, R; Dawidowski, J [Consejo Superior de Investigaciones Cientificas, Madrid (Spain); Ramos, M A; Vieira, S [Universidad Autonoma de Madrid (Spain)

    1997-04-01

    The nearly universal transport and dynamical properties of amorphous materials, or glasses, are investigated. Reasonably successful phenomenological models have been developed to account for these properties as well as the behaviour near the glass transition, but quantitative microscopic models have had limited success. One hindrance to these investigations has been the lack of a material which exhibits glass-like properties in more than one phase at a given temperature. This report presents results of neutron-scattering experiments for one such material, ordinary ethyl alcohol, which promises to be a model system for future investigations of glassy behaviour. (author). 8 refs.

  16. Quantitative Modeling of Acid Wormholing in Carbonates- What Are the Gaps to Bridge

    KAUST Repository

    Qiu, Xiangdong; Zhao, Weishu; Chang, Frank; Dyer, Steve

    2013-01-01

    Conventional wormhole propagation models largely ignore the impact of reaction products. When implemented in a job design, this can result in significant errors in the treatment fluid schedule, rate, and volume. A more accurate method of simulating carbonate matrix acid treatments would accommodate the effect of reaction products on reaction kinetics. It is the purpose of this work to properly account for these effects. This is an important step toward achieving quantitative predictability of wormhole penetration during an acidizing treatment. This paper describes the laboratory procedures taken to obtain the reaction-product-impacted kinetics at downhole conditions using a rotating disk apparatus, and how this new set of kinetics data was implemented in a 3D wormholing model to predict wormhole morphology and penetration velocity. The model explains some of the differences in wormhole morphology observed in limestone core flow experiments where injection pressure impacts the mass transfer of hydrogen ions to the rock surface. The model uses a CT-scan-rendered porosity field to capture the finer details of the rock fabric and then simulates the fluid flow through the rock coupled with reactions. Such a validated model can serve as a base to scale up to the near-wellbore reservoir and 3D radial flow geometry, allowing a more quantitative acid treatment design.

  17. Spectral Quantitative Analysis Model with Combining Wavelength Selection and Topology Structure Optimization

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2016-01-01

    Spectroscopy is an efficient and widely used quantitative analysis method. In this paper, a spectral quantitative analysis model combining wavelength selection and topology structure optimization is proposed. In the proposed method, a backpropagation neural network is adopted for building the component prediction model, and the simultaneous optimization of the wavelength selection and the topology structure of the neural network is realized by nonlinear adaptive evolutionary programming (NAEP). The hybrid binary chromosome of NAEP has three parts: the first represents the topology structure of the neural network, the second represents the selection of wavelengths in the spectral data, and the third represents the mutation parameters of NAEP. Two real flue gas datasets are used in the experiments. To demonstrate the effectiveness of the method, component prediction models are built with partial least squares on the full spectrum, partial least squares combined with a genetic algorithm, the uninformative variable elimination method, a backpropagation neural network on the full spectrum, a backpropagation neural network combined with a genetic algorithm, and the proposed method. Experimental results verify that the proposed method predicts more accurately and robustly, making it a practical spectral analysis tool.

  18. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.
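
    The engine of statistical model checking is embarrassingly parallel Monte Carlo: sample independent runs of the stochastic model, check the property on each, and estimate the satisfaction probability. The toy "system" below is a stand-in for a Maude specification, and the code illustrates only the parallel-sampling idea, not PVeStA's API.

    ```python
    # Parallel Monte Carlo estimate of P(property) over independent runs.
    # The "system" is a toy per-step failure model, purely illustrative.
    import random
    from concurrent.futures import ProcessPoolExecutor

    def run_satisfies(seed, p_fail=0.02, horizon=100):
        rng = random.Random(seed)
        # Property: "no failure occurs within the horizon" on this run.
        return all(rng.random() > p_fail for _ in range(horizon))

    def estimate(n=20000, workers=4):
        with ProcessPoolExecutor(workers) as ex:
            hits = sum(ex.map(run_satisfies, range(n), chunksize=1000))
        return hits / n

    if __name__ == "__main__":
        print(f"P(property) ~ {estimate():.4f}")   # true value: 0.98**100 ~ 0.1326
    ```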

  19. First experiences with model based iterative reconstructions influence on quantitative plaque volume and intensity measurements in coronary computed tomography angiography

    DEFF Research Database (Denmark)

    Precht, Helle; Kitslaar, Pieter H.; Broersen, Alexander

    2017-01-01

    Purpose: Investigate the influence of adaptive statistical iterative reconstruction (ASIR) and the model-based IR (Veo) reconstruction algorithm in coronary computed tomography angiography (CCTA) images on quantitative measurements in coronary arteries for plaque volumes and intensities. Methods…

  20. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

    Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  1. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...

  2. Quantitative Modeling of Membrane Transport and Anisogamy by Small Groups Within a Large-Enrollment Organismal Biology Course

    Directory of Open Access Journals (Sweden)

    Eric S. Haag

    2016-12-01

    Quantitative modeling is not a standard part of undergraduate biology education, yet is routine in the physical sciences. Because of the obvious biophysical aspects, classes in anatomy and physiology offer an opportunity to introduce modeling approaches to the introductory curriculum. Here, we describe two in-class exercises for small groups working within a large-enrollment introductory course in organismal biology. Both build and derive biological insights from quantitative models, implemented using spreadsheets. One exercise models the evolution of anisogamy (i.e., small sperm and large eggs) from an initial state of isogamy. Groups of four students work on Excel spreadsheets (one to four laptops per group). The other exercise uses an online simulator to generate data related to membrane transport of a solute, and a cloud-based spreadsheet to analyze them. We provide tips for implementing these exercises gleaned from two years of experience.
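
    The membrane-transport exercise boils down to a one-line flux law iterated in a spreadsheet; the same model in Python makes the structure explicit: J = P·A·(C_out − C_in) drives two compartments toward equilibrium. All parameter values below are illustrative, not the course's worksheet values.

    ```python
    # Euler iteration of passive membrane transport between two compartments.
    # Parameter values are assumed for illustration.

    P, A = 1e-6, 5e-9          # permeability (m/s) and membrane area (m^2)
    V_in, V_out = 1e-15, 1e-12 # volumes (m^3): a cell in a much larger bath
    C_in, C_out = 0.0, 100.0   # solute concentrations (mol/m^3)
    dt, steps = 0.01, 1000     # time step (s) and number of steps

    for _ in range(steps):
        J = P * A * (C_out - C_in)     # mol/s into the cell
        C_in += J * dt / V_in
        C_out -= J * dt / V_out
    print(f"after {steps * dt:.0f} s: C_in = {C_in:.2f}, C_out = {C_out:.2f} mol/m^3")
    ```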

  3. A Review of Quantitative Situation Assessment Models for Nuclear Power Plant Operators

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Seong, Poong Hyun

    2009-01-01

    Situation assessment is the process of developing situation awareness and situation awareness is defined as 'the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future.' Situation awareness is an important element influencing human actions because human decision making is based on the result of situation assessment or situation awareness. There are many models for situation awareness and those models can be categorized into qualitative or quantitative. As the effects of some input factors on situation awareness can be investigated through the quantitative models, the quantitative models are more useful for the design of operator interfaces, automation strategies, training program, and so on, than the qualitative models. This study presents the review of two quantitative models of situation assessment (SA) for nuclear power plant operators

  4. Guidelines for reporting quantitative mass spectrometry based experiments in proteomics.

    Science.gov (United States)

    Martínez-Bartolomé, Salvador; Deutsch, Eric W; Binz, Pierre-Alain; Jones, Andrew R; Eisenacher, Martin; Mayer, Gerhard; Campos, Alex; Canals, Francesc; Bech-Serra, Joan-Josep; Carrascal, Montserrat; Gay, Marina; Paradela, Alberto; Navajas, Rosana; Marcilla, Miguel; Hernáez, María Luisa; Gutiérrez-Blázquez, María Dolores; Velarde, Luis Felipe Clemente; Aloria, Kerman; Beaskoetxea, Jabier; Medina-Aunon, J Alberto; Albar, Juan P

    2013-12-16

    Mass spectrometry is already a well-established protein identification tool, and recent methodological and technological developments have also made possible the extraction of quantitative data on protein abundance in large-scale studies. Several strategies for absolute and relative quantitative proteomics and for the statistical assessment of quantifications are possible, each having specific measurements and therefore different data analysis workflows. The guidelines for Mass Spectrometry Quantification allow the description of a wide range of quantitative approaches, including labeled and label-free techniques and also targeted approaches such as Selected Reaction Monitoring (SRM). The HUPO Proteomics Standards Initiative (HUPO-PSI) has invested considerable effort to improve the standardization of proteomics data handling, representation and sharing through the development of data standards, reporting guidelines, controlled vocabularies and tooling. In this manuscript, we describe a key output from the HUPO-PSI, namely the MIAPE Quant guidelines, which have been developed in parallel with the corresponding data exchange format mzQuantML [1]. The MIAPE Quant guidelines describe the HUPO-PSI proposal concerning the minimum information to be reported when a quantitative data set, derived from mass spectrometry (MS), is submitted to a database or as supplementary information to a journal. The guidelines have been developed with input from a broad spectrum of stakeholders in the proteomics field to represent a true consensus view of the most important data types and metadata required for a quantitative experiment to be analyzed critically or a data analysis pipeline to be reproduced. It is anticipated that they will influence or be directly adopted as part of journal guidelines for publication and by public proteomics databases, and thus may have an impact on proteomics laboratories across the world. This article is part of a Special Issue entitled: Standardization and …

  5. Quantitative system validation in model driven design

    DEFF Research Database (Denmark)

    Hermanns, Hilger; Larsen, Kim Guldstrand; Raskin, Jean-Francois

    2010-01-01

    The European STREP project Quasimodo1 develops theory, techniques and tool components for handling quantitative constraints in model-driven development of real-time embedded systems, covering in particular real-time, hybrid and stochastic aspects. This tutorial highlights the advances made, focus...

  6. Quantitative interface models for simulating microstructure evolution

    International Nuclear Information System (INIS)

    Zhu, J.Z.; Wang, T.; Zhou, S.H.; Liu, Z.K.; Chen, L.Q.

    2004-01-01

    To quantitatively simulate microstructural evolution in real systems, we investigated three different interface models: a sharp-interface model implemented by the software DICTRA and two diffuse-interface models which use either physical order parameters or artificial order parameters. A particular example is considered, the diffusion-controlled growth of a γ′ precipitate in a supersaturated γ matrix in Ni-Al binary alloys. All three models use the thermodynamic and kinetic parameters from the same databases. The temporal evolution profiles of composition from the different models are shown to agree with each other. The focus is on examining the advantages and disadvantages of each model as applied to microstructure evolution in alloys.

  7. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for the prediction of quantitative snowfall in the Pir Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters (1992–2012). There are six ...
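
    A two-day-ahead HMM forecast rests on the forward algorithm: condition the hidden-state belief on each day's observations, then push it through the transition matrix the required number of days. The two-state, two-observation matrices below are illustrative; the paper fits its states and emissions to nine meteorological variables.

    ```python
    # Forward-algorithm belief update and two-day-ahead projection for a toy
    # weather HMM. All matrices are illustrative, not the paper's parameters.
    import numpy as np

    T = np.array([[0.7, 0.3],      # hidden states: 0 = "dry spell", 1 = "snow spell"
                  [0.4, 0.6]])     # T[i, j] = P(state j tomorrow | state i today)
    E = np.array([[0.9, 0.1],      # E[i, k] = P(observation class k | state i)
                  [0.2, 0.8]])     # observation classes: 0 = no snow, 1 = snowfall

    belief = np.array([0.5, 0.5])
    for obs in [1, 1, 0]:                 # three days of observed classes
        belief = belief * E[:, obs]       # condition on the day's observation
        belief /= belief.sum()
        belief = belief @ T               # project one day ahead

    two_day = belief @ T                  # two days beyond the last observation
    print("P(snow spell) in two days:", round(float(two_day[1]), 3))
    ```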

  8. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    Energy Technology Data Exchange (ETDEWEB)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be for inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  9. Laser-induced Breakdown spectroscopy quantitative analysis method via adaptive analytical line selection and relevance vector machine regression model

    International Nuclear Information System (INIS)

    Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong

    2015-01-01

    A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines to be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given as confidence intervals of a probabilistic distribution, which helps to evaluate the uncertainty contained in the measured spectra. Chromium concentration analysis experiments were carried out on 23 certified standard high-alloy steel samples. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness than methods based on partial least squares regression, an artificial neural network and a standard support vector machine. - Highlights: • Both training and testing samples are considered for analytical line selection. • The analytical lines are auto-selected based on the built-in characteristics of spectral lines. • The new method can achieve better prediction accuracy and modeling robustness. • Model predictions are given with confidence intervals of a probabilistic distribution
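
    A close, readily available analogue of RVM regression for this kind of calibration is scikit-learn's ARDRegression (sparse Bayesian linear regression); the paper's RVM and its adaptive line selection are more elaborate. The synthetic data below stand in for line intensities and chromium concentrations.

    ```python
    # Sparse Bayesian regression (ARD, an RVM-style linear analogue) on
    # synthetic LIBS-like data: only two of twelve candidate lines matter.
    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(0)
    n_samples, n_lines = 23, 12
    X = rng.normal(1.0, 0.2, (n_samples, n_lines))          # candidate line intensities
    w_true = np.zeros(n_lines); w_true[[2, 5]] = [3.0, 1.5] # informative lines
    y = X @ w_true + rng.normal(0, 0.05, n_samples)         # Cr concentration (a.u.)

    model = ARDRegression().fit(X, y)
    y_hat, y_std = model.predict(X, return_std=True)        # mean + uncertainty
    print("relevant lines:", np.flatnonzero(np.abs(model.coef_) > 0.1))
    print("mean abs rel. error:", float(np.mean(np.abs((y_hat - y) / y))))
    ```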

  10. Quantitative Models and Analysis for Reactive Systems

    DEFF Research Database (Denmark)

    Thrane, Claus

    …phones and websites. Acknowledging that, now more than ever, systems come in contact with the physical world, we need to revise the way we construct models and verification algorithms, to take into account the behavior of systems in the presence of approximate, or quantitative, information, provided...

  11. Quantitative model analysis with diverse biological data: applications in developmental pattern formation.

    Science.gov (United States)

    Pargett, Michael; Umulis, David M

    2013-07-15

    Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data; however, much of the data available, or even acquirable, are not quantitative. Data that are not strictly quantitative cannot be used by classical quantitative model-based analyses, which measure the difference between an observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended both to inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based studies and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.
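
    As one concrete instance of the scaling idea, when data are known only up to an arbitrary units factor, a scale can be fitted analytically before the residual is computed. This least-squares version is a generic sketch, not the specific method of the paper.

    ```python
    # Sketch: optimal scaling of a model prediction against data measured
    # in arbitrary units; the closed-form s minimizes ||s*model - data||^2.
    import numpy as np

    def scaled_residual(model_pred: np.ndarray, data: np.ndarray) -> float:
        s = np.dot(model_pred, data) / np.dot(model_pred, model_pred)
        return float(np.sum((s * model_pred - data) ** 2))

    model_pred = np.array([0.1, 0.4, 0.9, 1.0])  # model output, absolute units
    data = np.array([11.0, 39.0, 88.0, 102.0])   # measurement, arbitrary units
    print(scaled_residual(model_pred, data))     # small: shapes agree up to scale
    ```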

  12. Interpretation of protein quantitation using the Bradford assay: comparison with two calculation models.

    Science.gov (United States)

    Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung

    2013-03-01

    The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards. Copyright © 2012 Elsevier Inc. All rights reserved.
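
    The two calculation models differ only in which residues are counted as dye-binding; a toy comparison on a protein sequence makes the difference concrete (the sequence is hypothetical, and real calculations weight residues by binding stoichiometry).

    ```python
    # Sketch: residue counting behind the two Bradford calculation models.
    # M1 counts Arg+Lys as Coomassie G-250 binders; M2 also counts His.
    def dye_binding_residues(seq: str, include_his: bool) -> int:
        targets = set("RK") | ({"H"} if include_his else set())
        return sum(1 for aa in seq.upper() if aa in targets)

    seq = "MKHHRAKLSRGHKR"  # hypothetical sequence
    m1 = dye_binding_residues(seq, include_his=False)
    m2 = dye_binding_residues(seq, include_his=True)
    print(f"M1 (R+K): {m1} sites, M2 (R+K+H): {m2} sites")
    ```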

  13. Evaluation of recent quantitative magnetospheric magnetic field models

    International Nuclear Information System (INIS)

    Walker, R.J.

    1976-01-01

    Recent quantitative magnetospheric field models contain many features not found in earlier models. Magnetopause models which include the effects of the dipole tilt were presented. More realistic models of the tail field include tail currents which close on the magnetopause, cross-tail currents of finite thickness, and cross-tail current models which model the position of the neutral sheet as a function of tilt. Finally, models have attempted to calculate the field of currents distributed in the inner magnetosphere. As the purpose of a magnetospheric model is to provide a mathematical description of the field that reasonably reproduces the observed magnetospheric field, several recent models were compared with the observed ΔB (B_observed − B_main field) contours. Models containing only contributions from magnetopause and tail current systems are able to reproduce the observed quiet time field only in an extremely qualitative way. The best quantitative agreement between models and observations occurs when currents distributed in the inner magnetosphere are added to the magnetopause and tail current systems. However, the distributed current models are valid only for zero tilt. Even the models which reproduce the average observed field reasonably well may not give physically reasonable field gradients. Three of the models evaluated contain regions in the near tail in which the field gradient reverses direction. One region in which all the models fall short is that around the polar cusp, though most can be used to calculate the position of the last closed field line reasonably well.

  14. Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    DEFF Research Database (Denmark)

    ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting pa...

  15. Statistical aspects of quantitative real-time PCR experiment design.

    Science.gov (United States)

    Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales

    2010-04-01

    Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect; the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue-types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, that gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.
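
    The prospective-power logic described here can be sketched with a normal approximation; the two-level nesting and all numbers are assumptions for illustration, and the actual powerNest tool handles more levels plus cost constraints.

    ```python
    # Sketch: prospective power for a two-group qPCR design from pilot
    # estimates of biological and technical variance components.
    from scipy.stats import norm

    def prospective_power(effect, var_bio, var_tech, n_bio, n_tech, alpha=0.05):
        # Variance of a group mean: n_bio subjects x n_tech replicates each.
        var_mean = var_bio / n_bio + var_tech / (n_bio * n_tech)
        se_diff = (2 * var_mean) ** 0.5  # difference of two independent groups
        z_crit = norm.ppf(1 - alpha / 2)
        return float(norm.cdf(abs(effect) / se_diff - z_crit))

    # Assumed pilot values: 1-Cq effect, biological var 0.8, technical var 0.2.
    print(prospective_power(effect=1.0, var_bio=0.8, var_tech=0.2,
                            n_bio=6, n_tech=2))
    ```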

  16. Quantitative assessment of biological impact using transcriptomic data and mechanistic network models

    International Nuclear Information System (INIS)

    Thomson, Ty M.; Sewer, Alain; Martin, Florian; Belcastro, Vincenzo; Frushour, Brian P.; Gebel, Stephan; Park, Jennifer; Schlage, Walter K.; Talikka, Marja; Vasilyev, Dmitry M.; Westra, Jurjen W.; Hoeng, Julia; Peitsch, Manuel C.

    2013-01-01

    Exposure to biologically active substances such as therapeutic drugs or environmental toxicants can impact biological systems at various levels, affecting individual molecules, signaling pathways, and overall cellular processes. The ability to derive mechanistic insights from the resulting system responses requires the integration of experimental measures with a priori knowledge about the system and the interacting molecules therein. We developed a novel systems biology-based methodology that leverages mechanistic network models and transcriptomic data to quantitatively assess the biological impact of exposures to active substances. Hierarchically organized network models were first constructed to provide a coherent framework for investigating the impact of exposures at the molecular, pathway and process levels. We then validated our methodology using novel and previously published experiments. For both in vitro systems with simple exposure and in vivo systems with complex exposures, our methodology was able to recapitulate known biological responses matching expected or measured phenotypes. In addition, the quantitative results were in agreement with experimental endpoint data for many of the mechanistic effects that were assessed, providing further objective confirmation of the approach. We conclude that our methodology evaluates the biological impact of exposures in an objective, systematic, and quantifiable manner, enabling the computation of a systems-wide and pan-mechanistic biological impact measure for a given active substance or mixture. Our results suggest that various fields of human disease research, from drug development to consumer product testing and environmental impact analysis, could benefit from using this methodology. - Highlights: • The impact of biologically active substances is quantified at multiple levels. • The systems-level impact integrates the perturbations of individual networks. • The networks capture the relationships between

  17. Model for Quantitative Evaluation of Enzyme Replacement Treatment

    Directory of Open Access Journals (Sweden)

    Radeva B.

    2009-12-01

    Full Text Available Gaucher disease is the most frequent lysosomal disorder. Its enzyme replacement treatment (ERT) is an advance of modern biotechnology that has been used successfully in recent years. Evaluating the optimal dose for each patient is important for both health and economic reasons, as enzyme replacement is a very expensive treatment that must be administered continuously and without interruption. ERT with Cerezyme (Genzyme) was formally introduced in Bulgaria in 2001, but it has since been interrupted at times for 1-2 months, and the doses received by patients were not optimal. The aim of our work is to find a mathematical model for the quantitative evaluation of ERT in Gaucher disease. The model was implemented in the software package "Statistika 6", using as input the individual data of 5-year-old children with Gaucher disease treated with Cerezyme. The model output enables quantitative evaluation of the individual trends in the development of each child's disease and their correlations. On the basis of these results, we can recommend suitable changes in the ERT.

  18. A Review on Quantitative Models for Sustainable Food Logistics Management

    Directory of Open Access Journals (Sweden)

    M. Soysal

    2012-12-01

    Full Text Available Over the last two decades, food logistics systems have seen the transition from a focus on traditional supply chain management to food supply chain management, and successively, to sustainable food supply chain management. The main aim of this study is to identify the key logistical aims in these three phases and to analyse the currently available quantitative models in order to point out modelling challenges in sustainable food logistics management (SFLM). A literature review of quantitative studies is conducted, and qualitative studies are also consulted to understand the key logistical aims more clearly and to identify relevant system-scope issues. The results show that research on SFLM has been progressively developing according to the needs of the food industry. However, the intrinsic characteristics of food products and processes have not yet been handled properly in the identified studies. Apart from a few recent studies, the majority of the works reviewed have not addressed sustainability problems. The study therefore concludes that new and advanced quantitative models are needed that take specific SFLM requirements from practice into consideration in order to support business decisions and capture food supply chain dynamics.

  19. A quantitative phase field model for hydride precipitation in zirconium alloys: Part I. Development of quantitative free energy functional

    International Nuclear Information System (INIS)

    Shi, San-Qiang; Xiao, Zhihua

    2015-01-01

    A temperature-dependent, quantitative free energy functional was developed for modeling hydride precipitation in zirconium alloys within a phase field scheme. The model takes into account the crystallographic variants of hydrides, the interfacial energy between hydride and matrix, the interfacial energy between hydrides, elastoplastic hydride precipitation and the interaction with externally applied stress. The model is fully quantitative in real time and real length scale, and simulation results were compared with the limited experimental data available in the literature, with reasonable agreement. The work calls for experimental and/or theoretical investigation of some of the key material properties that are not yet available in the literature.
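
    For orientation, phase-field hydride models of this kind are typically built around a Ginzburg-Landau free energy functional; the generic template below is offered as a reference point, not as the paper's specific temperature-dependent functional.

    ```latex
    F = \int_V \Big[ f_{\mathrm{chem}}(c,\eta_1,\dots,\eta_n,T)
        + \frac{\kappa_c}{2}\lvert\nabla c\rvert^2
        + \sum_{i=1}^{n}\frac{\kappa_\eta}{2}\lvert\nabla\eta_i\rvert^2
        + f_{\mathrm{el}}(c,\eta_i,\sigma^{\mathrm{appl}}) \Big]\,dV
    ```

    Here c is the hydrogen concentration, the η_i are order parameters for the hydride crystallographic variants, the κ gradient terms set the interfacial energies, and f_el carries the elastoplastic interaction with the applied stress σ^appl.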

  20. Modeling Logistic Performance in Quantitative Microbial Risk Assessment

    NARCIS (Netherlands)

    Rijgersberg, H.; Tromp, S.O.; Jacxsens, L.; Uyttendaele, M.

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage

  1. Quantitative models for sustainable supply chain management

    DEFF Research Database (Denmark)

    Brandenburg, M.; Govindan, Kannan; Sarkis, J.

    2014-01-01

    Sustainability, the consideration of environmental factors and social aspects, in supply chain management (SCM) has become a highly relevant topic for researchers and practitioners. The application of operations research methods and related models, i.e. formal modeling, for closed-loop SCM and reverse logistics has been effectively reviewed in previously published research. This situation is in contrast to the understanding and review of mathematical models that focus on environmental or social factors in forward supply chains (SC), which has seen less investigation. To evaluate developments and directions of this research area, this paper provides a content analysis of 134 carefully identified papers on quantitative, formal models that address sustainability aspects in the forward SC. It was found that a preponderance of the publications and models appeared in a limited set of six journals...

  2. Performance Theories for Sentence Coding: Some Quantitative Models

    Science.gov (United States)

    Aaronson, Doris; And Others

    1977-01-01

    This study deals with the patterns of word-by-word reading times over a sentence when the subject must code the linguistic information sufficiently for immediate verbatim recall. A class of quantitative models is considered that would account for reading times at phrase breaks. (Author/RM)

  3. A Quantitative bgl Operon Model for E. coli Requires BglF Conformational Change for Sugar Transport

    Science.gov (United States)

    Chopra, Paras; Bender, Andreas

    The bgl operon is responsible for the metabolism of β-glucoside sugars such as salicin or arbutin in E. coli. Its regulatory system involves both positive and negative feedback mechanisms, and it can be assumed to be more complex than that of the more closely studied lac and trp operons. We have developed a quantitative model for the regulation of the bgl operon which is subject to in silico experiments investigating its behavior under different hypothetical conditions. Upon administration of 5 mM salicin as an inducer, our model shows 80-fold induction, which compares well with the 60-fold induction measured experimentally. Under practical conditions 5-10 mM inducer is employed, which is in line with the minimum inducer concentration of 1 mM required by our model. The necessity of BglF conformational change for sugar transport has been hypothesized previously, and in line with those hypotheses our model shows only minor induction if conformational change is not allowed. Overall, this first quantitative model for the bgl operon gives reasonable predictions that are close to experimental results (where measured). It will be further refined as values of the parameters are determined experimentally. The model was developed in the Systems Biology Markup Language (SBML) and is available from the authors and from the BioModels repository [www.ebi.ac.uk/biomodels].
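
    The dose-response behaviour reported (strong induction at 5 mM, little below about 1 mM) can be illustrated with a deliberately minimal positive-feedback ODE; the Hill form and every parameter value below are invented for illustration and are not the published SBML model.

    ```python
    # Sketch: minimal positive-feedback operon caricature, integrated with SciPy.
    # dE/dt = basal expression + inducer-gated feedback term - degradation.
    from scipy.integrate import solve_ivp

    def rhs(t, y, inducer, k0=0.01, kmax=1.0, K_I=1.0, K=1.0, n=2, gamma=0.1):
        e = y[0]
        drive = (inducer / (inducer + K_I)) * e**n / (K**n + e**n)
        return [k0 + kmax * drive - gamma * e]

    def steady_state(inducer_mM):
        sol = solve_ivp(rhs, (0.0, 2000.0), [0.1], args=(inducer_mM,), rtol=1e-8)
        return sol.y[0, -1]

    fold = steady_state(5.0) / steady_state(0.0)
    print(f"fold induction at 5 mM inducer: {fold:.0f}")  # ~80 with these values
    ```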

  4. Methodologies for quantitative systems pharmacology (QSP) models : Design and Estimation

    NARCIS (Netherlands)

    Ribba, B.; Grimm, Hp; Agoram, B.; Davies, M.R.; Gadkar, K.; Niederer, S.; van Riel, N.; Timmis, J.; van der Graaf, Ph.

    2017-01-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early

  6. Being a quantitative interviewer: qualitatively exploring interviewers' experiences in a longitudinal cohort study

    Directory of Open Access Journals (Sweden)

    Derrett Sarah

    2011-12-01

    Full Text Available Abstract. Background: Many studies of health outcomes rely on data collected by interviewers administering highly-structured (quantitative) questionnaires to participants. Little appears to be known about the experiences of such interviewers. This paper explores interviewer experiences of working on a longitudinal study in New Zealand (the Prospective Outcomes of injury Study - POIS). Interviewers administer highly-structured questionnaires to participants, usually by telephone, and enter data into a secure computer program. The research team had expectations of interviewers including: consistent questionnaire administration, timeliness, proportions of potential participants recruited and an empathetic communication style. This paper presents the results of a focus group conducted to qualitatively explore with the team of interviewers their experiences, problems encountered, strategies, support systems used and training. Methods: A focus group with interviewers involved in the POIS interviews was held; it was audio-recorded and transcribed. The analytical method was thematic, with output intended to be descriptive and interpretive. Results: Nine interviewers participated in the focus group (average time in the interviewer role was 31 months). Key themes were: (1) the positive aspects of the quantitative interviewer role (relationships and resilience, insights gained, and participants' feedback); (2) difficulties interviewers encountered and solutions identified (stories lost or incomplete, forgotten appointments, telling the stories, acknowledging distress, stories reflected, and debriefing and support); and (3) meeting POIS researcher expectations (performance standards, time-keeping, dealing exclusively with the participant and maintaining privacy). Conclusions: Interviewers demonstrated great skill in the way they negotiated research team expectations whilst managing the relationships with participants. Interviewers found it helpful to have a research protocol in

  7. Quantitative laser diagnostic and modeling study of C2 and CH chemistry in combustion.

    Science.gov (United States)

    Köhler, Markus; Brockhinke, Andreas; Braun-Unkhoff, Marina; Kohse-Höinghaus, Katharina

    2010-04-15

    Quantitative concentration measurements of CH and C2 have been performed in laminar, premixed, flat flames of propene and cyclopentene with varying stoichiometry. A combination of cavity ring-down (CRD) spectroscopy and laser-induced fluorescence (LIF) was used to enable sensitive detection of these species with high spatial resolution. Previously, CH and C2 chemistry had been studied, predominantly in methane flames, to understand potential correlations of their formation and consumption. For flames of larger hydrocarbon fuels, however, quantitative information on these small intermediates is scarce, especially under fuel-rich conditions. Also, the combustion chemistry of C2 in particular has not been studied in detail, and although it has often been observed, its role in potential build-up reactions of higher hydrocarbon species is not well understood. The quantitative measurements performed here are the first to detect both species with good spatial resolution and high sensitivity in the same experiment in flames of C3 and C5 fuels. The experimental profiles were compared with results of combustion modeling to reveal details of the formation and consumption of these important combustion molecules, and the investigation was devoted to assisting the further understanding of the role of C2 and of its potential chemical interdependences with CH and other small radicals.

  8. Plutonium chemistry: a synthesis of experimental data and a quantitative model for plutonium oxide solubility

    International Nuclear Information System (INIS)

    Haschke, J.M.; Oversby, V.M.

    2002-01-01

    The chemistry of plutonium is important for assessing the potential behavior of radioactive waste under conditions of geologic disposal. This paper reviews experimental data on the dissolution of plutonium oxide solids, describes a hybrid kinetic-equilibrium model for predicting steady-state Pu concentrations, and compares laboratory results with predicted Pu concentrations and oxidation-state distributions. The model is based on oxidation of PuO2 by water to produce PuO2+x, an oxide that can release Pu(V) to solution. Kinetic relationships between the formation of PuO2+x, dissolution of Pu(V), disproportionation of Pu(V) to Pu(IV) and Pu(VI), and reduction of Pu(VI) are given and used in model calculations. Data from tests of pyrochemical salt wastes in brines are discussed and interpreted using the conceptual model. Essential data for quantitative modeling at conditions relevant to nuclear waste repositories are identified, and laboratory experiments to determine rate constants for use in the model are discussed.

  9. Quantitative chemogenomics: machine-learning models of protein-ligand interaction.

    Science.gov (United States)

    Andersson, Claes R; Gustafsson, Mats G; Strömbergsson, Helena

    2011-01-01

    Chemogenomics is an emerging interdisciplinary field that lies at the interface of biology, chemistry, and informatics. Most currently used drugs are small molecules that interact with proteins. Understanding protein-ligand interaction is therefore central to drug discovery and design. In the subfield of chemogenomics known as proteochemometrics, protein-ligand-interaction models are induced from data matrices that consist of both protein and ligand information along with some experimentally measured variable. The two general aims of this quantitative multi-structure-property-relationship modeling (QMSPR) approach are to exploit sparse/incomplete information sources and to obtain more general models covering larger parts of the protein-ligand space than traditional approaches, which focus mainly on specific targets or ligands. The data matrices, usually obtained from multiple sparse/incomplete sources, typically contain series of proteins and ligands together with quantitative information about their interactions. A useful model should ideally be easy to interpret and generalize well to new, unseen protein-ligand combinations. Resolving this requires sophisticated machine-learning methods for model induction, combined with adequate validation. This review is intended to provide a guide to methods and data sources suitable for this kind of protein-ligand-interaction modeling. An overview of the modeling process is presented, including data collection, protein and ligand descriptor computation, data preprocessing, machine-learning-model induction and validation. Concerns and issues specific to each step in this kind of data-driven modeling are discussed. © 2011 Bentham Science Publishers
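
    The core data layout, protein descriptors concatenated with ligand descriptors and regressed against a measured interaction variable, is easy to sketch; the descriptors, sizes, and model choice below are placeholders rather than recommendations from the review.

    ```python
    # Sketch: proteochemometric regression on concatenated protein+ligand
    # descriptors, with cross-validation as a basic generalisation check.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    n_pairs = 200
    protein_desc = rng.normal(size=(n_pairs, 30))  # e.g. sequence features
    ligand_desc = rng.normal(size=(n_pairs, 50))   # e.g. fingerprints
    X = np.hstack([protein_desc, ligand_desc])
    y = X[:, 0] - 2 * X[:, 35] + rng.normal(scale=0.1, size=n_pairs)  # toy affinity

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).round(2))
    ```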

  10. The Quantitative Preparation of Future Geoscience Graduate Students

    Science.gov (United States)

    Manduca, C. A.; Hancock, G. S.

    2006-12-01

    Modern geoscience is a highly quantitative science. In February, a small group of faculty and graduate students from across the country met to discuss the quantitative preparation of geoscience majors for graduate school. The group included ten faculty supervising graduate students in quantitative areas spanning the earth, atmosphere, and ocean sciences; five current graduate students in these areas; and five faculty teaching undergraduate students in the spectrum of institutions preparing students for graduate work. Discussion focused on four key areas: Are incoming graduate students adequately prepared for the quantitative aspects of graduate geoscience programs? What are the essential quantitative skills required for success in graduate school? What are perceived as the important courses for preparing students for the quantitative aspects of graduate school? What programs/resources would be valuable in helping faculty/departments improve the quantitative preparation of students? The participants concluded that strengthening the quantitative preparation of undergraduate geoscience majors would increase their opportunities in graduate school. While specifics differed amongst disciplines, special importance was placed on developing the ability to use quantitative skills to solve geoscience problems. This requires the ability to pose problems so they can be addressed quantitatively, understand the relationship between quantitative concepts and physical representations, visualize mathematics, test the reasonableness of quantitative results, creatively move forward from existing models/techniques/approaches, and move between quantitative and verbal descriptions. A list of important quantitative competencies desirable in incoming graduate students includes mechanical skills in basic mathematics, functions, multivariate analysis, statistics and calculus, as well as skills in logical analysis and the ability to learn independently in quantitative ways.

  11. Determination of Calcium in Cereal with Flame Atomic Absorption Spectroscopy: An Experiment for a Quantitative Methods of Analysis Course

    Science.gov (United States)

    Bazzi, Ali; Kreuz, Bette; Fischer, Jeffrey

    2004-01-01

    An experiment for the determination of calcium in cereal using a two-increment standard addition method in conjunction with flame atomic absorption spectroscopy (FAAS) is demonstrated. The experiment is intended to introduce students to the principles of atomic absorption spectroscopy, giving them hands-on experience using quantitative methods of…
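
    Standard addition extrapolates the absorbance-versus-added-analyte line back to the x-axis; a worked sketch of that calculation follows, with all readings invented for illustration.

    ```python
    # Sketch: quantitation by the method of standard additions.
    # Absorbance is regressed on added Ca concentration; the unknown's
    # concentration equals the magnitude of the x-intercept.
    import numpy as np

    added_ppm = np.array([0.0, 1.0, 2.0])         # blank plus two increments
    absorbance = np.array([0.120, 0.225, 0.331])  # hypothetical FAAS readings

    slope, intercept = np.polyfit(added_ppm, absorbance, 1)
    c_unknown = intercept / slope  # ppm Ca in the aspirated solution
    print(f"Ca in solution: {c_unknown:.2f} ppm")  # then apply dilution factor
    ```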

  12. Quantitative Model for Economic Analyses of Information Security Investment in an Enterprise Information System

    Directory of Open Access Journals (Sweden)

    Bojanc Rok

    2012-11-01

    Full Text Available The paper presents a mathematical model for optimal security-technology investment evaluation and decision-making processes based on the quantitative analysis of security risks and digital asset assessments in an enterprise. The model makes use of the quantitative analysis of different security measures that counteract individual risks by identifying the information system processes in an enterprise and the potential threats. The model comprises the target security levels for all identified business processes and the probability of a security accident together with the possible loss the enterprise may suffer. The selection of security technology is based on the efficiency of the selected security measures. Economic metrics are applied for the efficiency assessment and comparative analysis of different protection technologies. Unlike existing models for evaluating security investment, the proposed model allows direct comparison and quantitative assessment of different security measures. The model supports analyses and computations that provide quantitative assessments of different investment options, which translate into recommendations facilitating the selection of the best solution and the associated decision-making. The model was tested using empirical examples with data from a real business environment.
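
    A common pair of economic metrics for such comparisons is annual loss expectancy and the return on security investment derived from it; the generic calculation below is an illustration, not the paper's specific model.

    ```python
    # Sketch: ranking security measures by Annual Loss Expectancy (ALE)
    # and Return on Security Investment (ROSI).
    def ale(single_loss: float, annual_rate: float) -> float:
        return single_loss * annual_rate

    def rosi(ale_before: float, mitigation: float, annual_cost: float) -> float:
        risk_reduction = ale_before * mitigation
        return (risk_reduction - annual_cost) / annual_cost

    ale0 = ale(single_loss=200_000, annual_rate=0.3)  # 60k expected loss/year
    for name, mit, cost in [("firewall upgrade", 0.5, 15_000),
                            ("staff training", 0.3, 5_000)]:
        print(f"{name}: ROSI = {rosi(ale0, mit, cost):.2f}")
    ```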

  13. On the usability of quantitative modelling in operations strategy decision making

    NARCIS (Netherlands)

    Akkermans, H.A.; Bertrand, J.W.M.

    1997-01-01

    Quantitative modelling seems admirably suited to help managers in their strategic decision making on operations management issues, but in practice models are rarely used for this purpose. Investigates the reasons why, based on a detailed cross-case analysis of six cases of modelling-supported

  14. Quantitative Modeling of Acid Wormholing in Carbonates- What Are the Gaps to Bridge

    KAUST Repository

    Qiu, Xiangdong

    2013-01-01

    Carbonate matrix acidization extends a well's effective drainage radius by dissolving rock and forming conductive channels (wormholes) from the wellbore. Wormholing is a dynamic process that involves a balance between the acid injection rate and the reaction rate. Generally, the injection rate is well defined where injection profiles can be controlled, whereas the reaction rate can be difficult to obtain due to its complex dependency on interstitial velocity, fluid composition, rock surface properties, etc. Conventional wormhole propagation models largely ignore the impact of reaction products. When implemented in a job design, this can result in significant errors in the treatment fluid schedule, rate, and volume. A more accurate method to simulate carbonate matrix acid treatments would accommodate the effect of reaction products on reaction kinetics. It is the purpose of this work to properly account for these effects; this is an important step towards achieving quantitative predictability of wormhole penetration during an acidizing treatment. This paper describes the laboratory procedures taken to obtain the reaction-product-impacted kinetics at downhole conditions using a rotating disk apparatus, and how this new set of kinetics data was implemented in a 3D wormholing model to predict wormhole morphology and penetration velocity. The model explains some of the differences in wormhole morphology observed in limestone core flow experiments where injection pressure impacts the mass transfer of hydrogen ions to the rock surface. The model uses a CT-scan-rendered porosity field to capture the finer details of the rock fabric and then simulates the fluid flow through the rock coupled with reactions. Such a validated model can serve as a base to scale up to the near-wellbore reservoir and 3D radial flow geometry, allowing a more quantitative acid treatment design.

  15. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  16. Quantitative analysis of a wind energy conversion model

    International Nuclear Information System (INIS)

    Zucker, Florian; Gräbner, Anna; Strunz, Andreas; Meyn, Jan-Peter

    2015-01-01

    A rotor of 12 cm diameter is attached to a precision electric motor, used as a generator, to make a model wind turbine. Output power of the generator is measured in a wind tunnel with up to 15 m s⁻¹ air velocity. The maximum power is 3.4 W, the power conversion factor from kinetic to electric energy is c_p = 0.15. The v³ power law is confirmed. The model illustrates several technically important features of industrial wind turbines quantitatively. (paper)
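
    The quoted figures are internally consistent and can be checked directly from the kinetic power flux through the rotor disc (air density assumed to be 1.2 kg m⁻³):

    ```python
    # Check: power conversion factor of the model wind turbine.
    # P_kinetic = 0.5 * rho * A * v^3, c_p = P_electric / P_kinetic.
    import math

    rho = 1.2         # kg/m^3, assumed air density
    r = 0.06          # m, rotor radius (12 cm diameter)
    v = 15.0          # m/s, wind tunnel air velocity
    p_electric = 3.4  # W, reported maximum output

    p_kinetic = 0.5 * rho * math.pi * r**2 * v**3
    print(f"P_kin = {p_kinetic:.1f} W, c_p = {p_electric / p_kinetic:.2f}")  # ~0.15
    ```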

  17. Quantitative Systems Pharmacology: A Case for Disease Models.

    Science.gov (United States)

    Musante, C J; Ramanujan, S; Schmidt, B J; Ghobrial, O G; Lu, J; Heatherington, A C

    2017-01-01

    Quantitative systems pharmacology (QSP) has emerged as an innovative approach in model-informed drug discovery and development, supporting program decisions from exploratory research through late-stage clinical trials. In this commentary, we discuss the unique value of disease-scale "platform" QSP models that are amenable to reuse and repurposing to support diverse clinical decisions in ways distinct from other pharmacometrics strategies. © 2016 The Authors Clinical Pharmacology & Therapeutics published by Wiley Periodicals, Inc. on behalf of The American Society for Clinical Pharmacology and Therapeutics.

  18. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
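
    A minimal version of the sub-model idea, two PLS regressions trained on restricted composition ranges whose predictions are blended by a full-range model, might look like the following; the split point, weights, and data are placeholders, not the ChemCam calibration.

    ```python
    # Sketch: "sub-model" calibration - PLS models trained on low- and
    # high-concentration subsets, blended according to a full-range model.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    spectra = rng.uniform(size=(100, 400))  # 100 standards x 400 channels
    conc = spectra[:, 10] * 50 + rng.normal(scale=1.0, size=100)  # toy wt.%

    low, high = conc < 25, conc >= 25
    pls_full = PLSRegression(n_components=5).fit(spectra, conc)
    pls_low = PLSRegression(n_components=5).fit(spectra[low], conc[low])
    pls_high = PLSRegression(n_components=5).fit(spectra[high], conc[high])

    def blended_predict(x):
        x = x.reshape(1, -1)
        first = pls_full.predict(x).item()    # full model locates the range
        w = min(max(first / 50.0, 0.0), 1.0)  # weight toward the high sub-model
        return (1 - w) * pls_low.predict(x).item() + w * pls_high.predict(x).item()

    print(blended_predict(spectra[0]))
    ```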

  19. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    Science.gov (United States)

    Wu, Zujian; Pang, Wei; Coghill, George M

    2015-01-01

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by applying a qualitative model learning approach with an evolution strategy. The kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework is able to learn the relationships between biochemical reactants qualitatively and to make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.

  20. Multicomponent quantitative spectroscopic analysis without reference substances based on ICA modelling.

    Science.gov (United States)

    Monakhova, Yulia B; Mushtakova, Svetlana P

    2017-05-01

    A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries generally between 95% and 105% were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on analysis of vitamins and caffeine in energy drinks and aromatic hydrocarbons in motor fuel with 10% error. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence/inaccessibility of reference materials.
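
    The resolution step can be sketched with scikit-learn's FastICA on synthetic two-component spectra; the Gaussian band shapes and mixing design are invented for the demonstration.

    ```python
    # Sketch: resolving overlapped spectra of a calibration set with ICA;
    # the mixing matrix recovered by ICA is proportional to concentration.
    import numpy as np
    from sklearn.decomposition import FastICA

    wl = np.linspace(200, 400, 300)  # wavelength grid, nm
    band = lambda mu, w: np.exp(-((wl - mu) / w) ** 2)
    pure = np.vstack([band(260, 15), band(280, 20)])  # two overlapping analytes

    C = np.array([[1.0, 0.2], [0.5, 0.8], [0.1, 1.0],
                  [0.7, 0.4], [0.3, 0.6]])  # calibration concentrations (a.u.)
    mixtures = C @ pure + 1e-3 * np.random.default_rng(4).normal(size=(5, 300))

    ica = FastICA(n_components=2, random_state=0)
    ica.fit(mixtures.T)  # wavelengths as samples, mixtures as channels
    print("estimated mixing matrix (columns ~ analytes, up to scale/order):")
    print(ica.mixing_.round(2))
    ```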

  1. Can't Count or Won't Count? Embedding Quantitative Methods in Substantive Sociology Curricula: A Quasi-Experiment.

    Science.gov (United States)

    Williams, Malcolm; Sloan, Luke; Cheung, Sin Yi; Sutton, Carole; Stevens, Sebastian; Runham, Libby

    2016-06-01

    This paper reports on a quasi-experiment in which quantitative methods (QM) are embedded within a substantive sociology module. Through measuring student attitudes before and after the intervention alongside control group comparisons, we illustrate the impact that embedding has on the student experience. Our findings are complex and even contradictory. Whilst the experimental group were less likely to be distrustful of statistics and appreciate how QM inform social research, they were also less confident about their statistical abilities, suggesting that through 'doing' quantitative sociology the experimental group are exposed to the intricacies of method and their optimism about their own abilities is challenged. We conclude that embedding QM in a single substantive module is not a 'magic bullet' and that a wider programme of content and assessment diversification across the curriculum is preferential.

  2. The life review experience: Qualitative and quantitative characteristics.

    Science.gov (United States)

    Katz, Judith; Saadon-Grosman, Noam; Arzy, Shahar

    2017-02-01

    The life-review experience (LRE) is a most intriguing mental phenomenon that has fascinated humans from time immemorial. In an LRE, one vividly sees a succession of one's own life-events. While reports of LRE are abundant in the medical, psychological and popular literature, not much is known about the LRE's cognitive and psychological basis. Moreover, while the LRE is known as part of the phenomenology of near-death experience, its manifestation in the general population and in other circumstances has yet to be investigated. In a first step, we studied the phenomenology of the LRE by means of in-depth qualitative interviews of 7 people who underwent a full LRE. In a second step, we extracted the main characteristics of the LRE to develop a questionnaire and an LRE-score that best reflects LRE phenomenology. This questionnaire was then administered to 264 participants of diverse ages and backgrounds, and the resulting score was further subjected to statistical analyses. Qualitative analysis showed the LRE to manifest several subtypes of characteristics in terms of order, continuity, the covered period, extension to the future, valence, emotions, and perspective taking. Quantitative results in the normal population showed a normal distribution of the LRE-score over participants. Re-experiencing one's own life-events, the so-called LRE, is thus a phenomenon with well-defined characteristics, and its subcomponents may also be evident in healthy people. This suggests that a representation of life-events as a continuum exists in the cognitive system, and may be further expressed in extreme conditions of psychological and physiological stress. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. 77 FR 41985 - Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A...

    Science.gov (United States)

    2012-07-17

    Notice of a public technical workshop, "Use of Influenza Disease Models to Quantitatively Evaluate the Benefits and Risks of Vaccines," concerning the use of influenza disease models to generate quantitative estimates of the benefits and risks of influenza vaccination.

  4. Expert judgement models in quantitative risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Rosqvist, T. [VTT Automation, Helsinki (Finland); Tuominen, R. [VTT Automation, Tampere (Finland)

    1999-12-01

    Expert judgement is a valuable source of information in risk management. In particular, risk-based decision making relies significantly on quantitative risk assessment, which requires numerical data describing the initiator event frequencies and conditional probabilities in the risk model. Such data are seldom found in databases and must be elicited from qualified experts. In this report, we discuss some modelling approaches to expert judgement in risk modelling. A classical and a Bayesian expert model are presented and applied to real-case expert judgement data. The cornerstone of the models is the log-normal distribution, which is argued to be a satisfactory choice for modelling degree-of-belief-type probability distributions with respect to the unknown parameters in a risk model. Expert judgements are qualified according to bias, dispersion, and dependency, which are treated differently in the classical and Bayesian approaches. The differences are pointed out and related to the application task. Differences in the results obtained from the different approaches, as applied to real-case expert judgement data, are discussed. The role of a degree-of-belief-type probability in risk decision making is also discussed.

  6. Quantitative model of New Zealand's energy supply industry

    Energy Technology Data Exchange (ETDEWEB)

    Smith, B. R. [Victoria Univ., Wellington, (New Zealand); Lucas, P. D. [Ministry of Energy Resources (New Zealand)

    1977-10-15

    A mathematical model is presented to assist in the analysis of available energy policy options. The model is based on an engineering-oriented description of New Zealand's energy supply and distribution system. The system is cast as a linear program, in which energy demand is satisfied at least cost. The capacities and operating modes of process plant (such as power stations, oil refinery units, and LP-gas extraction plants) are determined by the model, as well as the optimal mix of fuels supplied to final consumers. Policy analysis with the model enables a wide-ranging assessment of the alternatives and uncertainties within a consistent quantitative framework. The model is intended to be used as a tool to investigate the relative effects of various policy options, rather than to present a definitive plan for satisfying the nation's energy requirements.
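
    The least-cost structure described maps directly onto a linear program; a toy two-source, one-demand instance with SciPy follows, all coefficients being invented.

    ```python
    # Sketch: least-cost energy supply as a linear program -
    # minimize cost @ x subject to delivered energy >= demand and capacities.
    import numpy as np
    from scipy.optimize import linprog

    cost = np.array([12.0, 20.0])       # $/unit for two supply sources
    efficiency = np.array([0.8, 0.95])  # delivered fraction per source
    demand = 100.0                      # units of end-use energy required

    res = linprog(
        c=cost,
        A_ub=[[-efficiency[0], -efficiency[1]]],  # -eff @ x <= -demand
        b_ub=[-demand],
        bounds=[(0, 90), (0, 200)],               # per-source capacity limits
    )
    print(res.x, res.fun)  # optimal mix and total cost
    ```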

  7. A general mixture model for mapping quantitative trait loci by using molecular markers

    NARCIS (Netherlands)

    Jansen, R.C.

    1992-01-01

    In a segregating population a quantitative trait may be considered to follow a mixture of (normal) distributions, the mixing proportions being based on Mendelian segregation rules. A general and flexible mixture model is proposed for mapping quantitative trait loci (QTLs) by using molecular markers.
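
    For a backcross, Mendelian segregation fixes the mixing proportions at 1/2 : 1/2, so a QTL test at a putative locus reduces to comparing a two-component normal mixture against a single normal; a bare-bones numerical sketch (with simulated trait data) follows.

    ```python
    # Sketch: mixture-model likelihood ratio for a putative QTL in a
    # backcross; mixing proportions are fixed at 1/2 : 1/2 by Mendel.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    trait = np.concatenate([rng.normal(10, 1, 100), rng.normal(12, 1, 100)])

    def nll_mixture(params):
        mu1, mu2, sigma = params[0], params[1], abs(params[2])
        dens = 0.5 * norm.pdf(trait, mu1, sigma) + 0.5 * norm.pdf(trait, mu2, sigma)
        return -np.log(dens).sum()

    def nll_single(params):
        return -norm.logpdf(trait, params[0], abs(params[1])).sum()

    mix = minimize(nll_mixture, [9.0, 13.0, 1.0], method="Nelder-Mead")
    single = minimize(nll_single, [11.0, 2.0], method="Nelder-Mead")
    lod = (single.fun - mix.fun) / np.log(10)  # likelihood ratio, log10 scale
    print(f"LOD-like score: {lod:.1f}")
    ```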

  8. ITER transient consequences for material damage: modelling versus experiments

    International Nuclear Information System (INIS)

    Bazylev, B; Janeschitz, G; Landman, I; Pestchanyi, S; Loarte, A; Federici, G; Merola, M; Linke, J; Zhitlukhin, A; Podkovyrov, V; Klimov, N; Safronov, V

    2007-01-01

    Carbon-fibre composite (CFC) and tungsten macrobrush armours are foreseen as plasma-facing components (PFCs) for the ITER divertor. In ITER the main mechanisms of metallic armour damage remain surface melting and melt motion erosion. In the case of CFC armour, because of the rather different heat conductivities of the CFC fibres, noticeable erosion of the PAN bundles may occur at rather small heat loads. Experiments carried out in the plasma gun facility QSPA-T for ITER-like edge localized mode (ELM) heat loads also demonstrated significant erosion of the frontal and lateral brush edges. Numerical simulations of CFC and tungsten (W) macrobrush target damage, accounting for the heat loads at the face and lateral brush edges, were carried out for QSPA-T conditions using the three-dimensional (3D) code PHEMOBRID. The modelling results for CFC damage are in good qualitative and quantitative agreement with the experiments. An estimation of the droplet splashing caused by the Kelvin-Helmholtz (KH) instability was also performed.

  9. Quantitative analysis chemistry

    International Nuclear Information System (INIS)

    Ko, Wansuk; Lee, Choongyoung; Jun, Kwangsik; Hwang, Taeksung

    1995-02-01

    This book covers quantitative analytical chemistry. It is divided into ten chapters, which deal with the basic concepts of matter and the meaning of analytical chemistry and SI units, chemical equilibrium, basic preparation for quantitative analysis, an introduction to volumetric analysis, acid-base titration with an outline and example experiments, chelate titration, oxidation-reduction titration (introduction, titration curves, and diazotization titration), precipitation titration, electrometric titration, and quantitative analysis.

  11. Quantitative modeling of gene networks of biological systems using fuzzy Petri nets and fuzzy sets

    Directory of Open Access Journals (Sweden)

    Raed I. Hamed

    2018-01-01

    Full Text Available Quantitative modeling of biological systems has become an essential computational approach in the design of novel and the analysis of existing biological systems. However, kinetic data that describe the system's dynamics need to be known in order to obtain relevant results with conventional modeling techniques, and such data are often hard or even impossible to obtain. Here, we present a quantitative fuzzy logic modeling approach that can cope with unknown kinetic data and thus produce relevant results even though the dynamic data are incomplete or only vaguely defined. Moreover, the approach can be used in combination with existing state-of-the-art quantitative modeling methods in only certain parts of the system, i.e., where the data are missing. The case study of the approach proposed in this paper is performed on a nine-gene model. We propose a type of FPN model based on fuzzy sets to handle the quantitative modeling of biological systems. Tests of our model show that it is practical and quite powerful for data imitation and for reasoning in fuzzy expert systems.

  12. Combining quantitative trait loci analysis with physiological models to predict genotype-specific transpiration rates.

    Science.gov (United States)

    Reuning, Gretchen A; Bauerle, William L; Mullen, Jack L; McKay, John K

    2015-04-01

    Transpiration is controlled by evaporative demand and stomatal conductance (gs), and there can be substantial genetic variation in gs. A key parameter in empirical models of transpiration is minimum stomatal conductance (g0), a trait that can be measured and has a large effect on gs and transpiration. In Arabidopsis thaliana, g0 exhibits both environmental and genetic variation, and quantitative trait loci (QTL) have been mapped. We used this information to create a genetically parameterized empirical model to predict the transpiration of genotypes. For the parental lines, this worked well. However, in a recombinant inbred population, the predictions proved less accurate. When based only upon their genotype at a single g0 QTL, genotypes were less distinct than our model predicted. Follow-up experiments indicated that both genotype-by-environment interaction and polygenic inheritance complicate the incorporation of genetic effects into physiological models. The use of ecophysiological or 'crop' models for predicting the transpiration of novel genetic lines will benefit from incorporating further knowledge of the genetic control and degree of independence of the core traits/parameters underlying gs variation. © 2014 John Wiley & Sons Ltd.
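
    Empirical transpiration models of the kind referred to here often use a Ball-Berry-type linear form in which g0 is the intercept; the sketch below is generic, with invented coefficients, and is not the fitted model from the paper.

    ```python
    # Sketch: Ball-Berry-type stomatal conductance with a genotype-specific
    # minimum conductance g0 as the intercept.
    def stomatal_conductance(g0, g1, a_net, rh, ca):
        """g_s = g0 + g1 * A * RH / Ca (mol m-2 s-1)."""
        return g0 + g1 * a_net * rh / ca

    # Assumed QTL effect: two genotypes differing only in g0.
    for genotype, g0 in [("parent_A", 0.01), ("parent_B", 0.05)]:
        gs = stomatal_conductance(g0, g1=9.0, a_net=12.0, rh=0.6, ca=400.0)
        print(f"{genotype}: gs = {gs:.3f} mol m-2 s-1")
    ```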

  13. Modeling plant interspecific interactions from experiments with perennial crop mixtures to predict optimal combinations.

    Science.gov (United States)

    Halty, Virginia; Valdés, Matías; Tejera, Mauricio; Picasso, Valentín; Fort, Hugo

    2017-12-01

    The contribution of plant species richness to productivity and ecosystem functioning is a longstanding issue in ecology, with relevant implications for both conservation and agriculture. Both experiments and quantitative modeling are fundamental to the design of sustainable agroecosystems and the optimization of crop production. We modeled communities of perennial crop mixtures using a generalized Lotka-Volterra model, i.e., a model in which the interspecific interactions are more general than purely competitive. We estimated the model parameters (carrying capacities and interaction coefficients) from, respectively, the observed biomass of monocultures and bicultures measured in a large diversity experiment of seven perennial forage species in Iowa, United States. The sign and absolute value of the interaction coefficients showed that the biological interactions between species pairs included amensalism, competition, and parasitism (asymmetric positive-negative interaction), with various degrees of intensity. We tested the model fit by simulating combinations of more than two species and comparing them with the polyculture experimental data. Overall, the theoretical predictions are in good agreement with the experiments. Using this model, we also simulated species combinations that were not sown. From all possible mixtures (sown and not sown) we identified the most productive species combinations. Our results demonstrate that a combination of experiments and modeling can contribute to the design of sustainable agricultural systems in general and to the optimization of crop production in particular. © 2017 by the Ecological Society of America.
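
    A generalized Lotka-Volterra community of the kind described integrates straightforwardly; in the three-species toy below, a positive off-diagonal coefficient depresses the focal species (competition) and a negative one boosts it (facilitation), and an opposite-signed pair mimics the parasitism-like interactions reported. All parameter values are invented.

    ```python
    # Sketch: generalized Lotka-Volterra dynamics for a crop mixture,
    # dB_i/dt = r_i * B_i * (1 - (B_i + sum_j a_ij * B_j) / K_i).
    import numpy as np
    from scipy.integrate import solve_ivp

    r = np.array([0.8, 0.6, 0.7])        # intrinsic growth rates
    K = np.array([500.0, 400.0, 450.0])  # monoculture carrying capacities
    A = np.array([[0.0, 0.6, -0.2],      # a_13 < 0, a_31 > 0: parasitism-like
                  [0.3, 0.0, 0.4],
                  [0.5, 0.1, 0.0]])

    def glv(t, b):
        return r * b * (1.0 - (b + A @ b) / K)

    sol = solve_ivp(glv, (0.0, 50.0), y0=[10.0, 10.0, 10.0], rtol=1e-8)
    print("final biomasses:", sol.y[:, -1].round(1))
    print("mixture yield:", sol.y[:, -1].sum().round(1))
    ```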

  14. Qualitative and quantitative combined nonlinear dynamics model and its application in analysis of price, supply–demand ratio and selling rate

    International Nuclear Information System (INIS)

    Zhu, Dingju

    2016-01-01

    The combined qualitative and quantitative nonlinear dynamics model proposed in this paper fills a methodological gap: it allows a qualitative model and a quantitative model to be combined so that each compensates for the other's weaknesses. The combination overcomes the qualitative model's inability to be applied and verified in a quantitative manner, as well as the high cost and long time of repeatedly constructing and verifying a quantitative model, making the combined model more practical and efficient, which is of great significance for nonlinear dynamics. The combined modeling and model-analysis method proposed here is not limited to nonlinear dynamics; it can also be adopted in the modeling and model analysis of other fields. Additionally, the analytical method proposed in this paper satisfactorily resolves the problems with the existing analytical methods for nonlinear dynamics models of price systems. The three-dimensional dynamics model of price, supply–demand ratio and selling rate established in this paper estimates the best commodity prices from the model results, thereby providing a theoretical basis for the government's macro-control of prices. The model also offers theoretical guidance on how to enhance people's purchasing power and consumption levels through price regulation, and hence improve people's living standards.

  15. A Colorimetric Analysis Experiment Not Requiring a Spectrophotometer: Quantitative Determination of Albumin in Powdered Egg White

    Science.gov (United States)

    Charlton, Amanda K.; Sevcik, Richard S.; Tucker, Dorie A.; Schultz, Linda D.

    2007-01-01

    A general science experiment for high school chemistry students might serve as an excellent review of the concepts of solution preparation, solubility, pH, and qualitative and quantitative analysis of a common food product. The students could learn to use safe laboratory techniques, collect and analyze data using proper scientific methodology and…

  16. Quantitative study of Portland cement hydration by X-Ray diffraction/Rietveld analysis and geochemical modeling

    Science.gov (United States)

    Coutelot, F.; Seaman, J. C.; Simner, S.

    2017-12-01

    In this study the hydration of Portland cements containing blast-furnace slag and type V fly ash was investigated during curing using X-ray diffraction, with geochemical modeling used to calculate the total volume of hydrates. The goal was to evaluate the relationship between the starting component levels and the hydrate assemblages that develop during the curing process. Blast-furnace slag levels of 60, 45 and 30 wt.% were studied in blends containing fly ash and Portland cement. Geochemical modeling described the dissolution of the clinker and quantitatively predicted the amount of hydrates. In all cases the experiments showed the presence of C-S-H, portlandite and ettringite. The quantities of ettringite, portlandite and the amorphous phases determined by XRD agreed well with the calculated amounts of these phases after different curing periods. These findings show that changes in the bulk composition of hydrating cements can be described by geochemical models. Such a comparison between experimental and modelled data helps to understand in more detail the active processes occurring during cement hydration.

  17. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    International Nuclear Information System (INIS)

    Bindschadler, Michael; Alessio, Adam M; Modgil, Dimple; La Riviere, Patrick J; Branch, Kelley R

    2014-01-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹, cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. Acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube current-time products (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by 47.5% on average, while the quantitative models provided estimates with less than 6.5% average bias and variance that increased with increasing dose reduction. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. …

  18. Recent trends in social systems quantitative theories and quantitative models

    CERN Document Server

    Hošková-Mayerová, Šárka; Soitu, Daniela-Tatiana; Kacprzyk, Janusz

    2017-01-01

    The papers collected in this volume focus on new perspectives on individuals, society, and science, specifically in the field of socio-economic systems. The book is the result of a scientific collaboration among experts from “Alexandru Ioan Cuza” University of Iaşi (Romania), “G. d’Annunzio” University of Chieti-Pescara (Italy), University of Defence in Brno (Czech Republic), and Pablo de Olavide University of Sevilla (Spain). The heterogeneity of the contributions presented in this volume reflects the variety and complexity of social phenomena. The book is divided into four sections. The first section deals with recent trends in social decisions; specifically, it aims to identify the driving forces behind social decisions. The second section focuses on the social and public sphere, covering recent developments in social systems and control. Trends in quantitative theories and models are described in Section 3, where many new formal, mathematical-statistical to...

  19. Surgical Simulations Based on Limited Quantitative Data: Understanding How Musculoskeletal Models Can Be Used to Predict Moment Arms and Guide Experimental Design.

    Directory of Open Access Journals (Sweden)

    Jennifer A Nichols

    The utility of biomechanical models and simulations to examine clinical problems is currently limited by the need for extensive amounts of experimental data describing how a given procedure or disease affects the musculoskeletal system. Methods capable of predicting how individual biomechanical parameters are altered by surgery are necessary for the efficient development of surgical simulations. In this study, we evaluate to what extent models based on limited amounts of quantitative data can be used to predict how surgery influences muscle moment arms, a critical parameter that defines how muscle force is transformed into joint torque. We specifically examine proximal row carpectomy and scaphoid-excision four-corner fusion, two common surgeries to treat wrist osteoarthritis. Using models of these surgeries, which are based on limited data and many assumptions, we perform simulations to formulate a hypothesis regarding how these wrist surgeries influence muscle moment arms. Importantly, the hypothesis is based on analysis of only the primary wrist muscles. We then test the simulation-based hypothesis using a cadaveric experiment that measures moment arms of both the primary wrist and extrinsic thumb muscles. The measured moment arms of the primary wrist muscles are used to verify the hypothesis, while those of the extrinsic thumb muscles are used as cross-validation to test whether the hypothesis is generalizable. The moment arms estimated by the models and measured in the cadaveric experiment both indicate that a critical difference between the surgeries is how they alter radial-ulnar deviation versus flexion-extension moment arms at the wrist. Thus, our results demonstrate that models based on limited quantitative data can provide novel insights. This work also highlights that synergistically utilizing simulation and experimental methods can aid the design of experiments and make it possible to test the predictive limits of current computer…
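
    One conventional way such models compute moment arms is the tendon-excursion method, in which the moment arm is the negative derivative of musculotendon length with respect to joint angle. A minimal sketch with an entirely hypothetical wrist muscle path, before and after a simulated surgical change:

        import numpy as np

        # Tendon-excursion method: moment arm = -d(musculotendon length)/d(joint angle).
        theta = np.linspace(-0.6, 0.6, 101)            # wrist flexion angle (rad)

        def mt_length(theta, r0, dr=0.004):
            # hypothetical path model: length shortens as the wrist flexes
            return 0.25 - (r0 + dr * theta) * theta    # metres

        for label, r0 in [("pre-op model", 0.012), ("post-op model", 0.009)]:
            L = mt_length(theta, r0)
            moment_arm = -np.gradient(L, theta)        # numerical derivative
            print(f"{label}: moment arm at neutral = {moment_arm[50] * 1000:.1f} mm")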

  20. Interdiffusion of the aluminum magnesium system. Quantitative analysis and numerical model; Interdiffusion des Aluminium-Magnesium-Systems. Quantitative Analyse und numerische Modellierung

    Energy Technology Data Exchange (ETDEWEB)

    Seperant, Florian

    2012-03-21

    Aluminum coatings are a promising approach to protecting magnesium alloys against corrosion, thereby making them accessible to a variety of technical applications. Thermal treatment enhances the adhesion of the aluminum coating on magnesium by interdiffusion. For a deeper understanding of the diffusion process at the interface, a quantitative description of the Al-Mg system is necessary. On the basis of diffusion experiments with infinite reservoirs of aluminum and magnesium, the interdiffusion coefficients of the intermetallic phases of the Al-Mg system are calculated with the Sauer-Freise method for the first time. To resolve contradictions in the literature concerning the intrinsic diffusion coefficients, the possibility of a bifurcation of the Kirkendall plane is considered. Furthermore, a physico-chemical description of interdiffusion is provided to interpret the observed phase transitions. The numerical model developed here is based on a temporally varying discretization of the space coordinate. It exhibits excellent quantitative agreement with the experimentally measured concentration profile, which confirms the validity of the obtained diffusion coefficients. Moreover, the Kirkendall shift in the Al-Mg system is simulated for the first time. Systems with thin aluminum coatings on magnesium also exhibit a good correlation between simulated and experimental concentration profiles; thus, the diffusion coefficients are also valid for Al-coated systems. Hence, it is possible to derive parameters for a thermal treatment by simulation, resulting in an optimized modification of the magnesium surface for technical applications.

  1. A quantitative risk-based model for reasoning over critical system properties

    Science.gov (United States)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many critical system properties, such as security, safety, survivability, fault tolerance, and real-time performance.

  2. Tip-Enhanced Raman Voltammetry: Coverage Dependence and Quantitative Modeling.

    Science.gov (United States)

    Mattei, Michael; Kang, Gyeongwon; Goubert, Guillaume; Chulhai, Dhabih V; Schatz, George C; Jensen, Lasse; Van Duyne, Richard P

    2017-01-11

    Electrochemical atomic force microscopy tip-enhanced Raman spectroscopy (EC-AFM-TERS) was employed for the first time to observe nanoscale spatial variations in the formal potential, E0′, of a surface-bound redox couple. TERS cyclic voltammograms (TERS CVs) of single Nile Blue (NB) molecules were acquired at different locations spaced 5-10 nm apart on an indium tin oxide (ITO) electrode. Analysis of TERS CVs at different coverages was used to verify the observation of single-molecule electrochemistry. The resulting TERS CVs were fit to the Laviron model for surface-bound electroactive species to quantitatively extract the formal potential E0′ at each spatial location. Histograms of single-molecule E0′ at each coverage indicate that the electrochemical behavior of the cationic oxidized species is less sensitive to the local environment than that of the neutral reduced species. This information is not accessible using purely electrochemical methods or ensemble spectroelectrochemical measurements. We anticipate that quantitative modeling and measurement of site-specific electrochemistry with EC-AFM-TERS will have a profound impact on our understanding of the role of nanoscale electrode heterogeneity in applications such as electrocatalysis, biological electron transfer, and energy production and storage.
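
    As an illustration of the fitting step, a sketch that extracts a formal potential from a noisy single-molecule voltammetric response by least squares, using the Nernstian limit of a surface-bound (Laviron-type) description; the data and parameters here are synthetic stand-ins:

        import numpy as np
        from scipy.optimize import curve_fit

        F, R, T = 96485.0, 8.314, 298.0            # C/mol, J/(mol K), K

        def oxidized_fraction(E, E0, n):
            """Fraction of surface-bound molecules oxidized at potential E (V)."""
            return 1.0 / (1.0 + np.exp(-n * F * (E - E0) / (R * T)))

        # synthetic "TERS CV" response: true E0' = -0.40 V, n = 1, plus noise
        rng = np.random.default_rng(0)
        E = np.linspace(-0.6, -0.2, 50)
        signal = oxidized_fraction(E, -0.40, 1.0) + rng.normal(0, 0.03, E.size)

        (E0_fit, n_fit), _ = curve_fit(oxidized_fraction, E, signal, p0=[-0.35, 1.0])
        print(f"fitted formal potential E0' = {E0_fit:.3f} V (apparent n = {n_fit:.2f})")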

  3. Genomic value prediction for quantitative traits under the epistatic model

    Directory of Open Access Journals (Sweden)

    Xu Shizhong

    2011-01-01

    Background: Most quantitative traits are controlled by multiple quantitative trait loci (QTL). The contribution of each locus may be negligible but the collective contribution of all loci is usually significant. Genome selection, which uses markers of the entire genome to predict the genomic values of individual plants or animals, can be more efficient than selection on phenotypic values and pedigree information alone for genetic improvement. When a quantitative trait is contributed by epistatic effects, using all markers (main effects) and marker pairs (epistatic effects) to predict the genomic values of plants can achieve the maximum efficiency for genetic improvement. Results: In this study, we created 126 recombinant inbred lines of soybean and genotyped 80 markers across the genome. We applied the genome selection technique to predict the genomic value of somatic embryo number (a quantitative trait) for each line. Cross-validation analysis showed that the squared correlation coefficient between the observed and predicted embryo numbers was 0.33 when only main (additive) effects were used for prediction. When the interaction (epistatic) effects were also included in the model, the squared correlation coefficient reached 0.78. Conclusions: This study provides an excellent example of the application of genome selection to plant breeding.
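
    A sketch of the main-effects versus main-plus-epistatic comparison on synthetic data standing in for the 126 lines and 80 markers (ridge regression here is one stand-in for the shrinkage-based methods typically used in genome selection):

        import numpy as np
        from itertools import combinations
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_lines, n_markers = 126, 80
        X = rng.choice([-1.0, 1.0], size=(n_lines, n_markers))    # RIL genotypes

        # epistatic design: append all pairwise marker products to main effects
        X_epi = np.hstack([X] + [X[:, [i]] * X[:, [j]]
                                 for i, j in combinations(range(n_markers), 2)])

        # synthetic trait with additive and (stronger) epistatic contributions
        y = (X @ rng.normal(0, 0.2, n_markers)
             + X[:, 3] * X[:, 40] + X[:, 10] * X[:, 55]
             + rng.normal(0, 0.5, n_lines))

        for name, design in [("additive only", X), ("additive + epistatic", X_epi)]:
            r2 = cross_val_score(Ridge(alpha=50.0), design, y, cv=5).mean()
            print(f"{name}: cross-validated R^2 = {r2:.2f}")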

  4. [A quantitative risk assessment model of salmonella on carcass in poultry slaughterhouse].

    Science.gov (United States)

    Zhang, Yu; Chen, Yuzhen; Hu, Chunguang; Zhang, Huaning; Bi, Zhenwang; Bi, Zhenqiang

    2015-05-01

    To construct a quantitative risk assessment model of Salmonella on carcasses in a poultry slaughterhouse and to identify effective interventions to reduce Salmonella contamination, we constructed a modular process risk model (MPRM) covering the stages from evisceration to chilling in an Excel sheet, using process-parameter data for poultry and the Salmonella concentration surveillance of Jinan in 2012. The MPRM was simulated with @RISK software. The concentration of Salmonella on carcasses after chilling calculated by the model was 1.96 MPN/g. The sensitivity analysis indicated that the correlation coefficients of the Salmonella concentration after defeathering and in the chilling pool were 0.84 and 0.34, respectively, making these the primary factors determining the concentration of Salmonella on carcasses after chilling. The study provides a quantitative assessment model structure for Salmonella on carcasses in poultry slaughterhouses. Risk managers could control Salmonella contamination on carcasses after chilling by reducing the Salmonella concentration after defeathering and in the chilling pool.
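
    The modular structure lends itself to a Monte Carlo sketch in which each slaughter stage applies an uncertain log10 change to the carcass concentration; all distributions below are hypothetical placeholders for the surveillance-derived inputs, and plain NumPy stands in for @RISK:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000                                     # Monte Carlo iterations

        # concentration entering the chain (MPN/g), hypothetical lognormal
        log_conc = np.log10(rng.lognormal(mean=1.0, sigma=0.8, size=n))

        # per-module log10 change factors (hypothetical means and spreads)
        modules = {
            "defeathering": rng.normal(+0.4, 0.2, n),   # cross-contamination adds load
            "evisceration": rng.normal(+0.1, 0.1, n),
            "washing":      rng.normal(-0.5, 0.2, n),
            "chilling":     rng.normal(-0.8, 0.3, n),
        }
        for change in modules.values():
            log_conc += change

        final = 10.0 ** log_conc
        print(f"mean concentration after chilling: {final.mean():.2f} MPN/g")
        print(f"95th percentile: {np.percentile(final, 95):.2f} MPN/g")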

  5. A semi-quantitative model for risk appreciation and risk weighing

    DEFF Research Database (Denmark)

    Bos, Peter M.J.; Boon, Polly E.; van der Voet, Hilko

    2009-01-01

    Risk managers need detailed information on (1) the type of effect, (2) the size (severity) of the expected effect(s) and (3) the fraction of the population at risk to decide on well-balanced risk reduction measures. A previously developed integrated probabilistic risk assessment (IPRA) model...... provides quantitative information on these three parameters. A semi-quantitative tool is presented that combines information on these parameters into easy-to-read charts that will facilitate risk evaluations of exposure situations and decisions on risk reduction measures. This tool is based on a concept...... detailed information on the estimated health impact in a given exposure situation. These graphs will facilitate the discussions on appropriate risk reduction measures to be taken....

  6. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    Science.gov (United States)

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence Model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to environmental research centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article. © 2013 Elsevier B.V. All rights reserved.

  7. Characteristics of the Nordic Seas overflows in a set of Norwegian Earth System Model experiments

    Science.gov (United States)

    Guo, Chuncheng; Ilicak, Mehmet; Bentsen, Mats; Fer, Ilker

    2016-08-01

    Global ocean models with an isopycnic vertical coordinate are advantageous in representing overflows, as they do not suffer from the topography-induced spurious numerical mixing commonly seen in geopotential coordinate models. In this paper, we present a quantitative diagnosis of the Nordic Seas overflows in four configurations of the Norwegian Earth System Model (NorESM) family that features an isopycnic ocean model. For intercomparison, two coupled ocean-sea ice and two fully coupled (atmosphere-land-ocean-sea ice) experiments are considered. Each pair consists of a (non-eddying) 1° and an (eddy-permitting) 1/4° horizontal resolution ocean model. In all experiments, overflow waters remain dense and descend to the deep basins, entraining ambient water en route. Results from the 1/4° pair show similar behavior in the overflows, whereas the 1° pair shows distinct differences, including temperature/salinity properties, volume transport (Q), and large-scale features such as the strength of the Atlantic Meridional Overturning Circulation (AMOC). The volume transport of the overflows and the degree of entrainment are underestimated in the 1° experiments, whereas in the 1/4° experiments there is a two-fold downstream increase in Q, which matches observations well. In contrast to the 1/4° experiments, the coarse 1° experiments do not capture the inclined isopycnals of the overflows or the western boundary current off the Flemish Cap. In all experiments, the pathway of the Iceland-Scotland Overflow Water is misrepresented: a major fraction of the overflow proceeds southward into the West European Basin, instead of turning westward into the Irminger Sea. This discrepancy is attributed to excessive production of Labrador Sea Water in the model. The mean state and variability of the Nordic Seas overflows have significant consequences for the response of the AMOC, hence their correct representation is of vital importance in global ocean and climate modelling.

  8. Functional linear models for association analysis of quantitative traits.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants, common variants, or a combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants, adjusting for covariates. Extensive simulation analysis shows that the F-distributed tests of the proposed fixed effect functional linear models have higher power than the sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) in most cases for three scenarios: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to their optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher-order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY

  9. iQuantitator: A tool for protein expression inference using iTRAQ

    Directory of Open Access Journals (Sweden)

    Comte-Walters Susana

    2009-10-01

    Background: Isobaric Tags for Relative and Absolute Quantitation (iTRAQ™ [Applied Biosystems]) have seen increased application in differential protein expression analysis. To facilitate the growing need to analyze iTRAQ data, especially for cases involving multiple iTRAQ experiments, we have developed a modeling approach, statistical methods, and tools for estimating the relative changes in protein expression under various treatments and experimental conditions. Results: This modeling approach provides a unified analysis of data from multiple iTRAQ experiments and links the observed quantity (reporter ion peak area) to the experiment design and the calculated quantity of interest (treatment-dependent protein and peptide fold change) through an additive model under log transformation. Others have demonstrated, through a case study, this modeling approach and noted the computational challenges of parameter inference in the unbalanced data set typical of multiple iTRAQ experiments. Here we present the development of an inference approach, based on hierarchical regression with batching of regression coefficients and Markov Chain Monte Carlo (MCMC) methods, that overcomes some of these challenges. In addition to our discussion of the underlying method, we also present our implementation of the software, simulation results, experimental results, and sample output from the resulting analysis report. Conclusion: iQuantitator's process-based modeling approach overcomes limitations in current methods and allows for application in a variety of experimental designs. Additionally, hypertext-linked documents produced by the tool aid in the interpretation and exploration of results.

  10. The Design of a Quantitative Western Blot Experiment

    Directory of Open Access Journals (Sweden)

    Sean C. Taylor

    2014-01-01

    Western blotting is a technique that has been in practice for more than three decades, beginning as a means of detecting a protein target in a complex sample. Although there have been significant advances in both imaging and reagent technologies to improve sensitivity, dynamic range of detection, and the applicability of multiplexed target detection, the basic technique has remained essentially unchanged. In the past, western blotting was used simply to detect a specific target protein in a complex mixture, but journal editors and reviewers now request the quantitative interpretation of western blot data in terms of fold changes in protein expression between samples. The calculations are based on the differential densitometry of the associated chemiluminescent and/or fluorescent signals from the blots, and this requires a fundamental shift in the experimental methodology, acquisition, and interpretation of the data. We have recently published an updated approach to produce quantitative densitometric data from western blots (Taylor et al., 2013) and here we summarize the complete western blot workflow with a focus on sample preparation and data analysis for quantitative western blotting.
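
    The core of the quantitative analysis is a normalization and fold-change calculation on band densities; a minimal sketch with hypothetical densitometry values:

        # hypothetical densitometry values (arbitrary units) from the blot image
        target_untreated, loading_untreated = 1250.0, 5400.0
        target_treated,   loading_treated   = 3100.0, 5150.0

        # normalize each target band to the loading control in its own lane
        norm_untreated = target_untreated / loading_untreated
        norm_treated = target_treated / loading_treated

        fold_change = norm_treated / norm_untreated
        print(f"fold change in target protein expression: {fold_change:.2f}")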

  11. Absolute quantitation of myocardial blood flow with ²⁰¹Tl and dynamic SPECT in canine: optimisation and validation of kinetic modelling

    Energy Technology Data Exchange (ETDEWEB)

    Iida, Hidehiro; Kim, Kyeong-Min; Nakazawa, Mayumi; Sohlberg, Antti; Zeniya, Tsutomu; Hayashi, Takuya; Watabe, Hiroshi [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita City, Osaka (Japan); Eberl, Stefan [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita City, Osaka (Japan); Royal Prince Alfred Hospital, PET and Nuclear Medicine Department, Camperdown, NSW (Australia); Tamura, Yoshikazu [Akita Kumiai General Hospital, Department of Cardiology, Akita City (Japan); Ono, Yukihiko [Akita Research Institute of Brain, Akita City (Japan)

    2008-05-15

    ²⁰¹Tl has been extensively used for myocardial perfusion and viability assessment. Unlike ⁹⁹ᵐTc-labelled agents, such as ⁹⁹ᵐTc-sestamibi and ⁹⁹ᵐTc-tetrofosmin, the regional concentration of ²⁰¹Tl varies with time. This study is intended to validate a kinetic modelling approach for in vivo quantitative estimation of regional myocardial blood flow (MBF) and volume of distribution of ²⁰¹Tl using dynamic SPECT. Dynamic SPECT was carried out on 20 normal canines after the intravenous administration of ²⁰¹Tl using a commercial SPECT system. Seven animals were studied at rest, nine during adenosine infusion, and four after beta-blocker administration. Quantitative images were reconstructed with a previously validated technique employing OS-EM with attenuation correction and transmission-dependent convolution subtraction scatter correction. Measured regional time-activity curves in myocardial segments were fitted to two- and three-compartment models. Regional MBF was defined as the influx rate constant (K₁) with corrections for the partial volume effect, haematocrit and limited first-pass extraction fraction, and was compared with that determined from radio-labelled microsphere experiments. Regional time-activity curves responded well to pharmacological stress. Quantitative MBF values were higher with adenosine and decreased after beta-blocker administration compared to the resting condition. MBFs obtained with SPECT (MBF_SPECT) correlated well with the MBF values obtained by the radio-labelled microspheres (MBF_MS): MBF_SPECT = -0.067 + 1.042 × MBF_MS (p < 0.001). The three-compartment model provided a better fit than the two-compartment model, but the difference in MBF values between the two methods was small and could be accounted for with a simple linear regression. Absolute quantitation of regional MBF, for a wide physiological flow range, appears to be feasible using ²⁰¹Tl and dynamic SPECT. (orig.)
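
    For illustration, a sketch of the simplest version of such a fit: a one-tissue compartment model recovering the influx rate constant K₁ from a synthetic time-activity curve (the study itself fitted two- and three-compartment models and applied partial-volume, haematocrit and extraction corrections):

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.linspace(0.0, 10.0, 120)           # minutes
        Ca = 5.0 * t * np.exp(-1.5 * t)           # synthetic arterial input function

        def tissue_curve(t, K1, k2):
            """One-tissue compartment: Ct(t) = K1 * (Ca convolved with exp(-k2 t))."""
            dt = t[1] - t[0]
            return K1 * np.convolve(Ca, np.exp(-k2 * t))[:t.size] * dt

        rng = np.random.default_rng(2)
        Ct = tissue_curve(t, 0.8, 0.3) + rng.normal(0, 0.01, t.size)  # true K1 = 0.8

        (K1, k2), _ = curve_fit(tissue_curve, t, Ct, p0=[0.5, 0.2], bounds=(0.0, 5.0))
        print(f"fitted K1 = {K1:.2f} per min (before flow corrections), k2 = {k2:.2f}")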

  12. Gestalt isomorphism and the primacy of subjective conscious experience: a Gestalt Bubble model.

    Science.gov (United States)

    Lehar, Steven

    2003-08-01

    A serious crisis is identified in theories of neurocomputation, marked by a persistent disparity between the phenomenological or experiential account of visual perception and the neurophysiological level of description of the visual system. In particular, conventional concepts of neural processing offer no explanation for the holistic global aspects of perception identified by Gestalt theory. The problem is paradigmatic and can be traced to contemporary concepts of the functional role of the neural cell, known as the Neuron Doctrine. In the absence of an alternative neurophysiologically plausible model, I propose a perceptual modeling approach, to model the percept as experienced subjectively, rather than modeling the objective neurophysiological state of the visual system that supposedly subserves that experience. A Gestalt Bubble model is presented to demonstrate how the elusive Gestalt principles of emergence, reification, and invariance can be expressed in a quantitative model of the subjective experience of visual consciousness. That model in turn reveals a unique computational strategy underlying visual processing, which is unlike any algorithm devised by man, and certainly unlike the atomistic feed-forward model of neurocomputation offered by the Neuron Doctrine paradigm. The perceptual modeling approach reveals the primary function of perception as that of generating a fully spatial virtual-reality replica of the external world in an internal representation. The common objections to this "picture-in-the-head" concept of perceptual representation are shown to be ill founded.

  13. Quantitative aspects and dynamic modelling of glucosinolate metabolism

    DEFF Research Database (Denmark)

    Vik, Daniel

    …This enables comparison of transcript and protein levels across mutants and upon induction. I find that unchallenged plants show good correspondence between protein and transcript, but that treatment with methyljasmonate results in significant differences (chapter 1). Functional genomics are used to study...... The construction of a dynamic quantitative model of GLS hydrolysis is described. Simulations reveal potential effects on auxin signalling that could reflect defensive strategies (chapter 4). The results presented grant insights not only into the dynamics of GLS biosynthesis and hydrolysis, but also into the relationship…

  14. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.

    Science.gov (United States)

    Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H

    2017-08-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicines research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. Here we report, in a perspective article, a summary of the presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  15. Quantitative research.

    Science.gov (United States)

    Watson, Roger

    2015-04-01

    This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.

  16. A benefit–risk assessment model for statins using multicriteria decision analysis based on a discrete choice experiment in Korean patients

    Directory of Open Access Journals (Sweden)

    Byun JH

    2016-06-01

    Purpose: The benefit–risk balance for drugs can alter post-approval owing to additional data on efficacy or adverse events. This study developed a quantitative benefit–risk assessment (BRA) model for statins using multicriteria decision analysis with discrete choice experiments, and compared a recent BRA with that at the time of approval. Patients and methods: Following a systematic review of the literature, the benefit criteria within the statin BRA model were defined as a reduction in the plasma low-density lipoprotein (LDL) cholesterol level and a reduction in myocardial infarction (MI) incidence; the risk criteria were hepatotoxicity (Liv) and fatal rhabdomyolysis (Rha). The scores for these criteria were estimated using mixed treatment comparison methods. Weighting was calculated from a discrete choice experiment involving 203 Korean patients. The scores and weights were integrated to produce an overall value representing the benefit–risk balance, and sensitivity analyses were conducted. Results: In this BRA model, LDL (relative importance [RI]: 37.50%) was found to be a more important benefit criterion than MI (RI: 35.43%), and Liv (RI: 16.28%) was a more important risk criterion than Rha (RI: 10.79%). Patients preferred atorvastatin, and the preference ranking of cerivastatin and simvastatin switched post-approval because of the emergence of additional risk information related to cerivastatin. Conclusion: A quantitative statin BRA model confirmed that the preference ranking of statins changed post-approval because of the identification of additional benefits or risks. Keywords: multicriteria decision analysis, statin, quantitative benefit–risk assessment, discrete choice experiment
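
    The integration step of such a model reduces to a weighted sum: DCE-derived relative-importance weights multiply criterion scores to give one overall benefit-risk value per drug. A sketch using the reported weights and entirely hypothetical 0-100 scores (risk criteria scored as freedom from harm):

        # relative-importance weights from the discrete choice experiment
        weights = {"LDL": 0.3750, "MI": 0.3543, "Liv": 0.1628, "Rha": 0.1079}

        # hypothetical 0-100 performance scores per criterion (higher = better)
        scores = {
            "statin A": {"LDL": 80, "MI": 70, "Liv": 90, "Rha": 95},
            "statin B": {"LDL": 65, "MI": 75, "Liv": 60, "Rha": 85},
        }

        for drug, s in scores.items():
            overall = sum(weights[c] * s[c] for c in weights)
            print(f"{drug}: overall benefit-risk value = {overall:.1f}")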

  17. Comparison of semi-quantitative and quantitative dynamic contrast-enhanced MRI evaluations of vertebral marrow perfusion in a rat osteoporosis model.

    Science.gov (United States)

    Zhu, Jingqi; Xiong, Zuogang; Zhang, Jiulong; Qiu, Yuyou; Hua, Ting; Tang, Guangyu

    2017-11-14

    This study aims to investigate the technical feasibility of semi-quantitative and quantitative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in the assessment of longitudinal changes of marrow perfusion in a rat osteoporosis model, using bone mineral density (BMD) measured by micro-computed tomography (micro-CT) and histopathology as the gold standards. Fifty rats were randomly assigned to the control group (n=25) and the ovariectomy (OVX) group, whose bilateral ovaries were excised (n=25). Semi-quantitative and quantitative DCE-MRI, micro-CT, and histopathological examinations were performed on lumbar vertebrae at baseline and 3, 6, 9, and 12 weeks after operation. The differences between the two groups in terms of the semi-quantitative DCE-MRI parameter (maximum enhancement, Emax), quantitative DCE-MRI parameters (volume transfer constant, Ktrans; interstitial volume, Ve; and efflux rate constant, Kep), micro-CT parameter (BMD), and histopathological parameter (microvessel density, MVD) were compared at each of the time points using an independent-sample t test. The differences in these parameters between baseline and the other time points in each group were assessed via Bonferroni's multiple comparison test. A Pearson correlation analysis was applied to assess the relationships between DCE-MRI, micro-CT, and histopathological parameters. In the OVX group, the Emax values decreased significantly compared with those of the control group at weeks 6 and 9 (p=0.003 and 0.004, respectively). The Ktrans values decreased significantly compared with those of the control group from week 3 onward. Compared with semi-quantitative DCE-MRI, the quantitative DCE-MRI parameter Ktrans is a more sensitive and accurate index for detecting early reduced perfusion in osteoporotic bone.

  18. Biogeochemical Reactive Transport Model of the Redox Zone Experiment of the Äspö Hard Rock Laboratory in Sweden

    International Nuclear Information System (INIS)

    Molinero-Huguet, Jorge; Samper-Calvete, F. Javier; Zhang, Guoxiang; Yang, Changbing

    2004-01-01

    Underground facilities are being operated by several countries around the world for performing research and demonstration of the safety of deep radioactive waste repositories. The Äspö Hard Rock Laboratory is one such facility, launched and operated by the Swedish Nuclear Fuel and Waste Management Company, where various in situ experiments have been performed in fractured granites. One such experiment is the redox zone experiment, which aimed at evaluating the effects of the construction of an access tunnel on the hydrochemical conditions of a fracture zone. Dilution of the initially saline groundwater by fresh recharge water is the dominant process controlling the hydrochemical evolution of most chemical species, except for bicarbonate and sulfate, which unexpectedly increase with time. We present a numerical model of water flow, reactive transport, and microbial processes for the redox zone experiment. This model provides a plausible, quantitatively based explanation for the unexpected evolution of bicarbonate and sulfate, reproduces the breakthrough curves of other reactive species, and is consistent with previous hydrogeological and solute transport models.

  19. Adhesive behaviour of gecko-inspired nanofibrillar arrays: combination of experiments and finite element modelling

    International Nuclear Information System (INIS)

    Wang Zhengzhi; Xu Yun; Gu Ping

    2012-01-01

    A polypropylene nanofibrillar array was successfully fabricated by a template-assisted nanofabrication strategy. Adhesion properties of this gecko-inspired structure were studied through two parallel and independent approaches: experiments and finite element simulations. Experimental results show relatively good normal adhesion, but accompanied by high preloads. The interfacial adhesion was modelled by effective spring elements with a piecewise-linear constitution. The effective elasticity of the fibre-array system was calculated from our measured elasticity of a single nanowire. Comparisons of the experimental and simulation results reveal quantitative agreement, except for some explainable deviations, which suggests the applicability of the present models and applied theories. (fast track communication)

  20. Numerical modeling of plasma plume evolution against ambient background gas in laser blow off experiments

    International Nuclear Information System (INIS)

    Patel, Bhavesh G.; Das, Amita; Kaw, Predhiman; Singh, Rajesh; Kumar, Ajai

    2012-01-01

    Two-dimensional numerical modelling based on a simplified hydrodynamic description of an expanding plasma plume (created by laser blow-off) against an ambient background gas has been carried out. A comparison with experimental observations shows that these simulations capture most features of the plasma plume expansion. The plume location and other gross features are reproduced in quantitative detail, as per the experimental observations. The plume shape evolution and its dependence on the ambient background gas are in good qualitative agreement with the experiment. This suggests that a simplified hydrodynamic expansion model is adequate for the description of plasma plume expansion.

  1. Improving quantitative precipitation nowcasting with a local ensemble transform Kalman filter radar data assimilation system: observing system simulation experiments

    Directory of Open Access Journals (Sweden)

    Chih-Chien Tsai

    2014-03-01

    This study develops a Doppler radar data assimilation system, which couples the local ensemble transform Kalman filter with the Weather Research and Forecasting model. The benefits of this system for quantitative precipitation nowcasting (QPN) are evaluated with observing system simulation experiments on Typhoon Morakot (2009), which brought record-breaking rainfall and extensive damage to central and southern Taiwan. The results indicate that the assimilation of radial velocity and reflectivity observations improves the three-dimensional winds and rain-mixing ratio most significantly because of the direct relations in the observation operator. The patterns of spiral rainbands become more consistent between different ensemble members after radar data assimilation. The rainfall intensity and distribution during the 6-hour deterministic nowcast are also improved, especially for the first 3 hours. The nowcasts with and without radar data assimilation have similar evolution trends driven by synoptic-scale conditions. Furthermore, we carry out a series of sensitivity experiments to develop proper assimilation strategies, in which a mixed localisation method is proposed for the first time and found to give further QPN improvement in this typhoon case.

  2. A quantitative and dynamic model of the Arabidopsis flowering time gene regulatory network.

    Directory of Open Access Journals (Sweden)

    Felipe Leal Valentim

    Various environmental signals integrate into a network of floral regulatory genes, leading to the final decision on when to flower. Although a wealth of qualitative knowledge is available on how flowering time genes regulate each other, only a few studies have incorporated this knowledge into predictive models. Such models are invaluable because they make it possible to investigate how various types of inputs are combined to give a quantitative readout. To investigate the effect of gene expression disturbances on flowering time, we developed a dynamic model for the regulation of flowering time in Arabidopsis thaliana. Model parameters were estimated from expression time-courses for the relevant genes and a consistent set of flowering times for plants of various genetic backgrounds. Validation was performed by predicting changes in expression level in mutant backgrounds and comparing these predictions with independent expression data, and by comparing predicted and experimental flowering times for several double mutants. Remarkably, the model predicts that a disturbance in a particular gene does not necessarily have the largest impact on directly connected genes. For example, the model predicts that a SUPPRESSOR OF OVEREXPRESSION OF CONSTANS 1 (SOC1) mutation has a larger impact on APETALA1 (AP1), which is not directly regulated by SOC1, than on LEAFY (LFY), which is under direct control of SOC1. This was confirmed by expression data. Another model prediction involves the importance of cooperativity in the regulation of AP1 by LFY, a prediction supported by experimental evidence. In conclusion, our model of flowering time gene regulation makes it possible to address how different quantitative inputs are combined into one quantitative output, flowering time.
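
    A toy version of the two predictions discussed above, assuming hypothetical rate parameters: SOC1 feeds AP1 only through LFY, yet a cooperative (Hill n > 1) activation of AP1 by LFY lets a SOC1 disturbance hit AP1 harder than it hits LFY:

        import numpy as np
        from scipy.integrate import solve_ivp

        def hill(x, K, n):
            return x**n / (K**n + x**n)

        def network(t, y, soc1):
            lfy, ap1 = y
            dlfy = 1.0 * hill(soc1, 0.5, 1) - 0.2 * lfy   # SOC1 activates LFY
            dap1 = 1.5 * hill(lfy, 3.0, 4) - 0.2 * ap1    # LFY activates AP1, n = 4
            return [dlfy, dap1]

        for soc1 in (1.0, 0.2):                            # wild type vs. soc1 mutant
            sol = solve_ivp(network, (0.0, 100.0), [0.0, 0.0], args=(soc1,))
            lfy, ap1 = sol.y[0, -1], sol.y[1, -1]
            print(f"SOC1 = {soc1}: steady LFY = {lfy:.2f}, steady AP1 = {ap1:.2f}")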

  3. Quantitative determination of peripheral arterio-venous shunts by means of radioactively labelled microspheres

    International Nuclear Information System (INIS)

    Friese, K.H.

    1981-01-01

    In the present work, a nuclear-medicine method for the quantitative measurement of peripheral arterio-venous shunts with a whole-body scanner is standardized. This method, developed at Tübingen in the early 1970s, differs from earlier measuring methods in applying the theory of quantitative scintiscanning: the scintigram obtained after injection of ⁹⁹ᵐTc-labelled human albumin microspheres into an artery upstream of the shunt is corrected by several factors, using a computer, before the quantitative shunt calculation, in order to avoid systematic errors. For the standardization of the method, 182 scintigrams were taken during model experiments and experiments on animals and human beings. The method, having a deviation of at most 10%, is excellently suited for the quantitative determination of peripheral arterio-venous shunts. Even at a pulmonary activity of 3%, a peripheral shunt is demonstrated with 97.5% probability. (orig./MG)
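
    The underlying calculation is simple: microspheres that traverse the shunt bypass the peripheral capillary bed and lodge in the lung, so the corrected lung fraction of total activity estimates the shunt. A sketch with hypothetical count rates and correction factors:

        # background-corrected scintigram count rates (hypothetical)
        counts_lung = 1.8e4        # microspheres that passed the shunt lodge in the lung
        counts_periphery = 4.2e5   # microspheres trapped in the peripheral capillary bed

        # correction factors (attenuation, geometry), as applied by the computer
        f_lung, f_periphery = 1.15, 1.05   # hypothetical

        lung = counts_lung * f_lung
        periphery = counts_periphery * f_periphery
        shunt_percent = 100.0 * lung / (lung + periphery)
        print(f"peripheral arterio-venous shunt fraction: {shunt_percent:.1f}%")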

  4. The effects of nutrition labeling on consumer food choice: a psychological experiment and computational model.

    Science.gov (United States)

    Helfer, Peter; Shultz, Thomas R

    2014-12-01

    The widespread availability of calorie-dense food is believed to be a contributing cause of an epidemic of obesity and associated diseases throughout the world. One possible countermeasure is to empower consumers to make healthier food choices with useful nutrition labeling. An important part of this endeavor is to determine the usability of existing and proposed labeling schemes. Here, we report an experiment on how four different labeling schemes affect the speed and nutritional value of food choices. We then apply decision field theory, a leading computational model of human decision making, to simulate the experimental results. The psychology experiment shows that quantitative, single-attribute labeling schemes have greater usability than multiattribute and binary ones, and that they remain effective under moderate time pressure. The computational model simulates these psychological results and provides explanatory insights into them. This work shows how experimental psychology and computational modeling can contribute to the evaluation and improvement of nutrition-labeling schemes. © 2014 New York Academy of Sciences.
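
    To give a flavour of the modeling side, a sketch of a decision field theory simulation: preference states accumulate attention-weighted, contrast-coded attribute valences until one option crosses a threshold. The options, attribute values, and all parameters below are hypothetical:

        import numpy as np

        rng = np.random.default_rng(3)
        M = np.array([[0.8, 0.3],              # food options scored on
                      [0.4, 0.9],              # [healthiness, taste] (hypothetical)
                      [0.5, 0.5]])
        n_opts = M.shape[0]
        C = np.eye(n_opts) - 1.0 / n_opts      # contrast: each option vs. mean of others
        decay, threshold, w_health = 0.95, 1.5, 0.6   # hypothetical parameters

        def trial(max_steps=10_000):
            P = np.zeros(n_opts)
            for t in range(1, max_steps + 1):
                # stochastic attention switching between the two attributes
                W = np.array([1.0, 0.0]) if rng.random() < w_health else np.array([0.0, 1.0])
                P = decay * P + C @ (M @ W)
                if P.max() >= threshold:
                    break
            return int(np.argmax(P)), t

        choices, times = zip(*(trial() for _ in range(2000)))
        print("choice shares:", np.bincount(choices, minlength=n_opts) / len(choices))
        print(f"mean decision time: {np.mean(times):.0f} steps")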

  5. Wires in the soup: quantitative models of cell signaling

    Science.gov (United States)

    Cheong, Raymond; Levchenko, Andre

    2014-01-01

    Living cells are capable of extracting information from their environments and mounting appropriate responses to a variety of associated challenges. The underlying signal transduction networks enabling this can be quite complex, and unraveling them requires sophisticated computational modeling coupled with precise experimentation. Although we are still at the beginning of this process, some recent examples of integrative analysis of cell signaling are very encouraging. This review highlights the case of the NF-κB pathway in order to illustrate how a quantitative model of a signaling pathway can be gradually constructed through continuous experimental validation, and what lessons one might learn from such exercises.

  6. A quantitative analysis of instabilities in the linear chiral sigma model

    International Nuclear Information System (INIS)

    Nemes, M.C.; Nielsen, M.; Oliveira, M.M. de; Providencia, J. da

    1990-08-01

    We present a method to construct a complete set of stationary states corresponding to small-amplitude motion which naturally includes the continuum solution. The energy-weighted sum rule (EWSR) is shown to provide a quantitative criterion for the importance of instabilities, which are known to occur in non-asymptotically free theories. Our results for the linear σ model should be valid for a large class of models. A unified description of baryon and meson properties in terms of the linear σ model is also given. (author)

  7. Evaluating quantitative and qualitative models: An application for nationwide water erosion assessment in Ethiopia

    NARCIS (Netherlands)

    Sonneveld, B.G.J.S.; Keyzer, M.A.; Stroosnijder, L

    2011-01-01

    This paper tests the candidacy of one qualitative response model and two quantitative models for a nationwide water erosion hazard assessment in Ethiopia. After a descriptive comparison of model characteristics the study conducts a statistical comparison to evaluate the explanatory power of the…


  9. The quantitative modelling of human spatial habitability

    Science.gov (United States)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the use of space as a medium of social communication).

  10. The impact of negative childbirth experience on future reproductive decisions: A quantitative systematic review.

    Science.gov (United States)

    Shorey, Shefaly; Yang, Yen Yen; Ang, Emily

    2018-06-01

    The aim of this study was to systematically retrieve, critique and synthesize available evidence regarding the association between negative childbirth experiences and future reproductive decisions. A child's birth is often a joyous event; however, a proportion of women undergo negative childbirth experiences that have long-term implications for their reproductive decisions. A systematic review of quantitative studies was undertaken using the Joanna Briggs Institute's methods. A search was carried out in CINAHL Plus with Full Text, Embase, PsycINFO, PubMed, Scopus and Web of Science from January 1996 to July 2016. Studies that fulfilled the inclusion criteria were assessed by two independent reviewers using the Joanna Briggs Institute's Critical Appraisal Tools. Data were extracted under subheadings adapted from the institute's data extraction forms. Twelve studies, which examined one or more influences of negative childbirth experiences, were identified. The included studies had either cohort or cross-sectional designs. Five studies observed positive associations between prior negative childbirth experiences and decisions not to have another child, three studies found positive associations between negative childbirth experiences and decisions to delay a subsequent birth, and six studies found positive associations between negative childbirth experiences and maternal requests for caesarean section in subsequent pregnancies. To gain a holistic understanding of negative childbirth experiences, a suitable definition and validated measuring tools should be used to study this phenomenon. Future studies or reviews should include a qualitative component and/or the exploration of specific factors, such as cultural and regional differences, that influence childbirth experiences. © 2018 John Wiley & Sons Ltd.

  11. Director gliding in a nematic liquid crystal layer: Quantitative comparison with experiments

    Science.gov (United States)

    Mema, E.; Kondic, L.; Cummings, L. J.

    2018-03-01

    The interaction between nematic liquid crystals and polymer-coated substrates may lead to slow reorientation of the easy axis (so-called "director gliding") when a prolonged external field is applied. We consider the experimental evidence of zenithal gliding observed by Joly et al. [Phys. Rev. E 70, 050701 (2004), 10.1103/PhysRevE.70.050701] and Buluy et al. [J. Soc. Inf. Disp. 14, 603 (2006), 10.1889/1.2235686] as well as azimuthal gliding observed by S. Faetti and P. Marianelli [Liq. Cryst. 33, 327 (2006), 10.1080/02678290500512227], and we present a simple, physically motivated model that captures the slow dynamics of gliding, both in the presence of an electric field and after the electric field is turned off. We make a quantitative comparison of our model results and the experimental data and conclude that our model explains the gliding evolution very well.
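
    A sketch of the kind of slow dynamics such a model captures, with the easy-axis angle relaxing toward the field-imposed director while the field is on and slowly recovering after switch-off; the time constants and angles below are hypothetical:

        import numpy as np

        dt = 1.0                                   # s
        tau_glide, tau_recover = 3.0e3, 8.0e3      # gliding time constants (hypothetical)
        theta_field, theta_easy0 = 45.0, 0.0       # degrees
        t_on, t_total = 5000, 15000                # field-on duration, total time (s)

        theta = np.empty(t_total)
        theta[0] = theta_easy0
        for i in range(1, t_total):
            if i < t_on:      # field on: easy axis glides toward the field direction
                theta[i] = theta[i-1] + dt * (theta_field - theta[i-1]) / tau_glide
            else:             # field off: slow partial recovery of the easy axis
                theta[i] = theta[i-1] + dt * (theta_easy0 - theta[i-1]) / tau_recover

        print(f"gliding angle at switch-off: {theta[t_on-1]:.1f} deg")
        print(f"residual angle at the end:   {theta[-1]:.1f} deg")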

  12. Modeling logistic performance in quantitative microbial risk assessment.

    Science.gov (United States)

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain, with its queues (storages, shelves) and mechanisms for ordering products, is usually not taken into account. As a consequence, storage times, which are mutually dependent in successive steps of the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
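
    A minimal discrete-event sketch of the point being made: a FIFO shelf replenished by an order-up-to policy makes successive storage times mutually dependent and produces a storage-time distribution whose tail matters for risk; the capacity and demand values are hypothetical:

        import numpy as np
        from collections import deque

        rng = np.random.default_rng(7)
        shelf = deque()                 # FIFO shelf: stores the stocking day of each pack
        storage_times = []

        for day in range(365):
            while len(shelf) < 40:      # order-up-to-40 replenishment each morning
                shelf.append(day)
            for _ in range(rng.poisson(8)):   # Poisson daily demand, oldest packs first
                if shelf:
                    storage_times.append(day - shelf.popleft())

        storage_times = np.array(storage_times)
        print(f"mean shelf time: {storage_times.mean():.1f} days")
        print(f"95th percentile: {np.percentile(storage_times, 95):.1f} days")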

  13. Integration of CFD codes and advanced combustion models for quantitative burnout determination

    Energy Technology Data Exchange (ETDEWEB)

    Javier Pallares; Inmaculada Arauzo; Alan Williams [University of Zaragoza, Zaragoza (Spain). Centre of Research for Energy Resources and Consumption (CIRCE)

    2007-10-15

    CFD codes and advanced kinetics combustion models are extensively used to predict coal burnout in large utility boilers. Modelling approaches based on CFD codes can accurately solve the fluid dynamics equations involved in the problem, but this is usually achieved by including simple combustion models. On the other hand, advanced kinetics combustion models can give a detailed description of the coal combustion behaviour by using a simplified description of the flow field, usually obtained from a zone-method approach. Both approximations correctly describe general trends in coal burnout, but fail to predict quantitative values. In this paper a new methodology which takes advantage of both approximations is described. First, CFD solutions were obtained for the combustion conditions in the furnace of the Lamarmora power plant (ASM Brescia, Italy) for a number of different conditions and for three coals. These furnace conditions were then used as inputs for a more detailed chemical combustion model to predict coal burnout. In this model, devolatilization was modelled using a commercial macromolecular network pyrolysis model (FG-DVC). For char oxidation, an intrinsic reactivity approach including thermal annealing, ash inhibition and maceral effects was used. Results from the simulations were compared against plant experimental values, showing a reasonable agreement in trends and quantitative values. 28 refs., 4 figs., 4 tabs.

  14. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

    Directory of Open Access Journals (Sweden)

    Cobbs Gary

    2012-08-01

    Background: Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Results: Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes that annealing of complementary target strands and annealing of target and primers are both reversible reactions that reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature.

  15. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    Science.gov (United States)

    Cobbs, Gary

    2012-08-16

    Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of the initial target concentrations.
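    To make the shared-parameter fitting strategy concrete, here is a minimal sketch: several synthetic qPCR curves are generated by a stepwise amplification in which efficiency falls as primers are consumed, and all curves are fit simultaneously with common kinetic parameters but curve-specific initial target concentrations. The saturation-style efficiency law and all numerical values are stand-ins, not the equilibrium solutions derived in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

CYCLES = np.arange(40)

def qpcr_curve(n0, p0, k):
    """Stepwise amplification: efficiency falls as primers are consumed."""
    n, p, out = n0, p0, []
    for _ in CYCLES:
        eff = p / (p + k)          # stand-in efficiency law, not the paper's
        new = eff * n
        n, p = n + new, max(p - new, 0.0)
        out.append(n)
    return np.array(out)

def residuals(theta, curves):
    k, p0 = theta[0], theta[1]     # kinetic parameters shared by all curves
    n0s = theta[2:]                # one initial target concentration per curve
    res = [np.log(qpcr_curve(n0, p0, k)) - np.log(y)
           for n0, y in zip(n0s, curves)]
    return np.concatenate(res)

# Synthetic "observed" curves from known initial target concentrations.
true_n0 = [1e3, 1e4, 1e5]
curves = [qpcr_curve(n0, 1e9, 3e8) for n0 in true_n0]

fit = least_squares(residuals, x0=[1e8, 5e8, 5e2, 5e3, 5e4], args=(curves,),
                    bounds=(1e-3, np.inf), x_scale="jac", method="trf")
print("estimated initial concentrations:", fit.x[2:])
```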

  16. Predictive value of EEG in postanoxic encephalopathy: A quantitative model-based approach.

    Science.gov (United States)

    Efthymiou, Evdokia; Renzel, Roland; Baumann, Christian R; Poryazova, Rositsa; Imbach, Lukas L

    2017-10-01

    The majority of comatose patients after cardiac arrest do not regain consciousness due to severe postanoxic encephalopathy. Early and accurate outcome prediction is therefore essential in determining further therapeutic interventions. The electroencephalogram (EEG) is a standardized and commonly available tool used to estimate prognosis in postanoxic patients. The identification of pathological EEG patterns with poor prognosis relies, however, primarily on visual EEG scoring by experts. We introduced a model-based approach of EEG analysis (state space model) that allows for an objective and quantitative description of spectral EEG variability. We retrospectively analyzed standard EEG recordings in 83 comatose patients after cardiac arrest between 2005 and 2013 in the intensive care unit of the University Hospital Zürich. Neurological outcome was assessed one month after cardiac arrest using the Cerebral Performance Category. For a dynamic and quantitative EEG analysis, we implemented a model-based approach (state space analysis) to quantify EEG background variability independent of visual scoring of EEG epochs. Spectral variability was compared between groups and correlated with clinical outcome parameters and visual EEG patterns. Quantitative assessment of spectral EEG variability (state space velocity) revealed significant differences between patients with poor and good outcome after cardiac arrest: lower mean velocity in temporal electrodes (T4 and T5) was significantly associated with poor prognostic outcome, as were pathological visual EEG patterns such as generalized periodic discharges. Model-based quantitative EEG analysis (state space analysis) thus provides a novel, complementary marker for prognosis in postanoxic encephalopathy. Copyright © 2017 Elsevier B.V. All rights reserved.
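    The central quantity here, a "velocity" through spectral state space, can be sketched as follows: each EEG window is reduced to a vector of relative band powers, and velocity is the mean step size between consecutive vectors. The window length, frequency bands, and velocity definition below are illustrative assumptions, not the study's exact implementation.

```python
import numpy as np
from scipy.signal import welch

def state_space_velocity(eeg, fs=250, win_s=2.0,
                         bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    """Mean speed of the band-power trajectory of one EEG channel.

    Each window is mapped to a point in spectral state space (one relative
    band power per band); velocity is the mean Euclidean distance between
    consecutive points. All definitions are illustrative assumptions.
    """
    step = int(win_s * fs)
    points = []
    for start in range(0, len(eeg) - step, step):
        f, pxx = welch(eeg[start:start + step], fs=fs, nperseg=step // 2)
        total = np.trapz(pxx, f)
        points.append([np.trapz(pxx[(f >= lo) & (f < hi)],
                                f[(f >= lo) & (f < hi)]) / total
                       for lo, hi in bands])
    points = np.asarray(points)
    return np.linalg.norm(np.diff(points, axis=0), axis=1).mean()

rng = np.random.default_rng(0)
print(state_space_velocity(rng.standard_normal(250 * 60)))  # 1 min of noise
```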

  17. The 'Model Omitron' proposed experiment

    International Nuclear Information System (INIS)

    Sestero, A.

    1997-05-01

    The Model Omitron is a compact tokamak experiment designed by the Fusion Engineering Unit of ENEA and the CITIF CONSORTIUM. Building Model Omitron would allow for full testing of Omitron engineering, and partial testing of Omitron physics, at about 1/20 of the cost estimated for the larger parent machine. In particular, due to the unusually large ohmic power densities (up to 100 times the nominal value in the Frascati FTU experiment), the radial energy flux in Model Omitron reaches values comparable to or higher than those envisaged for the larger ignition experiments Omitron, Ignitor and ITER. Consequently, conditions are expected to occur at the plasma border, in the scrape-off layer of Model Omitron, that are representative of the quoted larger experiments. Moreover, since all this will occur under ohmic heating alone, one will hopefully be able to derive an energy transport model for the ohmic heating regime that is valid over a wider range of plasma parameters (in particular, of the temperature) than was possible before. Finally, by reducing the plasma current and/or the toroidal field down to, say, 1/3 or 1/4 of the nominal values, additional topics can be tackled in the Model Omitron experiment, such as: large safety-factor configurations (of interest for improving confinement); large aspect-ratio configurations (of interest for the investigation of advanced tokamak concepts); high beta with RF heating (also of interest for advanced tokamak concepts); and long-pulse discharges (of interest for demonstrating stationary conditions in the current profile)

  18. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

    Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse and uncertain, and are frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation and state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/
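    The idea of model invalidation can be conveyed with a much simpler (and weaker) sketch: a model hypothesis is kept only if some parameter vector reproduces all interval measurements. Note that the naive sampling used below can certify consistency but, unlike ADMIT's convex relaxations, can never guarantee invalidity; the toy model and bounds are invented for illustration.

```python
import numpy as np

def is_suspect(simulate, param_box, data_bounds, n_samples=10000):
    """Naive feasibility check: flag the model hypothesis if no sampled
    parameter vector keeps the simulated output inside the measurement
    bounds. (ADMIT itself uses convex relaxations, which give guarantees;
    sampling can only ever prove consistency, never invalidity.)"""
    rng = np.random.default_rng(0)
    lo, hi = param_box
    for _ in range(n_samples):
        p = rng.uniform(lo, hi)
        y = simulate(p)
        if np.all((data_bounds[0] <= y) & (y <= data_bounds[1])):
            return False            # a consistent parameterization exists
    return True                     # no sample fits: hypothesis suspect

# Toy steady-state model y = p0 / (p0 + p1) with interval measurements.
simulate = lambda p: np.array([p[0] / (p[0] + p[1])])
print(is_suspect(simulate, (np.array([0.1, 0.1]), np.array([1.0, 1.0])),
                 (np.array([0.95]), np.array([1.0]))))
```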

  19. Quantitative modeling of the ionospheric response to geomagnetic activity

    Directory of Open Access Journals (Sweden)

    T. J. Fuller-Rowell

    Full Text Available A physical model of the coupled thermosphere and ionosphere has been used to determine the accuracy of model predictions of the ionospheric response to geomagnetic activity, and assess our understanding of the physical processes. The physical model is driven by empirical descriptions of the high-latitude electric field and auroral precipitation, as measures of the strength of the magnetospheric sources of energy and momentum to the upper atmosphere. Both sources are keyed to the time-dependent TIROS/NOAA auroral power index. The output of the model is the departure of the ionospheric F region from the normal climatological mean. A 50-day interval towards the end of 1997 has been simulated with the model for two cases. The first simulation uses only the electric fields and auroral forcing from the empirical models, and the second has an additional source of random electric field variability. In both cases, output from the physical model is compared with F-region data from ionosonde stations. Quantitative model/data comparisons have been performed to move beyond the conventional "visual" scientific assessment, in order to determine the value of the predictions for operational use. For this study, the ionosphere at two ionosonde stations has been studied in depth, one each from the northern and southern mid-latitudes. The model clearly captures the seasonal dependence in the ionospheric response to geomagnetic activity at mid-latitude, reproducing the tendency for decreased ion density in the summer hemisphere and increased densities in winter. In contrast to the "visual" success of the model, the detailed quantitative comparisons, which are necessary for space weather applications, are less impressive. The accuracy, or value, of the model has been quantified by evaluating the daily standard deviation, the root-mean-square error, and the correlation coefficient between the data and model predictions. The modeled quiet-time variability, or standard

  20. Quantitative modeling of the ionospheric response to geomagnetic activity

    Directory of Open Access Journals (Sweden)

    T. J. Fuller-Rowell

    2000-07-01

    Full Text Available A physical model of the coupled thermosphere and ionosphere has been used to determine the accuracy of model predictions of the ionospheric response to geomagnetic activity, and assess our understanding of the physical processes. The physical model is driven by empirical descriptions of the high-latitude electric field and auroral precipitation, as measures of the strength of the magnetospheric sources of energy and momentum to the upper atmosphere. Both sources are keyed to the time-dependent TIROS/NOAA auroral power index. The output of the model is the departure of the ionospheric F region from the normal climatological mean. A 50-day interval towards the end of 1997 has been simulated with the model for two cases. The first simulation uses only the electric fields and auroral forcing from the empirical models, and the second has an additional source of random electric field variability. In both cases, output from the physical model is compared with F-region data from ionosonde stations. Quantitative model/data comparisons have been performed to move beyond the conventional "visual" scientific assessment, in order to determine the value of the predictions for operational use. For this study, the ionosphere at two ionosonde stations has been studied in depth, one each from the northern and southern mid-latitudes. The model clearly captures the seasonal dependence in the ionospheric response to geomagnetic activity at mid-latitude, reproducing the tendency for decreased ion density in the summer hemisphere and increased densities in winter. In contrast to the "visual" success of the model, the detailed quantitative comparisons, which are necessary for space weather applications, are less impressive. The accuracy, or value, of the model has been quantified by evaluating the daily standard deviation, the root-mean-square error, and the correlation coefficient between the data and model predictions. The modeled quiet-time variability, or standard
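    The three skill measures named above (daily standard deviation, root-mean-square error, and correlation coefficient between data and model) are straightforward to compute; a minimal sketch follows, with hypothetical daily ionospheric departures standing in for the ionosonde data.

```python
import numpy as np

def skill_scores(obs, model):
    """Standard deviation, RMSE and correlation for a model/data
    comparison, as used in the paper's quantitative assessment."""
    err = model - obs
    return {"std_obs": np.std(obs),
            "rmse": np.sqrt(np.mean(err ** 2)),
            "corr": np.corrcoef(obs, model)[0, 1]}

# Hypothetical daily F-region departures from the climatological mean.
rng = np.random.default_rng(1)
obs = rng.standard_normal(50)
model = 0.6 * obs + 0.4 * rng.standard_normal(50)
print(skill_scores(obs, model))
```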

  1. Qualitative versus quantitative methods in psychiatric research.

    Science.gov (United States)

    Razafsha, Mahdi; Behforuzi, Hura; Azari, Hassan; Zhang, Zhiqun; Wang, Kevin K; Kobeissy, Firas H; Gold, Mark S

    2012-01-01

    Qualitative studies are gaining credibility after a period of being misinterpreted as "not being quantitative." Qualitative method is a broad umbrella term for research methodologies that describe and explain individuals' experiences, behaviors, interactions, and social contexts. In-depth interviews, focus groups, and participant observation are among the qualitative methods of inquiry commonly used in psychiatry. Researchers measure the frequency of occurring events using quantitative methods; qualitative methods, however, provide a broader understanding and a more thorough reasoning behind the event. Hence, they are considered to be of special importance in psychiatry. Besides hypothesis generation in the earlier phases of research, qualitative methods can be employed in questionnaire design, establishment of diagnostic criteria, feasibility studies, and studies of attitudes and beliefs. Animal models are another area in which qualitative methods can be employed, especially when naturalistic observation of animal behavior is important. However, since qualitative results can reflect the researcher's own views, they need to be confirmed statistically through quantitative methods. The tendency to combine qualitative and quantitative methods as complementary approaches has emerged over recent years. By applying both methods of research, scientists can take advantage of the interpretative characteristics of qualitative methods as well as the experimental dimensions of quantitative methods.

  2. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  3. The database for reaching experiments and models.

    Directory of Open Access Journals (Sweden)

    Ben Walker

    Full Text Available Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relate to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis.

  4. Modeling a High Explosive Cylinder Experiment

    Science.gov (United States)

    Zocher, Marvin A.

    2017-06-01

    Cylindrical assemblies constructed from high explosives encased in an inert confining material are often used in experiments aimed at calibrating and validating continuum-level models for the so-called equation of state (the constitutive model for the spherical part of the Cauchy stress tensor). Such is the case in the work discussed here. In particular, the work involves modeling a series of experiments with PBX-9501 encased in a copper cylinder. The objective is to test and perhaps refine a set of phenomenological parameters for the Wescott-Stewart-Davis reactive burn model. The focus of this talk will be on modeling the experiments, which turned out to be non-trivial. The modeling is conducted using ALE methodology.

  5. Quantitative computational models of molecular self-assembly in systems biology.

    Science.gov (United States)

    Thomas, Marcus; Schwartz, Russell

    2017-05-23

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.

  6. Statistical analysis of probabilistic models of software product lines with quantitative constraints

    DEFF Research Database (Denmark)

    Beek, M.H. ter; Legay, A.; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra ... of certain behaviour to the expected average cost of products. This is supported by a Maude implementation of QFLan, integrated with the SMT solver Z3 and the distributed statistical model checker MultiVeStA. Our approach is illustrated with a bikes product line case study.

  7. MIQE précis: Practical implementation of minimum standard guidelines for fluorescence-based quantitative real-time PCR experiments

    NARCIS (Netherlands)

    Bustin, S.A.; Beaulieu, J.F.; Huggett, J.; Jaggi, R.; Kibenge, F.S.; Olsvik, P.A.; Penning, L.C.; Toegel, S.

    2010-01-01


  8. Absolute quantitation of myocardial blood flow with 201Tl and dynamic SPECT in canine: optimisation and validation of kinetic modelling

    International Nuclear Information System (INIS)

    Iida, Hidehiro; Kim, Kyeong-Min; Nakazawa, Mayumi; Sohlberg, Antti; Zeniya, Tsutomu; Hayashi, Takuya; Watabe, Hiroshi; Eberl, Stefan; Tamura, Yoshikazu; Ono, Yukihiko

    2008-01-01

    201Tl has been extensively used for myocardial perfusion and viability assessment. Unlike 99mTc-labelled agents, such as 99mTc-sestamibi and 99mTc-tetrofosmin, the regional concentration of 201Tl varies with time. This study is intended to validate a kinetic modelling approach for in vivo quantitative estimation of regional myocardial blood flow (MBF) and the volume of distribution of 201Tl using dynamic SPECT. Dynamic SPECT was carried out on 20 normal canines after intravenous administration of 201Tl using a commercial SPECT system. Seven animals were studied at rest, nine during adenosine infusion, and four after beta-blocker administration. Quantitative images were reconstructed with a previously validated technique employing OS-EM with attenuation correction and transmission-dependent convolution subtraction scatter correction. Measured regional time-activity curves in myocardial segments were fitted to two- and three-compartment models. Regional MBF was defined as the influx rate constant (K1) with corrections for the partial volume effect, haematocrit and the limited first-pass extraction fraction, and was compared with that determined from radio-labelled microsphere experiments. Regional time-activity curves responded well to pharmacological stress. Quantitative MBF values were higher with adenosine and decreased after beta-blocker administration compared to the resting condition. MBF values obtained with SPECT (MBF-SPECT) correlated well with those obtained by radio-labelled microspheres (MBF-MS) (MBF-SPECT = -0.067 + 1.042 x MBF-MS). These results demonstrate that regional MBF can be quantitatively estimated in vivo with 201Tl and dynamic SPECT. (orig.)
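    As a schematic of the kinetic step, the sketch below fits a one-tissue compartment model (a simplified stand-in for the paper's two- and three-compartment models) to a synthetic time-activity curve; the influx rate constant K1 is the flow-related parameter. The input function, noise level, and units are invented, and the corrections for partial volume, haematocrit and first-pass extraction are omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 20, 121)                  # minutes
c_a = 10.0 * t * np.exp(-t)                  # hypothetical arterial input

def tissue_tac(t, K1, k2):
    """One-tissue compartment model: C_t = K1 * (C_a conv exp(-k2*t))."""
    dt = t[1] - t[0]
    return K1 * np.convolve(c_a, np.exp(-k2 * t))[:len(t)] * dt

# Synthetic noisy tissue curve, then recover the kinetic parameters.
y = tissue_tac(t, 0.8, 0.1) + 0.02 * np.random.default_rng(2).standard_normal(len(t))
(K1, k2), _ = curve_fit(tissue_tac, t, y, p0=[0.5, 0.05])
print(f"K1 = {K1:.3f} (flow index), k2 = {k2:.3f} 1/min")
```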

  9. Quantitative modelling and analysis of a Chinese smart grid: a stochastic model checking case study

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    2014-01-01

    Cyber-physical systems integrate information and communication technology with the physical elements of a system, mainly for monitoring and controlling purposes. The conversion of the traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues that require novel methods and applications. One of the important issues in this context is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese smart grid implementation as a case study and address the verification problem for performance and energy consumption. We employ a stochastic model checking approach and present our modelling and analysis study using the PRISM model checker.

  10. Developing confidence in a coupled TH model based on the results of experiment by using engineering scale test facility, 'COUPLE'

    International Nuclear Information System (INIS)

    Fujisaki, Kiyoshi; Suzuki, Hideaki; Fujita, Tomoo

    2008-03-01

    It is necessary to understand quantitative changes in near-field conditions and processes over time and space for modeling the near-field evolution after emplacement of engineered barriers. However, the coupled phenomena in the near field are complicated because thermal, hydrological, mechanical and chemical processes interact with each other. The question is, therefore, whether the applied model represents the coupled behaviour adequately. In order to develop confidence in the modeling, it is necessary to compare it with the results of coupled-behaviour experiments in the laboratory or in situ. In this report, we evaluated the applicability of a coupled T-H model to the results of a laboratory coupled T-H experiment under simulated near-field conditions. As a result, it has been shown that the model fits the measured data reasonably well under these conditions. (author)

  11. Quantitative prediction of drug side effects based on drug-related features.

    Science.gov (United States)

    Niu, Yanqing; Zhang, Wen

    2017-09-01

    Unexpected side effects of drugs are a great concern in drug development, and the identification of side effects is an important task. Recently, machine learning methods have been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform the side effect profiles of drugs into quantitative scores by summing up their side effects with weights. The quantitative scores may measure the dangers of drugs, and thus help to compare the risk of different drugs. Here, we attempt to predict the quantitative scores of drugs, namely the quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative power for the quantitative prediction. Then, we consider several feature combination strategies (direct combination, average scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average scoring ensemble model, which produces the better performance, is used as the final quantitative prediction model. Since the weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfying results. Although other state-of-the-art methods cannot make the quantitative prediction directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in the quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects, and may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
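    A compact sketch of the two ingredients described above: a weighted-sum quantitative score computed from a binary side-effect profile, and an average scoring ensemble that trains one regressor per feature type and averages their predictions. All data, dimensions, and the choice of ridge regression are hypothetical stand-ins for the paper's features and models.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_drugs, n_effects = 200, 50

# Hypothetical binary side-effect profiles and random empirical weights,
# mirroring the randomly generated weights of the simulation experiments.
profiles = rng.integers(0, 2, (n_drugs, n_effects))
weights = rng.random(n_effects)
scores = profiles @ weights                 # quantitative score per drug

# Three hypothetical feature blocks: substructures, targets, indications.
features = [rng.random((n_drugs, d)) for d in (100, 60, 40)]

# Average scoring ensemble: one regressor per feature type, predictions
# averaged (a sketch of the combination strategy, not the exact models).
models = [Ridge(alpha=1.0).fit(X, scores) for X in features]
pred = np.mean([m.predict(X) for m, X in zip(models, features)], axis=0)
print("correlation with true scores:", np.corrcoef(scores, pred)[0, 1])
```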

  12. The Impact of Student Teaching Experience on Pre-Service Teachers' Readiness for Technology Integration: A Mixed Methods Study with Growth Curve Modeling

    Science.gov (United States)

    Sun, Yan; Strobel, Johannes; Newby, Timothy J.

    2017-01-01

    Adopting a two-phase explanatory sequential mixed methods research design, the current study examined the impact of student teaching experiences on pre-service teachers' readiness for technology integration. In phase-1 of quantitative investigation, 2-level growth curve models were fitted using online repeated measures survey data collected from…

  13. Creating Research-Rich Learning Experiences and Quantitative Skills in a 1st Year Earth Systems Course

    Science.gov (United States)

    King, P. L.; Eggins, S.; Jones, S.

    2014-12-01

    We are creating a 1st year Earth Systems course at the Australian National University that is built around research-rich learning experiences and quantitative skills. The course has top students, including ≤20% indigenous/foreign students; nonetheless, students' backgrounds in math and science vary considerably, posing challenges for learning. We are addressing this issue and aiming to improve knowledge retention and deep learning by changing our teaching approach. In 2013-2014, we modified the weekly course structure to a 1hr lecture; a 2hr workshop with hands-on activities; a 2hr lab; an assessment piece covering all face-to-face activities; and a 1hr tutorial. Our new approach was aimed at: 1) building student confidence with data analysis and quantitative skills through increasingly difficult tasks in science, math, physics, chemistry, climate science and biology; 2) creating effective learning groups using name tags and a classroom with 8-person tiered tables; 3) requiring students to apply new knowledge to new situations in group activities, two 1-day field trips and assessment items; 4) using pre-lab and pre-workshop exercises to promote prior engagement with key concepts; 5) adding open-ended experiments to foster structured 'scientific play' or enquiry and creativity; and 6) aligning the assessment with the learning outcomes and ensuring that it contains authentic and challenging southern hemisphere problems. Students were asked to design their own ocean current experiment in the lab, and we were astounded by their ingenuity: they simulated the ocean currents off Antarctica; varied water density to verify an equation; and examined the effect of wind and seafloor topography on currents. To evaluate changes in student learning, we conducted surveys in 2013 and 2014. In 2014, we found higher levels of student engagement with the course: >~80% attendance rates and >~70% satisfaction (20% neutral). The 2014 cohort felt that they were more competent in writing

  14. Experience economy meets business model design

    DEFF Research Database (Denmark)

    Gudiksen, Sune Klok; Smed, Søren Graakjær; Poulsen, Søren Bolvig

    2012-01-01

    Through the last decade the experience economy has found solid ground and manifested itself as a parameter by which businesses and organizations can differentiate themselves from competitors. The fundamental premise is the one found in Pine & Gilmore's 1999 model of 'the progression of economic value', where ... produced, designed or staged experience that gains the most profit or creates return on investment. It is becoming more obvious that other parameters can in the future be a vital part of the experience economy, and one of these is business model innovation. Business model innovation is about continuous

  15. A three-dimensional stratigraphic model for aggrading submarine channels based on laboratory experiments, numerical modeling, and sediment cores

    Science.gov (United States)

    Limaye, A. B.; Komatsu, Y.; Suzuki, K.; Paola, C.

    2017-12-01

    Turbidity currents deliver clastic sediment from continental margins to the deep ocean, and are the main driver of landscape and stratigraphic evolution in many low-relief, submarine environments. The sedimentary architecture of turbidites, including the spatial organization of coarse and fine sediments, is closely related to the aggradation, scour, and lateral shifting of channels. Seismic stratigraphy indicates that submarine, meandering channels often aggrade rapidly relative to lateral shifting, and develop channel sand bodies with high vertical connectivity. In comparison, the stratigraphic architecture developed by submarine, braided channels is relatively uncertain. We present a new stratigraphic model for submarine braided channels that integrates predictions from laboratory experiments and flow modeling with constraints from sediment cores. In the laboratory experiments, a saline density current developed subaqueous channels in plastic sediment. The channels aggraded to form a deposit with a vertical scale of approximately five channel depths. We collected topography data during aggradation to (1) establish relative stratigraphic age, and (2) estimate the sorting patterns of a hypothetical grain size distribution. We applied a numerical flow model to each topographic surface and used modeled flow depth as a proxy for relative grain size. We then conditioned the resulting stratigraphic model to observed grain size distributions using sediment core data from the Nankai Trough, offshore Japan. Using this stratigraphic model, we establish new, quantitative predictions for the two- and three-dimensional connectivity of coarse sediment as a function of fine-sediment fraction. Using this case study as an example, we will highlight outstanding challenges in relating the evolution of low-relief landscapes to the stratigraphic record.

  16. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  17. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse and uncertain, and are frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation and state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  18. EMC3-EIRENE modeling of toroidally-localized divertor gas injection experiments on Alcator C-Mod

    Energy Technology Data Exchange (ETDEWEB)

    Lore, J.D., E-mail: lorejd@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Reinke, M.L. [York Plasma Institute, Department of Physics, University of York, Heslington, York YO10 5DD (United Kingdom); LaBombard, B. [Plasma Science and Fusion Center, MIT, Cambridge, MA 02139 (United States); Lipschultz, B. [York Plasma Institute, Department of Physics, University of York, Heslington, York YO10 5DD (United Kingdom); Churchill, R.M. [Plasma Science and Fusion Center, MIT, Cambridge, MA 02139 (United States); Pitts, R.A. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Feng, Y. [Max Planck Institute for Plasma Physics, Greifswald (Germany)

    2015-08-15

    Experiments on Alcator C-Mod with toroidally and poloidally localized divertor nitrogen injection have been modeled using the three-dimensional edge transport code EMC3-EIRENE to elucidate the mechanisms driving the measured toroidal asymmetries. In these experiments, five toroidally distributed gas injectors in the private flux region were sequentially activated in separate discharges, resulting in clear evidence of toroidal asymmetries in radiated power and nitrogen line emission, as well as a ∼50% toroidal modulation in electron pressure at the divertor target. The pressure modulation is qualitatively reproduced by the modeling, with the simulation yielding a toroidal asymmetry in the heat flow to the outer strike point. The toroidal variation in impurity line emission is qualitatively matched in the scrape-off layer above the strike point; however, kinetic corrections and cross-field drifts are likely required to quantitatively reproduce the impurity behavior in the private flux region and the electron temperatures and densities directly in front of the target.

  19. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  20. Quantitative Experiments to Explain the Change of Seasons

    Science.gov (United States)

    Testa, Italo; Busarello, Gianni; Puddu, Emanuella; Leccia, Silvio; Merluzzi, Paola; Colantonio, Arturo; Moretti, Maria Ida; Galano, Silvia; Zappia, Alessandro

    2015-01-01

    The science education literature shows that students have difficulty understanding what causes the seasons. Incorrect explanations are often due to a lack of knowledge about the physical mechanisms underlying this phenomenon. To address this, we present a module in which the students engage in quantitative measurements with a photovoltaic panel to…

  1. Quantitative habitability.

    Science.gov (United States)

    Shock, Everett L; Holland, Melanie E

    2007-12-01

    A framework is proposed for a quantitative approach to studying habitability. Considerations of environmental supply and organismal demand of energy lead to the conclusions that power units are most appropriate and that the units for habitability become watts per organism. Extreme and plush environments are revealed to be on a habitability continuum, and extreme environments can be quantified as those where power supply only barely exceeds demand. Strategies for laboratory and field experiments are outlined that would quantify power supplies, power demands, and habitability. An example involving a comparison of various metabolisms pursued by halophiles is shown to be well on the way to a quantitative habitability analysis.

  2. Atmospheric statistical dynamic models. Climate experiments: albedo experiments with a zonal atmospheric model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    The zonal model experiments with modified surface boundary conditions suggest an initial chain of feedback processes that is largest at the site of the perturbation: deforestation and/or desertification → increased surface albedo → reduced surface absorption of solar radiation → surface cooling and reduced evaporation → reduced convective activity → reduced precipitation and latent heat release → cooling of upper troposphere and increased tropospheric lapse rates → general global cooling and reduced precipitation. As indicated above, although the two experiments give similar overall global results, the location of the perturbation plays an important role in determining the response of the global circulation. These two-dimensional model results are also consistent with three-dimensional model experiments. These results have tempted us to consider the possibility that self-induced growth of the subtropical deserts could serve as a possible mechanism to cause the initial global cooling that then initiates a glacial advance thus activating the positive feedback loop involving ice-albedo feedback (also self-perpetuating). Reversal of the cycle sets in when the advancing ice cover forces the wave-cyclone tracks far enough equatorward to quench (revegetate) the subtropical deserts

  3. Analysis of genetic effects of nuclear-cytoplasmic interaction on quantitative traits: genetic model for diploid plants.

    Science.gov (United States)

    Han, Lide; Yang, Jian; Zhu, Jun

    2007-06-01

    A genetic model was proposed for simultaneously analyzing genetic effects of nuclear, cytoplasm, and nuclear-cytoplasmic interaction (NCI) as well as their genotype by environment (GE) interaction for quantitative traits of diploid plants. In the model, the NCI effects were further partitioned into additive and dominance nuclear-cytoplasmic interaction components. Mixed linear model approaches were used for statistical analysis. On the basis of diallel cross designs, Monte Carlo simulations showed that the genetic model was robust for estimating variance components under several situations without specific effects. Random genetic effects were predicted by an adjusted unbiased prediction (AUP) method. Data on four quantitative traits (boll number, lint percentage, fiber length, and micronaire) in Upland cotton (Gossypium hirsutum L.) were analyzed as a worked example to show the effectiveness of the model.

  4. Reactive transport modelling of a heating and radiation experiment in the Boom clay (Belgium)

    International Nuclear Information System (INIS)

    Montenegro, L.; Samper, J.; Delgado, J.

    2003-01-01

    Most countries around the world consider Deep Geological Repositories (DGR) as the safest option for the final disposal of high-level radioactive waste (HLW). A DGR is based on adopting a system of multiple barriers between the HLW and the biosphere. Underground laboratories provide information about the behaviour of these barriers under real conditions. Here we present a reactive transport model for the CERBERUS experiment performed at the HADES underground laboratory at Mol (Belgium) in order to characterize the thermal (T), hydrodynamic (H) and geochemical (G) behaviour of the Boom clay. This experiment is unique because it addresses the combined effect of heat and radiation produced by the storage of HLW in a DGR. Reactive transport models, which are solved with CORE, are used to perform quantitative predictions of the Boom clay thermo-hydro-geochemical (THG) behaviour. Numerical results indicate that heat and radiation cause a slight oxidation near the radioactive source, pyrite dissolution, a pH decrease and slight changes in the pore-water chemical composition of the Boom clay. (Author) 33 refs

  5. A suite of models to support the quantitative assessment of spread in pest risk analysis

    NARCIS (Netherlands)

    Robinet, C.; Kehlenbeck, H.; Werf, van der W.

    2012-01-01

    In the frame of the EU project PRATIQUE (KBBE-2007-212459 Enhancements of pest risk analysis techniques) a suite of models was developed to support the quantitative assessment of spread in pest risk analysis. This dataset contains the model codes (R language) for the four models in the suite. Three

  6. Prospective Middle-School Mathematics Teachers' Quantitative Reasoning and Their Support for Students' Quantitative Reasoning

    Science.gov (United States)

    Kabael, Tangul; Akin, Ayca

    2018-01-01

    The aim of this research is to examine prospective mathematics teachers' quantitative reasoning, their support for students' quantitative reasoning and the relationship between them, if any. The teaching experiment was used as the research method in this qualitatively designed study. The data of the study were collected through a series of…

  7. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    Science.gov (United States)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients receiving or not receiving bevacizumab-based chemotherapy, using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that, for all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association with either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.

  8. Mapping quantitative trait loci in a selectively genotyped outbred population using a mixture model approach

    NARCIS (Netherlands)

    Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van

    1999-01-01

    A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by
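    The record is truncated, but the core machinery it names, maximum likelihood estimation of a mixture model via an EM algorithm, can be sketched generically: with QTL genotypes unobserved, the phenotype distribution is a mixture over genotype classes. The two-component Gaussian EM below ignores marker information entirely and is only meant to illustrate the E- and M-steps; all data are simulated.

```python
import numpy as np

def em_two_gaussians(y, iters=100):
    """EM for a two-component Gaussian mixture with a common variance,
    the basic machinery behind mixture-model QTL mapping (QTL genotype
    unknown -> phenotype is a mixture over genotype classes)."""
    mu = np.array([y.min(), y.max()])
    sigma, pi = y.std(), 0.5
    for _ in range(iters):
        # E-step: posterior probability of membership in component 1
        d0 = np.exp(-0.5 * ((y - mu[0]) / sigma) ** 2)
        d1 = np.exp(-0.5 * ((y - mu[1]) / sigma) ** 2)
        w = pi * d1 / ((1 - pi) * d0 + pi * d1)
        # M-step: update means, common variance and mixing proportion
        mu = np.array([np.sum((1 - w) * y) / np.sum(1 - w),
                       np.sum(w * y) / np.sum(w)])
        sigma = np.sqrt(np.mean((1 - w) * (y - mu[0]) ** 2 +
                                w * (y - mu[1]) ** 2))
        pi = w.mean()
    return mu, sigma, pi

rng = np.random.default_rng(7)
y = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(2.5, 1.0, 300)])
print(em_two_gaussians(y))
```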

  9. Simulation - modeling - experiment

    International Nuclear Information System (INIS)

    2004-01-01

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  10. Variable selection in near infrared spectroscopy for quantitative models of homologous analogs of cephalosporins

    Directory of Open Access Journals (Sweden)

    Yan-Chun Feng

    2014-07-01

    Full Text Available Two universal spectral ranges (4550–4100 cm-1 and 6190–5510 cm-1) for the construction of quantitative models of homologous analogs of cephalosporins were proposed by evaluating the performance of five spectral ranges and their combinations, using three data sets of cephalosporins for injection, i.e., cefuroxime sodium, ceftriaxone sodium and cefoperazone sodium. Subsequently, the proposed ranges were validated using eight calibration sets of other homologous analogs of cephalosporins for injection, namely cefmenoxime hydrochloride, ceftezole sodium, cefmetazole, cefoxitin sodium, cefotaxime sodium, cefradine, cephazolin sodium and ceftizoxime sodium. All the constructed quantitative models for the eight kinds of cephalosporins using these universal ranges could fulfill the requirements for quick quantification. After that, the competitive adaptive reweighted sampling (CARS) algorithm and infrared (IR)–near infrared (NIR) two-dimensional (2D) correlation spectral analysis were used to establish the scientific basis of these two spectral ranges as the universal regions for the construction of quantitative models of cephalosporins. The CARS algorithm demonstrated that the ranges of 4550–4100 cm-1 and 6190–5510 cm-1 include some key wavenumbers that can be attributed to content changes of cephalosporins. The IR–NIR 2D spectral analysis showed that certain wavenumbers in these two regions have strong correlations to the structures of those cephalosporins that are easy to degrade.

  11. Quantitative analysis of protein-ligand interactions by NMR.

    Science.gov (United States)

    Furukawa, Ayako; Konuma, Tsuyoshi; Yanaka, Saeko; Sugase, Kenji

    2016-08-01

    Protein-ligand interactions have been commonly studied through static structures of the protein-ligand complex. Recently, however, there has been increasing interest in investigating the dynamics of protein-ligand interactions both for fundamental understanding of the underlying mechanisms and for drug development. NMR is a versatile and powerful tool, especially because it provides site-specific quantitative information. NMR has widely been used to determine the dissociation constant (KD), in particular, for relatively weak interactions. The simplest NMR method is a chemical-shift titration experiment, in which the chemical-shift changes of a protein in response to ligand titration are measured. There are other quantitative NMR methods, but they mostly apply only to interactions in the fast-exchange regime. These methods derive the dissociation constant from population-averaged NMR quantities of the free and bound states of a protein or ligand. In contrast, the recent advent of new relaxation-based experiments, including R2 relaxation dispersion and ZZ-exchange, has enabled us to obtain kinetic information on protein-ligand interactions in the intermediate- and slow-exchange regimes. Based on R2 dispersion or ZZ-exchange, methods that can determine the association rate, kon, dissociation rate, koff, and KD have been developed. In these approaches, R2 dispersion or ZZ-exchange curves are measured for multiple samples with different protein and/or ligand concentration ratios, and the relaxation data are fitted to theoretical kinetic models. It is critical to choose an appropriate kinetic model, such as the two- or three-state exchange model, to derive the correct kinetic information. The R2 dispersion and ZZ-exchange methods are suitable for the analysis of protein-ligand interactions with a micromolar or sub-micromolar dissociation constant but not for very weak interactions, which are typical in very fast exchange. This contrasts with the NMR methods that are used
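    For the simplest of these approaches, the chemical-shift titration, the dissociation constant is typically obtained by fitting the fast-exchange one-site binding equation to shift changes measured at increasing ligand concentrations. A minimal sketch follows; the protein concentration, titration points, and noise are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

P0 = 0.1  # fixed protein concentration in mM (assumed)

def dobs(L0, Kd, dmax):
    """One-site fast-exchange binding: observed shift change vs ligand."""
    b = P0 + L0 + Kd
    return dmax * (b - np.sqrt(b ** 2 - 4 * P0 * L0)) / (2 * P0)

# Synthetic titration: total ligand added [mM] and noisy shift changes [ppm].
L0 = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
dd = dobs(L0, 0.15, 0.30) + 0.003 * np.random.default_rng(4).standard_normal(len(L0))

(Kd, dmax), _ = curve_fit(dobs, L0, dd, p0=[0.1, 0.2])
print(f"Kd = {Kd * 1e3:.0f} uM, max shift change = {dmax:.3f} ppm")
```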

  12. Refining the statistical model for quantitative immunostaining of surface-functionalized nanoparticles by AFM.

    Science.gov (United States)

    MacCuspie, Robert I; Gorka, Danielle E

    2013-10-01

    Recently, an atomic force microscopy (AFM)-based approach for quantifying the number of biological molecules conjugated to a nanoparticle surface at low number densities was reported. The number of target molecules conjugated to the analyte nanoparticle can be determined with single nanoparticle fidelity using antibody-mediated self-assembly to decorate the analyte nanoparticles with probe nanoparticles (i.e., quantitative immunostaining). This work refines the statistical models used to quantitatively interpret the observations when AFM is used to image the resulting structures. The refinements add terms to the previous statistical models to account for the physical sizes of the analyte nanoparticles, conjugated molecules, antibodies, and probe nanoparticles. Thus, a more physically realistic statistical computation can be implemented for a given sample of known qualitative composition, using the software scripts provided. Example AFM data sets, using horseradish peroxidase conjugated to gold nanoparticles, are presented to illustrate how to implement this method successfully.

  13. Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model

    Science.gov (United States)

    Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi

    2017-09-01

    Residues of harmful dyes such as Auramine O (AO) in herb and food products threaten human health, so fast and sensitive techniques for detecting such residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used in this paper for the quantitative determination of AO, in combination with an improved partial least-squares regression (PLSR) model. The absorbance of herbal samples with different AO concentrations was obtained by THz-TDS in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences between concentrations, provided a clear criterion for selecting the input interval, and improved the accuracy of the detection results. The experimental results indicated that the combination of THz spectroscopy and 2DCOS-PLSR is an excellent quantitative analysis method.
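    The regression step can be sketched with a generic PLSR fit on synthetic spectra; the 2DCOS-based selection of input intervals is not reproduced here (the full band is used instead), and the Gaussian absorption band, concentrations, and noise are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
freqs = np.linspace(0.2, 1.6, 200)            # THz frequency axis
conc = rng.uniform(0.0, 5.0, 60)              # AO concentration per sample

# Synthetic absorbance: a concentration-scaled Gaussian band plus noise
# (a stand-in for real THz-TDS spectra of spiked herbal samples).
band = np.exp(-((freqs - 0.9) ** 2) / 0.02)
X = conc[:, None] * band[None, :] + 0.05 * rng.standard_normal((60, 200))

# Default score of PLSRegression is R^2 on the predicted concentration.
pls = PLSRegression(n_components=3)
r2 = cross_val_score(pls, X, conc, cv=5)
print("cross-validated R^2:", r2.mean().round(3))
```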

  14. Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity

    International Nuclear Information System (INIS)

    Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J

    2008-01-01

    A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with that obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, preliminary results are encouraging and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle
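    The final fitting step, estimating frequency-independent shear modulus and viscosity from dispersive shear-wave speeds, can be sketched with the Kelvin-Voigt dispersion relation commonly used in this context; whether the study used exactly this rheological form is an assumption, and the speeds, density, and parameter values below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

RHO = 1000.0  # assumed tissue density, kg/m^3

def kv_speed(omega, mu, eta):
    """Shear-wave speed dispersion for a Kelvin-Voigt material:
    c(w) = sqrt(2*(mu^2 + w^2*eta^2) / (rho*(mu + sqrt(mu^2 + w^2*eta^2))))."""
    m = np.sqrt(mu ** 2 + (omega * eta) ** 2)
    return np.sqrt(2.0 * (mu ** 2 + (omega * eta) ** 2) / (RHO * (mu + m)))

# Hypothetical dispersive speed estimates from crawling-wave data.
f = np.array([100.0, 150.0, 200.0, 250.0, 300.0, 350.0])   # Hz
omega = 2 * np.pi * f
c = kv_speed(omega, 12e3, 6.0) * (1 + 0.02 * np.random.default_rng(6).standard_normal(len(f)))

# Nonlinear least-squares fit for frequency-independent mu and eta.
(mu, eta), _ = curve_fit(kv_speed, omega, c, p0=[5e3, 2.0])
print(f"shear modulus = {mu / 1e3:.1f} kPa, viscosity = {eta:.1f} Pa*s")
```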

  15. Evaluation of integrated assessment model hindcast experiments: a case study of the GCAM 3.0 land use module

    Science.gov (United States)

    Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.

    2017-11-01

    Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable-region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. The results of this work indicate it is
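
    As an illustration only (the paper's actual measures and thresholds are not reproduced in the abstract), a deviation-based measure can be benchmarked against the variability of the observations themselves, so that a model passes or fails in absolute terms:

        # Illustrative deviation measure with an absolute benchmark drawn
        # from the observational dataset's own statistics.
        import numpy as np

        obs = np.array([1.00, 1.20, 1.50, 1.90, 2.40])   # observed values, one region
        mod = np.array([1.10, 1.30, 1.40, 2.10, 2.20])   # hindcast values, same years

        rmse = np.sqrt(np.mean((mod - obs) ** 2))
        benchmark = np.std(obs, ddof=1)                  # variability of the data itself
        print(f"RMSE = {rmse:.3f}, benchmark = {benchmark:.3f}, "
              f"pass = {bool(rmse < benchmark)}")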

  16. QUAIL: A Quantitative Security Analyzer for Imperative Code

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Wasowski, Andrzej; Traonouez, Louis-Marie

    2013-01-01

    Quantitative security analysis evaluates and compares how effectively a system protects its secret data. We introduce QUAIL, the first tool able to perform an arbitrary-precision quantitative analysis of the security of a system depending on private information. QUAIL builds a Markov Chain model of the system's behavior as observed by an attacker, and computes the correlation between the system's observable output and the behavior depending on the private information, obtaining the expected number of bits of the secret that the attacker will infer by observing the system. QUAIL is able to evaluate the safety of randomized protocols depending on secret data, allowing one to verify a security protocol's effectiveness. We experiment with a few examples and show that QUAIL's security analysis is more accurate and revealing than the results of other tools.

  17. Combining in situ transmission electron microscopy irradiation experiments with cluster dynamics modeling to study nanoscale defect agglomeration in structural metals

    International Nuclear Information System (INIS)

    Xu Donghua; Wirth, Brian D.; Li Meimei; Kirk, Marquis A.

    2012-01-01

    We present a combinatorial approach that integrates state-of-the-art transmission electron microscopy (TEM) in situ irradiation experiments and high-performance computing techniques to study irradiation defect dynamics in metals. Here, we have studied the evolution of visible defect clusters in nanometer-thick molybdenum foils under 1 MeV krypton ion irradiation at 80 °C through both cluster dynamics modeling and in situ TEM experiments. The experimental details are reported elsewhere; we focus here on the details of model construction and on comparing the model with the experiments. The model incorporates continuous production of point defects and/or small clusters, together with the accompanying interactions, which include clustering, recombination and loss to the surfaces that result from the diffusion of the mobile defects. To account for the strong surface effect in thin TEM foils, the model includes one-dimensional spatial dependence along the foil depth and explicitly treats the surfaces as black sinks. The rich data (cluster number density and size distribution at a variety of foil thicknesses, irradiation doses and dose rates) offered by the advanced in situ experiments have allowed close comparisons with computer modeling and permitted significant validation and optimization of the model in terms of both the physical model construct (damage production mode, identities of mobile defects) and its parameterization (diffusivities of mobile defects). The optimized model exhibits good qualitative and quantitative agreement with the in situ TEM experiments. The combinatorial approach is expected to bring a unique opportunity for the study of radiation damage in structural materials.
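
    A minimal sketch of the spatial ingredient described above: a single mobile defect species diffusing along the foil depth under continuous production, with both surfaces treated as black sinks. The full model tracks many cluster sizes and their reaction network; the diffusivity and production rate below are assumed placeholder values.

        # Method-of-lines sketch: 1-D diffusion along the foil depth with
        # continuous defect production and absorbing (black-sink) surfaces.
        import numpy as np
        from scipy.integrate import solve_ivp

        L, N = 50e-9, 101                 # foil thickness (m), grid points
        x = np.linspace(0, L, N)
        dx = x[1] - x[0]
        D, G = 1e-12, 1e22                # diffusivity (m^2/s), production (1/m^3/s) -- assumed

        def rhs(t, c):
            dc = np.empty_like(c)
            dc[1:-1] = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2 + G
            dc[0] = dc[-1] = 0.0          # surfaces pinned at c = 0 (black sinks)
            return dc

        sol = solve_ivp(rhs, (0, 1.0), np.zeros(N), method="BDF")
        print(f"peak concentration at mid-foil: {sol.y[N // 2, -1]:.3e} m^-3")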

  18. Quantitative Finance

    Science.gov (United States)

    James, Jessica

    2017-01-01

    Quantitative finance is a field that has risen to prominence over the last few decades. It encompasses the complex models and calculations that value financial contracts, particularly those which reference events in the future, and apply probabilities to these events. While adding greatly to the flexibility of the market available to corporations and investors, it has also been blamed for worsening the impact of financial crises. But what exactly does quantitative finance encompass, and where did these ideas and models originate? We show that the mathematics behind finance and behind games of chance have tracked each other closely over the centuries and that many well-known physicists and mathematicians have contributed to the field.

  19. Qualitative and quantitative guidelines for the comparison of environmental model predictions

    International Nuclear Information System (INIS)

    Scott, M.

    1995-03-01

    The question of how to assess or compare predictions from a number of models is one of concern in the validation of models, in understanding the effects of different models and model parameterizations on model output, and ultimately in assessing model reliability. Comparison of model predictions with observed data is the basic tool of model validation, while comparison of predictions amongst different models provides one measure of model credibility. The guidance provided here is intended to offer qualitative and quantitative approaches (including graphical and statistical techniques) to such comparisons for use within the BIOMOVS II project. It is hoped that others may find it useful. It contains little technical information on the actual methods, but several references are provided for the interested reader. The guidelines are illustrated on data from the VAMP CB scenario. Unfortunately, these data do not permit all of the possible approaches to be demonstrated since predicted uncertainties were not provided. The questions considered are concerned with a) intercomparison of model predictions and b) comparison of model predictions with the observed data. A series of examples illustrating some of the different types of data structure and some possible analyses have been constructed. A bibliography of references on model validation is provided. It is important to note that the results of the various techniques discussed here, whether qualitative or quantitative, should not be considered in isolation. Overall model performance must also include an evaluation of model structure and formulation, i.e. conceptual model uncertainties, and results for performance measures must be interpreted in this context. Consider a number of models which are used to provide predictions of a number of quantities at a number of time points. In the case of the VAMP CB scenario, the results include predictions of total deposition of Cs-137 and time dependent concentrations in various

  20. Quantitative tradeoffs between spatial, temporal, and thermometric resolution of nonresonant Raman thermometry for dynamic experiments.

    Science.gov (United States)

    McGrane, Shawn D; Moore, David S; Goodwin, Peter M; Dattelbaum, Dana M

    2014-01-01

    The ratio of Stokes to anti-Stokes nonresonant spontaneous Raman can provide an in situ thermometer that is noncontact, independent of any material specific parameters or calibrations, can be multiplexed spatially with line imaging, and can be time resolved for dynamic measurements. However, spontaneous Raman cross sections are very small, and thermometric measurements are often limited by the amount of laser energy that can be applied without damaging the sample or changing its temperature appreciably. In this paper, we quantitatively detail the tradeoff space between spatial, temporal, and thermometric accuracy measurable with spontaneous Raman. Theoretical estimates are pinned to experimental measurements to form realistic expectations of the resolution tradeoffs appropriate to various experiments. We consider the effects of signal to noise, collection efficiency, laser heating, pulsed laser ablation, and blackbody emission as limiting factors, provide formulae to help choose optimal conditions and provide estimates relevant to planning experiments along with concrete examples for single-shot measurements.
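
    The thermometer itself rests on the standard Stokes/anti-Stokes intensity ratio for spontaneous Raman, which in wavenumber form can be inverted for temperature with no material-specific calibration; the numbers below are illustrative.

        # Stokes/anti-Stokes thermometry:
        #   R = I_aS / I_S = ((nu_L + nu) / (nu_L - nu))^4 * exp(-h c nu / (kB T)),
        # solved for T. Wavenumbers in cm^-1, so c is given in cm/s.
        import numpy as np

        h, c, kB = 6.626e-34, 2.998e10, 1.381e-23   # J s, cm/s, J/K

        def temperature(ratio, nu_mode, nu_laser):
            """ratio = I_antiStokes / I_Stokes; nu_mode, nu_laser in cm^-1."""
            f = ((nu_laser + nu_mode) / (nu_laser - nu_mode)) ** 4
            return h * c * nu_mode / (kB * np.log(f / ratio))

        # e.g. 532 nm laser (~18797 cm^-1), 1000 cm^-1 mode, measured ratio 0.25
        print(f"T = {temperature(0.25, 1000.0, 18797.0):.0f} K")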

  1. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  2. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China*

    Science.gov (United States)

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-01-01

    Ecological compensation is becoming one of the key multidisciplinary issues in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) estimated from remote sensing, we construct a new quantitative model for ecological compensation, using the county as the study unit, and determine a standard value with which to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile conflicting economic, social, and ecological interests. PMID:19353749

  3. Fixing the cracks in the crystal ball: A maturity model for quantitative risk assessment

    International Nuclear Information System (INIS)

    Rae, Andrew; Alexander, Rob; McDermid, John

    2014-01-01

    Quantitative risk assessment (QRA) is widely practiced in system safety, but there is insufficient evidence that QRA in general is fit for purpose. Defenders of QRA draw a distinction between poor or misused QRA and correct, appropriately used QRA, but this distinction is only useful if we have robust ways to identify the flaws in an individual QRA. In this paper we present a comprehensive maturity model for QRA which covers all the potential flaws discussed in the risk assessment literature and in a collection of risk assessment peer reviews. We provide initial validation of the completeness and realism of the model. Our risk assessment maturity model provides a way to prioritise both process development within an organisation and empirical research within the QRA community. - Highlights: • Quantitative risk assessment (QRA) is widely practiced, but there is insufficient evidence that it is fit for purpose. • A given QRA may be good, or it may not – we need systematic ways to distinguish this. • We have created a maturity model for QRA which covers all the potential flaws discussed in the risk assessment literature. • We have provided initial validation of the completeness and realism of the model. • The maturity model can also be used to prioritise QRA research discipline-wide

  4. Discussions on the non-equilibrium effects in the quantitative phase field model of binary alloys

    International Nuclear Information System (INIS)

    Zhi-Jun, Wang; Jin-Cheng, Wang; Gen-Cang, Yang

    2010-01-01

    All quantitative phase field models try to eliminate the artificial factors of solutal drag, interface diffusion and interface stretch in the diffuse interface. These artificial non-equilibrium effects, which arise from the introduction of the diffuse interface, are analysed based on the thermodynamic status across the diffuse interface in the quantitative phase field model of binary alloys. Results indicate that the non-equilibrium effects are related to the negative driving force in the local region on the solid side of the diffuse interface. The negative driving force results from the fact that the phase field model is derived from an equilibrium condition but is used to simulate the non-equilibrium solidification process. The interface thickness dependence of the non-equilibrium effects and its restriction on large scale simulation are also discussed. (cross-disciplinary physics and related areas of science and technology)

  5. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  6. Quantitative model for the blood pressure‐lowering interaction of valsartan and amlodipine

    Science.gov (United States)

    Heo, Young‐A; Holford, Nick; Kim, Yukyung; Son, Mijeong

    2016-01-01

    Aims The objective of this study was to develop a population pharmacokinetic (PK) and pharmacodynamic (PD) model to quantitatively describe the antihypertensive effect of combined therapy with amlodipine and valsartan. Methods PK modelling was used with data collected from 48 healthy volunteers receiving a single dose of a combined formulation of 10 mg amlodipine and 160 mg valsartan. Systolic (SBP) and diastolic blood pressure (DBP) were recorded during combined administration. SBP and DBP data for each drug alone were gathered from the literature. PKPD models of each drug and of combined administration were built with NONMEM 7.3. Results A two-compartment model with zero order absorption best described the PK data of both drugs. Amlodipine and valsartan monotherapy effects on SBP and DBP were best described by an Imax model with an effect compartment delay. Combined therapy was described using a proportional interaction term as follows: (D1 + D2) + ALPHA×(D1 × D2). D1 and D2 are the predicted drug effects of amlodipine and valsartan monotherapy, respectively. ALPHA is the interaction term for combined therapy. Quantitative estimates of ALPHA were −0.171 (95% CI: −0.218, −0.143) for SBP and −0.0312 (95% CI: −0.07739, −0.00283) for DBP. These infra-additive interaction terms for both SBP and DBP were consistent with literature results for combined administration of drugs in these classes. Conclusion PKPD models for SBP and DBP successfully described the time course of the antihypertensive effects of amlodipine and valsartan. An infra-additive interaction between amlodipine and valsartan when used in combined administration was confirmed and quantified. PMID:27504853
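
    The interaction structure reported above is easy to state in code: monotherapy effects D1 and D2 from Imax models combine as (D1 + D2) + ALPHA×(D1 × D2), with negative ALPHA giving the infra-additive behavior. Apart from the published ALPHA for SBP, every parameter value below is a hypothetical placeholder.

        # Sketch of the published interaction structure with placeholder parameters.
        def imax_effect(conc, imax, ic50):
            """Fractional BP-lowering effect of one drug at effect-site concentration."""
            return imax * conc / (ic50 + conc)

        def combined_effect(d1, d2, alpha):
            # (D1 + D2) + ALPHA * (D1 * D2); alpha < 0 => infra-additive
            return (d1 + d2) + alpha * (d1 * d2)

        d_aml = imax_effect(10.0, 0.30, 5.0)     # hypothetical amlodipine effect
        d_val = imax_effect(160.0, 0.25, 80.0)   # hypothetical valsartan effect
        print(combined_effect(d_aml, d_val, alpha=-0.171))  # ALPHA for SBP from the study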

  7. Tannin structural elucidation and quantitative ³¹P NMR analysis. 1. Model compounds.

    Science.gov (United States)

    Melone, Federica; Saladino, Raffaele; Lange, Heiko; Crestini, Claudia

    2013-10-02

    Tannins and flavonoids are secondary metabolites of plants that display a wide array of biological activities. This peculiarity is related to the inhibition of extracellular enzymes that occurs through the complexation of peptides by tannins. Not only the nature of these interactions, but more fundamentally also the structure of these heterogeneous polyphenolic molecules are not completely clear. This first paper describes the development of a new analytical method for the structural characterization of tannins on the basis of tannin model compounds employing an in situ labeling of all labile H groups (aliphatic OH, phenolic OH, and carboxylic acids) with a phosphorus reagent. The ³¹P NMR analysis of ³¹P-labeled samples allowed the unprecedented quantitative and qualitative structural characterization of hydrolyzable tannins, proanthocyanidins, and catechin tannin model compounds, forming the foundations for the quantitative structural elucidation of a variety of actual tannin samples described in part 2 of this series.

  8. Bridging the qualitative-quantitative divide: Experiences from conducting a mixed methods evaluation in the RUCAS programme.

    Science.gov (United States)

    Makrakis, Vassilios; Kostoulas-Makrakis, Nelly

    2016-02-01

    Quantitative and qualitative approaches to planning and evaluation in education for sustainable development have often been treated by practitioners from a single research paradigm. This paper discusses the utility of mixed method evaluation designs which integrate qualitative and quantitative data through a sequential transformative process. Sequential mixed method data collection strategies involve collecting data in an iterative process whereby data collected in one phase contribute to data collected in the next. This is done through examples from a programme addressing the 'Reorientation of University Curricula to Address Sustainability (RUCAS): A European Commission Tempus-funded Programme'. It is argued that the two approaches are complementary and that there are significant gains from combining both. Using methods from both research paradigms does not, however, mean that the inherent differences among epistemologies and methodologies should be neglected. Based on this experience, it is recommended that using a sequential transformative mixed method evaluation can produce more robust results than could be accomplished using a single approach in programme planning and evaluation focussed on education for sustainable development.

  9. Mixing quantitative with qualitative methods

    DEFF Research Database (Denmark)

    Morrison, Ann; Viller, Stephen; Heck, Tamara

    2017-01-01

    with or are considering, researching, or working with both quantitative and qualitative evaluation methods (in academia or industry), join us in this workshop. In particular, we look at adding quantitative to qualitative methods to build a whole picture of user experience. We see a need to discuss both quantitative and qualitative research because there is often a perceived lack of understanding of the rigor involved in each. The workshop will result in a White Paper on the latest developments in this field, within Australia and comparative with international work. We anticipate sharing submissions and workshop outcomes...

  10. Making coarse grained polymer simulations quantitatively predictive for statics and dynamics

    Science.gov (United States)

    Kremer, Kurt

    2010-03-01

    By combining input from short simulation runs of rather small, fully atomistic systems with properly adapted coarse grained models, we are able to quantitatively predict static and especially dynamical properties both of pure melts of long, fully entangled polymers and of systems with low molecular weight additives. Comparisons to rather different experiments, such as diffusion constant measurements or NMR relaxation experiments, show remarkable quantitative agreement without any adjustable parameter. Reintroduction of chemical details into the coarse grained trajectories allows the study of long time trajectories in full atomistic detail, providing the opportunity for rather different means of data analysis. References: V. Harmandaris, K. Kremer, Macromolecules, in press (2009); V. Harmandaris et al., Macromolecules, 40, 7026 (2007); B. Hess, S. Leon, N. van der Vegt, K. Kremer, Soft Matter 2, 409 (2006); D. Fritz et al., Soft Matter 5, 4556 (2009)

  11. Modeling of laser-driven hydrodynamics experiments

    Science.gov (United States)

    di Stefano, Carlos; Doss, Forrest; Rasmus, Alex; Flippo, Kirk; Desjardins, Tiffany; Merritt, Elizabeth; Kline, John; Hager, Jon; Bradley, Paul

    2017-10-01

    Correct interpretation of hydrodynamics experiments driven by a laser-produced shock depends strongly on an understanding of the time-dependent effect of the irradiation conditions on the flow. In this talk, we discuss the modeling of such experiments using the RAGE radiation-hydrodynamics code. The focus is an instability experiment consisting of a period of relatively steady shock conditions, in which the Richtmyer-Meshkov process dominates, followed by a period of decaying flow conditions, in which the dominant growth process changes to Rayleigh-Taylor instability. The use of a laser model is essential for capturing the transition.

  12. Extracting Models in Single Molecule Experiments

    Science.gov (United States)

    Presse, Steve

    2013-03-01

    Single molecule experiments can now monitor the journey of a protein from its assembly near a ribosome to its proteolytic demise. Ideally, all single molecule data should be self-explanatory. However, data originating from single molecule experiments are particularly challenging to interpret on account of fluctuations and noise at such small scales. Realistically, basic understanding comes from models carefully extracted from the noisy data. Statistical mechanics, and maximum entropy in particular, provide a powerful framework for accomplishing this task in a principled fashion. Here I will discuss our work in extracting conformational memory from single molecule force spectroscopy experiments on large biomolecules. One clear advantage of this method is that we let the data tend towards the correct model; we do not fit the data. I will show that the dynamical model of the single molecule dynamics which emerges from this analysis is often more textured and complex than one that could come from fitting the data to a pre-conceived model.

  13. Towards Quantitative Spatial Models of Seabed Sediment Composition.

    Directory of Open Access Journals (Sweden)

    David Stephens

    Full Text Available There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom's parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data are analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables is explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method.
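
    A compressed sketch of the modeling chain (compositional fractions to additive log-ratios, a random forest on the log-ratios, then back-transformation to fractions), with random placeholder covariates standing in for the hydrodynamic, optical and bathymetric predictors:

        # Compositional regression sketch: alr transform + random forest.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        comp = rng.dirichlet((2, 5, 1), size=300)          # mud, sand, gravel fractions
        X = rng.standard_normal((300, 6))                  # stand-in predictors

        alr = np.log(comp[:, :2] / comp[:, 2:3])           # two log-ratios vs gravel
        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, alr)

        pred = rf.predict(X[:5])
        expd = np.hstack([np.exp(pred), np.ones((5, 1))])  # invert the alr transform
        fractions = expd / expd.sum(axis=1, keepdims=True)
        print(fractions.round(3))                          # rows sum to 1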

  14. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    Full Text Available The SDN controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on the security mechanism itself, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. Based on an analysis of the controller threat model, we give formal models of the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN controllers: POX, OpenDaylight, Floodlight, and Ryu. The tests, whose outcomes are similar to those of traditional qualitative analysis, demonstrate that our approach yields specific security values for different controllers and produces more accurate results.

  15. Quantitative Hydraulic Models Of Early Land Plants Provide Insight Into Middle Paleozoic Terrestrial Paleoenvironmental Conditions

    Science.gov (United States)

    Wilson, J. P.; Fischer, W. W.

    2010-12-01

    Fossil plants provide useful proxies of Earth’s climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves--even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 / MPa * s, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptations for transport safety, placing the plant’s vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon’s tracheids suggests that environmental conditions of reduced relative
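
    The order of magnitude quoted above can be reproduced by the simplest ingredient of such hydraulic models, treating a tracheid lumen as a Hagen-Poiseuille tube with area-specific conductivity r²/(8η); the published model also accounts for end-wall (pit) resistance and other terms, so this is only a first-cut sketch.

        # Lumen-specific hydraulic conductivity of an idealized tracheid.
        ETA = 1.0e-3  # viscosity of water, Pa*s (assumed)

        def lumen_conductivity(radius_m):
            """Area-specific conductivity in m^2 / (MPa * s), from r^2 / (8 eta)."""
            return radius_m**2 / (8.0 * ETA) * 1.0e6   # convert 1/Pa to 1/MPa

        for r_um in (5, 10, 20):
            k = lumen_conductivity(r_um * 1e-6)
            print(f"r = {r_um:2d} um -> k = {k:.4f} m^2/(MPa*s)")
        # r = 10 um already gives ~0.0125, close to the 0.015 threshold cited above.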

  16. Modelling ohmic confinement experiments on the START tokamak

    International Nuclear Information System (INIS)

    Roach, C.M.

    1996-05-01

    Ohmic confinement data from the tight aspect ratio tokamak START have been analysed using the ASTRA transport simulation code. Neoclassical expressions have been modified to describe tight aspect ratio configurations, and the comparison between START data and models of anomalous transport has been made quantitative using the standard χ² test from statistics. Four confinement models (T11, Rebut-Lallia-Watkins, Lackner-Gottardi, and Taroni et al.'s Bohm model) have been compared with the START data. Three of the models are found to simulate START's electron temperature data moderately well, while Taroni et al.'s Bohm model overestimates electron temperatures in START by an order of magnitude. Thus comparison with START data tends to discriminate against Bohm models; these models are pessimistic for ITER. (author)

  17. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA.

    Science.gov (United States)

    Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Many of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain, and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method

  18. Model development for quantitative evaluation of proliferation resistance of nuclear fuel cycles

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Won Il; Kim, Ho Dong; Yang, Myung Seung

    2000-07-01

    This study addresses the quantitative evaluation of proliferation resistance, which is an important factor of alternative nuclear fuel cycle systems. In this study, a model was developed to quantitatively evaluate the proliferation resistance of nuclear fuel cycles. The proposed models were then applied to the Korean environment as a sample study to provide better references for the determination of a future nuclear fuel cycle system in Korea. In order to quantify the proliferation resistance of the nuclear fuel cycle, the proliferation resistance index was defined in imitation of an electrical circuit with an electromotive force and various electrical resistance components. The analysis of the proliferation resistance of nuclear fuel cycles has shown that the resistance index as defined herein can be used as an international measure of the relative risk of nuclear proliferation if the motivation index is appropriately defined. It has also shown that the proposed model can include political issues as well as technical ones relevant to proliferation resistance, and can consider all facilities and activities in a specific nuclear fuel cycle (from mining to disposal). In addition, sensitivity analyses in the sample study indicate that the direct disposal option in a country with high nuclear propensity may give rise to a higher risk of nuclear proliferation than the reprocessing option in a country with low nuclear propensity.

  20. Quantitative influence of risk factors on blood glucose level.

    Science.gov (United States)

    Chen, Songjing; Luo, Senlin; Pan, Limin; Zhang, Tiemei; Han, Longfei; Zhao, Haixiu

    2014-01-01

    The aim of this study is to quantitatively analyze the influence of risk factors on blood glucose level, and to provide a theoretical basis for understanding the characteristics of blood glucose change and for confirming intervention indices for type 2 diabetes. A quantitative method is proposed to analyze the influence of risk factors on blood glucose using a back propagation (BP) neural network. Ten risk factors are screened first. Then the cohort is divided into nine groups by gender and age. According to the minimum error principle, nine BP models are trained respectively. The quantitative values of the influence of different risk factors on blood glucose change can then be obtained by sensitivity calculation. The experimental results indicate that weight is the leading cause of blood glucose change (0.2449). The next most influential factors are cholesterol, age and triglyceride. Together, these four factors account for 77% of the influence of the nine screened risk factors. The sensitivity rankings can also provide a basis for judging individual interventions. This method can be applied to the quantitative analysis of risk factors for other diseases, and can potentially be used by clinical practitioners to identify high-risk populations for type 2 diabetes as well as other diseases.
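
    The sensitivity calculation can be sketched as follows: train a small backpropagation network on (risk factor, glucose) data, then score each factor by the mean absolute change in prediction under a small input perturbation. The data are synthetic, and the study's ten factors and nine gender/age groups are not reproduced.

        # Perturbation-based sensitivity of a small BP network (illustrative).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 10))             # ten standardized risk factors
        y = 0.8 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * rng.standard_normal(500)

        net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                           random_state=0).fit(X, y)

        eps = 0.01
        base = net.predict(X)
        sens = np.array([np.mean(np.abs(net.predict(X + eps * np.eye(10)[j]) - base)) / eps
                         for j in range(10)])
        print(np.round(sens / sens.sum(), 3))          # normalized sensitivity ranking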

  1. Quantifying Zika: Advancing the Epidemiology of Zika With Quantitative Models.

    Science.gov (United States)

    Keegan, Lindsay T; Lessler, Justin; Johansson, Michael A

    2017-12-16

    When Zika virus (ZIKV) emerged in the Americas, little was known about its biology, pathogenesis, and transmission potential, and the scope of the epidemic was largely hidden, owing to generally mild infections and no established surveillance systems. Surges in congenital defects and Guillain-Barré syndrome alerted the world to the danger of ZIKV. In the context of limited data, quantitative models were critical in reducing uncertainties and guiding the global ZIKV response. Here, we review some of the models used to assess the risk of ZIKV-associated severe outcomes, the potential speed and size of ZIKV epidemics, and the geographic distribution of ZIKV risk. These models provide important insights and highlight significant unresolved questions related to ZIKV and other emerging pathogens.

  2. Comparison of three labeled silica nanoparticles used as tracers in transport experiments in porous media. Part II: Transport experiments and modeling

    International Nuclear Information System (INIS)

    Vitorge, Elsa; Szenknect, Stéphanie; Martins, Jean M.-F.; Barthès, Véronique; Gaudet, Jean-Paul

    2014-01-01

    Three types of labeled silica nanoparticles were used in transport experiments in saturated sand. The goal of this study was to evaluate both the efficiency of labeling techniques (fluorescence (FITC), metal (Ag(0) core) and radioactivity (110mAg(0) core)) in realistic transport conditions and the reactive transport of silica nanocolloids of variable size and concentration in porous media. Experimental results obtained under contrasted experimental conditions revealed that deposition in sand is controlled by nanoparticle size and the ionic strength of the solution. A mathematical model is proposed to quantitatively describe colloid transport. Fluorescent labeling is widely used to study the fate of colloids in soils but was the least sensitive technique. Ag(0) labeling with ICP-MS detection was found to be very sensitive for measuring deposition profiles. Radiolabeled (110mAg(0)) nanoparticles permitted in situ detection. Results obtained with radiolabeled nanoparticles are wholly original and might be used for improving the modeling of deposition and release dynamics. -- Highlights: • Three kinds of labeled nanotracers were used in transport experiments in sand columns. • They were used as surrogates of silica nanoparticles or mineral colloids. • Deposition depending on colloid size and ionic strength was observed and modeled. • Fluorescence labeling had the worst detection limit but was the most convenient. • Radiolabeled nanotracers were detected in situ in a non-destructive way. -- Follow the kinetics of transport, deposition and release of silica nanoparticles with suitably labeled nanoparticles

  3. CFD and FEM modeling of PPOOLEX experiments

    Energy Technology Data Exchange (ETDEWEB)

    Paettikangas, T.; Niemi, J.; Timperi, A. (VTT Technical Research Centre of Finland (Finland))

    2011-01-15

    A large-break LOCA experiment performed with the PPOOLEX experimental facility is analysed with CFD calculations. Simulation of the first 100 seconds of the experiment is performed using the Euler-Euler two-phase model of FLUENT 6.3. In wall condensation, the condensing water forms a film layer on the wall surface, which is modelled by mass transfer from the gas phase to the liquid water phase in the near-wall grid cell. The direct-contact condensation in the wetwell is modelled with simple correlations. The wall condensation and direct-contact condensation models are implemented with user-defined functions in FLUENT. Fluid-Structure Interaction (FSI) calculations of the PPOOLEX experiments and of a realistic BWR containment are also presented. Two-way coupled FSI calculations of the experiments have been numerically unstable with explicit coupling. A linear perturbation method (LPM) is therefore used to prevent the numerical instability. The method is first validated against numerical data and against the PPOOLEX experiments. Preliminary FSI calculations are then performed for a realistic BWR containment by modeling a sector of the containment and one blowdown pipe. For the BWR containment, one- and two-way coupled calculations as well as calculations with the LPM are carried out. (Author)

  4. Theory and Practice in Quantitative Genetics

    DEFF Research Database (Denmark)

    Posthuma, Daniëlle; Beem, A Leo; de Geus, Eco J C

    2003-01-01

    With the rapid advances in molecular biology, the near completion of the human genome, the development of appropriate statistical genetic methods and the availability of the necessary computing power, the identification of quantitative trait loci has now become a realistic prospect for quantitative geneticists. We briefly describe the theoretical biometrical foundations underlying quantitative genetics. These theoretical underpinnings are translated into mathematical equations that allow the assessment of the contribution of observed (using DNA samples) and unobserved (using known genetic relationships) genetic variation to population variance in quantitative traits. Several statistical models for quantitative genetic analyses are described, such as models for the classical twin design, multivariate and longitudinal genetic analyses, extended twin analyses, and linkage and association analyses. For each...

  5. Human eyeball model reconstruction and quantitative analysis.

    Science.gov (United States)

    Xing, Qi; Wei, Qi

    2014-01-01

    Determining the shape of the eyeball is important for diagnosing eyeball diseases such as myopia. In this paper, we present an automatic approach to precisely reconstruct the three-dimensional geometric shape of the eyeball from MR images. The model development pipeline involved image segmentation, registration, B-spline surface fitting and subdivision surface fitting, none of which required manual interaction. From the resulting high-resolution models, geometric characteristics of the eyeball can be accurately quantified and analyzed. In addition to the eight metrics commonly used by existing studies, we proposed two novel metrics, Gaussian Curvature Analysis and Sphere Distance Deviation, to quantify the cornea shape and the whole eyeball surface, respectively. The experimental results showed that the reconstructed eyeball models accurately represent the complex morphology of the eye. The ten metrics parameterize the eyeball among different subjects, which can potentially be used for eye disease diagnosis.
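
    The abstract does not define Sphere Distance Deviation precisely; one plausible reading, sketched below, is the spread of surface points about a best-fit sphere, with the sphere obtained from the standard algebraic least-squares formulation.

        # Hypothetical "sphere distance deviation": fit a sphere to surface
        # points by linear least squares, then report the spread of radial
        # distances about it.
        import numpy as np

        def sphere_deviation(pts):
            # |x|^2 = 2 c.x + (R^2 - |c|^2), linear in (c, t).
            A = np.hstack([2 * pts, np.ones((len(pts), 1))])
            b = (pts**2).sum(axis=1)
            sol, *_ = np.linalg.lstsq(A, b, rcond=None)
            center, t = sol[:3], sol[3]
            radius = np.sqrt(t + center @ center)
            d = np.linalg.norm(pts - center, axis=1) - radius
            return radius, d.std()

        rng = np.random.default_rng(0)
        u = rng.standard_normal((1000, 3))
        pts = 12.0 * u / np.linalg.norm(u, axis=1, keepdims=True)  # ~12 mm "eyeball"
        pts += 0.1 * rng.standard_normal(pts.shape)                # surface noise
        print(sphere_deviation(pts))                               # ~ (12.0, 0.1)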

  6. Quantitative methods in psychology: inevitable and useless

    Directory of Open Access Journals (Sweden)

    Aaro Toomela

    2010-07-01

    Full Text Available Science begins with the question, what do I want to know? Science becomes science, however, only when this question is justified and the appropriate methodology is chosen for answering the research question. The research question should precede the other questions; methods should be chosen according to the research question and not vice versa. Modern quantitative psychology has accepted method as primary; research questions are adjusted to the methods. For understanding thinking in modern quantitative psychology, two epistemologies should be distinguished: the structural-systemic, which is based on Aristotelian thinking, and the associative-quantitative, which is based on Cartesian-Humean thinking. The first aims at understanding the structure that underlies the studied processes; the second looks for identification of cause-effect relationships between events with no possible access to an understanding of the structures that underlie the processes. Quantitative methodology in particular, as well as mathematical psychology in general, is useless for answering questions about structures and processes that underlie observed behaviors. Nevertheless, quantitative science is almost inevitable in a situation where the systemic-structural basis of behavior is not well understood; all sorts of applied decisions can be made on the basis of quantitative studies. In order to proceed, psychology should study structures; methodologically, constructive experiments should be added to observations and analytic experiments.

  7. Quantitative Modelling of Trace Elements in Hard Coal.

    Science.gov (United States)

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades, and coal is expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal and on coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of them. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal, and in this way contributes to the development of useful tools for coal quality assessment.

  8. A study on the quantitative model of human response time using the amount and the similarity of information

    International Nuclear Information System (INIS)

    Lee, Sung Jin

    2006-02-01

    The mental capacity to retain or recall information, or memory, is related to human performance during the processing of information. Although a large number of studies have been carried out on human performance, little is known about the similarity effect. The purpose of this study was to propose and validate a quantitative, predictive model of human response time in the user interface, built on the basic concepts of information amount, similarity and degree of practice. It was difficult to explain human performance by similarity or information amount alone. There were two difficulties: constructing a quantitative model of human response time and validating the proposed model by experimental work. A quantitative model based on Hick's law, the law of practice and similarity theory was developed. The model was validated under various experimental conditions by measuring the participants' response time in the environment of a computer-based display. Human performance was improved by degree of similarity and practice in the user interface. We also found an age-related effect: human performance degraded with increasing age. The proposed model may be useful for training operators who will handle such interfaces and for predicting how changes in system design affect human performance.
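
    The abstract names the ingredients (Hick's law for information amount, the power law of practice, a similarity effect) without giving the combined functional form, so the composition below is a labeled assumption rather than the study's model:

        # Illustrative response-time model: Hick's law, power law of practice,
        # and a similarity penalty. Every coefficient is an assumption.
        import math

        def response_time(n_alternatives, n_trials, similarity,
                          a=0.2, b=0.15, r=0.3, s=0.25):
            hick = a + b * math.log2(n_alternatives + 1)   # information amount
            practice = n_trials ** (-r)                    # speed-up with practice
            penalty = 1.0 + s * similarity                 # similar items slow the choice
            return hick * practice * penalty               # seconds (illustrative)

        print(f"{response_time(8, 50, 0.7):.3f} s")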

  9. A Cytomorphic Chip for Quantitative Modeling of Fundamental Bio-Molecular Circuits.

    Science.gov (United States)

    2015-08-01

    We describe a 0.35 μm BiCMOS silicon chip that quantitatively models fundamental molecular circuits via efficient log-domain cytomorphic transistor equivalents. These circuits include those for biochemical binding with automatic representation of non-modular and loading behavior, e.g., in cascade and fan-out topologies; for representing variable Hill-coefficient operation and cooperative binding; for representing inducer, transcription-factor, and DNA binding; for probabilistic gene transcription with analogic representations of log-linear and saturating operation; for gain, degradation, and dynamics of mRNA and protein variables in transcription and translation; and, for faithfully representing biological noise via tunable stochastic transistor circuits. The use of on-chip DACs and ADCs enables multiple chips to interact via incoming and outgoing molecular digital data packets and thus create scalable biochemical reaction networks. The use of off-chip digital processors and on-chip digital memory enables programmable connectivity and parameter storage. We show that published static and dynamic MATLAB models of synthetic biological circuits including repressilators, feed-forward loops, and feedback oscillators are in excellent quantitative agreement with those from transistor circuits on the chip. Computationally intensive stochastic Gillespie simulations of molecular production are also rapidly reproduced by the chip and can be reliably tuned over the range of signal-to-noise ratios observed in biological cells.

  10. Quantitative structure-activity relationship (QSAR) for insecticides: development of predictive in vivo insecticide activity models.

    Science.gov (United States)

    Naik, P K; Singh, T; Singh, H

    2009-07-01

    Quantitative structure-activity relationship (QSAR) analyses were performed independently on data sets belonging to two groups of insecticides, namely the organophosphates and carbamates. Several types of descriptors including topological, spatial, thermodynamic, information content, lead likeness and E-state indices were used to derive quantitative relationships between insecticide activities and structural properties of chemicals. A systematic search approach based on missing value, zero value, simple correlation and multi-collinearity tests as well as the use of a genetic algorithm allowed the optimal selection of the descriptors used to generate the models. The QSAR models developed for both organophosphate and carbamate groups revealed good predictability with r(2) values of 0.949 and 0.838 as well as [image omitted] values of 0.890 and 0.765, respectively. In addition, a linear correlation was observed between the predicted and experimental LD(50) values for the test set data with r(2) of 0.871 and 0.788 for both the organophosphate and carbamate groups, indicating that the prediction accuracy of the QSAR models was acceptable. The models were also tested successfully from external validation criteria. QSAR models developed in this study should help further design of novel potent insecticides.

  11. Identifying specific protein interaction partners using quantitative mass spectrometry and bead proteomes

    Science.gov (United States)

    Trinkle-Mulcahy, Laura; Boulon, Séverine; Lam, Yun Wah; Urcia, Roby; Boisvert, François-Michel; Vandermoere, Franck; Morrice, Nick A.; Swift, Sam; Rothbauer, Ulrich; Leonhardt, Heinrich; Lamond, Angus

    2008-01-01

    The identification of interaction partners in protein complexes is a major goal in cell biology. Here we present a reliable affinity purification strategy to identify specific interactors that combines quantitative SILAC-based mass spectrometry with characterization of common contaminants binding to affinity matrices (bead proteomes). This strategy can be applied to affinity purification of either tagged fusion protein complexes or endogenous protein complexes, illustrated here using the well-characterized SMN complex as a model. GFP is used as the tag of choice because it shows minimal nonspecific binding to mammalian cell proteins, can be quantitatively depleted from cell extracts, and allows the integration of biochemical protein interaction data with in vivo measurements using fluorescence microscopy. Proteins binding nonspecifically to the most commonly used affinity matrices were determined using quantitative mass spectrometry, revealing important differences that affect experimental design. These data provide a specificity filter to distinguish specific protein binding partners in both quantitative and nonquantitative pull-down and immunoprecipitation experiments. PMID:18936248
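
    In essence, the specificity filter reduces to comparing enrichment ratios against a contaminant list; a toy sketch with invented SILAC ratios and an assumed threshold:

        # Toy specificity filter: call a protein a specific partner if its
        # heavy/light SILAC enrichment clears a cutoff and it is not a known
        # bead-proteome contaminant. All values below are invented.
        proteins = {"SMN1": 8.5, "GEMIN2": 7.9, "HSP70": 1.1,
                    "TUBB": 0.9, "DDX20": 6.4}
        bead_proteome = {"HSP70", "TUBB"}   # common contaminant list (assumed)
        cutoff = 3.0                        # fold-enrichment threshold (assumed)

        specific = [p for p, ratio in proteins.items()
                    if ratio >= cutoff and p not in bead_proteome]
        print(specific)                     # ['SMN1', 'GEMIN2', 'DDX20']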

  12. Quantitative structure activity relationship model for predicting the depletion percentage of skin allergic chemical substances of glutathione

    International Nuclear Information System (INIS)

    Si Hongzong; Wang Tao; Zhang Kejun; Duan Yunbo; Yuan Shuping; Fu Aiping; Hu Zhide

    2007-01-01

    A quantitative model was developed to predict the depletion percentage of glutathione (DPG) by compounds using gene expression programming (GEP). Each compound was represented by several calculated structural descriptors involving constitutional, topological, geometrical, electrostatic and quantum-chemical features. The GEP method produced a nonlinear, five-descriptor quantitative model with a mean error and a correlation coefficient of 10.52 and 0.94 for the training set, and 22.80 and 0.85 for the test set, respectively. The GEP predictions are shown to be in good agreement with experimental values, and better than those of the heuristic method.

  13. Refining Grasp Affordance Models by Experience

    DEFF Research Database (Denmark)

    Detry, Renaud; Kraft, Dirk; Buch, Anders Glent

    2010-01-01

    We present a method for learning object grasp affordance models in 3D from experience, and demonstrate its applicability through extensive testing and evaluation on a realistic and largely autonomous platform. Grasp affordance refers here to relative object-gripper configurations that yield stable ... with a visual model of the object they characterize. We explore a batch-oriented, experience-based learning paradigm where grasps sampled randomly from a density are performed, and an importance-sampling algorithm learns a refined density from the outcomes of these experiences. The first such learning cycle is bootstrapped with a grasp density formed from visual cues. We show that the robot effectively applies its experience by downweighting poor grasp solutions, which results in increased success rates at subsequent learning cycles. We also present success rates in a practical scenario where a robot needs...

  14. Quantitative model for the generic 3D shape of ICMEs at 1 AU

    Science.gov (United States)

    Démoulin, P.; Janvier, M.; Masías-Meza, J. J.; Dasso, S.

    2016-10-01

    Context. Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs), while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, that is, along a 1D cut. The data therefore only give a partial view of the 3D structures of ICMEs. Aims: By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. Methods: In a first approach we theoretically obtained the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compared their compatibility with observed distributions. In a second approach we used the shock normal and the flux rope axis orientations together with the impact parameter to provide statistical information across the spacecraft trajectory. Results: The study of different 3D shock models shows that the observations are compatible with a shock that is symmetric around the Sun-apex line as well as with an asymmetry up to an aspect ratio of around 3. Moreover, flat or dipped shock surfaces near their apex can only be rare cases. Next, the sheath thickness and the ICME velocity have no global trend along the ICME front. Finally, regrouping all these new results and those of our previous articles, we provide a quantitative ICME generic 3D shape, including the global shape of the shock, the sheath, and the flux rope. Conclusions: The obtained quantitative generic ICME shape will have implications for several aims. For example, it constrains the output of typical ICME numerical simulations. It is also a base for studying the transport of high-energy solar and cosmic particles during an ICME propagation as well as for modeling and forecasting space weather conditions near Earth.

  15. Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models.

    Science.gov (United States)

    Allen, R J; Rieger, T R; Musante, C J

    2016-03-01

    Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models are rarely identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations.
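
    A minimal sketch of the generate-then-select idea (the parameter range, the toy observable, the target distribution, and the acceptance rule are all assumptions for illustration; the paper's actual algorithm may differ):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Step 1: generate virtual patients by sampling a plausible parameter
    # range and mapping it through a toy model to an observable.
    params = rng.uniform(0.1, 2.0, 20000)
    observable = 10.0 * params + rng.normal(0, 1, params.size)

    # Step 2: select (rather than weight) a virtual population whose
    # observable matches a hypothetical clinical distribution, N(12, 2).
    target = stats.norm(loc=12.0, scale=2.0)
    proposal = stats.gaussian_kde(observable)      # density of the ensemble
    ratio = target.pdf(observable) / proposal(observable)
    accept = rng.uniform(0, 1, observable.size) < ratio / ratio.max()
    vpop = observable[accept]
    print(len(vpop), vpop.mean(), vpop.std())      # should approach N(12, 2)
    ```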

  16. Multi-factor models and signal processing techniques application to quantitative finance

    CERN Document Server

    Darolles, Serges; Jay, Emmanuelle

    2013-01-01

    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices. This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere

  17. Development of quantitative atomic modeling for tungsten transport study Using LHD plasma with tungsten pellet injection

    International Nuclear Information System (INIS)

    Murakami, I.; Sakaue, H.A.; Suzuki, C.; Kato, D.; Goto, M.; Tamura, N.; Sudo, S.; Morita, S.

    2014-10-01

    Quantitative tungsten study with reliable atomic modeling is important for the successful achievement of ITER and fusion reactors. We have developed tungsten atomic modeling for understanding the tungsten behavior in fusion plasmas. The modeling is applied to the analysis of tungsten spectra observed from currentless plasmas of the Large Helical Device (LHD) with tungsten pellet injection. We found that extreme ultraviolet (EUV) lines of W24+ to W33+ ions are very sensitive to electron temperature (Te) and useful for examining the tungsten behavior in edge plasmas. Based on the first quantitative analysis of the measured spatial profile of the W44+ ion, the tungsten concentration is determined to be n(W44+)/ne = 1.4×10^-4 and the total radiation loss is estimated as ∼4 MW, roughly half of the total NBI power. (author)

  18. Facilitating arrhythmia simulation: the method of quantitative cellular automata modeling and parallel running

    Directory of Open Access Journals (Sweden)

    Mondry Adrian

    2004-08-01

    Abstract Background Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at organ level. Methods We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results We demonstrate that electrical activities at channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of some ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method to weave experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulation feasible. Arrhythmias, as a typical case, can be effectively simulated with the methods
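
    For readers unfamiliar with the approach, a minimal excitable-media cellular automaton (a Greenberg-Hastings-style toy, not the authors' whole-heart model) conveys the class of update rule such models extend:

    ```python
    import numpy as np

    # States: 0 = resting, 1 = excited, 2..R = refractory.
    R = 4                       # number of refractory states (illustrative)
    grid = np.zeros((100, 100), dtype=int)
    grid[50, 50] = 1            # a single ectopic excitation

    def step(g):
        # excited -> refractory -> ... -> resting
        new = np.where(g > 0, (g + 1) % (R + 1), 0)
        # a resting cell fires if any 4-neighbour is currently excited
        excited = (g == 1)
        nbr = (np.roll(excited, 1, 0) | np.roll(excited, -1, 0) |
               np.roll(excited, 1, 1) | np.roll(excited, -1, 1))
        new[(g == 0) & nbr] = 1
        return new

    for _ in range(30):
        grid = step(grid)       # an expanding ring of excitation
    print("excited cells after 30 steps:", int((grid == 1).sum()))
    ```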

  19. Argonne Bubble Experiment Thermal Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-12-03

    This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium. The experiment was repeated at several electron beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came the closest to reaching a steady-state condition. The aim of the study is to compare results of the calculation with experimental measurements to determine the validity of the CFD model.

  20. Downscaling SSPs in Bangladesh - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    Science.gov (United States)

    Allan, A.; Barbour, E.; Salehin, M.; Hutton, C.; Lázár, A. N.; Nicholls, R. J.; Rahman, M. M.

    2015-12-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A two-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to the incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations, and recommendations on SSP refinement at local levels.

  1. A quantitative dynamic systems model of health-related quality of life among older adults

    Science.gov (United States)

    Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela

    2015-01-01

    Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which generally suffers from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data from 194 older adults. This first validation tested the prediction that, given a particular starting point (the first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (the second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the validation analyses show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722

  2. Quantitative Reasoning Learning Progressions for Environmental Science: Developing a Framework

    Directory of Open Access Journals (Sweden)

    Robert L. Mayes

    2013-01-01

    Quantitative reasoning is a complex concept with many definitions and a diverse account in the literature. The purpose of this article is to establish a working definition of quantitative reasoning within the context of science, construct a quantitative reasoning framework, and summarize research on key components in that framework. Context underlies all quantitative reasoning; for this review, environmental science serves as the context. In the framework, we identify four components of quantitative reasoning: the quantification act, quantitative literacy, quantitative interpretation of a model, and quantitative modeling. Within each of these components, the framework provides elements that comprise the four components. The quantification act includes the elements of variable identification, communication, context, and variation. Quantitative literacy includes the elements of numeracy, measurement, proportional reasoning, and basic probability/statistics. Quantitative interpretation includes the elements of representations, science diagrams, statistics and probability, and logarithmic scales. Quantitative modeling includes the elements of logic, problem solving, modeling, and inference. A brief comparison of the quantitative reasoning framework with the AAC&U Quantitative Literacy VALUE rubric is presented, demonstrating a mapping of the components and illustrating differences in structure. The framework serves as a precursor for a quantitative reasoning learning progression which is currently under development.

  3. Novel mathematic models for quantitative transitivity of quality-markers in extraction process of the Buyanghuanwu decoction.

    Science.gov (United States)

    Zhang, Yu-Tian; Xiao, Mei-Feng; Deng, Kai-Wen; Yang, Yan-Tao; Zhou, Yi-Qun; Zhou, Jin; He, Fu-Yuan; Liu, Wen-Long

    2018-06-01

    In the effort to research and formulate efficient extraction systems for Chinese herbal medicine, quality management remains a great challenge; to address it, the transitivity of quality markers (Q-markers) in quantitative analysis of TCM was recently proposed by Prof. Liu. In order to improve the quality of extraction from raw medicinal materials for clinical preparations, a series of integrated mathematical models for the transitivity of Q-markers in quantitative analysis of TCM was established. Buyanghuanwu decoction (BYHWD) is a common TCM prescription used to prevent and treat ischemic heart and brain diseases. In this paper, we selected BYHWD as the experimental extraction subject to study the quantitative transitivity of TCM. Based on Fick's law and the Noyes-Whitney equation, novel kinetic models were established for the extraction of active components. Kinetic equations were fitted for the extraction models, and the inherent parameters of the raw material pieces and the Q-marker quantitative transfer coefficients were then calculated; these served as indexes to evaluate the transitivity of Q-markers in quantitative analysis of the extraction process of BYHWD. HPLC was applied to screen and analyze the potential Q-markers in the extraction process. Kinetic parameters were fitted and calculated with the Statistical Program for Social Sciences 20.0 software. The transfer efficiency was described and evaluated along the potential Q-markers' transfer trajectory via the transitivity availability AUC, the extraction ratio P, and the decomposition ratio D, respectively, and Q-markers were identified by AUC, P, and D. Astragaloside IV, laetrile, paeoniflorin, and ferulic acid were studied as potential Q-markers from BYHWD. The relevant technological parameters were presented by the mathematical models, which could adequately illustrate the inherent properties of raw materials
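
    For a well-stirred extraction, a Noyes-Whitney-type dissolution law reduces to a first-order approach to a saturation concentration, which can be fitted directly. A sketch with hypothetical time-course data (the function name, units, and values are illustrative, not the paper's):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # dC/dt = k * (C_inf - C)  =>  C(t) = C_inf * (1 - exp(-k*t));
    # the fitted k and C_inf are the kind of inherent kinetic parameters
    # from which transfer coefficients could then be derived.
    def extraction(t, c_inf, k):
        return c_inf * (1.0 - np.exp(-k * t))

    t = np.array([5, 10, 20, 30, 45, 60, 90], float)     # min (hypothetical)
    c = np.array([0.8, 1.4, 2.1, 2.5, 2.8, 2.9, 3.0])    # mg/mL (hypothetical)

    (c_inf, k), _ = curve_fit(extraction, t, c, p0=[3.0, 0.05])
    print(f"C_inf = {c_inf:.2f} mg/mL, k = {k:.3f} 1/min")
    ```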

  4. Quantitative ptychographic reconstruction by applying a probe constraint

    Science.gov (United States)

    Reinhardt, J.; Schroer, C. G.

    2018-04-01

    The coherent scanning technique X-ray ptychography has become a routine tool for high-resolution imaging and nanoanalysis in various fields of research such as chemistry, biology or materials science. Often the ptychographic reconstruction results are analysed in order to yield absolute quantitative values for the object transmission and the illuminating probe function. In this work, we address a common ambiguity encountered in scaling the object transmission and probe intensity via the application of an additional constraint to the reconstruction algorithm. A ptychographic measurement of a model sample containing nanoparticles is used as a test data set against which to benchmark the reconstruction results depending on the type of constraint used. Achieving quantitatively absolute values for the reconstructed object transmission is essential for advanced investigation of samples that change over time, e.g., during in-situ experiments, or in general when different data sets are compared.

  5. Quantitative Decision Support Requires Quantitative User Guidance

    Science.gov (United States)

    Smith, L. A.

    2009-12-01

    Is it conceivable that models run on 2007 computer hardware could provide robust and credible probabilistic information for decision support and user guidance at the ZIP code level for sub-daily meteorological events in 2060? In 2090? Retrospectively, how informative would output from today’s models have proven in 2003? Or in the 1930s? Consultancies in the United Kingdom, including the Met Office, are offering services to “future-proof” their customers from climate change. How is a US or European based user or policy maker to determine the extent to which exciting new Bayesian methods are relevant here? Or when a commercial supplier is vastly overselling the insights of today’s climate science? How are policy makers and academic economists to make the closely related decisions facing them? How can we communicate deep uncertainty in the future at small length-scales without undermining the firm foundation established by climate science regarding global trends? Three distinct aspects of the communication of the uses of climate model output targeting users and policy makers, as well as other specialist adaptation scientists, are discussed. First, a brief scientific evaluation of the length and time scales at which climate model output is likely to become uninformative is provided, including a note on the applicability of the latest Bayesian methodology to current state-of-the-art general circulation model output. Second, a critical evaluation of the language often employed in communication of climate model output, a language which accurately states that models are “better”, have “improved” and now “include” and “simulate” relevant meteorological processes, without clearly identifying where the current information is thought to be uninformative or misleading, both for the current climate and as a function of the state of each climate simulation. And thirdly, a general approach for evaluating the relevance of quantitative climate model output

  6. Quantitative model of super-Arrhenian behavior in glass forming materials

    Science.gov (United States)

    Caruthers, J. M.; Medvedev, G. A.

    2018-05-01

    The key feature of glass forming liquids is the super-Arrhenian temperature dependence of the mobility, where the mobility can increase by ten orders of magnitude or more as the temperature is decreased if crystallization does not intervene. A fundamental description of the super-Arrhenian behavior has been developed; specifically, the logarithm of the relaxation time is a linear function of B/Ūx, where Ūx is the independently determined excess molar internal energy and B is a material constant. This one-parameter mobility model quantitatively describes data for 21 glass forming materials, which are all the materials for which there are sufficient experimental data for analysis. The effect of pressure on the mobility is also described using the same Ūx(T, p) function, determined from the difference between the liquid and crystalline internal energies. It is also shown that B is well correlated with the heat of fusion. The prediction of the B/Ūx model is compared to the Adam and Gibbs 1/(T·S̄x) model, where the B/Ūx model is significantly better at unifying the full complement of mobility data. The implications of the B/Ūx model for the development of a fundamental description of glass are discussed.
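
    The quoted linear relation is straightforward to fit once the excess internal energy is known. A sketch with invented numbers (the Ūx and log10(tau) values are placeholders, not data from the paper):

    ```python
    import numpy as np

    # The abstract's one-parameter form: log10(tau) = intercept + B / U_x,
    # with U_x the excess molar internal energy. A straight line in
    # log10(tau) vs 1/U_x recovers B as the slope.
    u_x    = np.array([2.0, 1.5, 1.0, 0.7, 0.5])       # kJ/mol, falls on cooling
    logtau = np.array([-6.0, -4.2, -0.5, 4.6, 11.2])   # log10(tau / s)

    slope, intercept = np.polyfit(1.0 / u_x, logtau, 1)
    print(f"B ~ {slope:.1f} kJ/mol, log10(tau_ref) ~ {intercept:.1f}")
    ```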

  7. Calibrating a numerical model's morphology using high-resolution spatial and temporal datasets from multithread channel flume experiments.

    Science.gov (United States)

    Javernick, L.; Bertoldi, W.; Redolfi, M.

    2017-12-01

    Accessing or acquiring high-quality, low-cost topographic data has never been easier, owing to recent developments in the photogrammetric technique of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, with the ability to capture millimetre resolution and accuracy, or large-scale areas with the help of unmanned platforms. Such datasets in combination with numerical modelling have opened up new opportunities to study the physical and ecological relationships of river environments. While a numerical model's overall predictive accuracy is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate, assess the sensitivity of, and validate the models' performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and have improved hydraulic model calibration. With new capabilities to capture high-resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided river flume experiments and captured time-lapse observations of sediment transport and repeat SfM elevation surveys to provide unprecedented spatial and temporal datasets. Through newly created metrics that quantified observed and modelled activation, deactivation, and bank erosion rates, the numerical model Delft3d was calibrated. These temporally rich data, combining high-resolution time series with long-term coverage, significantly improved the calibration routines and refined the calibration parameterization. Model results show that there is a trade-off between achieving quantitative statistical and qualitative morphological representations. Specifically, statistical

  8. Spatiotemporal microbiota dynamics from quantitative in vitro and in silico models of the gut

    Science.gov (United States)

    Hwa, Terence

    The human gut harbors a dynamic microbial community whose composition bears great importance for the health of the host. Here, we investigate how colonic physiology impacts bacterial growth behaviors, which ultimately dictate the gut microbiota composition. Combining measurements of bacterial growth physiology with analysis of published data on human physiology into a quantitative modeling framework, we show how hydrodynamic forces in the colon, in concert with other physiological factors, determine the abundances of the major bacterial phyla in the gut. Our model quantitatively explains the observed variation of microbiota composition among healthy adults, and predicts colonic water absorption (manifested as stool consistency) and nutrient intake to be two key factors determining this composition. The model further reveals that both factors, which have been identified in recent correlative studies, exert their effects through the same mechanism: changes in colonic pH that differentially affect the growth of different bacteria. Our findings show that a predictive and mechanistic understanding of microbial ecology in the human gut is possible, and offer the hope for the rational design of intervention strategies to actively control the microbiota. This work is supported by the Bill and Melinda Gates Foundation.

  9. A quantitative model of the cardiac ventricular cell incorporating the transverse-axial tubular system

    Czech Academy of Sciences Publication Activity Database

    Pásek, Michal; Christé, G.; Šimurda, J.

    2003-01-01

    Vol. 22, No. 3 (2003), pp. 355-368 ISSN 0231-5882 R&D Projects: GA ČR GP204/02/D129 Institutional research plan: CEZ:AV0Z2076919 Keywords: cardiac cell * tubular system * quantitative modelling Subject RIV: BO - Biophysics Impact factor: 0.794, year: 2003

  10. Quantitative Determination of Aluminum in Deodorant Brands: A Guided Inquiry Learning Experience in Quantitative Analysis Laboratory

    Science.gov (United States)

    Sedwick, Victoria; Leal, Anne; Turner, Dea; Kanu, A. Bakarr

    2018-01-01

    The monitoring of metals in commercial products is essential for protecting public health against the hazards of metal toxicity. This article presents a guided inquiry (GI) experimental lab approach in a quantitative analysis lab class that enabled students to determine the levels of aluminum in deodorant brands. The utility of a GI experimental…

  11. Interrater reliability of quantitative ultrasound using force feedback among examiners with varied levels of experience

    Directory of Open Access Journals (Sweden)

    Michael O. Harris-Love

    2016-06-01

    Background. Quantitative ultrasound measures are influenced by multiple external factors, including examiner scanning force. Force feedback may foster the acquisition of reliable morphometry measures under a variety of scanning conditions. The purpose of this study was to determine the reliability of force-feedback image acquisition and morphometry over a range of examiner-generated forces using a muscle tissue-mimicking ultrasound phantom. Methods. Sixty material thickness measures were acquired from a muscle tissue-mimicking phantom using B-mode ultrasound scanning by six examiners with varied experience levels (i.e., experienced, intermediate, and novice). Estimates of interrater reliability and measurement error with force-feedback scanning were determined for the examiners. In addition, criterion-based reliability was determined using material deformation values across a range of examiner scanning forces (1-10 Newtons) via automated and manually acquired image capture methods using force feedback. Results. All examiners demonstrated acceptable interrater reliability (intraclass correlation coefficient, ICC = .98, p < .001) and criterion-based reliability (ICC > .90, p < .001), independent of their level of experience. The measurement error among all examiners was 1.5%-2.9% across all applied stress conditions. Conclusion. Manual image capture with force feedback may aid the reliability of morphometry measures across a range of examiner scanning forces, and allow for consistent performance among examiners with differing levels of experience.

  12. Modelling and Quantitative Analysis of LTRACK–A Novel Mobility Management Algorithm

    Directory of Open Access Journals (Sweden)

    Benedek Kovács

    2006-01-01

    This paper discusses the improvements and parameter optimization issues of LTRACK, a recently proposed mobility management algorithm. Mathematical modelling of the algorithm and the behavior of the Mobile Node (MN) are used to optimize the parameters of LTRACK. A numerical method is given to determine the optimal values of the parameters. Markov chains are used to model both the base algorithm and the so-called loop removal effect. An extended qualitative and quantitative analysis is carried out to compare LTRACK to existing handover mechanisms such as MIP, Hierarchical Mobile IP (HMIP), Dynamic Hierarchical Mobility Management Strategy (DHMIP), Telecommunication Enhanced Mobile IP (TeleMIP), Cellular IP (CIP) and HAWAII. LTRACK is sensitive to network topology and MN behavior, so MN movement modelling is also introduced and discussed with different topologies. The techniques presented here can be used to model not only the LTRACK algorithm but other algorithms as well. Extensive discussion and calculations support the mathematical model and show that it is adequate in many cases. The model is valid on various network levels, scales vertically across the ISO-OSI layers, and also scales well with the number of network elements.

  13. Quantitative evaluation methods of skin condition based on texture feature parameters

    Directory of Open Access Journals (Sweden)

    Hui Pang

    2017-03-01

    In order to quantitatively evaluate the improvement of the skin condition after the use of skin care products and beauty treatments, a quantitative evaluation method for skin surface state and texture is presented, which is convenient, fast and non-destructive. Human skin images were collected by image sensors. Firstly, a median filter with a 3 × 3 window is applied, and the locations of hairy pixels on the skin are accurately detected according to the gray mean value and color information. Bilinear interpolation is used to modify the gray value of the hairy pixels in order to eliminate the negative effect of noise and tiny hairs on the texture. After this pretreatment, the gray level co-occurrence matrix (GLCM) is calculated. On this basis, four characteristic parameters, including the second moment, contrast, entropy and correlation, and their mean values are calculated at 45° intervals. A quantitative evaluation model of skin texture based on the GLCM is established, which can calculate comprehensive parameters of the skin condition. Experiments show that evaluating the skin condition with this method is both in line with evaluation methods based on biochemical indicators and fully consistent with human visual experience. The method overcomes the shortcomings of the biochemical evaluation method (skin damage and long waiting times) as well as the subjectivity and fuzziness of visual evaluation, achieving non-destructive, rapid and quantitative evaluation of the skin condition. It can be used for health assessment or classification of the skin condition, and can also quantitatively evaluate subtle improvements in skin condition after the use of skin care products or beauty treatments.

  14. Quantitative evaluation methods of skin condition based on texture feature parameters.

    Science.gov (United States)

    Pang, Hui; Chen, Tianhua; Wang, Xiaoyi; Chang, Zhineng; Shao, Siqi; Zhao, Jing

    2017-03-01

    In order to quantitatively evaluate the improvement of the skin condition after the use of skin care products and beauty treatments, a quantitative evaluation method for skin surface state and texture is presented, which is convenient, fast and non-destructive. Human skin images were collected by image sensors. Firstly, a median filter with a 3 × 3 window is applied, and the locations of hairy pixels on the skin are accurately detected according to the gray mean value and color information. Bilinear interpolation is used to modify the gray value of the hairy pixels in order to eliminate the negative effect of noise and tiny hairs on the texture. After this pretreatment, the gray level co-occurrence matrix (GLCM) is calculated. On this basis, four characteristic parameters, including the second moment, contrast, entropy and correlation, and their mean values are calculated at 45° intervals. A quantitative evaluation model of skin texture based on the GLCM is established, which can calculate comprehensive parameters of the skin condition. Experiments show that evaluating the skin condition with this method is both in line with evaluation methods based on biochemical indicators and fully consistent with human visual experience. The method overcomes the shortcomings of the biochemical evaluation method (skin damage and long waiting times) as well as the subjectivity and fuzziness of visual evaluation, achieving non-destructive, rapid and quantitative evaluation of the skin condition. It can be used for health assessment or classification of the skin condition, and can also quantitatively evaluate subtle improvements in skin condition after the use of skin care products or beauty treatments.
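
    The GLCM step described in both records above can be sketched with scikit-image (assuming skimage >= 0.19 for the graycomatrix/graycoprops names); entropy is not a built-in property, so it is computed by hand:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    # GLCM texture parameters at 45-degree angular intervals on a random
    # stand-in image (a real pipeline would use the preprocessed skin image).
    img = np.random.default_rng(2).integers(0, 64, (128, 128)).astype(np.uint8)

    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(img, distances=[1], angles=angles, levels=64,
                        symmetric=True, normed=True)

    asm      = graycoprops(glcm, "ASM").mean()        # (angular) second moment
    contrast = graycoprops(glcm, "contrast").mean()
    corr     = graycoprops(glcm, "correlation").mean()
    p = glcm + 1e-12                                  # avoid log(0)
    entropy = -(p * np.log2(p)).sum(axis=(0, 1)).mean()

    print(f"ASM={asm:.4f} contrast={contrast:.1f} corr={corr:.3f} entropy={entropy:.2f}")
    ```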

  15. Quantitative Adverse Outcome Pathways and Their ...

    Science.gov (United States)

    A quantitative adverse outcome pathway (qAOP) consists of one or more biologically based, computational models describing key event relationships linking a molecular initiating event (MIE) to an adverse outcome. A qAOP provides quantitative, dose–response, and time-course predictions that can support regulatory decision-making. Herein we describe several facets of qAOPs, including (a) motivation for development, (b) technical considerations, (c) evaluation of confidence, and (d) potential applications. The qAOP used as an illustrative example for these points describes the linkage between inhibition of cytochrome P450 19A aromatase (the MIE) and population-level decreases in the fathead minnow (FHM; Pimephales promelas). The qAOP consists of three linked computational models for the following: (a) the hypothalamic-pituitary-gonadal axis in female FHMs, where aromatase inhibition decreases the conversion of testosterone to 17β-estradiol (E2), thereby reducing E2-dependent vitellogenin (VTG; egg yolk protein precursor) synthesis, (b) VTG-dependent egg development and spawning (fecundity), and (c) fecundity-dependent population trajectory. While development of the example qAOP was based on experiments with FHMs exposed to the aromatase inhibitor fadrozole, we also show how a toxic equivalence (TEQ) calculation allows use of the qAOP to predict effects of another, untested aromatase inhibitor, iprodione. While qAOP development can be resource-intensive, the quan

  16. Physics of Hard Spheres Experiment: Significant and Quantitative Findings Made

    Science.gov (United States)

    Doherty, Michael P.

    2000-01-01

    Direct examination of atomic interactions is difficult. One powerful approach to visualizing atomic interactions is to study near-index-matched colloidal dispersions of microscopic plastic spheres, which can be probed by visible light. Such spheres interact through hydrodynamic and Brownian forces, but they feel no direct force before an infinite repulsion at contact. Through the microgravity flight of the Physics of Hard Spheres Experiment (PHaSE), researchers have sought a more complete understanding of the entropically driven disorder-order transition in hard-sphere colloidal dispersions. The experiment was conceived by Professors Paul M. Chaikin and William B. Russel of Princeton University. Microgravity was required because, on Earth, index-matched colloidal dispersions often cannot be density matched, resulting in significant settling over the crystallization period. This settling makes them a poor model of the equilibrium atomic system, where the effect of gravity is truly negligible. For this purpose, a customized light-scattering instrument was designed, built, and flown by the NASA Glenn Research Center at Lewis Field on the space shuttle (shuttle missions STS-83 and STS-94). This instrument performed both static and dynamic light scattering, with sample oscillation for determining rheological properties. Scattered light from a 532-nm laser was recorded either by a 10-bit charge-coupled device (CCD) camera from a concentric screen covering angles of 0° to 60° or by sensitive avalanche photodiode detectors, which convert the photons into binary data from which two correlators compute autocorrelation functions. The sample cell was driven by a direct-current servomotor to allow sinusoidal oscillation for the measurement of rheological properties. Significant microgravity research findings include the observation of beautiful dendritic crystals and the crystallization of a "glassy phase" sample in microgravity that did not crystallize for over 1 year in 1g.

  17. Quantitative shear wave ultrasound elastography: initial experience in solid breast masses.

    Science.gov (United States)

    Evans, Andrew; Whelehan, Patsy; Thomson, Kim; McLean, Denis; Brauer, Katrin; Purdie, Colin; Jordan, Lee; Baker, Lee; Thompson, Alastair

    2010-01-01

    Shear wave elastography is a new method of obtaining quantitative tissue elasticity data during breast ultrasound examinations. The aims of this study were (1) to determine the reproducibility of shear wave elastography, (2) to correlate the elasticity values of a series of solid breast masses with histological findings, and (3) to compare shear wave elastography with greyscale ultrasound for benign/malignant classification. Using the Aixplorer® ultrasound system (SuperSonic Imagine, Aix en Provence, France), 53 solid breast lesions were identified in 52 consecutive patients. Two orthogonal elastography images were obtained of each lesion. Observers noted the mean elasticity values in regions of interest (ROI) placed over the stiffest areas on the two elastography images, and a mean value was calculated for each lesion. A subset of 15 patients had two elastography images obtained by an additional operator. Reproducibility of observations was assessed between (1) two observers analysing the same pair of images and (2) findings from two pairs of images of the same lesion taken by two different operators. All lesions were subjected to percutaneous biopsy. Elastography measurements were correlated with histology results. After preliminary experience with 10 patients, a mean elasticity cut-off value of 50 kilopascals (kPa) was selected for benign/malignant differentiation. Greyscale images were classified according to the American College of Radiology (ACR) Breast Imaging Reporting and Data System (BI-RADS). BI-RADS categories 1-3 were taken as benign, while BI-RADS categories 4 and 5 were classified as malignant. Twenty-three benign lesions and 30 cancers were diagnosed on histology. Measurement of mean elasticity yielded an intraclass correlation coefficient of 0.99 for two observers assessing the same pairs of elastography images. Analysis of images taken by two independent operators gave an intraclass correlation coefficient of 0.80. Shear wave elastography versus

  18. Designing Experiments to Discriminate Families of Logic Models.

    Science.gov (United States)

    Videla, Santiago; Konokotina, Irina; Alexopoulos, Leonidas G; Saez-Rodriguez, Julio; Schaub, Torsten; Siegel, Anne; Guziolowski, Carito

    2015-01-01

    Logic models of signaling pathways are a promising way of building effective in silico functional models of a cell, in particular of its signaling pathways. The automated learning of Boolean logic models describing signaling pathways can be achieved by training to phosphoproteomics data, which is particularly useful if it is measured upon different combinations of perturbations in a high-throughput fashion. However, in practice, the number and type of allowed perturbations are not exhaustive. Moreover, experimental data are unavoidably subject to noise. As a result, the learning process yields a family of feasible logical networks rather than a single model. This family is composed of logic models implementing different internal wirings for the system, and therefore the predictions of experiments from this family may present a significant level of variability, and hence uncertainty. In this paper, we introduce a method based on Answer Set Programming to propose an optimal experimental design that aims to narrow down the variability (in terms of input-output behaviors) within families of logical models learned from experimental data. We study how the fitness with respect to the data can be improved after an optimal selection of signaling perturbations and how we learn optimal logic models with a minimal number of experiments. The methods are applied to signaling pathways in human liver cells and phosphoproteomics experimental data. Using 25% of the experiments, we obtained logical models with fitness scores (mean square error) within 15% of those obtained using all experiments, illustrating the impact that our approach can have on the design of experiments for efficient model calibration.

  19. Large scale FCI experiments in subassembly geometry. Test facility and model experiments

    International Nuclear Information System (INIS)

    Beutel, H.; Gast, K.

    A program is outlined for the study of fuel/coolant interaction under SNR conditions. The program consists of (a) underwater explosion experiments with full-size models of the SNR core, in which the fuel/coolant system is simulated by a pyrotechnic mixture, and (b) large-scale fuel/coolant interaction experiments with up to 5 kg of molten UO2 interacting with liquid sodium at 300°C to 600°C in a highly instrumented test facility simulating an SNR subassembly. The experimental results will be compared to theoretical models under development at Karlsruhe. Commencement of the experiments is expected for the beginning of 1975.

  20. Nonparametric modeling of longitudinal covariance structure in functional mapping of quantitative trait loci.

    Science.gov (United States)

    Yap, John Stephen; Fan, Jianqing; Wu, Rongling

    2009-12-01

    Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L2 penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
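
    A bare-bones version of the modified-Cholesky, sequence-of-regressions idea with an L2 (ridge) penalty, as a sketch rather than the authors' penalized-likelihood estimator:

    ```python
    import numpy as np

    # Regress y_t on y_1..y_{t-1}; the coefficients fill the unit
    # lower-triangular T and the residual variances fill D, with
    # Sigma^{-1} = T' D^{-1} T (the modified Cholesky decomposition).
    def chol_cov(Y, lam=0.1):
        n, T = Y.shape                   # n subjects, T time points
        Tmat = np.eye(T)
        d = np.empty(T)
        d[0] = Y[:, 0].var()
        for t in range(1, T):
            X, y = Y[:, :t], Y[:, t]
            beta = np.linalg.solve(X.T @ X + lam * np.eye(t), X.T @ y)
            Tmat[t, :t] = -beta          # autoregressive coefficients
            d[t] = np.mean((y - X @ beta) ** 2)
        prec = Tmat.T @ np.diag(1.0 / d) @ Tmat
        return np.linalg.inv(prec)

    Y = np.random.default_rng(3).normal(size=(200, 8))  # hypothetical trait series
    print(np.round(chol_cov(Y)[:3, :3], 2))
    ```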

  1. Quantitative structure-activity relationship modeling of the toxicity of organothiophosphate pesticides to Daphnia magna and Cyprinus carpio

    NARCIS (Netherlands)

    Zvinavashe, E.; Du, T.; Griff, T.; Berg, van den J.H.J.; Soffers, A.E.M.F.; Vervoort, J.J.M.; Murk, A.J.; Rietjens, I.

    2009-01-01

    Within the REACH regulatory framework in the EU, quantitative structure-activity relationship (QSAR) models are expected to help reduce the number of animals used for experimental testing. The objective of this study was to develop QSAR models to describe the acute toxicity of organothiophosphate

  2. Physiological role of Kv1.3 channel in T lymphocyte cell investigated quantitatively by kinetic modeling.

    Directory of Open Access Journals (Sweden)

    Panpan Hou

    The Kv1.3 channel is a delayed rectifier channel abundant in human T lymphocytes. Chronic inflammatory and autoimmune disorders lead to the over-expression of Kv1.3 in T cells. To quantitatively study the regulatory mechanism and physiological function of Kv1.3 in T cells, it is necessary to have a precise kinetic model of Kv1.3. In this study, we first established a kinetic model capable of precisely replicating all the kinetic features of Kv1.3 channels, and then constructed a T-cell model composed of ion channels, including the Ca2+-release activated calcium (CRAC) channel, the intermediate-conductance K+ (IK) channel, the TASK channel and the Kv1.3 channel, for quantitatively simulating the changes in membrane potential and local Ca2+ signaling messengers during activation of T cells. Based on the experimental data from current-clamp recordings, we successfully demonstrated that Kv1.3 dominates the membrane potential of T cells and thereby controls Ca2+ influx via the CRAC channel. Our results revealed that deficient expression of the Kv1.3 channel would weaken the Ca2+ signal, leading to less efficient secretion. This was the first successful attempt to simulate membrane potential in non-excitable cells, laying a solid basis for quantitatively studying the regulatory mechanism and physiological role of channels in non-excitable cells.
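
    As a toy illustration of how a Kv1.3-like conductance can set the membrane potential of a non-excitable cell, consider a two-variable current-balance model (all parameter values are invented and far simpler than the paper's kinetic scheme; the leak stands in for depolarizing CRAC influx):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Capacitance, K+ conductance/reversal, leak conductance/reversal (SI units).
    C, g_k, E_k, g_leak, E_leak = 10e-12, 2e-9, -80e-3, 0.1e-9, 0.0

    def n_inf(v):                        # steady-state Kv activation (Boltzmann)
        return 1.0 / (1.0 + np.exp(-(v + 30e-3) / 8e-3))

    def dv_dt(t, y):
        v, n = y
        i_k = g_k * n * (v - E_k)        # voltage-gated K+ current
        i_leak = g_leak * (v - E_leak)   # depolarizing leak (CRAC stand-in)
        return [-(i_k + i_leak) / C, (n_inf(v) - n) / 1e-3]

    sol = solve_ivp(dv_dt, [0, 0.5], [-50e-3, 0.0], max_step=1e-4)
    print(f"resting potential ~ {sol.y[0, -1] * 1e3:.1f} mV")
    ```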

  3. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    Science.gov (United States)

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.
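
    The LOD score itself is just a log10 likelihood ratio between a two-component normal mixture and a single normal. A sketch on simulated phenotypes (the fixed mixing proportion and the data are assumptions for illustration, not real interval mapping with marker genotypes):

    ```python
    import numpy as np
    from scipy import stats
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    y = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])
    w = 0.5                            # P(genotype QQ) at this locus, assumed known

    def neg_loglik_mix(theta):
        m1, m2, s = theta[0], theta[1], np.exp(theta[2])
        f = w * stats.norm.pdf(y, m1, s) + (1 - w) * stats.norm.pdf(y, m2, s)
        return -np.sum(np.log(f))

    ll_null = np.sum(stats.norm.logpdf(y, y.mean(), y.std()))   # single normal
    res = minimize(neg_loglik_mix, x0=[y.mean() - 0.5, y.mean() + 0.5, 0.0])
    lod = (-res.fun - ll_null) / np.log(10)
    print(f"LOD = {lod:.2f}")
    ```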

  4. Melanoma screening: Informing public health policy with quantitative modelling.

    Directory of Open Access Journals (Sweden)

    Stephen Gilmore

    Australia and New Zealand share the highest incidence rates of melanoma worldwide. Despite the substantial increase in public and physician awareness of melanoma in Australia over the last 30 years, following the introduction of publicly funded mass media campaigns in the early 1980s, mortality has steadily increased during this period. This increased mortality has led investigators to question the relative merits of primary versus secondary prevention; that is, sensible sun exposure practices versus early detection. Increased melanoma vigilance on the part of the public and among physicians has resulted in large increases in public health expenditure, primarily from screening costs and increased rates of office surgery. Has this attempt at secondary prevention been effective? Unfortunately, epidemiologic studies addressing the causal relationship between the level of secondary prevention and mortality are prohibitively difficult to implement; it is currently unknown whether increased melanoma surveillance reduces mortality, and if so, whether such an approach is cost-effective. Here I address the issue of secondary prevention of melanoma with respect to incidence and mortality (and cost per life saved) by developing a Markov model of melanoma epidemiology based on Australian incidence and mortality data. The advantages of developing a methodology that can determine constraint-based surveillance outcomes are twofold: first, it can address the issue of effectiveness; and second, it can quantify the trade-off between cost and utilisation of medical resources on one hand, and reduced morbidity and lives saved on the other. With respect to melanoma, implementing the model facilitates the quantitative determination of the relative effectiveness and trade-offs associated with different levels of secondary and tertiary prevention, both retrospectively and prospectively. For example, I show that the surveillance enhancement that began in

  5. An overview of quantitative approaches in Gestalt perception.

    Science.gov (United States)

    Jäkel, Frank; Singh, Manish; Wichmann, Felix A; Herzog, Michael H

    2016-09-01

    Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state-of-the-art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research and there is a clear trend to try and apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory.

  6. Quantitative Decision Making Model for Carbon Reduction in Road Construction Projects Using Green Technologies

    Directory of Open Access Journals (Sweden)

    Woosik Jang

    2015-08-01

    Numerous countries have established policies for reducing greenhouse gas emissions and have suggested goals pertaining to these reductions. To reach the target reduction amounts, studies on the reduction of carbon emissions have been conducted with regard to all stages and processes in construction projects. According to a study on carbon emissions, the carbon emissions generated during the construction stage of road projects account for approximately 76 to 86% of the total carbon emissions, far exceeding those of other stages, such as maintenance or demolition. Therefore, this study aims to develop a quantitative decision making model that supports the application of green technologies (GTs) to reduce carbon emissions during the construction stage of road construction projects. First, the authors selected environmental soundness, economic feasibility and constructability as the key assessment indices for evaluating 20 GTs. Second, fuzzy set/qualitative comparative analysis (FS/QCA) was used to establish an objective decision-making model for the assessment of both the quantitative and qualitative characteristics of the key indices. To support the developed model, an expert survey was performed to assess the applicability of each GT from a practical perspective, which was verified with a case study using two additional GTs. The proposed model is expected to support practitioners in the application of suitable GTs to road projects and to reduce carbon emissions, resulting in better decision making during road construction projects.

  7. Quantitative genetic models of sexual selection by male choice.

    Science.gov (United States)

    Nakahashi, Wataru

    2008-09-01

    There are many examples of male mate choice for female traits that tend to be associated with high fertility. I develop quantitative genetic models of a female trait and a male preference to show when such a male preference can evolve. I find that a disagreement between the fertility maximum and the viability maximum of the female trait is necessary for directional male preference (preference for extreme female trait values) to evolve. Moreover, when there is a shortage of available male partners or variance in male nongenetic quality, strong male preference can evolve. Furthermore, I also show that males evolve to exhibit a stronger preference for females that are more feminine (less resemblance to males) than the average female when there is a sexual dimorphism caused by fertility selection which acts only on females.

  8. Energy Education: The Quantitative Voice

    Science.gov (United States)

    Wolfson, Richard

    2010-02-01

    A serious study of energy use and its consequences has to be quantitative. It makes little sense to push your favorite renewable energy source if it can't provide enough energy to make a dent in humankind's prodigious energy consumption. Conversely, it makes no sense to dismiss alternatives---solar in particular---that supply Earth with energy at some 10,000 times our human energy consumption rate. But being quantitative---especially with nonscience students or the general public---is a delicate business. This talk draws on the speaker's experience presenting energy issues to diverse audiences through single lectures, entire courses, and a textbook. The emphasis is on developing a quick, ``back-of-the-envelope'' approach to quantitative understanding of energy issues.

  9. Web Applications Vulnerability Management using a Quantitative Stochastic Risk Modeling Method

    Directory of Open Access Journals (Sweden)

    Sergiu SECHEL

    2017-01-01

    The aim of this research is to propose a quantitative risk modeling method that reduces the guesswork and uncertainty in the vulnerability and risk assessment activities of web-based applications, while giving users the flexibility to assess risk according to their risk appetite and tolerance with a high degree of assurance. The research method is based on work done by the OWASP Foundation on this subject, but their risk rating methodology needed debugging and updates in key areas that are presented in this paper. The modified risk modeling method uses Monte Carlo simulations to model risk characteristics that cannot be determined without guesswork, and it was tested in vulnerability assessment activities on real production systems and in theory by assigning discrete uniform assumptions to all risk characteristics (risk attributes) and evaluating the results after 1.5 million rounds of Monte Carlo simulations.
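
    A sketch of the Monte Carlo step (the 0-9 factor scales follow the OWASP Risk Rating Methodology, but the discrete uniform distributions and the 8+8 factor split here are illustrative assumptions, not the paper's calibrated inputs):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N = 1_500_000                         # rounds, matching the abstract's count

    # Each likelihood factor (threat agent + vulnerability) and impact factor
    # (technical + business) gets a discrete uniform assumption on 0-9.
    likelihood_factors = rng.integers(0, 10, size=(N, 8), dtype=np.int8)
    impact_factors     = rng.integers(0, 10, size=(N, 8), dtype=np.int8)

    likelihood = likelihood_factors.mean(axis=1)
    impact     = impact_factors.mean(axis=1)
    risk = likelihood * impact            # overall risk = likelihood x impact

    for q in (50, 90, 99):
        print(f"P{q} risk score: {np.percentile(risk, q):.1f} / 81")
    ```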

  10. Quantitative model of the effects of contamination and space environment on in-flight aging of thermal coatings

    Science.gov (United States)

    Vanhove, Emilie; Roussel, Jean-François; Remaury, Stéphanie; Faye, Delphine; Guigue, Pascale

    2014-09-01

    The in-orbit aging of the thermo-optical properties of thermal coatings critically impacts both spacecraft thermal balance and heating power consumption. Nevertheless, in-flight thermal coating aging is generally larger than that measured on the ground, and current knowledge does not allow reliable predictions [1]. As a result, a large oversizing of thermal control systems is required. To address this issue, the Centre National d'Etudes Spatiales has developed a low-cost experiment, called THERME, which makes it possible to monitor the in-flight time-evolution of the solar absorptivity of a large variety of coatings, including commonly used coatings and new materials, by measuring their temperature. This experiment has been carried out on sun-synchronous spacecraft for more than 27 years, thus generating a very large set of telemetry measurements. The aim of this work was to develop a model able to semi-quantitatively reproduce these data with a restricted number of parameters. The underlying objectives were to better understand the contributions of the different phenomena involved and, later on, to predict thermal coating aging at end of life. The physical processes modeled include contamination deposition, UV aging of both the contamination layers and the intrinsic material, and atomic oxygen erosion. Efforts were particularly focused on the satellite's leading wall, as this face is exposed to the highest variations in environmental conditions during the solar cycle. The non-monotonous time-evolution of the solar absorptivity of thermal coatings is shown to be due to a succession of contamination and contaminant erosion by atomic oxygen, phased with the solar cycle.

  11. Efficacy of the Frame and Hu mathematical model for the quantitative analysis of agents influencing growth of chick embryo fibroblasts

    International Nuclear Information System (INIS)

    Korohoda, K.; Czyz, J.

    1994-01-01

    Experiments on the effects of various sera and of substratum surface area upon the growth of chick embryo fibroblast-like cells in secondary cultures are described and discussed on the basis of a mathematical model for the growth of anchorage-dependent cells proposed by Frame and Hu. The model and the presented results demonstrate the mutual independence of the effects of agents influencing the rate of cell proliferation (i.e., accelerating or retarding growth) and of agents that modify the limitation of cell proliferation (i.e., the maximum cell density at confluence). The model proposed by Frame and Hu, due to its relative simplicity, offers an easy mode of description and quantitative evaluation of experiments concerning cell growth regulation. It is shown that various sera added at constant concentration significantly modify the rate of cell proliferation with little effect upon the maximum attainable cell density. The cells grew much more slowly in the presence of calf serum than in the presence of chick serum, and the addition of iron and zinc complexes to calf serum significantly accelerated cell growth. An increase in the substratum surface area by the addition of glass wool to culture vessels significantly increased the cell density per constant volume of medium even when retardation of growth was observed. The results presented point to the need for direct cell counting when estimating cell growth curves and discussing the effects of agents influencing the parameters that characterize cell proliferation. (author). 34 refs, 5 figs, 2 tabs
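
    The claimed independence of rate and limitation is easy to visualize with a logistic-type growth law (illustrative parameters; a generic stand-in, not the Frame and Hu equations themselves):

    ```python
    import numpy as np

    # dX/dt = mu * X * (1 - X/X_max): sera shift the rate constant mu,
    # while added substratum area shifts the attainable density X_max.
    def logistic(t, x0, mu, x_max):
        return x_max / (1.0 + (x_max / x0 - 1.0) * np.exp(-mu * t))

    t = np.linspace(0, 120, 5)                  # hours (hypothetical)
    slow      = logistic(t, 1e4, 0.02, 5e5)     # "calf serum": smaller mu
    fast      = logistic(t, 1e4, 0.05, 5e5)     # "chick serum": larger mu, same X_max
    more_area = logistic(t, 1e4, 0.02, 1e6)     # "glass wool": same mu, larger X_max
    print(np.column_stack([t, slow, fast, more_area]).round(0))
    ```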

  12. Use of a plant level logic model for quantitative assessment of systems interactions

    International Nuclear Information System (INIS)

    Chu, B.B.; Rees, D.C.; Kripps, L.P.; Hunt, R.N.; Bradley, M.

    1985-01-01

    The Electric Power Research Institute (EPRI) has sponsored a research program to investigate methods for identifying systems interactions (SIs) and for the evaluation of their importance. Phase 1 of the EPRI research project focused on the evaluation of methods for identification of SIs. Major results of the Phase 1 activities are the documentation of four different methodologies for identification of potential SIs and development of guidelines for performing an effective plant walkdown in support of an SI analysis. Phase II of the project, currently being performed, is utilizing a plant level logic model of a pressurized water reactor (PWR) to determine the quantitative importance of identified SIs. In Phase II, previously reported events involving interactions between systems were screened and selected on the basis of their relevance to the Baltimore Gas and Electric (BGandE) Calvert Cliffs Nuclear Power Plant design and perceived potential safety significance. Selected events were then incorporated into the BGandE plant level GO logic model. The model is being exercised to calculate the relative importance of these events. Five previously identified event scenarios, extracted from licensee event reports (LERs) are being evaluated during the course of the study. A key feature of the approach being used in Phase II is the use of a logic model in a manner to effectively evaluate the impact of events on the system level and the plant level for the mitigation of transients. Preliminary study results indicate that the developed methodology can be a viable and effective means for determining the quantitative significance of SIs

  13. A quantitative quasispecies theory-based model of virus escape mutation under immune selection.

    Science.gov (United States)

    Woo, Hyung-June; Reifman, Jaques

    2012-08-07

    Viral infections involve a complex interplay of the immune response and escape mutation of the virus quasispecies inside a single host. Although fundamental aspects of such a balance of mutation and selection pressure were established by quasispecies theory decades ago, its implications have largely remained qualitative. Here, we present a quantitative approach to model virus evolution under the cytotoxic T-lymphocyte immune response. The virus quasispecies dynamics are explicitly represented by mutations in the combined sequence space of a set of epitopes within the viral genome. We stochastically simulated the growth of a viral population originating from a single wild-type founder virus and its recognition and clearance by the immune response, as well as the expansion of its genetic diversity. Applied to the immune escape of a simian immunodeficiency virus epitope, model predictions were quantitatively comparable to the experimental data. Within the model parameter space, we found two qualitatively different regimes of infectious disease pathogenesis, each representing alternative fates of the immune response: it can clear the infection in finite time or eventually be overwhelmed by viral growth and escape mutation. The latter regime exhibits the characteristic disease progression pattern of human immunodeficiency virus, while the former is bounded by maximum mutation rates that can be suppressed by the immune response. Our results demonstrate that, by explicitly representing epitope mutations and thus providing a genotype-phenotype map, the quasispecies theory can form the basis of a detailed sequence-specific model of real-world viral pathogens evolving under immune selection.
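
    A minimal sketch of this kind of stochastic simulation, reduced to one wild-type and one escape genotype with assumed per-generation rates; depending on the random seed it reproduces the two regimes noted above, clearance in finite time or escape-driven persistence:

```python
# Toy two-genotype escape simulation (all rates assumed, not from the paper):
# a wild-type population grows, occasionally produces an epitope-escape
# mutant, and after an immune delay the CTL response clears recognized virus.
import numpy as np

rng = np.random.default_rng(1)
r, mu, kill, delay = 2.0, 1e-3, 0.85, 8   # growth, escape mutation prob,
                                          # CTL kill prob, immune delay (gens)
wt, esc = 1, 0                            # single wild-type founder

for gen in range(60):
    births_wt, births_esc = rng.poisson(r * wt), rng.poisson(r * esc)
    mutants = rng.binomial(births_wt, mu)     # offspring escaping the epitope
    wt, esc = births_wt - mutants, births_esc + mutants
    if gen >= delay:                          # CTL clearance of recognized virus
        wt = rng.binomial(wt, 1.0 - kill)
    if wt + esc == 0:
        print(f"infection cleared at generation {gen}")
        break
else:
    print(f"escape variant persists with {esc} copies")
```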

  14. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which fully utilizes a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
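
    For concreteness, the traditional iterative-fitting baseline compared against here can be sketched as a one-tissue compartment model fitted by nonlinear least squares; the input function, noise level, and parameter values below are assumed:

```python
# One-tissue compartment IF sketch: C_T(t) = K1 * [Cp (*) exp(-k2*t)],
# fitted to a noisy time-activity curve by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 60, 121)            # min
dt = t[1] - t[0]
cp = 10.0 * t * np.exp(-t / 2.0)       # assumed plasma input function

def one_tissue(t, k1, k2):
    # discrete convolution of the input function with the tissue response
    return k1 * np.convolve(cp, np.exp(-k2 * t))[: t.size] * dt

truth = one_tissue(t, 0.1, 0.05)
tac = truth + np.random.default_rng(0).normal(0, 0.5, t.size)  # noisy TAC

(k1, k2), _ = curve_fit(one_tissue, t, tac, p0=[0.05, 0.01], bounds=(0, 5))
print(f"K1 = {k1:.3f} (assumed units), k2 = {k2:.3f} 1/min")
```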

  15. TF insert experiment log book. 2nd Experiment of CS model coil

    International Nuclear Information System (INIS)

    Sugimoto, Makoto; Isono, Takaaki; Matsui, Kunihiro

    2001-12-01

    The cool down of the CS model coil and TF insert started on August 20, 2001. It took almost one month, and coil charging began on September 17, 2001. The charge test of the TF insert and CS model coil was completed on October 19, 2001. In this campaign, 88 shots were taken in total, and the size of the data files in the DAS (Data Acquisition System) was about 4 GB. This report is a database consisting of the log list and the log sheets of every shot. It is the experiment logbook for the 2nd charge-test experiment of the CS model coil and TF insert. (author)

  16. Modelling of laboratory high-pressure infiltration experiments

    International Nuclear Information System (INIS)

    Smith, P.A.

    1992-02-01

    This report describes the modelling of break-through curves from a series of two-tracer dynamic infiltration experiments, which are intended to complement larger scale experiments at the Nagra Grimsel Test Site. The tracers are 82Br, which is expected to be non-sorbing, and 24Na, which is weakly sorbing. The 24Na concentration is well below the natural Na concentration in the infiltration fluid, so that sorption on the rock is governed by isotopic exchange, exhibiting a linear isotherm. The rock specimens are sub-samples (cores) of granodiorite from the Grimsel Test Site, each containing a distinct shear zone. Best fits to the break-through curves using single-porosity and dual-porosity transport models are compared and several physical parameters are extracted. It is shown that the dual-porosity model is required in order to reproduce the tailing part of the break-through curves for the non-sorbing tracer. The single-porosity model is sufficient to reproduce the break-through curves for the sorbing tracer within the estimated experimental errors. Extracted Kd values are shown to agree well with a field rock-water interaction experiment and in situ migration experiments. Static, laboratory batch-sorption experiments give a larger Kd, but this difference could be explained by the larger surface area available for sorption in the artificially crushed samples used in the laboratory and by a slightly different water chemistry. (author) 13 figs., tabs., 19 refs
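
    As context for the single-porosity/dual-porosity comparison, a commonly used approximate single-porosity breakthrough curve (1D advection-dispersion with linear sorption folded into a retardation factor) can be written down directly; all parameter values here are assumed:

```python
# Single-porosity reference breakthrough: continuous injection solution
# C/C0 = 0.5 * erfc((x - v*t/R) / (2*sqrt(D*t/R))), with R = 1 + rho_b*Kd/theta.
import numpy as np
from scipy.special import erfc

x, v, D = 0.1, 1e-5, 1e-8               # m, m/s, m^2/s (assumed)
rho_b, theta, kd = 2650.0, 0.1, 1e-4    # kg/m^3, porosity, m^3/kg (assumed)
R = 1.0 + rho_b * kd / theta            # retardation factor

t = np.linspace(1.0, 5e4, 500)          # s
c_rel = 0.5 * erfc((x - v * t / R) / (2.0 * np.sqrt(D * t / R)))
print(f"R = {R:.1f}; C/C0 at t = {t[-1]:.0f} s: {c_rel[-1]:.2f}")
```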

  17. Correlations of 3T DCE-MRI Quantitative Parameters with Microvessel Density in a Human-Colorectal-Cancer Xenograft Mouse Model

    International Nuclear Information System (INIS)

    Ahn, Sung Jun; An, Chan Sik; Koom, Woong Sub; Song, Ho Taek; Suh, Jin Suck

    2011-01-01

    To investigate the correlation between quantitative dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) parameters and microvessel density (MVD) in a human-colon-cancer xenograft mouse model using 3 Tesla MRI. A human-colon-cancer xenograft model was produced by subcutaneously inoculating 1 × 10^6 DLD-1 human-colon-cancer cells into the right hind limbs of 10 mice. The tumors were allowed to grow for two weeks and then assessed using MRI. DCE-MRI was performed by tail vein injection of 0.3 mmol/kg of gadolinium. A region of interest (ROI) was drawn at the midpoints along the z-axes of the tumors, and a Tofts model analysis was performed. The quantitative parameters (Ktrans, Kep and Ve) from the whole transverse ROI and the hotspot ROI of the tumor were calculated. Immunohistochemical microvessel staining was performed and analyzed according to Weidner's criteria at the corresponding MRI sections. Additional Hematoxylin and Eosin staining was performed to evaluate tumor necrosis. The Mann-Whitney test and Spearman's rho correlation analysis were performed to test for a correlation between the quantitative parameters, necrosis, and MVD. The whole transverse ROI of the tumor showed no significant relationship between the MVD values and the quantitative DCE-MRI parameters. In the hotspot ROI, the difference in MVD between the low and high groups of Ktrans and Kep was of marginal statistical significance (p = 0.06 and 0.07, respectively). Also, Ktrans and Kep were found to have an inverse relationship with MVD (r = -0.61, p = 0.06 for Ktrans; r = -0.60, p = 0.07 for Kep). Quantitative analysis of T1-weighted DCE-MRI using a hotspot ROI may provide a better histologic match than a whole transverse section ROI. Within the hotspots, Ktrans and Kep tend to correlate inversely with MVD in this colon cancer mouse model.
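
    The reported inverse relationships correspond to a rank correlation test of the following form, with invented Ktrans/MVD pairs standing in for the ten animals:

```python
# Spearman rank correlation between hotspot Ktrans and microvessel density;
# the paired values are made up for illustration only.
import numpy as np
from scipy.stats import spearmanr

ktrans = np.array([0.12, 0.25, 0.18, 0.40, 0.31, 0.22, 0.15, 0.35, 0.28, 0.20])
mvd    = np.array([48,   30,   41,   22,   27,   35,   45,   24,   29,   38  ])

rho, p = spearmanr(ktrans, mvd)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```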

  18. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and we propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
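
    A toy version of the likelihood-scaling idea may help: the Gaussian log-likelihood of a functional residual is multiplied by n_eff/n instead of modeling the autocorrelation, and a random-walk Metropolis step samples the calibrated input. The forward model, data, and n_eff below are stand-ins, not the hydrocode setup of the paper:

```python
# Effective-sample-size likelihood scaling with random-walk Metropolis.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
data = 2.0 * t + rng.normal(0, 0.05, t.size)   # stand-in "velocimetry" trace

def model(theta):                # toy forward model in place of a hydrocode
    return theta * t

def scaled_loglik(theta, sigma=0.05, n_eff=20):
    resid = data - model(theta)
    full = -0.5 * np.sum((resid / sigma) ** 2)
    return (n_eff / t.size) * full             # down-weight correlated points

theta = 1.0
for _ in range(5000):                          # random-walk Metropolis
    prop = theta + rng.normal(0, 0.05)
    if np.log(rng.random()) < scaled_loglik(prop) - scaled_loglik(theta):
        theta = prop
print(f"posterior draw for theta: {theta:.3f}")
```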

  1. Quantitative Comparison of Effects of Dofetilide, Sotalol, Quinidine, and Verapamil between Human Ex vivo Trabeculae and In silico Ventricular Models Incorporating Inter-Individual Action Potential Variability

    Directory of Open Access Journals (Sweden)

    Oliver J. Britton

    2017-08-01

    Full Text Available. Background: In silico modeling could soon become a mainstream method of pro-arrhythmic risk assessment in drug development. However, a lack of human-specific data and appropriate modeling techniques has previously prevented quantitative comparison of drug effects between in silico models and recordings from human cardiac preparations. Here, we directly compare changes in repolarization biomarkers caused by dofetilide, dl-sotalol, quinidine, and verapamil, between in silico populations of human ventricular cell models and ex vivo human ventricular trabeculae. Methods and Results: Ex vivo recordings from human ventricular trabeculae in control conditions were used to develop populations of in silico human ventricular cell models that integrated intra- and inter-individual variability in action potential (AP) biomarker values. Models were based on the O'Hara-Rudy ventricular cardiomyocyte model, but integrated experimental AP variability through variation in underlying ionic conductances. Changes to AP duration, triangulation and early after-depolarization occurrence from application of the four drugs at multiple concentrations and pacing frequencies were compared between simulations and experiments. To assess the impact of variability in IC50 measurements, and the effects of including state-dependent drug binding dynamics, each drug simulation was repeated with two different IC50 datasets, and with both the original O'Hara-Rudy hERG model and a recently published state-dependent model of hERG and hERG block. For the selective hERG blockers dofetilide and sotalol, simulation predictions of AP prolongation and repolarization abnormality occurrence showed overall good agreement with experiments. However, for the multichannel blockers quinidine and verapamil, simulations were not in agreement with experiments across all IC50 datasets and IKr block models tested. Quinidine simulations resulted in overprolonged APs and high incidence of repolarization

  2. Quantitative Reasoning in Environmental Science: A Learning Progression

    Science.gov (United States)

    Mayes, Robert Lee; Forrester, Jennifer Harris; Christus, Jennifer Schuttlefield; Peterson, Franziska Isabel; Bonilla, Rachel; Yestness, Nissa

    2014-01-01

    The ability of middle and high school students to reason quantitatively within the context of environmental science was investigated. A quantitative reasoning (QR) learning progression was created with three progress variables: quantification act, quantitative interpretation, and quantitative modeling. An iterative research design was used as it…

  3. Quantitative modeling of the reaction/diffusion kinetics of two-chemistry photopolymers

    Science.gov (United States)

    Kowalski, Benjamin Andrew

    Optically driven diffusion in photopolymers is an appealing material platform for a broad range of applications, in which the recorded refractive index patterns serve either as images (e.g. data storage, display holography) or as optical elements (e.g. custom GRIN components, integrated optical devices). A quantitative understanding of the reaction/diffusion kinetics is difficult to obtain directly, but is nevertheless necessary in order to fully exploit the wide array of design freedoms in these materials. A general strategy for characterizing these kinetics is proposed, in which key processes are decoupled and independently measured. This strategy enables prediction of a material's potential refractive index change, solely on the basis of its chemical components. The degree to which a material does not reach this potential reveals the fraction of monomer that has participated in unwanted reactions, reducing spatial resolution and dynamic range. This approach is demonstrated for a model material similar to commercial media, achieving quantitative predictions of index response over three orders of exposure dose (~1 to ~10^3 mJ cm^-2) and three orders of feature size (0.35 to 500 microns). The resulting insights enable guided, rational design of new material formulations with demonstrated performance improvement.
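
    The reaction/diffusion coupling at the heart of such a characterization can be illustrated with a minimal 1D finite-difference sketch: monomer diffuses while being consumed at a rate proportional to a sinusoidal exposure pattern. All constants are assumed:

```python
# 1D reaction/diffusion toy model of holographic recording: monomer m
# diffuses (periodic boundaries over one grating period) and is converted
# to immobile polymer p at a rate k * I(x) * m.
import numpy as np

nx, L = 200, 1.0
x = np.linspace(0, L, nx, endpoint=False)
dx = L / nx
D, k = 1e-3, 5.0                        # diffusivity, polymerization rate
I = 0.5 * (1 + np.cos(2 * np.pi * x))   # sinusoidal interference pattern

m = np.ones(nx)                         # monomer
p = np.zeros(nx)                        # polymer
dt = 0.2 * dx**2 / D                    # within explicit stability limit
for _ in range(2000):
    lap = (np.roll(m, 1) - 2 * m + np.roll(m, -1)) / dx**2
    rate = k * I * m
    m += dt * (D * lap - rate)          # diffusion minus consumption
    p += dt * rate                      # converted (immobile) polymer

print(f"index-modulation proxy (polymer contrast): {p.max() - p.min():.3f}")
```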

  4. The Comparison of Distributed P2P Trust Models Based on Quantitative Parameters in the File Downloading Scenarios

    Directory of Open Access Journals (Sweden)

    Jingpei Wang

    2016-01-01

    Full Text Available. Varied P2P trust models have been proposed recently; it is necessary to develop an effective method to evaluate these trust models to resolve the commonality issue (guiding newly generated trust models in theory) and the individuality issue (assisting a decision maker in choosing an optimal trust model to implement in a specific context). A new method for analyzing and comparing P2P trust models based on hierarchical parameter quantization in file downloading scenarios is proposed in this paper. Several parameters are extracted from the functional attributes and quality features of the trust relationship, as well as from the requirements of the specific network context and the evaluators. Several distributed P2P trust models are analyzed quantitatively, with the extracted parameters modeled into a hierarchical model. The fuzzy inference method is applied to the hierarchical modeling of parameters to fuse the evaluated values of the candidate trust models, and the relative optimum is then selected based on the sorted overall quantitative values. Finally, analyses and simulation are performed. The results show that the proposed method is reasonable and effective compared with the previous algorithms.
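
    The hierarchical weighting step in such comparisons is commonly realized with classical AHP, where weights are the normalized principal eigenvector of a pairwise comparison matrix; the judgments below are invented for illustration:

```python
# AHP priority weights from a reciprocal pairwise comparison matrix.
import numpy as np

A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])       # reciprocal pairwise judgments

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                            # normalized priority weights
ci = (vals.real[k] - 3) / (3 - 1)       # consistency index for a 3x3 matrix
print(f"weights = {np.round(w, 3)}, CI = {ci:.3f}")
```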

  5. Firn Model Intercomparison Experiment (FirnMICE)

    DEFF Research Database (Denmark)

    Lundin, Jessica M.D.; Stevens, C. Max; Arthern, Robert

    2017-01-01

    Evolution of cold dry snow and firn plays important roles in glaciology; however, the physical formulation of a densification law is still an active research topic. We forced eight firn-densification models and one seasonal-snow model in six different experiments by imposing step changes in temperature...

  6. A quantitative microbial risk assessment model for Listeria monocytogenes in RTE sandwiches

    DEFF Research Database (Denmark)

    Tirloni, E.; Stella, S.; de Knegt, Leonardo

    2018-01-01

    A Quantitative Microbial Risk Assessment (QMRA) was performed to estimate the expected number of listeriosis cases due to the consumption, on the last day of shelf life, of 20 000 servings of multi-ingredient sandwiches produced by a medium-scale food producer in Italy, by different population ... within each serving. Then, two dose-response models were alternatively applied: the first used a fixed r value for each of the three population groups, while the second considered a variable r value (lognormal distribution), taking into account the variability in strain virulence and different host ... subpopulations' susceptibility. The stochastic model predicted zero cases for the total population for both substrates when using the fixed r approach, while 3 cases were expected when a higher variability (in virulence and susceptibility) was considered in the model; the number of cases increased to 45 ...
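
    A compact sketch of the fixed-r versus variable-r dose-response contrast, using the standard exponential model P(ill) = 1 - exp(-r * dose) with assumed dose and virulence distributions:

```python
# Monte Carlo dose-response comparison: fixed r versus lognormally
# distributed r. Doses and parameters are assumed placeholders.
import numpy as np

rng = np.random.default_rng(7)
servings = 20_000
dose = rng.lognormal(mean=6.0, sigma=2.0, size=servings)  # CFU/serving (assumed)

r_fixed = 1e-12                                           # fixed-r approach
cases_fixed = np.sum(1 - np.exp(-r_fixed * dose) > rng.random(servings))

r_var = rng.lognormal(np.log(1e-12), 3.0, servings)       # variable virulence
cases_var = np.sum(1 - np.exp(-r_var * dose) > rng.random(servings))

print(f"cases: fixed r -> {cases_fixed}, variable r -> {cases_var}")
```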

  7. Numerical simulations of counter-current two-phase flow experiments in a PWR hot leg model using an interfacial area density model

    Energy Technology Data Exchange (ETDEWEB)

    Hoehne, Thomas, E-mail: t.hoehne@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Institute of Safety Research, P.O. Box 510 119, D-01314 Dresden (Germany); Deendarlianto; Lucas, Dirk [Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Institute of Safety Research, P.O. Box 510 119, D-01314 Dresden (Germany)

    2011-10-15

    In order to improve the understanding of counter-current two-phase flows and to validate new physical models, CFD simulations of a 1/3-scale model of the hot leg of a German Konvoi PWR with rectangular cross section were performed. Selected counter-current flow limitation (CCFL) experiments at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) were calculated with ANSYS CFX 12.1 using the multi-fluid Euler-Euler modeling approach. The transient calculations were carried out using a gas/liquid inhomogeneous multiphase flow model coupled with a k-ω turbulence model for each phase. In the simulation, the surface drag was approximated by a new correlation inside the Algebraic Interfacial Area Density (AIAD) model. The AIAD model allows the detection of the morphological form of the two-phase flow and the corresponding switching, via a blending function, of each correlation from one object pair to another. As a result, this model can distinguish between bubbles, droplets and the free surface using the local liquid phase volume fraction. A comparison with the high-speed video observations shows good qualitative agreement. The results indicate that quantitative agreement of the CCFL characteristics between calculation and experimental data was obtained. The goal is to provide an easily usable AIAD framework for all code users, with the possibility of implementing their own correlations.
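
    The switching idea can be illustrated with sigmoid blending functions of the local liquid volume fraction that weight bubble, droplet, and free-surface drag coefficients; the coefficients and blending constants below are assumed placeholders, not the validated HZDR values:

```python
# Sigmoid blending between morphology-specific drag coefficients,
# in the spirit of the AIAD approach (all constants assumed).
import numpy as np

def drag_coefficient(alpha_l, a=50.0, crit_d=0.3, crit_b=0.7,
                     cd_d=0.44, cd_b=0.44, cd_fs=0.01):
    f_d = 1.0 / (1.0 + np.exp(a * (alpha_l - crit_d)))    # droplets in gas
    f_b = 1.0 / (1.0 + np.exp(-a * (alpha_l - crit_b)))   # bubbles in liquid
    f_fs = 1.0 - f_d - f_b                                # free surface
    return f_d * cd_d + f_b * cd_b + f_fs * cd_fs

for al in (0.05, 0.5, 0.95):
    print(f"alpha_l = {al:.2f} -> blended Cd = {drag_coefficient(al):.3f}")
```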

  8. Numerical simulations of counter-current two-phase flow experiments in a PWR hot leg model using an interfacial area density model

    Energy Technology Data Exchange (ETDEWEB)

    Hohne, T.; Deendarlianto; Vallee, C.; Lucas, D.; Beyer, M., E-mail: t.hoehne@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Inst. of Safety Research, Dresden (Germany)

    2011-07-01

    In order to improve the understanding of counter-current two-phase flows and to validate new physical models, CFD simulations of a 1/3-scale model of the hot leg of a German Konvoi PWR with rectangular cross section were performed. Selected counter-current flow limitation (CCFL) experiments at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) were calculated with ANSYS CFX 12.1 using the multi-fluid Euler-Euler modeling approach. The transient calculations were carried out using a gas/liquid inhomogeneous multiphase flow model coupled with an SST turbulence model for each phase. In the simulation, the surface drag was approximated by a new correlation inside the Algebraic Interfacial Area Density (AIAD) model. The AIAD model allows the detection of the morphological form of the two-phase flow and the corresponding switching, via a blending function, of each correlation from one object pair to another. As a result, this model can distinguish between bubbles, droplets and the free surface using the local liquid phase volume fraction. A comparison with the high-speed video observations shows good qualitative agreement. The results indicate that quantitative agreement of the CCFL characteristics between calculation and experimental data was obtained. The goal is to provide an easily usable AIAD framework for all ANSYS CFX users, with the possibility of implementing their own correlations. (author)

  9. Ecology-centered experiences among children and adolescents: A qualitative and quantitative analysis

    Science.gov (United States)

    Orton, Judy

    living things) and environmental responsibility support (i.e., support through the availability of environmentally responsible models) predict EAB? As predicted, results showed that ecology-centered experiences predicted EAB; yet, when environmental responsibility support was taken into consideration, ecology-centered experiences no longer predicted EAB. These findings suggested environmental responsibility support was a stronger predictor than ecology-centered experiences. Finally, do age and gender predict EAB? Consistent with previous research (e.g., Alp, Ertepiner, Tekkaya, & Yilmaz, 2006), age and gender significantly predicted EAB.

  10. Models of conflict and cooperation

    CERN Document Server

    Gillman, Rick

    2009-01-01

    Models of Conflict and Cooperation is a comprehensive, introductory, game theory text for general undergraduate students. As a textbook, it provides a new and distinctive experience for students working to become quantitatively literate. Each chapter begins with a "dialogue" that models quantitative discourse while previewing the topics presented in the rest of the chapter. Subsequent sections develop the key ideas starting with basic models and ending with deep concepts and results. Throughout all of the sections, attention is given to promoting student engagement with the material through re

  11. Modeling the Nab Experiment Electronics in SPICE

    Science.gov (United States)

    Blose, Alexander; Crawford, Christopher; Sprow, Aaron; Nab Collaboration

    2017-09-01

    The goal of the Nab experiment is to measure the neutron decay coefficients a, the electron-neutrino correlation, and b, the Fierz interference term, to precisely test the Standard Model and to probe for Beyond-the-Standard-Model physics. In this experiment, protons from the beta decay of the neutron are guided through a magnetic field into a silicon detector. Event reconstruction will be achieved via a time-of-flight measurement for the proton and direct measurement of the coincident electron energy in highly segmented silicon detectors, so the amplification circuitry needs to preserve fast timing, provide good amplitude resolution, and be packaged in a high-density format. We have designed a SPICE simulation to model the full electronics chain for the Nab experiment in order to understand the contributions of each stage and optimize them for performance. Additionally, analytic solutions for each of the components have been determined where available. We will present a comparison of the output from the SPICE model, the analytic solutions, and empirically determined data.

  12. A novel quantitative model of cell cycle progression based on cyclin-dependent kinases activity and population balances.

    Science.gov (United States)

    Pisu, Massimo; Concas, Alessandro; Cao, Giacomo

    2015-04-01

    Cell cycle regulates proliferative cell capacity under normal or pathologic conditions, and in general it governs all in vivo/in vitro cell growth and proliferation processes. Mathematical simulation by means of reliable and predictive models represents an important tool to interpret experimental results, to facilitate the definition of the optimal operating conditions for in vitro cultivation, or to predict the effect of a specific drug in normal/pathologic mammalian cells. Along these lines, a novel model of cell cycle progression is proposed in this work. Specifically, it is based on a population balance (PB) approach that allows one to quantitatively describe cell cycle progression through the different phases experienced by each cell of the entire population during its own life. The transition between two consecutive cell cycle phases is simulated by taking advantage of the biochemical kinetic model developed by Gérard and Goldbeter (2009), which involves cyclin-dependent kinases (CDKs) whose regulation is achieved through a variety of mechanisms that include association with cyclins and protein inhibitors, phosphorylation-dephosphorylation, and cyclin synthesis or degradation. This biochemical model properly describes the entire cell cycle of mammalian cells while maintaining a level of detail sufficient to identify checkpoints for transition and to estimate the phase durations required by the PB model. Specific examples are discussed to illustrate the ability of the proposed model to simulate the effect of drugs in in vitro trials of interest in oncology, regenerative medicine and tissue engineering. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. The Impact of Situation-Based Learning to Students’ Quantitative Literacy

    Science.gov (United States)

    Latifah, T.; Cahya, E.; Suhendra

    2017-09-01

    Nowadays, the use of quantities can be seen almost everywhere, and quantitative thinking, such as quantitative reasoning and quantitative literacy, has become increasingly prominent in the context of daily life. However, many people today are still not fully equipped with the knowledge of quantitative thinking, and many individuals do not have sufficient quantitative skills to perform well in today's society. Based on this issue, the research aims to improve students' quantitative literacy in junior high school. The qualitative analysis of written student work and video observations during the experiment reveals that situation-based learning has an impact on students' quantitative literacy.

  14. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    International Nuclear Information System (INIS)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-01-01

    generally not be applicable to other broad classes of problems, we believe that this approach (if applied over time to many types of problems) offers greater potential for long-term progress than attempts to discover a universal solution or theory. We are developing and testing this approach using porous media and model reaction systems that can be both experimentally measured and quantitatively simulated at the pore scale, specifically biofilm development and metal reduction in granular porous media. The general approach we are using in this research involves the following steps: (1) Perform pore-scale characterization of pore geometry and biofilm development in selected porous media systems. (2) Simulate selected reactive transport processes at the pore scale in experimentally measured pore geometries. (3) Validate pore-scale models against laboratory-scale experiments. (4) Perform upscaling to derive continuum-scale (local Darcy scale) process descriptions and effective parameters. (5) Use upscaled models and parameters to simulate reactive transport at the continuum scale in a macroscopically heterogeneous medium

  15. A quantitative approach to modeling the information processing of NPP operators under input information overload

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task under input information overload. We first develop an information processing model with multiple stages that captures the information flow. The uncertainty of the information is then quantified using Conant's model, a form of information theory. We also investigate the applicability of this approach to quantifying the information reduction of operators under input information overload.

  16. First principles pharmacokinetic modeling: A quantitative study on Cyclosporin

    DEFF Research Database (Denmark)

    Mošat', Andrej; Lueshen, Eric; Heitzig, Martina

    2013-01-01

    renal and hepatic clearances, elimination half-life, and mass transfer coefficients, to establish drug biodistribution dynamics in all organs and tissues. This multi-scale model satisfies first principles and conservation of mass, species and momentum. Prediction of organ drug bioaccumulation ... as a function of cardiac output, physiology, pathology or administration route may be possible with the proposed PBPK framework. Successful application of our model-based drug development method may lead to more efficient preclinical trials, accelerated knowledge gain from animal experiments, and shortened time-to-market...
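
    The organ-level building block of such a whole-body model is the flow-limited balance V_i dC_i/dt = Q_i (C_blood - C_i/Kp_i); below is a two-organ sketch with assumed flows, volumes, partition coefficients, and clearance, not the parameterization of the paper:

```python
# Minimal flow-limited PBPK sketch: liver (with hepatic clearance), kidney,
# and a well-mixed blood pool, after a 10 mg IV bolus. All values assumed.
from scipy.integrate import solve_ivp

Q = {"liver": 90.0, "kidney": 70.0}               # blood flows (L/h)
V = {"liver": 1.8, "kidney": 0.3, "blood": 5.0}   # volumes (L)
Kp = {"liver": 4.0, "kidney": 2.5}                # tissue:blood partition
CL_h = 20.0                                       # hepatic clearance (L/h)

def rhs(t, y):
    c_l, c_k, c_b = y
    dl = (Q["liver"] * (c_b - c_l / Kp["liver"])
          - CL_h * c_l / Kp["liver"]) / V["liver"]
    dk = Q["kidney"] * (c_b - c_k / Kp["kidney"]) / V["kidney"]
    db = (Q["liver"] * (c_l / Kp["liver"] - c_b)
          + Q["kidney"] * (c_k / Kp["kidney"] - c_b)) / V["blood"]
    return [dl, dk, db]

sol = solve_ivp(rhs, (0, 24), [0.0, 0.0, 10.0 / V["blood"]], max_step=0.1)
print(f"liver concentration at 24 h: {sol.y[0, -1]:.3f} mg/L")
```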

  17. Experiments and modeling of single plastic particle conversion in suspension

    DEFF Research Database (Denmark)

    Nakhaei, Mohammadhadi; Wu, Hao; Grévain, Damien

    2018-01-01

    Conversion of single high density polyethylene (PE) particles has been studied by experiments and modeling. The experiments were carried out in a single particle combustor for five different shapes and masses of particles at temperature conditions of 900 and 1100°C. Each experiment was recorded ... against the experiments as well as literature data. Furthermore, a simplified isothermal model appropriate for CFD applications was developed in order to model the combustion of plastic particles in cement calciners. By comparing predictions with the isothermal and the non-isothermal models under typical...

  18. Grimsel Test Site: modelling radionuclide migration field experiments

    International Nuclear Information System (INIS)

    Heer, W.; Hadermann, J.

    1994-09-01

    In the migration field experiments at Nagra's Grimsel Test Site, the processes of nuclide transport through a well defined fractured shear-zone in crystalline rock are being investigated. For these experiments, model calculations have been performed to obtain indications of the validity and limitations of the model applied and the data deduced under field conditions. The model consists of a hydrological part, where the dipole flow fields of the experiments are determined, and a nuclide transport part, where the flow-field-driven nuclide propagation through the shear-zone is calculated. In addition to the description of the model, analytical expressions are given to guide the interpretation of experimental results. From the analysis of experimental breakthrough curves for the conservative uranine, weakly sorbing sodium and more strongly sorbing strontium tracers, the following main results can be derived: i) The model is able to represent the breakthrough curves of the migration field experiments to a high degree of accuracy. ii) The process of matrix diffusion is manifest through the tails of the breakthrough curves, decreasing with time as t^(-3/2), and through the special shape of the tail ends, both confirmed by the experiments. iii) For nuclides sorbing rapidly, not too strongly and linearly, and exhibiting a reversible cation exchange process on fault gouge, the laboratory sorption coefficient can reasonably well be extrapolated to field conditions. Adequate care in selecting and preparing the rock samples is, of course, a necessary requirement. Using the parameters determined in the previous analysis, predictions are made for experiments in a smaller and faster flow field. For conservative uranine and weakly sorbing sodium, the agreement of predicted and measured breakthrough curves is good; for the more strongly sorbing strontium it is reasonable, confirming that the model describes the main nuclide transport processes adequately. (author) figs., tabs., 29 refs

  19. The Next Frontier: Quantitative Biochemistry in Living Cells.

    Science.gov (United States)

    Honigmann, Alf; Nadler, André

    2018-01-09

    Researchers striving to convert biology into an exact science foremost rely on structural biology and biochemical reconstitution approaches to obtain quantitative data. However, cell biological research is moving at an ever-accelerating speed into areas where these approaches lose much of their edge. Intrinsically unstructured proteins and biochemical interaction networks composed of interchangeable, multivalent, and unspecific interactions pose unique challenges to quantitative biology, as do processes that occur in discrete cellular microenvironments. Here we argue that a conceptual change in our way of conducting biochemical experiments is required to take on these new challenges. We propose that reconstitution of cellular processes in vitro should be much more focused on mimicking the cellular environment in vivo, an approach that requires detailed knowledge of the material properties of cellular compartments, essentially requiring a material science of the cell. In a similar vein, we suggest that quantitative biochemical experiments in vitro should be accompanied by corresponding experiments in vivo, as many newly relevant cellular processes are highly context-dependent. In essence, this constitutes a call for chemical biologists to convert their discipline from a proof-of-principle science to an area that could rightfully be called quantitative biochemistry in living cells. In this essay, we discuss novel techniques and experimental strategies with regard to their potential to fulfill such ambitious aims.

  1. [Influence of Spectral Pre-Processing on PLS Quantitative Model of Detecting Cu in Navel Orange by LIBS].

    Science.gov (United States)

    Li, Wen-bing; Yao, Lin-tao; Liu, Mu-hua; Huang, Lin; Yao, Ming-yin; Chen, Tian-bing; He, Xiu-wen; Yang, Ping; Hu, Hui-qin; Nie, Jiang-hui

    2015-05-01

    Cu in navel orange was detected rapidly by laser-induced breakdown spectroscopy (LIBS) combined with partial least squares (PLS) quantitative analysis, and the effect of different spectral data pretreatment methods on the detection accuracy of the model was explored. Spectral data for the 52 Gannan navel orange samples were pretreated by different data smoothing, mean centering and standard normal variate transforms. Then the 319-338 nm wavelength section containing the characteristic spectral lines of Cu was selected to build PLS models, and the main evaluation indexes of the models, such as the regression coefficient (r), the root mean square error of cross validation (RMSECV) and the root mean square error of prediction (RMSEP), were compared and analyzed. The three indicators of the PLS model after 13-point smoothing and mean centering reached 0.9928, 3.43 and 3.4, respectively, and the average relative error of the prediction model was only 5.55%; in short, the calibration and prediction quality of this model was the best. The results show that by selecting an appropriate data pre-processing method, the prediction accuracy of PLS quantitative models for fruits and vegetables analyzed by LIBS can be improved effectively, providing a new method for the fast and accurate detection of fruits and vegetables by LIBS.
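
    The calibration pipeline described (smoothing, mean centering, PLS regression with cross-validation) looks roughly like the following sketch on synthetic stand-in spectra:

```python
# PLS calibration sketch with 13-point smoothing and mean centering.
# The spectra and Cu concentrations are synthetic stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(52, 300))            # stand-in spectra (319-338 nm window)
y = rng.uniform(1, 50, size=52)           # reference Cu content (assumed units)
X[:, 150] += 0.05 * y                     # implant a Cu "line" for the demo

kernel = np.ones(13) / 13.0               # 13-point moving-average smoothing
X = np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 1, X)
X -= X.mean(axis=0)                       # mean centering

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmsecv:.2f}")
```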

  2. Quantitative analysis of surface deformation and ductile flow in complex analogue geodynamic models based on PIV method.

    Science.gov (United States)

    Krýza, Ondřej; Lexa, Ondrej; Závada, Prokop; Schulmann, Karel; Gapais, Denis; Cosgrove, John

    2017-04-01

    PIV (particle image velocimetry) is an optical analysis method widely used in many technical branches where the visualization and quantification of material flow are important. Typical examples are studies of liquid flow through complex channel systems, gas spreading, or combustion problems. In our current research we used this method for the investigation of two types of complex analogue geodynamic and tectonic experiments. The first class of experiments is aimed at modeling large-scale oroclinal buckling as an analogue of the late Paleozoic to early Mesozoic evolution of the Central Asian Orogenic Belt (CAOB), resulting from the northward drift of the North China craton towards the Siberian craton. Here we studied the relationship between lower crustal and lithospheric mantle flows and upper crustal deformation, respectively. The second class of experiments is focused on a more general study of lower crustal flow in indentation systems, which represent a major component of some large hot orogens (e.g. the Bohemian Massif). Most of the simulations in both cases show a strong dependency of the shape of brittle structures situated in the upper crust on the folding style of the middle and lower ductile layers, which is influenced by the rheological, geometrical and thermal conditions of different parts across the shortened domain. The purpose of the PIV application is to quantify material redistribution in critical domains of the model. The derivation of flow directions and the calculation of strain-rate and total displacement fields in analogue experiments are generally difficult and time-expensive, and often performed only on the basis of visual evaluation. The PIV method operates with a set of images, where small tracer particles are seeded within the modeled domain and are assumed to faithfully follow the material flow. On the basis of pixel coordinate estimation, the material displacement field, velocity field, strain-rate, vorticity, tortuosity etc. are calculated. In our experiments we used velocity field divergence to
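
    The core displacement-estimation step of PIV, finding the cross-correlation peak between two interrogation windows, can be sketched in a few lines on synthetic frames:

```python
# FFT-based cross-correlation of two frames; the correlation peak gives the
# (wrapped) displacement of the tracer pattern between exposures.
import numpy as np

rng = np.random.default_rng(4)
frame1 = rng.random((64, 64))
shift = (3, -5)                               # known displacement (pixels)
frame2 = np.roll(frame1, shift, axis=(0, 1))  # second exposure

f1 = frame1 - frame1.mean()
f2 = frame2 - frame2.mean()
corr = np.fft.ifft2(np.fft.fft2(f1).conj() * np.fft.fft2(f2)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
dy, dx = [p if p < s // 2 else p - s for p, s in zip(peak, corr.shape)]
print(f"recovered displacement: ({dy}, {dx})")   # -> (3, -5)
```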

  3. Current experience and a new modeling on water hammer due to steam condensation in PWR secondary system

    International Nuclear Information System (INIS)

    Kawanishi, K.; Kasahara, J.; Ueno, T.; Suzuta, T.

    1998-01-01

    Water hammer can occur in the pipelines of the turbine systems of nuclear or fossil fuel power plants. According to the NUREG report, approximately 150 events have been reported since 1969, and we also had such an experience recently. Water hammer occurs due to sudden steam condensation accompanied by a pressure pulse. This kind of pressure pulse is produced by the alternating formation and condensation of steam slugs in the pipe, and its frequency is related to the subcooling and the pipe structure. This paper presents our recent experience with water hammer together with some experimental studies. The present experiments were performed to obtain a data base for evaluating the pressure pulses. The test pipe was a horizontal tube with a dead end connected to a vertical tube, simulating a drain line in a PWR secondary system. The main results are as follows: the magnitude of the pressure pulse depends on the drain velocity and the initial subcooling, and the pipe structure affects the frequency and duration of the water hammer phenomenon. A new model for the quantitative explanation of the phenomena is also presented.
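
    For an order-of-magnitude feel for such condensation-induced pulses, the classical Joukowsky relation dP = rho * c * dv is the usual starting point (the paper's model goes further); the numbers below are assumed:

```python
# Joukowsky estimate of the pressure pulse from a sudden slug deceleration.
rho = 958.0   # kg/m^3, water near saturation at 1 bar
c = 1400.0    # m/s, pressure-wave speed in the water-filled pipe (assumed)
dv = 5.0      # m/s, slug velocity change (assumed)

dp = rho * c * dv
print(f"Joukowsky pressure pulse: {dp / 1e6:.1f} MPa")
```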

  4. On the quantitativeness of EDS STEM

    Energy Technology Data Exchange (ETDEWEB)

    Lugg, N.R. [Institute of Engineering Innovation, The University of Tokyo, 2-11-16, Yayoi, Bunkyo-ku, Tokyo 113-8656 (Japan); Kothleitner, G. [Institute for Electron Microscopy and Nanoanalysis, Graz University of Technology, Steyrergasse 17, 8010 Graz (Austria); Centre for Electron Microscopy, Steyrergasse 17, 8010 Graz (Austria); Shibata, N.; Ikuhara, Y. [Institute of Engineering Innovation, The University of Tokyo, 2-11-16, Yayoi, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2015-04-15

    Chemical mapping using energy dispersive X-ray spectroscopy (EDS) in scanning transmission electron microscopy (STEM) has recently shown to be a powerful technique in analyzing the elemental identity and location of atomic columns in materials at atomic resolution. However, most applications of EDS STEM have been used only to qualitatively map whether elements are present at specific sites. Obtaining calibrated EDS STEM maps so that they are on an absolute scale is a difficult task and even if one achieves this, extracting quantitative information about the specimen – such as the number or density of atoms under the probe – adds yet another layer of complexity to the analysis due to the multiple elastic and inelastic scattering of the electron probe. Quantitative information may be obtained by comparing calibrated EDS STEM with theoretical simulations, but in this case a model of the structure must be assumed a priori. Here we first theoretically explore how exactly elastic and thermal scattering of the probe confounds the quantitative information one is able to extract about the specimen from an EDS STEM map. We then show using simulation how tilting the specimen (or incident probe) can reduce the effects of scattering and how it can provide quantitative information about the specimen. We then discuss drawbacks of this method – such as the loss of atomic resolution along the tilt direction – but follow this with a possible remedy: precession averaged EDS STEM mapping. - Highlights: • Signal obtained in EDS STEM maps (of STO) compared to non-channelling signal. • Deviation from non-channelling signal occurs in on-axis experiments. • Tilting specimen: signal close to non-channelling case but atomic resolution is lost. • Tilt-precession series: non-channelling signal and atomic-resolution features obtained. • Associated issues are discussed.

  5. Quantitative analysis of CT brain images: a statistical model incorporating partial volume and beam hardening effects

    International Nuclear Information System (INIS)

    McLoughlin, R.F.; Ryan, M.V.; Heuston, P.M.; McCoy, C.T.; Masterson, J.B.

    1992-01-01

    The purpose of this study was to construct and evaluate a statistical model for the quantitative analysis of computed tomographic brain images. Data were derived from standard sections in 34 normal studies. A model representing the intracranial pure-tissue and partial-volume areas, with allowance for beam hardening, was developed. The average percentage error in the estimation of areas, derived from phantom tests using the model, was 28.47%. We conclude that our model is not sufficiently accurate to be of clinical use, even though allowance was made for partial volume and beam hardening effects. (author)

  6. Quantitative analysis of boron by neutron radiography

    International Nuclear Information System (INIS)

    Bayuelken, A.; Boeck, H.; Schachner, H.; Buchberger, T.

    1990-01-01

    The quantitative determination of boron in ores is a lengthy process with chemical analysis techniques. As nuclear techniques like X-ray fluorescence and activation analysis are not applicable to boron, only the neutron radiography technique, which exploits the high neutron absorption cross section of this element, can be applied for quantitative determination. This paper describes preliminary tests and calibration experiments carried out at a 250 kW TRIGA reactor. (orig.) [de]
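
    The principle behind such quantification is neutron attenuation; a hedged Beer-Lambert estimate of boron areal density from a measured transmission looks like this (other absorbers neglected, transmission value assumed):

```python
# Beer-Lambert estimate: T = exp(-N_t * sigma), solved for the boron
# areal density N_t from a measured thermal-neutron transmission.
import numpy as np

sigma_b = 767e-24   # cm^2, thermal absorption cross section of natural boron
T = 0.62            # measured transmission through the sample (assumed)

n_t = -np.log(T) / sigma_b   # boron atoms per cm^2
print(f"boron areal density ~ {n_t:.2e} atoms/cm^2")
```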

  7. Seismic-refraction field experiments on Galapagos Islands: A quantitative tool for hydrogeology

    Science.gov (United States)

    Adelinet, M.; Domínguez, C.; Fortin, J.; Violette, S.

    2018-01-01

    Due to their complex structure and the difficulty of collecting data, the hydrogeology of basaltic islands remains poorly understood, and the Galapagos Islands are no exception. Geophysics offers the possibility of describing the subsurface of these islands and of quantifying the hydrodynamic properties of their ground layers, which can be useful for building robust hydrogeological models. In this paper, we present seismic refraction data acquired on Santa Cruz and San Cristobal, the two main inhabited islands of the Galapagos. We investigated sites with several hydrogeological contexts, located at different altitudes and at different distances from the coast. At each site, a 2D P-wave velocity profile is built, highlighting unsaturated and saturated volcanic layers. At the coastal sites, seawater intrusion is identified and the basal aquifer is characterized in terms of variations in compressional wave velocities according to saturation state. At the highland sites, the limits between soils and lava flows are identified. On San Cristobal Island, the 2D velocity profile obtained at a mid-slope site (altitude 150 m) indicates the presence of a near-surface freshwater aquifer, which is in agreement with previous geophysical studies and the hydrogeological conceptual model developed for this island. The originality of our paper is the use of velocity data to compute field porosity based on poroelasticity theory and the Biot-Gassmann equations. Given that porosity is a key parameter in quantitative hydrogeological models, this is a step toward a better understanding of shallow fluid flows within a complex structure such as the Galapagos volcanoes.

  8. Reflections on the Introduction of Quantitative Assessment in Persuasive Writing Classes

    Directory of Open Access Journals (Sweden)

    Paul H. Grawe

    2014-01-01

    Full Text Available. If quantitative reasoning is to be a legitimate part of composition curricula, it must be seen as a valuable tool for composition instructors to use in exploring their own subject. Composition instructors must see the relevance of QR not merely to their students in other subject areas but also directly in their literary and rhetorical studies and careers. Here we reflect on a highly successful program of using quantitative techniques in teaching advanced levels of professional rhetoric, namely persuasive speech and writing. We recount our 15-year experience of running an in-class, empirical and progressive experiment in group negotiations, the Legislative Simulation (LS). The LS provided statistically significant results, some eye-opening, reported in various publications, but here our reflections concern what such an experiment tells us about the opportunities and challenges of using quantitative techniques for the improvement of teaching rhetoric in and for itself. It is clear from our experience that QR takes on a somewhat different appearance within the humanities, requiring adjustments in pedagogy and expectations. None of the challenges, however, are insuperable, and the rewards for the discipline as well as for a quantitatively competent university are very great.

  9. Integrating Qualitative and Quantitative Methods in Participatory Modeling to Elicit Behavioral Drivers in Environmental Dilemmas: the Case of Air Pollution in Talca, Chile.

    Science.gov (United States)

    Meinherz, Franziska; Videira, Nuno

    2018-04-10

    The aim of this paper is to contribute to the exploration of environmental modeling methods based on the elicitation of stakeholders' mental models. This aim is motivated by the necessity of understanding the dilemmas and behavioral rationales of individuals in order to support the management of environmental problems. The methodology developed for this paper integrates qualitative and quantitative methods by deploying focus groups for the elicitation of the behavioral rationales of the target population, and grounded theory to code the information gained in the focus groups and to guide the development of a dynamic simulation model. The approach is applied to a case of urban air pollution caused by residential heating with wood in central Chile. The results show how the households' behavior interrelates with the governmental management strategies and provide valuable and novel insights into potential challenges to the implementation of policies to manage the local air pollution problem. The experience further shows that the developed participatory modeling approach makes it possible to overcome some of the issues currently encountered in the elicitation of individuals' behavioral rationales and in the quantification of qualitative information.

  10. Quantitative risk assessment of continuous liquid spill fires based on spread and burning behaviours

    DEFF Research Database (Denmark)

    Zhao, Jinlong; Huang, Hong; Li, Yuntao

    2017-01-01

    Spill fires usually occur during the storage and transportation of hazardous materials, posing a threat to the people and environment in their immediate proximity. In this paper, a classical Quantitative Risk Assessment (QRA) method is used to assess the risk of spill fires. In this method ..., the maximum spread area and the steady burning area are introduced as parameters to clearly assess the range of influence of the spill fire. In the calculations, a modified spread model that takes into consideration the burning rate variation is established to calculate the maximum spread area. Furthermore, large-scale experiments of spill fires on water and a glass sheet were conducted to verify the accuracy and application of the model. The results show that the procedure we developed can be used to quantitatively calculate the risk associated with a continuous spill fire...
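
    One common way to arrive at a steady burning area for a continuous spill, not necessarily the modified model of this paper, is a balance between the volumetric spill rate and the burning regression rate; the values below are assumed:

```python
# Steady-state balance for a continuous spill fire: in steady state the
# volumetric spill rate equals the volumetric burning rate, so
# A_steady = Vdot / (m'' / rho). All values are assumed.
vdot = 0.5e-3   # m^3/s, continuous spill rate
m_flux = 0.055  # kg/(m^2 s), burning mass flux of the fuel
rho = 740.0     # kg/m^3, fuel density

a_steady = vdot / (m_flux / rho)
print(f"steady burning area ~ {a_steady:.1f} m^2")
```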

  11. Transient deformation of a droplet near a microfluidic constriction: A quantitative analysis

    Science.gov (United States)

    Trégouët, Corentin; Salez, Thomas; Monteux, Cécile; Reyssat, Mathilde

    2018-05-01

    We report on experiments that consist of deforming a collection of monodisperse droplets produced by a microfluidic chip through a flow-focusing device. We show that a proper numerical modeling of the flow is necessary to access the stress applied by the latter on the droplet along its trajectory through the chip. This crucial step enables the full integration of the differential equation governing the dynamical deformation, and consequently the robust measurement of the interfacial tension by fitting the experiments with the calculated deformation. Our study thus demonstrates the feasibility of quantitative in situ rheology in microfluidic flows involving, e.g., droplets, capsules, or cells.

  12. Study on quantitative risk assessment model of the third party damage for natural gas pipelines based on fuzzy comprehensive assessment

    International Nuclear Information System (INIS)

    Qiu, Zeyang; Liang, Wei; Lin, Yang; Zhang, Meng; Wang, Xue

    2017-01-01

    As an important part of the national energy supply system, transmission pipelines for natural gas can cause serious environmental pollution and loss of life and property in case of accident. Third party damage is one of the most significant causes of natural gas pipeline system accidents, and it is very important to establish an effective quantitative risk assessment model of third party damage in order to reduce the number of gas pipeline operation accidents. Because third party damage accidents have characteristics such as diversity, complexity and uncertainty, this paper establishes a quantitative risk assessment model of third party damage based on the Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE). Firstly, the risk sources of third party damage are identified exactly; then the weights of the factors are determined via an improved AHP; finally, the importance of each factor is calculated with a fuzzy comprehensive evaluation model. The results show that the quantitative risk assessment model is suitable for third party damage to natural gas pipelines, and improvement measures can be put forward to avoid accidents based on the importance of each factor. (paper)
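
    The AHP-plus-FCE combination amounts to composing factor weights with a fuzzy membership matrix over risk grades; a minimal sketch with invented numbers:

```python
# Fuzzy comprehensive evaluation: grade vector B = W . R, where W holds AHP
# factor weights and R holds each factor's membership in the risk grades.
import numpy as np

W = np.array([0.5, 0.3, 0.2])           # AHP factor weights (invented)
R = np.array([[0.7, 0.2, 0.1],          # memberships of each factor in the
              [0.2, 0.5, 0.3],          # grades (low, medium, high risk)
              [0.1, 0.4, 0.5]])

B = W @ R                               # weighted-average fuzzy operator
B /= B.sum()                            # normalize
grades = ["low", "medium", "high"]
print(f"grade vector = {np.round(B, 3)} -> {grades[int(np.argmax(B))]} risk")
```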

  13. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    Science.gov (United States)

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of such quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMM can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
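
    A minimal numerical sketch of the latent-to-observed mapping, for the simplest case of a binary trait with a probit link, is given below. It illustrates the integrals involved (observed-scale mean, and heritability via the average derivative of the inverse link); it is not the QGglmm implementation, and the latent parameter values are invented.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Hypothetical latent-scale parameters: mean, additive variance, residual variance
mu, va, vr = 0.5, 0.3, 1.0

# Observed-scale population mean: integrate the inverse link (probit) over
# the distribution of latent values
f = lambda l: norm.cdf(l) * norm.pdf(l, loc=mu, scale=np.sqrt(va + vr))
p_bar, _ = quad(f, -10, 10)

# Average derivative of the inverse link over latent values (Psi), then
# observed-scale additive variance Psi^2 * Va and heritability
g = lambda l: norm.pdf(l) * norm.pdf(l, loc=mu, scale=np.sqrt(va + vr))
psi, _ = quad(g, -10, 10)
var_obs = p_bar * (1 - p_bar)         # phenotypic variance of a binary trait
h2_obs = (psi**2 * va) / var_obs
print(f"observed mean = {p_bar:.3f}, observed-scale h2 = {h2_obs:.3f}")
```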

  14. Quantitatively accurate activity measurements with a dedicated cardiac SPECT camera: Physical phantom experiments

    Energy Technology Data Exchange (ETDEWEB)

    Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn [Physics Department, Carleton University, Ottawa, Ontario K1S 5B6, Canada and Cardiology, The University of Ottawa Heart Institute, Ottawa, Ontario K1Y4W7 (Canada)

    2016-01-15

    Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail—an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to {sup 99m}Tc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 {sup 99m}Tc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical {sup 99m}Tc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic “lesion” was placed inside of the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE

  15. Qualitative and Quantitative Features of Music Reported to Support Peak Mystical Experiences during Psychedelic Therapy Sessions

    Directory of Open Access Journals (Sweden)

    Frederick S. Barrett

    2017-07-01

    Psilocybin is a classic (serotonergic) hallucinogen (“psychedelic” drug) that may occasion mystical experiences (characterized by a profound feeling of oneness or unity) during acute effects. Such experiences may have therapeutic value. Research and clinical applications of psychedelics usually include music listening during acute drug effects, based on the expectation that music will provide psychological support during the acute effects of psychedelic drugs, and may even facilitate the occurrence of mystical experiences. However, the features of music chosen to support the different phases of drug effects are not well-specified. As a result, there is currently neither real guidance for the selection of music nor standardization of the music used to support clinical trials with psychedelic drugs across various research groups or therapists. A description of the features of music found to be supportive of mystical experience will allow for the standardization and optimization of the delivery of psychedelic drugs in both research trials and therapeutic contexts. To this end, we conducted an anonymous survey of individuals with extensive experience administering psilocybin or psilocybin-containing mushrooms under research or therapeutic conditions, in order to identify the features of commonly used musical selections that have been found by therapists and research staff to be supportive of mystical experiences within a psilocybin session. Ten respondents yielded 24 unique recommendations of musical stimuli supportive of peak effects with psilocybin, and 24 unique recommendations of musical stimuli supportive of the period leading up to a peak experience. Qualitative analysis (expert rating of musical and music-theoretic features of the recommended stimuli) and quantitative analysis (using signal processing and music-information retrieval methods) of 22 of these stimuli yielded a description of peak period music that was characterized by regular...

  16. Qualitative and Quantitative Features of Music Reported to Support Peak Mystical Experiences during Psychedelic Therapy Sessions

    Science.gov (United States)

    Barrett, Frederick S.; Robbins, Hollis; Smooke, David; Brown, Jenine L.; Griffiths, Roland R.

    2017-01-01

    Psilocybin is a classic (serotonergic) hallucinogen (“psychedelic” drug) that may occasion mystical experiences (characterized by a profound feeling of oneness or unity) during acute effects. Such experiences may have therapeutic value. Research and clinical applications of psychedelics usually include music listening during acute drug effects, based on the expectation that music will provide psychological support during the acute effects of psychedelic drugs, and may even facilitate the occurrence of mystical experiences. However, the features of music chosen to support the different phases of drug effects are not well-specified. As a result, there is currently neither real guidance for the selection of music nor standardization of the music used to support clinical trials with psychedelic drugs across various research groups or therapists. A description of the features of music found to be supportive of mystical experience will allow for the standardization and optimization of the delivery of psychedelic drugs in both research trials and therapeutic contexts. To this end, we conducted an anonymous survey of individuals with extensive experience administering psilocybin or psilocybin-containing mushrooms under research or therapeutic conditions, in order to identify the features of commonly used musical selections that have been found by therapists and research staff to be supportive of mystical experiences within a psilocybin session. Ten respondents yielded 24 unique recommendations of musical stimuli supportive of peak effects with psilocybin, and 24 unique recommendations of musical stimuli supportive of the period leading up to a peak experience. Qualitative analysis (expert rating of musical and music-theoretic features of the recommended stimuli) and quantitative analysis (using signal processing and music-information retrieval methods) of 22 of these stimuli yielded a description of peak period music that was characterized by regular, predictable

  18. The Quantitative Evaluation of Functional Neuroimaging Experiments: Mutual Information Learning Curves

    DEFF Research Database (Denmark)

    Kjems, Ulrik; Hansen, Lars Kai; Anderson, Jon

    2002-01-01

    Learning curves are presented as an unbiased means for evaluating the performance of models for neuroimaging data analysis. The learning curve measures the predictive performance in terms of the generalization or prediction error as a function of the number of independent examples (e.g., subjects) used to determine the parameters in the model. Cross-validation resampling is used to obtain unbiased estimates of a generic multivariate Gaussian classifier, for training set sizes from 2 to 16 subjects. We apply the framework to four different activation experiments, in this case [15O]water data sets, although the framework is equally valid for multisubject fMRI studies. We demonstrate how the prediction error can be expressed as the mutual information between the scan and the scan label, measured in units of bits. The mutual information learning curve can be used...
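
    The central quantity, prediction error expressed as the mutual information between scan and scan label in bits, can be estimated from a confusion matrix pooled over cross-validation folds; a small sketch with hypothetical counts follows.

```python
import numpy as np

# Mutual information (in bits) between true and predicted labels,
# estimated from a cross-validated confusion matrix.
def mutual_information_bits(confusion):
    p = confusion / confusion.sum()          # joint distribution p(true, predicted)
    px = p.sum(axis=1, keepdims=True)        # marginal over true labels
    py = p.sum(axis=0, keepdims=True)        # marginal over predicted labels
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# Hypothetical confusion counts pooled over cross-validation folds
conf = np.array([[40, 10],
                 [ 8, 42]])
print(mutual_information_bits(conf), "bits")
```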

  19. Mastering R for quantitative finance

    CERN Document Server

    Berlinger, Edina; Badics, Milán; Banai, Ádám; Daróczi, Gergely; Dömötör, Barbara; Gabler, Gergely; Havran, Dániel; Juhász, Péter; Margitai, István; Márkus, Balázs; Medvegyev, Péter; Molnár, Julia; Szucs, Balázs Árpád; Tuza, Ágnes; Vadász, Tamás; Váradi, Kata; Vidovics-Dancs, Ágnes

    2015-01-01

    This book is intended for those who want to learn how to use R's capabilities to build models in quantitative finance at a more advanced level. To keep pace with the chapters, you need to be at an intermediate level in quantitative finance and you also need to have a reasonable knowledge of R.

  20. Quantitative analysis of receptor imaging

    International Nuclear Information System (INIS)

    Fu Zhanli; Wang Rongfu

    2004-01-01

    Model-based methods for quantitative analysis of receptor imaging, including kinetic, graphical and equilibrium methods, are introduced in detail. Some technical problems facing quantitative analysis of receptor imaging are also reviewed briefly, such as correction for in vivo metabolism of the tracer, the radioactivity contribution from blood volume within the ROI, and estimation of the nondisplaceable ligand concentration.
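
    As a concrete example of the graphical methods mentioned above, a Logan analysis estimates the total distribution volume from the late-time slope of integrated tissue and plasma curves. The sketch below uses a synthetic one-tissue-compartment example; the function name and parameter values are illustrative only.

```python
import numpy as np

# Logan graphical analysis: at late times, plotting
#   int_0^t C_T dt' / C_T(t)  versus  int_0^t C_p dt' / C_T(t)
# gives a line whose slope approximates the total distribution volume V_T.
def logan_vt(t, ct, cp, t_star=30.0):
    int_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * (ct[1:] + ct[:-1]) / 2)))
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2)))
    m = (t >= t_star) & (ct > 0)
    slope, _ = np.polyfit(int_cp[m] / ct[m], int_ct[m] / ct[m], 1)
    return slope

# Synthetic one-tissue-compartment example: K1 = 0.1 mL/min/g, k2 = 0.05 /min,
# so the true V_T = K1/k2 = 2.0.
t = np.linspace(0.0, 90.0, 181)                     # minutes
cp = np.exp(-0.05 * t) + 0.1 * np.exp(-0.005 * t)   # plasma input (arbitrary units)
K1, k2 = 0.1, 0.05
dt = t[1] - t[0]
ct = K1 * dt * np.convolve(cp, np.exp(-k2 * t))[: t.size]   # tissue curve
print(f"estimated V_T = {logan_vt(t, ct, cp):.2f} (true 2.0)")
```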

  1. In-vivo quantitative measurement

    International Nuclear Information System (INIS)

    Ito, Takashi

    1992-01-01

    So far, quantitative analyses of the oxygen consumption rate, blood flow distribution, glucose metabolic rate and so on have been carried out by positron CT. The greatest merit of positron CT is that observation and verification in humans have become easy. Recently, accompanying the rapid development of mapping tracers for central nervous receptors, the observation of many central nervous receptors by positron CT has become feasible, and much expectation has been placed on the elucidation of brain functions. The conditions required for in vitro processes cannot be realized in the strict sense in vivo. Quantitative measurement with the in vivo tracer method is carried out by measuring the accumulation and movement of a tracer after its administration. The kinetic model of the mapping tracer for central nervous receptors is discussed. The quantitative analysis using a steady-state kinetic model, the measurement of dopamine receptors by the reference method, the measurement of D2 receptors using 11C-raclopride by the direct method, and the possibility of measuring dynamic bio-reactions are reported. (K.I.)

  2. Systematic Analysis of Quantitative Logic Model Ensembles Predicts Drug Combination Effects on Cell Signaling Networks

    Science.gov (United States)

    2016-08-27

  3. Clinical experience of rehabilitation therapists with chronic diseases: a quantitative approach.

    NARCIS (Netherlands)

    Rijken, P.M.; Dekker, J.

    1998-01-01

    Objectives: To provide an overview of the numbers of patients with selected chronic diseases treated by rehabilitation therapists (physical therapists, occupational therapists, exercise therapists and podiatrists). The study was performed to get quantitative information on the degree to which

  4. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    Science.gov (United States)

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.
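
    For intuition, the joint testing of several quantitative traits against a genetic predictor can be sketched with a standard MANOVA, which reports the same three statistics (Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda); the functional linear models and approximate F-tests of the paper are more elaborate. The data below are simulated placeholders.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Simulate three traits that each depend weakly on an additive genotype score
rng = np.random.default_rng(4)
n = 200
geno = rng.integers(0, 3, n)                  # genotype coded 0/1/2
traits = rng.normal(size=(n, 3)) + 0.3 * geno[:, None]

df = pd.DataFrame(traits, columns=["t1", "t2", "t3"])
df["g"] = geno

# Joint test of all three traits against the genotype predictor; the output
# includes Pillai's trace, Hotelling-Lawley trace, and Wilks' Lambda.
print(MANOVA.from_formula("t1 + t2 + t3 ~ g", data=df).mv_test())
```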

  5. Using sobol sequences for planning computer experiments

    Science.gov (United States)

    Statnikov, I. N.; Firsov, G. I.

    2017-12-01

    This paper discusses the use of the Planned LP-search (PLP-search) method for problems of multicriteria synthesis of dynamic systems. Based on simulation-model experiments, the method not only allows the parameter space to be revised within specified ranges of parameter variation, but, owing to the specially randomized planning of these experiments, also permits a quantitative statistical evaluation of the influence of the varied parameters and their pairwise combinations on the properties of the dynamic system.
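
    The space-filling ingredient of such planning can be sketched with an off-the-shelf Sobol generator; the snippet below (assuming scipy >= 1.7 for scipy.stats.qmc) draws scrambled Sobol points and scales them to hypothetical parameter ranges. The PLP-search randomization and statistical screening themselves are not reproduced here.

```python
import numpy as np
from scipy.stats import qmc

# Space-filling design of simulation experiments via a scrambled Sobol sequence
sampler = qmc.Sobol(d=3, scramble=True, seed=0)
unit = sampler.random_base2(m=7)              # 2**7 = 128 points in [0, 1)^3

lo, hi = [0.1, 10.0, 0.5], [1.0, 100.0, 5.0]  # hypothetical parameter bounds
trials = qmc.scale(unit, lo, hi)

# Each row is one simulation-model experiment; responses can then be screened
# statistically for main and pairwise parameter effects, as in PLP-search.
print(trials.shape, qmc.discrepancy(unit))
```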

  6. Thermal-hydraulic Experiments for Advanced Physical Model Development

    International Nuclear Information System (INIS)

    Song, Chulhwa

    2012-04-01

    Improved prediction models are needed to enhance safety analysis capability through an experimental database of local phenomena. To improve the two-phase interfacial area transport model, various experiments were carried out with local two-phase interfacial structure test facilities. 2 × 2 and 6 × 6 rod bundle test facilities were used for experiments on droplet behavior inside a heated rod bundle geometry. Experiments using GIRLS and JICO, together with CFD analysis, were carried out to understand the local condensation of a steam jet, the turbulent jet induced by condensation, and thermal mixing in a pool. In order to develop a model for key phenomena of the newly adopted safety system, experiments on boiling inside a pool and condensation in a horizontal channel have been performed. An experimental database of CHF (Critical Heat Flux) and PDO (Post-dryout) was constructed. The mechanism of heat transfer enhancement by surface modification in nano-fluids was investigated in boiling mode and rapid quenching mode. Special measurement techniques were developed: a double-sensor optical void probe, an optic rod, the PIV technique, and the UBIM system.

  7. Quantitative histological models suggest endothermy in plesiosaurs

    Directory of Open Access Journals (Sweden)

    Corinna V. Fleischle

    2018-06-01

    Background: Plesiosaurs are marine reptiles that arose in the Late Triassic and survived to the Late Cretaceous. They have a unique and uniform bauplan and are known for their very long neck and hydrofoil-like flippers. Plesiosaurs are among the most successful vertebrate clades in Earth’s history. Based on bone mass decrease and cosmopolitan distribution, both of which affect lifestyle, indications of parental care, and oxygen isotope analyses, evidence for endothermy in plesiosaurs has accumulated. Recent bone histological investigations also provide evidence of fast growth and elevated metabolic rates. However, quantitative estimations of metabolic rates and bone growth rates in plesiosaurs have not been attempted before. Methods: Phylogenetic eigenvector maps is a method for estimating trait values from a predictor variable while taking into account phylogenetic relationships. As predictor variable, this study employs vascular density, measured in bone histological sections of fossil eosauropterygians and extant comparative taxa. We quantified vascular density as primary osteon density, i.e., the proportion of vascular area (including lamellar infillings of primary osteons) to total bone area. Our response variables are bone growth rate (expressed as local bone apposition rate) and resting metabolic rate (RMR). Results: Our models reveal bone growth rates and RMRs for plesiosaurs that are in the range of birds, suggesting that plesiosaurs were endothermic. Even for basal eosauropterygians we estimate values in the range of mammals or higher. Discussion: Our models are influenced by the availability of comparative data, which are lacking for large marine amniotes, potentially skewing our results. However, our statistically robust inference of fast growth and fast metabolism is in accordance with other evidence for plesiosaurian endothermy. Endothermy may explain the success of plesiosaurs, including their survival of the end-Triassic extinction

  8. Quantitative Microbial Risk Assessment Tutorial Installation of Software for Watershed Modeling in Support of QMRA - Updated 2017

    Science.gov (United States)

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • QMRA Installation • SDMProjectBuilder (which includes the Microbial ...

  9. Downscaling SSPs in the GBM Delta - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    Science.gov (United States)

    Allan, Andrew; Barbour, Emily; Salehin, Mashfiqus; Munsur Rahman, Md.; Hutton, Craig; Lazar, Attila

    2016-04-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.

  10. Quantitative risk assessment system (QRAS)

    Science.gov (United States)

    Weinstock, Robert M (Inventor); Smidts, Carol S (Inventor); Mosleh, Ali (Inventor); Chang, Yung-Hsien (Inventor); Swaminathan, Sankaran (Inventor); Groen, Francisco J (Inventor); Tan, Zhibin (Inventor)

    2001-01-01

    A quantitative risk assessment system (QRAS) builds a risk model of a system for which risk of failure is being assessed, then analyzes the risk of the system corresponding to the risk model. The QRAS performs sensitivity analysis of the risk model by altering fundamental components and quantifications built into the risk model, then re-analyzes the risk of the system using the modifications. More particularly, the risk model is built by building a hierarchy, creating a mission timeline, quantifying failure modes, and building/editing event sequence diagrams. Multiplicities, dependencies, and redundancies of the system are included in the risk model. For analysis runs, a fixed baseline is first constructed and stored. This baseline contains the lowest level scenarios, preserved in event tree structure. The analysis runs, at any level of the hierarchy and below, access this baseline for risk quantitative computation as well as ranking of particular risks. A standalone Tool Box capability exists, allowing the user to store application programs within QRAS.
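
    The kind of quantification such a tool performs at its lowest level can be illustrated with a toy event tree: an initiating-event frequency is propagated through branch probabilities to end-state scenarios, which are then ranked. The structure and all numbers below are hypothetical.

```python
# Toy event-tree quantification: expand an initiating event into end-state
# scenarios whose frequencies are products of branch probabilities.
initiating_event_per_mission = 1e-3

# Each branch: (description, probability that the safeguard fails)
branches = [("detection fails", 0.05), ("isolation fails", 0.10)]

scenarios = {}
for detect_fail in (True, False):
    for isolate_fail in (True, False):
        p = initiating_event_per_mission
        p *= branches[0][1] if detect_fail else 1 - branches[0][1]
        p *= branches[1][1] if isolate_fail else 1 - branches[1][1]
        scenarios[(detect_fail, isolate_fail)] = p

# Rank scenarios by frequency, as the baseline-driven analysis runs do
for k, v in sorted(scenarios.items(), key=lambda kv: -kv[1]):
    print(k, f"{v:.2e} per mission")
```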

  11. A Quantitative Gas Chromatographic Ethanol Determination.

    Science.gov (United States)

    Leary, James J.

    1983-01-01

    Describes a gas chromatographic experiment for the quantitative determination of volume percent ethanol in water ethanol solutions. Background information, procedures, and typical results are included. Accuracy and precision of results are both on the order of two percent. (JN)

  12. Exploring the consumer decision journey and online shopping experience through an emotional perspective

    OpenAIRE

    Passanisi, Joseph

    2016-01-01

    Traditional consumer decision-making models have long used quantitative research to address a link between emotional and rational behavior. However, little qualitative research has been conducted in the area of online shopping as an end-to-end experience. This study aims to provide a detailed phenomenological account of consumers’ online shopping experience and to extend McKinsey & Company’s consumer decision journey model from an emotional perspective. Six semi-structured interviews and a focu...

  13. Evaluating Quantitative and Qualitative Traits of Promising Potato Clones and Commercial Cultivars Using the GGE Bi-plot and AMMI Models

    Directory of Open Access Journals (Sweden)

    D HassanPanah

    2014-07-01

    To evaluate the quantitative and qualitative characteristics and the stability of marketable tuber yield of 13 promising potato clones, along with three commercial cultivars (Agria, Marfona and Lady Rosetta) as checks, an experiment based on a randomized complete block design with four replications was carried out at the Agricultural and Natural Resources Research Station of Ardabil during 2011 and 2012. During the growing period and after harvest, attributes such as number of main stems per plant, plant height, tuber number and weight per plant, total and marketable tuber yield, dry matter percentage, baking type, hollow heart, tuber inner ring and discoloration of raw tuber flesh after 24 hours were measured. Combined ANOVA for the quantitative traits showed that there were significant differences among the promising clones in total and marketable tuber yield, tuber number and weight per plant, plant height, mean tuber weight, number of main stems per plant and dry matter percentage, and in their interactions with year for total and marketable tuber yield and tuber number and weight per plant. The clones 396151-7, 397008-5, 397015-8, 397008-2 and 994001-4 were found to have higher total and marketable tuber yield, tuber number and weight per plant and mean tuber weight. These clones produced high and mid-uniform tubers, yellow skin color, yellow and white flesh color, oval-round and round tuber shape, mid and shallow eyes, no hollow heart, tuber inner crack or tuber inner ring, mid-late maturity and mid to high dry matter percentage as compared with the controls and other clones. In this experiment, the GGE Bi-plot and AMMI models were found to be proper methods for selecting 397008-2, 397008-5 and 994001-4 as high-yielding, stable and marketable clones.
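
    The AMMI part of this analysis has a compact numerical core: subtract the additive genotype and environment main effects from the two-way yield table, then take the SVD of the interaction residuals; genotypes with high mean yield and near-zero first-axis (IPCA1) scores are considered high-yielding and stable. The sketch below uses simulated yields, not the trial data.

```python
import numpy as np

# Minimal AMMI sketch on a hypothetical genotype-by-environment yield table
rng = np.random.default_rng(1)
Y = rng.normal(30, 3, size=(16, 2))            # 16 genotypes x 2 years (illustrative)

grand = Y.mean()
g_eff = Y.mean(axis=1, keepdims=True) - grand  # genotype main effects
e_eff = Y.mean(axis=0, keepdims=True) - grand  # environment main effects
GE = Y - grand - g_eff - e_eff                 # interaction residuals

U, s, Vt = np.linalg.svd(GE, full_matrices=False)
ipca1_scores = U[:, 0] * np.sqrt(s[0])         # genotype IPCA1 scores

# Genotypes with high mean yield and near-zero IPCA1 are high and stable
stable = np.argsort(np.abs(ipca1_scores))[:3]
print(stable, Y.mean(axis=1)[stable])
```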

  14. CFD Analysis of a Slug Mixing Experiment Conducted on a VVER-1000 Model

    Directory of Open Access Journals (Sweden)

    F. Moretti

    2009-01-01

    A commercial CFD code was applied, for validation purposes, to the simulation of a slug mixing experiment carried out at the OKB “Gidropress” scaled facility in the framework of EC TACIS project R2.02/02: “Development of safety analysis capabilities for VVER-1000 transients involving spatial variations of coolant properties (temperature or boron concentration) at core inlet.” The experimental model reproduces a VVER-1000 nuclear reactor and is aimed at investigating the in-vessel mixing phenomena. The addressed experiment involves the start-up of one of the four reactor coolant pumps (the other three remaining idle) and the presence of a tracer slug in the starting loop, which is thus transported to the reactor pressure vessel where it mixes with the clear water. Such conditions may occur in a boron dilution scenario, hence the relevance of the addressed phenomena for nuclear reactor safety. Both pretest and posttest CFD simulations of the mentioned experiment were performed, which differ in the definition of the boundary conditions (based either on nominal or on measured quantities, respectively). The numerical results are qualitatively and quantitatively analyzed and compared against the measured data in terms of the space and time tracer distribution at the core inlet. The improvement of the results due to the optimization of the boundary conditions is evidenced, and a quantification of the simulation accuracy is proposed.

  15. Sexual Harassment Prevention Initiatives: Quantitative and Qualitative Approaches

    Science.gov (United States)

    2010-10-28

  16. Quantitative structure-activation barrier relationship modeling for Diels-Alder ligations utilizing quantum chemical structural descriptors.

    Science.gov (United States)

    Nandi, Sisir; Monesi, Alessandro; Drgan, Viktor; Merzel, Franci; Novič, Marjana

    2013-10-30

    In the present study, we show the correlation of quantum chemical structural descriptors with the activation barriers of Diels-Alder ligations. A set of 72 non-catalysed Diels-Alder reactions was subjected to quantitative structure-activation barrier relationship (QSABR) modelling under the framework of theoretical quantum chemical descriptors calculated solely from the structures of the diene and dienophile reactants. Experimental activation barrier data were obtained from the literature. Descriptors were computed with Hartree-Fock theory using the 6-31G(d) basis set as implemented in the Gaussian 09 software. Variable selection and model development were carried out by stepwise multiple linear regression methodology. Predictive performance of the QSABR model was assessed by the training and test set concept and by calculating the leave-one-out cross-validated Q2 and predictive R2 values. The QSABR model can explain and predict 86.5% and 80% of the variances, respectively, in the activation energy barrier training data. Alternatively, a neural network model based on back propagation of errors was developed to assess the nonlinearity of the sought correlations between theoretical descriptors and experimental reaction barriers. A reasonable predictability for the activation barrier of the test set reactions was obtained, which enabled an exploration and interpretation of the significant variables responsible for the Diels-Alder interaction between dienes and dienophiles. Thus, studies in the direction of QSABR modelling that provide efficient and fast prediction of activation barriers of Diels-Alder reactions turn out to be a meaningful alternative to transition state theory based computation.
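
    The regression-plus-validation core of such a QSABR study can be sketched as follows; the descriptor matrix here is a random placeholder standing in for the quantum chemical descriptors, and the leave-one-out Q^2 is computed as described in the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Placeholder data: 72 reactions, 5 selected descriptors, synthetic barriers
rng = np.random.default_rng(0)
X = rng.normal(size=(72, 5))
y = X @ np.array([2.0, -1.5, 0.7, 0.0, 0.3]) + rng.normal(scale=0.5, size=72)

# Multiple linear regression with leave-one-out cross-validated Q^2
model = LinearRegression()
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
r2 = model.fit(X, y).score(X, y)
print(f"R^2 = {r2:.3f}, LOO Q^2 = {q2:.3f}")
```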

  17. Quantitative precipitation estimation based on high-resolution numerical weather prediction and data assimilation with WRF – a performance test

    Directory of Open Access Journals (Sweden)

    Hans-Stefan Bauer

    2015-04-01

    Quantitative precipitation estimation and forecasting (QPE and QPF are among the most challenging tasks in atmospheric sciences. In this work, QPE based on numerical modelling and data assimilation is investigated. Key components are the Weather Research and Forecasting (WRF model in combination with its 3D variational assimilation scheme, applied on the convection-permitting scale with sophisticated model physics over central Europe. The system is operated in a 1-hour rapid update cycle and processes a large set of in situ observations, data from French radar systems, the European GPS network and satellite sensors. Additionally, a free forecast driven by the ECMWF operational analysis is included as a reference run representing current operational precipitation forecasting. The verification is done both qualitatively and quantitatively by comparisons of reflectivity, accumulated precipitation fields and derived verification scores for a complex synoptic situation that developed on 26 and 27 September 2012. The investigation shows that even the downscaling from ECMWF represents the synoptic situation reasonably well. However, significant improvements are seen in the results of the WRF QPE setup, especially when the French radar data are assimilated. The frontal structure is more defined and the timing of the frontal movement is improved compared with observations. Even mesoscale band-like precipitation structures on the rear side of the cold front are reproduced, as seen by radar. The improvement in performance is also confirmed by a quantitative comparison of the 24-hourly accumulated precipitation over Germany. The mean correlation of the model simulations with observations improved from 0.2 in the downscaling experiment and 0.29 in the assimilation experiment without radar data to 0.56 in the WRF QPE experiment including the assimilation of French radar data.

  18. Utilization of high-accuracy FTICR-MS data in protein quantitation experiments

    Czech Academy of Sciences Publication Activity Database

    Strohalm, Martin; Novák, Petr; Pompach, Petr; Man, Petr; Kavan, Daniel; Witt, M.; Džubák, P.; Hajdúch, M.; Havlíček, Vladimír

    2009-01-01

    Roč. 44, č. 11 (2009), s. 1565-1570 ISSN 1076-5174 R&D Projects: GA MŠk LC07017 Institutional research plan: CEZ:AV0Z50200510 Keywords : mass spectrometry * protein quantitation * workflow Subject RIV: EE - Microbiology, Virology Impact factor: 3.411, year: 2009

  19. Modeling experiments using quantum and Kolmogorov probability

    International Nuclear Information System (INIS)

    Hess, Karl

    2008-01-01

    Criteria are presented that permit a straightforward partition of experiments into sets that can be modeled using both quantum probability and the classical probability framework of Kolmogorov. These new criteria concentrate on the operational aspects of the experiments and lead beyond the commonly appreciated partition by relating experiments to commuting and non-commuting quantum operators as well as non-entangled and entangled wavefunctions. In other words the space of experiments that can be understood using classical probability is larger than usually assumed. This knowledge provides advantages for areas such as nanoscience and engineering or quantum computation.

  20. Statistical methods for quantitative mass spectrometry proteomic experiments with labeling

    Directory of Open Access Journals (Sweden)

    Oberg Ann L

    2012-11-01

    Mass Spectrometry utilizing labeling allows multiple specimens to be subjected to mass spectrometry simultaneously. As a result, between-experiment variability is reduced. Here we describe use of fundamental concepts of statistical experimental design in the labeling framework in order to minimize variability and avoid biases. We demonstrate how to export data in the format that is most efficient for statistical analysis. We demonstrate how to assess the need for normalization, perform normalization, and check whether it worked. We describe how to build a model explaining the observed values and test for differential protein abundance along with descriptive statistics and measures of reliability of the findings. Concepts are illustrated through the use of three case studies utilizing the iTRAQ 4-plex labeling protocol.

  1. Statistical methods for quantitative mass spectrometry proteomic experiments with labeling.

    Science.gov (United States)

    Oberg, Ann L; Mahoney, Douglas W

    2012-01-01

    Mass Spectrometry utilizing labeling allows multiple specimens to be subjected to mass spectrometry simultaneously. As a result, between-experiment variability is reduced. Here we describe use of fundamental concepts of statistical experimental design in the labeling framework in order to minimize variability and avoid biases. We demonstrate how to export data in the format that is most efficient for statistical analysis. We demonstrate how to assess the need for normalization, perform normalization, and check whether it worked. We describe how to build a model explaining the observed values and test for differential protein abundance along with descriptive statistics and measures of reliability of the findings. Concepts are illustrated through the use of three case studies utilizing the iTRAQ 4-plex labeling protocol.
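
    One of the steps described above, assessing the need for normalization and checking that it worked, can be sketched for log-scale reporter intensities as follows; the channel count and intensity values are invented.

```python
import numpy as np

# Simulated log-intensities for 500 proteins across 4 iTRAQ reporter channels,
# with deliberately unequal channel means to mimic labeling bias
rng = np.random.default_rng(2)
log_int = rng.normal(loc=[14.0, 14.3, 13.8, 14.1], scale=1.0, size=(500, 4))

channel_medians = np.median(log_int, axis=0)
print("before:", channel_medians)                # unequal medians suggest bias

# Median normalization: align all channel medians to their common mean
normalized = log_int - channel_medians + channel_medians.mean()
print("after: ", np.median(normalized, axis=0))  # check that it worked
```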

  2. Strategies for MCMC computation in quantitative genetics

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Ibanez, Noelia; Sorensen, Daniel

    2006-01-01

    Given observations of a trait and a pedigree for a group of animals, the basic model in quantitative genetics is a linear mixed model with genetic random effects. The correlation matrix of the genetic random effects is determined by the pedigree and is typically very high-dimensional but with a sp...

  3. Predictive modeling of liquid-sodium thermal–hydraulics experiments and computations

    International Nuclear Information System (INIS)

    Arslan, Erkan; Cacuci, Dan G.

    2014-01-01

    Highlights: • We applied the predictive modeling method of Cacuci and Ionescu-Bujor (2010). • We assimilated data from sodium flow experiments. • We used computational fluid dynamics simulations of sodium experiments. • The predictive modeling method greatly reduced uncertainties in predicted results. Abstract: This work applies the predictive modeling procedure formulated by Cacuci and Ionescu-Bujor (2010) to assimilate data from liquid-sodium thermal–hydraulics experiments in order to reduce systematically the uncertainties in the predictions of computational fluid dynamics (CFD) simulations. The predicted CFD results for the best-estimate model parameters and the results describing sodium-flow velocities and temperature distributions are shown to be significantly more precise than the original computations and experiments, in that the predicted uncertainties for the best-estimate results and model parameters are significantly smaller than both the originally computed and the experimental uncertainties

  4. Experiments and Modeling of G-Jitter Fluid Mechanics

    Science.gov (United States)

    Leslie, F. W.; Ramachandran, N.; Whitaker, Ann F. (Technical Monitor)

    2002-01-01

    While there is a general understanding of the acceleration environment onboard an orbiting spacecraft, past research efforts in the modeling and analysis area have still not produced a general theory that predicts the effects of multi-spectral periodic accelerations on a general class of experiments, nor have they produced scaling laws that a prospective experimenter can use to assess how an experiment might be affected by this acceleration environment. Furthermore, there are no actual flight experimental data that correlate heat or mass transport with measurements of the periodic acceleration environment. The present investigation approaches this problem with carefully conducted terrestrial experiments and rigorous numerical modeling for better understanding the effects of residual gravity and g-jitter on experiments. The approach is to use magnetic fluids that respond to an imposed magnetic field gradient in much the same way as fluid density responds to a gravitational field. By utilizing a programmable power source in conjunction with an electromagnet, both static and dynamic body forces can be simulated in lab experiments. The paper provides an overview of the technique and includes recent results from the experiments.

  5. Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome.

    Science.gov (United States)

    Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina

    2018-01-01

    The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer, and (iii) risk assessment for breast cancer.
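
    The two-level design described above can be caricatured in a few lines: level one extracts a panel of features per image, level two feeds them to a machine learning model. The sketch below is a toy stand-in (simple histogram features, random images), not CaPTk's API or feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Level 1: extract simple intensity-histogram features from each image
def histogram_features(img):
    return np.concatenate([np.percentile(img, [10, 50, 90]),
                           [img.mean(), img.std()]])

# Toy "images" from two hypothetical outcome groups
rng = np.random.default_rng(3)
images = [rng.normal(loc=c, size=(64, 64)) for c in rng.choice([0.0, 0.5], 80)]
labels = [0 if im.mean() < 0.25 else 1 for im in images]

# Level 2: feed the feature panel into a machine learning model
X = np.array([histogram_features(im) for im in images])
print(cross_val_score(RandomForestClassifier(random_state=0), X, labels, cv=5))
```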

  6. Cooling tower plume - model and experiment

    Science.gov (United States)

    Cizek, Jan; Gemperle, Jiri; Strob, Miroslav; Nozicka, Jiri

    The paper discusses the description of the simple model of the, so-called, steam plume, which in many cases forms during the operation of the evaporative cooling systems of the power plants, or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in case of the free jet stream. In the conclusion of the paper, a simple experiment is presented through which the results of the designed model shall be validated in the subsequent period.

  7. Cryogenic system operating experience review for fusion applications

    International Nuclear Information System (INIS)

    Cadwallader, L.C.

    1992-01-01

    This report presents a review of cryogenic system operating experiences, from particle accelerator, fusion experiment, space research, and other applications. Safety relevant operating experiences and accident information are discussed. Quantitative order-of-magnitude estimates of cryogenic component failure rates and accident initiating event frequencies are presented for use in risk assessment, reliability, and availability studies. Safety concerns with cryogenic systems are discussed, including ozone formation, effects of spills, and modeling spill behavior. This information should be useful to fusion system designers and safety analysts, such as the team working on the International Thermonuclear Experimental Reactor design

  8. Complementary social science? Quali-quantitative experiments in a Big Data world

    Directory of Open Access Journals (Sweden)

    Anders Blok

    2014-08-01

    Full Text Available The rise of Big Data in the social realm poses significant questions at the intersection of science, technology, and society, including in terms of how new large-scale social databases are currently changing the methods, epistemologies, and politics of social science. In this commentary, we address such epochal (“large-scale” questions by way of a (situated experiment: at the Danish Technical University in Copenhagen, an interdisciplinary group of computer scientists, physicists, economists, sociologists, and anthropologists (including the authors is setting up a large-scale data infrastructure, meant to continually record the digital traces of social relations among an entire freshman class of students ( N  > 1000. At the same time, fieldwork is carried out on friendship (and other relations amongst the same group of students. On this basis, the question we pose is the following: what kind of knowledge is obtained on this social micro-cosmos via the Big (computational, quantitative and Small (embodied, qualitative Data, respectively? How do the two relate? Invoking Bohr’s principle of complementarity as analogy, we hypothesize that social relations, as objects of knowledge, depend crucially on the type of measurement device deployed. At the same time, however, we also expect new interferences and polyphonies to arise at the intersection of Big and Small Data, provided that these are, so to speak, mixed with care. These questions, we stress, are important not only for the future of social science methods but also for the type of societal (self-knowledge that may be expected from new large-scale social databases.

  9. Need for collection of quantitative distribution data for dosimetry and metabolic modeling

    International Nuclear Information System (INIS)

    Lathrop, K.A.

    1976-01-01

    Problems in radiation dose distribution studies in humans are discussed. Data show that the effective half-times for 7Be and 75Se in the mouse, rat, monkey, dog, and human show no correlation with weight, body surface, or any other readily apparent factor that could be used to equate nonhuman and human data. Another problem sometimes encountered in attempting to extrapolate animal data to humans involves equivalent doses of the radiopharmaceutical. A usual human dose for a radiopharmaceutical is 1 ml, or 0.017 mg/kg. The same solution injected into a mouse in a convenient volume of 0.1 ml results in a dose of 4 ml/kg, or 240 times that received by the human. The effect on whole-body retention produced by a dose difference of similar magnitude for selenium in the rat shows that retention is at least twice as great with the smaller amount. With the development of methods for the collection of data throughout the body representing the fractional distribution of radioactivity versus time, not only can more realistic dose estimates be made, but the tools will also be provided for the study of physiological and biochemical interrelationships in the intact subject, from which compartmental models of diagnostic significance may be built. The unique requirement for quantitative biologic data needed for the calculation of radiation absorbed doses is the same as the unique scientific contribution that nuclear medicine can make: the quantitative in vivo study of physiologic and biochemical processes. The technique involved is not the same as quantitation of a radionuclide image, but is a step beyond

  10. Modeling of modification experiments involving neutral-gas release

    International Nuclear Information System (INIS)

    Bernhardt, P.A.

    1983-01-01

    Many experiments involve the injection of neutral gases into the upper atmosphere. Examples are critical velocity experiments, MHD wave generation, ionospheric hole production, plasma striation formation, and ion tracing. Many of these experiments are discussed in other sessions of the Active Experiments Conference. This paper limits its discussion to: (1) the modeling of the neutral gas dynamics after injection, (2) subsequent formation of ionosphere holes, and (3) use of such holes as experimental tools

  11. Quantitative comparison of performance analysis techniques for modular and generic network-on-chip

    Directory of Open Access Journals (Sweden)

    M. C. Neuenhahn

    2009-05-01

    NoC-specific parameters have a huge impact on the performance and implementation costs of NoCs. Hence, performance and cost evaluation of these parameter-dependent NoCs is crucial at different design stages, but the requirements on performance analysis differ from stage to stage. In an early design stage, an analysis technique of reduced complexity and limited accuracy can be applied, whereas subsequent design stages require more accurate techniques.

    In this work, several performance analysis techniques at different levels of abstraction are presented and quantitatively compared. These techniques include a static performance analysis using timing models, a Colored Petri Net-based approach, VHDL- and SystemC-based simulators, and an FPGA-based emulator. By conducting NoC experiments with sizes from 9 to 36 functional units and various traffic patterns, the characteristics of these techniques concerning accuracy, complexity, and effort are derived.

    The performance analysis techniques discussed here are quantitatively evaluated and finally assigned to the appropriate design stages in an automated NoC design flow.

  12. Deterministic quantitative risk assessment development

    Energy Technology Data Exchange (ETDEWEB)

    Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)

    2009-07-01

    Current risk assessment practice in pipeline integrity management is to use a semi-quantitative, index-based or model-based methodology. This approach has been found to be very flexible and to provide useful results for identifying high-risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction, with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability-based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point-value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability, data quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies, as required. (author)

  13. Cooling tower plume - model and experiment

    Directory of Open Access Journals (Sweden)

    Cizek Jan

    2017-01-01

    The paper discusses the description of the simple model of the, so-called, steam plume, which in many cases forms during the operation of the evaporative cooling systems of the power plants, or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in case of the free jet stream. In the conclusion of the paper, a simple experiment is presented through which the results of the designed model shall be validated in the subsequent period.

  14. Modeling Cancer Metastasis using Global, Quantitative and Integrative Network Biology

    DEFF Research Database (Denmark)

    Schoof, Erwin; Erler, Janine

    ...understanding of molecular processes which are fundamental to tumorigenesis. In Article 1, we propose a novel framework for how cancer mutations can be studied by taking into account their effect at the protein network level. In Article 2, we demonstrate how global, quantitative data on phosphorylation dynamics can be generated using MS, and how this can be modeled using a computational framework for deciphering kinase-substrate dynamics. This framework is described in depth in Article 3, and covers the design of KinomeXplorer, which allows the prediction of kinases responsible for modulating observed phosphorylation dynamics in a given biological sample. In Chapter III, we move into Integrative Network Biology, where, by combining two fundamental technologies (MS & NGS), we can obtain more in-depth insights into the links between cellular phenotype and genotype. Article 4 describes the proof...

  15. An Evaluation Model of Quantitative and Qualitative Fuzzy Multi-Criteria Decision-Making Approach for Location Selection of Transshipment Ports

    Directory of Open Access Journals (Sweden)

    Ji-Feng Ding

    2013-01-01

    The role of container logistics centres as home bases for merchandise transportation has become increasingly important. Container carriers need to select a suitable centre location of transshipment port to meet the requirements of container shipping logistics. In the light of this, the main purpose of this paper is to develop a fuzzy multi-criteria decision-making (MCDM) model to evaluate the best selection of transshipment ports for container carriers. At first, some concepts and methods used to develop the proposed model are briefly introduced. The performance values of quantitative and qualitative subcriteria are discussed to evaluate the fuzzy ratings. Then, the ideal and anti-ideal concepts and the modified distance measure method are used in the proposed model. Finally, a step-by-step example is illustrated to study the computational process of the quantitative and qualitative fuzzy MCDM model. The proposed approach has successfully accomplished our goal. In addition, the proposed fuzzy MCDM model can be empirically employed to select the best location of transshipment port for container carriers in the future study.
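
    The ideal/anti-ideal ranking idea at the heart of such models can be sketched, in crisp (non-fuzzy) form, as a TOPSIS-style closeness computation; the ports, criteria scores, and weights below are invented.

```python
import numpy as np

# Placeholder scores for three hypothetical candidate transshipment ports
scores = np.array([[0.7, 0.5, 0.9],     # port A on three criteria
                   [0.6, 0.8, 0.7],     # port B
                   [0.9, 0.4, 0.6]])    # port C
w = np.array([0.5, 0.3, 0.2])           # criteria weights

V = scores * w                          # weighted decision matrix
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)     # higher = closer to the ideal port
print(closeness.argsort()[::-1])        # ranking of candidate locations
```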

  16. Positional stability experiment and analysis of elongated plasmas in Doublet III

    International Nuclear Information System (INIS)

    Yokomizo, Hideaki

    1984-04-01

    Control systems of the plasma position and shape on Doublet III are explained and experimental results of vertical stability of elongated plasmas are reviewed. Observed results of the vertical instability are qualitatively compared with the predictions from the simplified model and quantitatively compared with the numerical calculations based on a more realistic model. Experiments are in reasonable agreement with the theoretical analyses. (author)

  17. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments.

    Science.gov (United States)

    Van Valen, David A; Kudo, Takamasa; Lane, Keara M; Macklin, Derek N; Quach, Nicolas T; DeFelice, Mialy M; Maayan, Inbal; Tanouchi, Yu; Ashley, Euan A; Covert, Markus W

    2016-11-01

    Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that requires less curation time, generalizes to a multiplicity of cell types, from bacteria to mammalian cells, and expands live-cell imaging capabilities to include multi-cell-type systems.
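
    As an illustration of the general approach (not the authors' architecture), a toy fully convolutional network for per-pixel cell/background classification might look like the following in PyTorch, trained here on stand-in image and mask tensors.

```python
import torch
import torch.nn as nn

# Toy fully convolutional network for per-pixel cell/background classification,
# in the spirit of the paper but not the authors' architecture.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),            # logits: "pixel is cell interior"
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(4, 1, 64, 64)                   # stand-in phase-contrast crops
masks = torch.randint(0, 2, (4, 1, 64, 64)).float()  # stand-in curated masks

opt.zero_grad()
loss = loss_fn(model(images), masks)  # pixelwise binary cross-entropy
loss.backward()
opt.step()
print(float(loss))
```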

  18. Development and validation of open-source software for DNA mixture interpretation based on a quantitative continuous model.

    Science.gov (United States)

    Manabe, Sho; Morimoto, Chie; Hamano, Yuya; Fujimoto, Shuntaro; Tamaki, Keiji

    2017-01-01

    In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and the evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed new open-source software, "Kongoh", for interpreting DNA mixtures based on a quantitative continuous model. The model uses quantitative information on peak heights in the DNA profile and considers the effect of artifacts and allelic drop-out. Using this software, the likelihoods of 1-4 persons' contributions are calculated, and the optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI's contribution in true contributors and non-contributors by using 2-4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI's contribution even for small amounts or degraded DNA samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than other software based on the quantitative continuous model. Therefore, Kongoh is useful in accurately interpreting DNA evidence such as mixtures and small amounts or degraded DNA samples.
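
    The LR logic can be illustrated with a drastically simplified, discrete single-locus model (hypothetical allele frequencies; Kongoh's continuous peak-height model is far richer, since it also uses peak heights, artifacts, and drop-out probabilities).

```python
from itertools import product

# Single-locus toy model: P(evidence | hypothesis) sums over unknown genotypes.
freqs = {"A": 0.3, "B": 0.5, "C": 0.2}   # hypothetical allele frequencies
observed = {"A", "B"}                    # alleles seen in the stain
poi = ("A", "B")                         # person of interest's genotype

def genotypes():
    for g in product(list(freqs), repeat=2):
        yield g, freqs[g[0]] * freqs[g[1]]   # ordered genotype and its probability

def p_evidence(n_unknown, fixed=()):
    """P(observed allele set | n_unknown random contributors + fixed genotypes)."""
    gt, total = list(genotypes()), 0.0
    for combo in product(gt, repeat=n_unknown):
        p = 1.0
        seen = {a for g in fixed for a in g}
        for g, pg in combo:
            p *= pg
            seen |= set(g)
        if seen == observed:             # evidence explained exactly
            total += p
    return total

# Hp: POI plus one unknown contributor; Hd: two unknown contributors.
lr = p_evidence(1, fixed=(poi,)) / p_evidence(2)
print("LR =", lr)
```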

  19. Development and validation of open-source software for DNA mixture interpretation based on a quantitative continuous model.

    Directory of Open Access Journals (Sweden)

    Sho Manabe

    Full Text Available In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and the evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed new open-source software, "Kongoh", for interpreting DNA mixtures based on a quantitative continuous model. The model uses quantitative information on peak heights in the DNA profile and considers the effect of artifacts and allelic drop-out. Using this software, the likelihoods of 1-4 persons' contributions are calculated, and the optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI's contribution in true contributors and non-contributors by using 2-4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI's contribution even for small amounts or degraded DNA samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than other software based on the quantitative continuous model. Therefore, Kongoh is useful in accurately interpreting DNA evidence such as mixtures and small amounts or degraded DNA samples.

  20. Quantitative assessment of changes in landslide risk using a regional scale run-out model

    Science.gov (United States)

    Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone

    2015-04-01

    The risk of landslide hazard continuously changes in time and space and is rarely a static or constant phenomenon in an affected area. However, one of the main challenges of quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires the modeling of the landslide intensity (e.g. flow depth, velocities or impact pressures) affecting the elements at risk. Such a quantitative approach is often lacking in medium to regional scale studies in the scientific literature or is left out altogether. In this research we modelled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the North Eastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional scale model. In order to determine debris flow intensities, we used a linear relationship that was found between back-calibrated, physically based Flo-2D simulations (local scale models of five debris flows from 2003) and the probability values of the Flow-R software. This gave us the possibility to assign flow depth to a total of 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature and one curve specifically for our case study area were used to determine the damage for different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) with building area and number of floors
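
    The final aggregation in this kind of analysis reduces to probability times vulnerability times value per intensity class. A schematic per-building computation, with hypothetical numbers and a placeholder vulnerability curve (not the paper's fitted curves), is sketched below.

```python
import numpy as np

# Schematic per-building debris-flow risk: annual probability of each flow-depth
# class times a depth-dependent vulnerability times the building value.
p_class = np.array([1e-2, 5e-3, 1e-3])   # annual probability per intensity class
depth_m = np.array([0.5, 1.5, 3.0])      # representative flow depth per class

def vulnerability(depth):
    # Placeholder vulnerability curve: damage fraction saturating with depth.
    return 1.0 - np.exp(-depth / 1.5)

building_value = 250_000  # EUR, e.g. land-use value x area x number of floors
annual_risk = float(np.sum(p_class * vulnerability(depth_m) * building_value))
print(f"expected annual loss: {annual_risk:.0f} EUR")
```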

  1. Quantitative structure–activity relationship model for amino acids as corrosion inhibitors based on the support vector machine and molecular design

    International Nuclear Information System (INIS)

    Zhao, Hongxia; Zhang, Xiuhui; Ji, Lin; Hu, Haixiang; Li, Qianshu

    2014-01-01

    Highlights: • A nonlinear quantitative structure–activity relationship (QSAR) model was built with the support vector machine. • Descriptors for the QSAR model were selected by principal component analysis. • Binding energy was taken as one of the descriptors for the QSAR model. • The acidic solution and protonation of the inhibitor were considered. - Abstract: The inhibition performance of nineteen amino acids was studied by theoretical methods. The effects of the acidic solution and of inhibitor protonation were considered in molecular dynamics simulations, and the results indicated that the protonated amino-group is not adsorbed on the Fe (1 1 0) surface. Additionally, a nonlinear quantitative structure–activity relationship (QSAR) model was built with the support vector machine. The correlation coefficient was 0.97 and the root mean square error between predicted and experimental inhibition efficiencies (%) was 1.48. Furthermore, five new amino acids were theoretically designed and their inhibition efficiencies were predicted by the built QSAR model
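
    A minimal sketch of this kind of workflow with synthetic data is shown below (scikit-learn; the descriptor count, PCA dimensionality, and kernel settings are illustrative, not the paper's).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in: 19 amino acids x 6 descriptors (one could be binding energy),
# regressed on inhibition efficiency (%) with a nonlinear support vector machine.
rng = np.random.default_rng(0)
X = rng.normal(size=(19, 6))
y = 60 + 10 * X[:, 0] + rng.normal(0, 1.5, 19)   # synthetic efficiencies

model = make_pipeline(StandardScaler(), PCA(n_components=3), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
print("training R^2:", model.score(X, y))

X_new = rng.normal(size=(5, 6))                  # five newly designed candidates
print("predicted efficiencies:", model.predict(X_new))
```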

  2. Dynamic inundation mapping of Hurricane Harvey flooding in the Houston metro area using hyper-resolution modeling and quantitative image reanalysis

    Science.gov (United States)

    Noh, S. J.; Lee, J. H.; Lee, S.; Zhang, Y.; Seo, D. J.

    2017-12-01

    Hurricane Harvey was one of the most extreme weather events in Texas history and left significant damage in the Houston and adjoining coastal areas. To better understand the relative impact on urban flooding of the extreme amount and spatial extent of rainfall, unique geography, land use and storm surge, high-resolution water modeling is necessary such that natural and man-made components are fully resolved. In this presentation, we reconstruct the spatiotemporal evolution of inundation during Hurricane Harvey using hyper-resolution modeling and quantitative image reanalysis. The two-dimensional urban flood model used is based on the dynamic wave approximation and 10 m-resolution terrain data, and is forced by radar-based multisensor quantitative precipitation estimates. The model domain includes Buffalo, Brays, Greens and White Oak Bayous in Houston. The model is simulated using hybrid parallel computing. To evaluate the dynamic inundation mapping, we combine various qualitative crowdsourced images and video footage with LiDAR-based terrain data.
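
    A drastically simplified 1D diffusive-wave relative of such a 2D dynamic-wave model can convey the numerical structure; everything below (grid, roughness, rainfall, time step) is illustrative and not taken from the presentation.

```python
import numpy as np

# 1D diffusive-wave toy update for overland flow depth h on a uniform grid.
n_manning, dx, dt = 0.05, 10.0, 0.5   # roughness, 10 m cells, time step (s)
z = np.linspace(5.0, 0.0, 200)        # bed elevation on a uniform slope
h = np.zeros(200)                     # water depth (m)
rain = 25e-3 / 3600.0                 # 25 mm/h rainfall, in m/s

for _ in range(2000):                 # simulate ~1000 s of rainfall
    eta = z + h                       # water surface elevation
    slope = -np.diff(eta) / dx        # positive where surface descends downstream
    h_face = np.maximum(0.5 * (h[:-1] + h[1:]), 1e-6)
    q = h_face ** (5.0 / 3.0) * np.sign(slope) * np.sqrt(np.abs(slope)) / n_manning
    dh = np.zeros_like(h)
    dh[:-1] -= q * dt / dx            # flux leaves the upstream cell
    dh[1:] += q * dt / dx             # and enters the downstream cell
    h = np.maximum(h + dh + rain * dt, 0.0)

print("max depth (m):", h.max())
```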

  3. Topographic evolution of sandbars: Flume experiment and computational modeling

    Science.gov (United States)

    Kinzel, Paul J.; Nelson, Jonathan M.; McDonald, Richard R.; Logan, Brandy L.

    2010-01-01

    Measurements of sandbar formation and evolution were carried out in a laboratory flume, and the topographic characteristics of these barforms were compared to predictions from a computational flow and sediment transport model with bed evolution. The flume experiment produced sandbars with an approximate mode of 2, whereas numerical simulations produced a bed morphology better approximated as alternate bars of mode 1. In addition, bar formation occurred more rapidly in the laboratory channel than in the model channel. This paper focuses on a steady-flow laboratory experiment without upstream sediment supply. Future experiments will examine the effects of unsteady flow and sediment supply and the use of numerical models to simulate the response of barform topography to these influences.

  4. Defect evolution in cosmology and condensed matter quantitative analysis with the velocity-dependent one-scale model

    CERN Document Server

    Martins, C J A P

    2016-01-01

    This book sheds new light on topological defects in widely differing systems, using the velocity-dependent one-scale model to better understand their evolution. Topological defects – cosmic strings, monopoles, domain walls or others – necessarily form at cosmological (and condensed matter) phase transitions. If they are stable and long-lived they will be fossil relics of higher-energy physics. Understanding their behaviour and consequences is a key part of any serious attempt to understand the universe, and this requires modelling their evolution. The velocity-dependent one-scale model is the only fully quantitative model of defect network evolution, and the canonical model in the field. This book provides a review of the model, explaining its physical content and describing its broad range of applicability.
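
    For readers who want to experiment, what we believe to be the standard form of the VOS equations can be integrated directly; the sketch below assumes a radiation era (H = 1/2t) and purely illustrative parameter values, and shows the characteristic linear scaling L proportional to t.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the standard VOS equations for a string network in a radiation era,
# where a ~ t^(1/2) and H = 1/(2t). Parameter values are illustrative only.
c_tilde, k = 0.23, 0.7            # loop-chopping efficiency, curvature parameter

def vos(t, y):
    L, v = y                       # correlation length and rms velocity
    H = 1.0 / (2.0 * t)
    dL = (1.0 + v**2) * H * L + 0.5 * c_tilde * v
    dv = (1.0 - v**2) * (k / L - 2.0 * H * v)
    return [dL, dv]

sol = solve_ivp(vos, [1.0, 1e4], [0.1, 0.5], rtol=1e-8)
print("L/t ->", sol.y[0, -1] / sol.t[-1])   # linear scaling: L/t approaches a constant
print("v   ->", sol.y[1, -1])               # velocity approaches a constant
```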

  5. Origins of Discrepancies Between Kinetic Rate Law Theory and Experiments in the Na2O-B2O3-SiO2 System

    International Nuclear Information System (INIS)

    McGrail, B. PETER; Icenhower, Jonathan P.; Rodriguez, Elsa A.; McGrail, B.P.; Cragnolino, G.A.

    2002-01-01

    Discrepancies between classical kinetic rate law theory and experiment were quantitatively assessed and found to correlate with macromolecular amorphous separation in the sodium borosilicate glass system. A quantitative reinterpretation of static corrosion data and new SPFT data shows that a recently advanced protective surface layer theory fails to describe the observed dissolution behavior of simple and complex silicate glasses under carefully controlled experimental conditions. The hypothesis is shown to be self-inconsistent, in contrast to a phase separation model that is in quantitative agreement with experiments

  6. Multivariate characterisation and quantitative structure-property relationship modelling of nitroaromatic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Joensson, S. [Man-Technology-Environment Research Centre, Department of Natural Sciences, Orebro University, 701 82 Orebro (Sweden)], E-mail: sofie.jonsson@nat.oru.se; Eriksson, L.A. [Department of Natural Sciences and Orebro Life Science Center, Orebro University, 701 82 Orebro (Sweden); Bavel, B. van [Man-Technology-Environment Research Centre, Department of Natural Sciences, Orebro University, 701 82 Orebro (Sweden)

    2008-07-28

    A multivariate model to characterise nitroaromatics and related compounds based on molecular descriptors was calculated. Descriptors were collected from the literature and through empirical, semi-empirical and density functional theory-based calculations. Principal components were used to describe the distribution of the compounds in a multidimensional space. Four components described 76% of the variation in the dataset. PC1 separated the compounds according to molecular weight, PC2 separated the different isomers, PC3 arranged the compounds according to different functional groups such as nitrobenzoic acids, nitrobenzenes, nitrotoluenes and nitroesters, and PC4 differentiated the compounds containing chlorine from other compounds. Quantitative structure-property relationship models were calculated using partial least squares (PLS) projection to latent structures to predict gas chromatographic (GC) retention times and the distribution between the water phase and air using solid-phase microextraction (SPME). GC retention time was found to depend on the presence of polar amine groups, electronic descriptors including the highest occupied molecular orbital, dipole moments, and the melting point. The model of GC retention time was good, but its precision was not high enough for practical use. An important environmental parameter, the distribution between headspace (air) and the water phase, was measured using SPME. This parameter was mainly dependent on Henry's law constant, vapour pressure, log P, content of hydroxyl groups and atmospheric OH rate constant. The predictive capacity of the model improved substantially when the model was recalculated using these five descriptors only.
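
    A PLS fit of this kind takes only a few lines with scikit-learn; the sketch below uses synthetic data standing in for the five descriptors named above, so the numbers carry no chemical meaning.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic QSPR sketch: 40 compounds x 5 descriptors (stand-ins for Henry's law
# constant, vapour pressure, log P, OH rate constant, hydroxyl content)
# predicting the headspace/water distribution measured by SPME.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.2, 40)

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print("R^2:", pls.score(X, y))
print("first-component loadings:", pls.x_loadings_[:, 0])  # descriptor influence
```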

  7. Quantitative Circulatory Physiology: an integrative mathematical model of human physiology for medical education.

    Science.gov (United States)

    Abram, Sean R; Hodnett, Benjamin L; Summers, Richard L; Coleman, Thomas G; Hester, Robert L

    2007-06-01

    We have developed Quantitative Circulatory Physiology (QCP), a mathematical model of integrative human physiology containing over 4,000 variables of biological interactions. This model provides a teaching environment that mimics clinical problems encountered in the practice of medicine. The model structure is based on documented physiological responses within peer-reviewed literature and serves as a dynamic compendium of physiological knowledge. The model is solved using a desktop, Windows-based program, allowing students to calculate time-dependent solutions and interactively alter over 750 parameters that modify physiological function. The model can be used to understand proposed mechanisms of physiological function and the interactions among physiological variables that may not be otherwise intuitively evident. In addition to open-ended or unstructured simulations, we have developed 30 physiological simulations, including heart failure, anemia, diabetes, and hemorrhage. Additional simulations include 29 patients in which students are challenged to diagnose the pathophysiology based on their understanding of integrative physiology. In summary, QCP allows students to examine, integrate, and understand a host of physiological factors without causing harm to patients. This model is available as a free download for Windows computers at http://physiology.umc.edu/themodelingworkshop.

  8. Understanding Coupled Earth-Surface Processes through Experiments and Models (Invited)

    Science.gov (United States)

    Overeem, I.; Kim, W.

    2013-12-01

    Traditionally, both numerical models and experiments have been purposefully designed to 'isolate' singular components or certain processes of a larger, interconnected mountain-to-deep-ocean source-to-sink (S2S) transport system. Controlling factors driven by processes outside the domain of immediate interest were treated and simplified as input or as boundary conditions. Increasingly, earth surface processes scientists appreciate feedbacks and explore these feedbacks with more dynamically coupled approaches to their experiments and models. Here, we discuss key concepts and recent advances made in coupled modeling and experimental setups. In addition, we emphasize challenges and new frontiers for coupled experiments. Experiments have highlighted the important role of self-organization; river and delta systems do not always need to be forced by external processes to change or develop characteristic morphologies. Similarly, modeling has shown, for example, that intricate networks in tidal deltas are stable because of the interplay between river avulsions and tidal current scouring, with both processes being important to develop and maintain the dendritic networks. Both models and experiments have demonstrated that seemingly stable systems can be perturbed slightly and show dramatic responses. Source-to-sink models were developed for both the Fly River system in Papua New Guinea and the Waipaoa River in New Zealand. These models pointed to the importance of upstream-downstream effects and reinforced our view of the S2S system as a signal transfer and dampening conveyor belt. Coupled modeling showed that deforestation had extreme effects on sediment fluxes draining from the catchment of the Waipaoa River in New Zealand, and that this increase in sediment production rapidly shifted the locus of offshore deposition. The challenge in designing coupled models and experiments is both technological and intellectual. Our community advances to make numerical model coupling more

  9. Disentangling the Complexity of HGF Signaling by Combining Qualitative and Quantitative Modeling.

    Directory of Open Access Journals (Sweden)

    Lorenza A D'Alessandro

    2015-04-01

    Full Text Available Signaling pathways are characterized by crosstalk, feedback and feedforward mechanisms giving rise to highly complex and cell-context specific signaling networks. Dissecting the underlying relations is crucial to predict the impact of targeted perturbations. However, a major challenge in identifying cell-context specific signaling networks is the enormous number of possible interactions. Here, we report a novel hybrid mathematical modeling strategy to systematically unravel hepatocyte growth factor (HGF) stimulated phosphoinositide-3-kinase (PI3K) and mitogen activated protein kinase (MAPK) signaling, which critically contribute to liver regeneration. By combining time-resolved quantitative experimental data generated in primary mouse hepatocytes with interaction graph and ordinary differential equation modeling, we identify and experimentally validate a network structure that represents the experimental data best and indicates specific crosstalk mechanisms. Whereas the identified network is robust against single perturbations, combinatorial inhibition strategies are predicted that result in strong reduction of Akt and ERK activation. Thus, by capitalizing on the advantages of the two modeling approaches, we reduce the high combinatorial complexity and identify cell-context specific signaling networks.

  10. Data from the Hot Serial Cereal Experiment for modeling wheat response to temperature: field experiments and AgMIP-Wheat multi-model simulations

    NARCIS (Netherlands)

    Martre, Pierre; Kimball, Bruce A.; Ottman, Michael J.; Wall, Gerard W.; White, Jeffrey W.; Asseng, Senthold; Ewert, Frank; Cammarano, Davide; Maiorano, Andrea; Aggarwal, Pramod K.; Supit, I.; Wolf, J.

    2018-01-01

    The dataset reported here includes the part of the Hot Serial Cereal (HSC) experiment recently used in the AgMIP-Wheat project to analyze the uncertainty of 30 wheat models and quantify their response to temperature. The HSC experiment was conducted in an open field in a semiarid

  11. Quantitative Evaluation of Performance in Interventional Neuroradiology: An Integrated Curriculum Featuring Theoretical and Practical Challenges.

    Directory of Open Access Journals (Sweden)

    Marielle Ernst

    Full Text Available We sought to develop a standardized curriculum capable of assessing key competencies in Interventional Neuroradiology by the use of models and simulators in an objective, quantitative, and efficient way. In this evaluation we analyzed the associations between the practical experience, theoretical knowledge, and the skills lab performance of interventionalists. We evaluated the endovascular skills of 26 participants of the Advanced Course in Endovascular Interventional Neuroradiology of the European Society of Neuroradiology with a set of three tasks (aneurysm coiling and thrombectomy in a virtual simulator, and placement of an intra-aneurysmal flow disruptor in a flow model). Practical experience was assessed by a survey. Participants completed a written and oral examination to evaluate theoretical knowledge. Bivariate and multivariate analyses were performed. In multivariate analysis, knowledge of materials and techniques in Interventional Neuroradiology was moderately associated with skills in aneurysm coiling and thrombectomy. Experience in mechanical thrombectomy was moderately associated with thrombectomy skills, while age was negatively associated with thrombectomy skills. We found no significant association between age, sex, or work experience and skills in aneurysm coiling. Our study gives an example of how an integrated curriculum for reasonable and cost-effective assessment of key competences of an interventional neuroradiologist could look. In addition to traditional assessment of theoretical knowledge, practical skills are measured by the use of endovascular simulators, yielding objective, quantitative, and constructive data for the evaluation of the current performance status of participants as well as the evolution of their technical competency over time.

  12. Dose response models and a quantitative microbial risk assessment framework for the Mycobacterium avium complex that account for recent developments in molecular biology, taxonomy, and epidemiology.

    Science.gov (United States)

    Hamilton, Kerry A; Weir, Mark H; Haas, Charles N

    2017-02-01

    Mycobacterium avium complex (MAC) is a group of environmentally-transmitted pathogens of great public health importance. This group is known to be harbored, amplified, and selected for more human-virulent characteristics by amoeba species in aquatic biofilms. However, a quantitative microbial risk assessment (QMRA) has not been performed due to the lack of dose response models resulting from significant heterogeneity within even a single species or subspecies of MAC, as well as the range of human susceptibilities to mycobacterial disease. The primary human-relevant species and subspecies responsible for the majority of the human disease burden and present in drinking water, biofilms, and soil are M. avium subsp. hominissuis, M. intracellulare, and M. chimaera. A critical review of the published literature identified important health endpoints, exposure routes, and susceptible populations for MAC risk assessment. In addition, data sets for quantitative dose-response functions were extracted from published in vivo animal dosing experiments. As a result, seven new exponential dose response models for human-relevant species of MAC with endpoints of lung lesions, death, disseminated infection, liver infection, and lymph node lesions are proposed. Although current physical and biochemical tests used in clinical settings do not differentiate between M. avium and M. intracellulare, differentiating between environmental species and subspecies of the MAC can aid in the assessment of health risks and control of MAC sources. A framework is proposed for incorporating the proposed dose response models into susceptible population- and exposure route-specific QMRA models. Copyright © 2016 Elsevier Ltd. All rights reserved.
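
    The exponential dose-response form P = 1 - exp(-r * dose) used in such models can be fitted to grouped dosing data by maximum likelihood; the sketch below uses hypothetical animal data, not the datasets extracted in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Exponential dose-response model, P(response) = 1 - exp(-r * dose), fitted by
# maximum likelihood to hypothetical grouped animal dosing data.
dose = np.array([1e2, 1e3, 1e4, 1e5])   # administered dose (e.g. CFU)
n = np.array([10, 10, 10, 10])          # animals per dose group
k = np.array([1, 3, 7, 10])             # animals showing the endpoint

def neg_log_lik(log_r):
    p = 1.0 - np.exp(-np.exp(log_r) * dose)
    p = np.clip(p, 1e-12, 1.0 - 1e-12)      # guard the binomial log-likelihood
    return -np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))

fit = minimize_scalar(neg_log_lik, bounds=(-15.0, 0.0), method="bounded")
print("fitted r =", np.exp(fit.x))
```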

  13. Quantitative global sensitivity analysis of a biologically based dose-response pregnancy model for the thyroid endocrine system.

    Science.gov (United States)

    Lumen, Annie; McNally, Kevin; George, Nysia; Fisher, Jeffrey W; Loizou, George D

    2015-01-01

    A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to evaluate quantitatively thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels were identified in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local sensitivity analysis.
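
    The Morris screening idea (elementary effects from one-at-a-time perturbations) can be sketched without any library, using a cheap stand-in for the expensive biologically based model; mu* ranks parameter influence and sigma flags nonlinearity or interactions.

```python
import numpy as np

# Minimal Morris-style screening with one-at-a-time elementary effects.
rng = np.random.default_rng(2)

def model(x):
    # Placeholder for the expensive BBDR model output (e.g. maternal T4 level).
    return x[0] ** 2 + 0.5 * x[1] + 0.01 * x[2] + x[0] * x[1]

n_params, n_trajectories, delta = 3, 50, 0.1
ee = np.zeros((n_trajectories, n_params))
for t in range(n_trajectories):
    base = rng.uniform(0, 1 - delta, n_params)   # random base point in [0,1]^d
    y0 = model(base)
    for i in range(n_params):
        x = base.copy()
        x[i] += delta
        ee[t, i] = (model(x) - y0) / delta       # elementary effect of parameter i

print("mu*  :", np.abs(ee).mean(axis=0))   # screening measure of influence
print("sigma:", ee.std(axis=0))            # nonlinearity/interaction indicator
```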

  14. Quantitative global sensitivity analysis of a biologically based dose-response pregnancy model for the thyroid endocrine system

    Directory of Open Access Journals (Sweden)

    Annie eLumen

    2015-05-01

    Full Text Available A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to evaluate quantitatively thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels were identified in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local

  15. Position dependent mismatch discrimination on DNA microarrays – experiments and model

    Directory of Open Access Journals (Sweden)

    Michel Wolfgang

    2008-12-01

    Full Text Available Background: The propensity of oligonucleotide strands to form stable duplexes with complementary sequences is fundamental to a variety of biological and biotechnological processes as diverse as microRNA signalling, microarray hybridization and PCR. Yet our understanding of oligonucleotide hybridization, in particular in the presence of surfaces, is rather limited. Here we use oligonucleotide microarrays made in-house by optically controlled DNA synthesis to produce probe sets comprising all possible single base mismatches and base bulges for each of 20 sequence motifs under study. Results: We observe that mismatch discrimination is mostly determined by the defect position (relative to the duplex ends) as well as by the sequence context. We investigate the thermodynamics of the oligonucleotide duplexes on the basis of a double-ended molecular zipper. Theoretical predictions of the defect positional influence as well as the long-range sequence influence agree well with the experimental results. Conclusion: Molecular zipping at thermodynamic equilibrium explains the binding affinity of mismatched DNA duplexes on microarrays well. The position dependent nearest neighbor model (PDNN) can be inferred from it. Quantitative understanding of microarray experiments from first principles is in reach.
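
    A two-state picture shows why a few kcal/mol of mismatch destabilization translates into a large signal difference; the numbers below are illustrative, and the paper's zipper model adds exactly the position dependence that this naive picture lacks.

```python
import numpy as np

# Two-state estimate of duplex binding: a mismatch adds a Delta-Delta-G penalty
# that is position-independent in this naive picture. Illustrative numbers only.
R, T = 1.987e-3, 318.0             # kcal/(mol K), 45 C hybridization

def fraction_bound(dG, c_target=1e-9):
    K = np.exp(-dG / (R * T))       # association constant from Delta-G
    return K * c_target / (1.0 + K * c_target)

dG_pm = -18.0                       # perfect-match duplex, kcal/mol (assumed)
for ddG in [0.0, 2.0, 4.0]:         # growing mismatch penalty
    print(f"ddG={ddG:3.1f} kcal/mol -> bound fraction {fraction_bound(dG_pm + ddG):.3f}")
```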

  16. Impact Assessment of Abiotic Resources in LCA: Quantitative Comparison of Selected Characterization Models

    DEFF Research Database (Denmark)

    Rørbech, Jakob Thaysen; Vadenbo, Carl; Hellweg, Stefanie

    2014-01-01

    Resources have received significant attention in recent years resulting in development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment...... results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247...... groups, according to method focus and modeling approach, to aid method selection within LCA....

  17. Experiments and Modeling of Si-Ge Interdiffusion with Partial Strain Relaxation in Epitaxial SiGe Heterostructures

    KAUST Repository

    Dong, Y.

    2014-07-26

    Si-Ge interdiffusion and strain relaxation were studied in a metastable SiGe epitaxial structure. With Ge concentration profiling and ex-situ strain analysis, it was shown that during thermal anneals, both Si-Ge interdiffusion and strain relaxation occurred. Furthermore, the time evolutions of both strain relaxation and interdiffusion were characterized. It was shown that during the ramp-up stage of thermal anneals at higher temperatures (800°C and 840°C), the degree of relaxation, R, reached a “plateau”, while interdiffusion was negligible. With the approximation that the R value is constant after the ramp-up stage, a quantitative interdiffusivity model was built to account for both the effect of strain relaxation and the impact of the relaxation-induced dislocations, which gave good agreement with the experimental data.

  18. Mathematical Model of Nicholson’s Experiment

    Directory of Open Access Journals (Sweden)

    Sergey D. Glyzin

    2017-01-01

    Full Text Available The paper considers a mathematical model of insect population dynamics and attempts to explain the classical experimental results of Nicholson with its help. In the first section of the paper, Nicholson's experiment is described and dynamic equations for its modeling are chosen. A priori estimates for the model parameters can be made more precise by means of a local analysis of the dynamical system, which is carried out in the second section. For the parameter values found there, the loss of stability of the equilibrium of the problem leads to the bifurcation of a stable two-dimensional torus. Numerical simulations based on the estimates from the second section allow us to explain the classical Nicholson experiment, whose detailed theoretical substantiation is given in the last section. There, the largest Lyapunov exponent is computed for an attractor of the system. The way this exponent changes allows us to further narrow the region of the model parameter search. Justification of this experiment was made possible only by the combination of analytical and numerical methods in studying the equations of insect population dynamics. At the same time, the analytical approach made it possible to perform the numerical analysis in a rather narrow region of the parameter space; it is not possible to reach this region based only on general considerations.
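
    The classical description of these experiments is Nicholson's blowflies delay differential equation; a simple Euler integration with roughly the textbook parameter values (illustrative, not the paper's refined estimates) shows the sustained oscillations.

```python
import numpy as np

# Euler integration of the classical Nicholson's blowflies equation
#   N'(t) = p * N(t - tau) * exp(-N(t - tau) / N0) - d * N(t)
p, N0, d, tau = 8.0, 600.0, 0.175, 15.0   # per-day rates, delay in days (illustrative)
dt, T = 0.01, 400.0
steps, lag = int(T / dt), int(tau / dt)

N = np.empty(steps)
N[: lag + 1] = 100.0                      # constant history on [-tau, 0]
for i in range(lag, steps - 1):
    birth = p * N[i - lag] * np.exp(-N[i - lag] / N0)   # delayed recruitment
    N[i + 1] = N[i] + dt * (birth - d * N[i])

# Large-amplitude population cycles, as in Nicholson's cage experiments:
print("late-time min/max:", N[steps // 2:].min(), N[steps // 2:].max())
```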

  19. Combination and Integration of Qualitative and Quantitative Analysis

    Directory of Open Access Journals (Sweden)

    Philipp Mayring

    2001-02-01

    Full Text Available In this paper, I am going to outline ways of combining qualitative and quantitative steps of analysis on five levels. On the technical level, programs for the computer-aided analysis of qualitative data offer various combinations. Where the data are concerned, the employment of categories (for instance by using qualitative content analysis) allows for combining qualitative and quantitative forms of data analysis. On the individual level, the creation of types and the inductive generalisation of cases allow for proceeding from individual case material to quantitative generalisations. As for research design, different models can be distinguished (preliminary study, generalisation, elaboration, triangulation) which combine qualitative and quantitative steps of analysis. Where the logic of research is concerned, it can be shown that an extended process model which combines qualitative and quantitative research can be appropriate and thus lead to an integration of the two approaches. URN: urn:nbn:de:0114-fqs010162

  20. Transcriptome discovery in non-model wild fish species for the development of quantitative transcript abundance assays

    Science.gov (United States)

    Hahn, Cassidy M.; Iwanowicz, Luke R.; Cornman, Robert S.; Mazik, Patricia M.; Blazer, Vicki S.

    2016-01-01

    Environmental studies increasingly identify the presence of both contaminants of emerging concern (CECs) and legacy contaminants in aquatic environments; however, the biological effects of these compounds on resident fishes remain largely unknown. High throughput methodologies were employed to establish partial transcriptomes for three wild-caught, non-model fish species: smallmouth bass (Micropterus dolomieu), white sucker (Catostomus commersonii) and brown bullhead (Ameiurus nebulosus). Sequences from these transcriptome databases were utilized in the development of a custom nCounter CodeSet that allowed for direct multiplexed measurement of 50 transcript abundance endpoints in liver tissue. Sequence information was also utilized in the development of quantitative real-time PCR (qPCR) primers. Cross-species hybridization allowed the smallmouth bass nCounter CodeSet to be used for quantitative transcript abundance analysis of an additional non-model species, largemouth bass (Micropterus salmoides). We validated the nCounter analysis data system with qPCR for a subset of genes and confirmed concordant results. Changes in transcript abundance biomarkers between sexes and seasons were evaluated to provide baseline data on transcript modulation for each species of interest.

  1. Quantitative assessment of manual and robotic microcannulation for eye surgery using new eye model.

    Science.gov (United States)

    Tanaka, Shinichi; Harada, Kanako; Ida, Yoshiki; Tomita, Kyohei; Kato, Ippei; Arai, Fumihito; Ueta, Takashi; Noda, Yasuo; Sugita, Naohiko; Mitsuishi, Mamoru

    2015-06-01

    Microcannulation, a surgical procedure for the eye that requires drug injection into a 60-90 µm retinal vein, is difficult to perform manually. Robotic assistance has been proposed; however, its effectiveness in comparison to manual operation has not been quantified. An eye model has been developed to quantify the performance of manual and robotic microcannulation. The eye model, which is implemented with a force sensor and microchannels, also simulates the mechanical constraints of the instrument's movement. Ten subjects performed microcannulation using the model, with and without robotic assistance. The results showed that the robotic assistance was useful for motion stability when the drug was injected, whereas its positioning accuracy offered no advantage. An eye model was used to quantitatively assess the robotic microcannulation performance in comparison to manual operation. This approach could be valid for a better evaluation of surgical robotic assistance. Copyright © 2014 John Wiley & Sons, Ltd.

  2. Novel Uses of In Vitro Data to Develop Quantitative Biological Activity Relationship Models for in Vivo Carcinogenicity Prediction.

    Science.gov (United States)

    Pradeep, Prachi; Povinelli, Richard J; Merrill, Stephen J; Bozdag, Serdar; Sem, Daniel S

    2015-04-01

    The availability of large in vitro datasets enables better insight into the mode of action of chemicals and better identification of potential mechanism(s) of toxicity. Several studies have shown that not all in vitro assays can contribute as equal predictors of in vivo carcinogenicity for development of hybrid Quantitative Structure Activity Relationship (QSAR) models. We propose two novel approaches for the use of mechanistically relevant in vitro assay data in the identification of relevant biological descriptors and development of Quantitative Biological Activity Relationship (QBAR) models for carcinogenicity prediction. We demonstrate that in vitro assay data can be used to develop QBAR models for in vivo carcinogenicity prediction via two case studies corroborated with firm scientific rationale. The case studies demonstrate the similarities between QBAR and QSAR modeling in: (i) the selection of relevant descriptors to be used in the machine learning algorithm, and (ii) the development of a computational model that maps chemical or biological descriptors to a toxic endpoint. The results of both case studies show: (i) improved accuracy and sensitivity, which are especially desirable under regulatory requirements, and (ii) overall adherence with the OECD/REACH guidelines. Such mechanism-based models can be used along with QSAR models for prediction of mechanistically complex toxic endpoints. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Quantitative spectral and orientational analysis in surface sum frequency generation vibrational spectroscopy (SFG-VS)

    Science.gov (United States)

    Wang, Hong-Fei; Gan, Wei; Lu, Rong; Rao, Yi; Wu, Bao-Hua

    Sum frequency generation vibrational spectroscopy (SFG-VS) has been proven to be a uniquely effective spectroscopic technique in the investigation of molecular structure and conformations, as well as the dynamics of molecular interfaces. However, the ability to apply SFG-VS to complex molecular interfaces has been limited by the ability to extract quantitative information from SFG-VS experiments. In this review, we try to make assessments of the limitations, issues and techniques as well as methodologies in quantitative orientational and spectral analysis with SFG-VS. Based on these assessments, we also try to summarize recent developments in methodologies on quantitative orientational and spectral analysis in SFG-VS, and their applications to detailed analysis of SFG-VS data of various vapour/neat liquid interfaces. A rigorous formulation of the polarization null angle (PNA) method is given for accurate determination of the orientational parameter D = ⟨cosθ⟩/⟨cos³θ⟩, and a comparison of the PNA method with the commonly used polarization intensity ratio (PIR) method is discussed. The polarization and incident angle dependencies of the SFG-VS intensity are also reviewed, in the light of how experimental arrangements can be optimized to effectively extract crucial information from the SFG-VS experiments. The values and models of the local field factors in the molecular layers are discussed. In order to examine the validity and limitations of the bond polarizability derivative model, the general expressions for molecular hyperpolarizability tensors and their expression with the bond polarizability derivative model for C3v, C2v and C∞v molecular groups are given in the two appendixes. We show that the bond polarizability derivative model can quantitatively describe many aspects of the intensities observed in the SFG-VS spectrum of the vapour/neat liquid interfaces in different polarizations. Using the polarization analysis in SFG-VS, polarization selection rules or
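
    The orientational parameter D = ⟨cosθ⟩/⟨cos³θ⟩ can be evaluated numerically for any assumed orientation distribution; the sketch below does so for a Gaussian distribution with hypothetical center and width.

```python
import numpy as np

# Numerical sketch: D = <cos(theta)> / <cos^3(theta)> for an assumed Gaussian
# orientation distribution around theta0 with width sigma.
theta = np.linspace(0.0, np.pi, 2001)

def D_param(theta0_deg, sigma_deg):
    f = np.exp(-0.5 * ((theta - np.radians(theta0_deg)) / np.radians(sigma_deg)) ** 2)
    w = f * np.sin(theta)                        # solid-angle weighting
    c1 = np.sum(np.cos(theta) * w) / np.sum(w)   # <cos(theta)>
    c3 = np.sum(np.cos(theta) ** 3 * w) / np.sum(w)
    return c1 / c3

print(D_param(30.0, 5.0))    # narrow distribution: D close to 1/cos^2(30 deg)
print(D_param(30.0, 30.0))   # broad distribution shifts D markedly
```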

  4. Integrated multiscale biomaterials experiment and modelling: a perspective

    Science.gov (United States)

    Buehler, Markus J.; Genin, Guy M.

    2016-01-01

    Advances in multiscale models and computational power have enabled a broad toolset to predict how molecules, cells, tissues and organs behave and develop. A key theme in biological systems is the emergence of macroscale behaviour from collective behaviours across a range of length and timescales, and a key element of these models is therefore hierarchical simulation. However, this predictive capacity has far outstripped our ability to validate predictions experimentally, particularly when multiple hierarchical levels are involved. The state of the art represents careful integration of multiscale experiment and modelling, and yields not only validation, but also insights into deformation and relaxation mechanisms across scales. We present here a sampling of key results that highlight both challenges and opportunities for integrated multiscale experiment and modelling in biological systems. PMID:28981126

  5. Quantitative structure-activity relationship (QSAR) models for polycyclic aromatic hydrocarbons (PAHs) dissipation in rhizosphere based on molecular structure and effect size

    International Nuclear Information System (INIS)

    Ma Bin; Chen Huaihai; Xu Minmin; Hayat, Tahir; He Yan; Xu Jianming

    2010-01-01

    Rhizoremediation is a significant form of bioremediation for polycyclic aromatic hydrocarbons (PAHs). This study examined the role of molecular structure in determining the rhizosphere effect on PAHs dissipation. Effect size in meta-analysis was employed as activity dataset for building quantitative structure-activity relationship (QSAR) models and accumulative effect sizes of 16 PAHs were used for validation of these models. Based on the genetic algorithm combined with partial least square regression, models for comprehensive dataset, Poaceae dataset, and Fabaceae dataset were built. The results showed that information indices, calculated as information content of molecules based on the calculation of equivalence classes from the molecular graph, were the most important molecular structural indices for QSAR models of rhizosphere effect on PAHs dissipation. The QSAR model, based on the molecular structure indices and effect size, has potential to be used in studying and predicting the rhizosphere effect of PAHs dissipation. - Effect size based on meta-analysis was used for building PAHs dissipation quantitative structure-activity relationship (QSAR) models.

  6. Quantitative structure-activity relationship (QSAR) models for polycyclic aromatic hydrocarbons (PAHs) dissipation in rhizosphere based on molecular structure and effect size

    Energy Technology Data Exchange (ETDEWEB)

    Ma Bin; Chen Huaihai; Xu Minmin; Hayat, Tahir [Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, College of Environmental and Natural Resource Sciences, Zhejiang University, Hangzhou 310029 (China); He Yan, E-mail: yhe2006@zju.edu.c [Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, College of Environmental and Natural Resource Sciences, Zhejiang University, Hangzhou 310029 (China); Xu Jianming, E-mail: jmxu@zju.edu.c [Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, College of Environmental and Natural Resource Sciences, Zhejiang University, Hangzhou 310029 (China)

    2010-08-15

    Rhizoremediation is a significant form of bioremediation for polycyclic aromatic hydrocarbons (PAHs). This study examined the role of molecular structure in determining the rhizosphere effect on PAHs dissipation. Effect size in meta-analysis was employed as activity dataset for building quantitative structure-activity relationship (QSAR) models and accumulative effect sizes of 16 PAHs were used for validation of these models. Based on the genetic algorithm combined with partial least square regression, models for comprehensive dataset, Poaceae dataset, and Fabaceae dataset were built. The results showed that information indices, calculated as information content of molecules based on the calculation of equivalence classes from the molecular graph, were the most important molecular structural indices for QSAR models of rhizosphere effect on PAHs dissipation. The QSAR model, based on the molecular structure indices and effect size, has potential to be used in studying and predicting the rhizosphere effect of PAHs dissipation. - Effect size based on meta-analysis was used for building PAHs dissipation quantitative structure-activity relationship (QSAR) models.

  7. Estimating marginal properties of quantitative real-time PCR data using nonlinear mixed models

    DEFF Research Database (Denmark)

    Gerhard, Daniel; Bremer, Melanie; Ritz, Christian

    2014-01-01

    A unified modeling framework based on a set of nonlinear mixed models is proposed for flexible modeling of gene expression in real-time PCR experiments. Focus is on estimating the marginal or population-based derived parameters: cycle thresholds and ΔΔc(t), but retaining the conditional mixed mod...
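
    For contrast with the marginal, mixed-model estimates the paper advocates, the naive ΔΔCt computation from raw Ct averages is shown below on made-up triplicate data.

```python
import numpy as np

# Classic Delta-Delta-Ct summary for a qPCR contrast; the paper instead
# estimates this quantity marginally from a nonlinear mixed model.
ct_target = {"treated": np.array([22.1, 22.4, 22.0]),
             "control": np.array([24.0, 23.8, 24.2])}
ct_ref = {"treated": np.array([16.2, 16.0, 16.1]),
          "control": np.array([16.1, 16.3, 16.0])}

dct = {g: ct_target[g].mean() - ct_ref[g].mean() for g in ct_target}
ddct = dct["treated"] - dct["control"]
print("DeltaDeltaCt =", ddct, " fold change =", 2.0 ** (-ddct))
```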

  8. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

    Science.gov (United States)

    Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

    2008-01-01

    Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…
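
    An independent Poisson race (the baseline on which the paper's dependence structure builds) is easy to simulate: the time to accumulate k events at rate lambda is Gamma(k, 1/lambda), and the fastest accumulator wins. Rates and threshold below are illustrative.

```python
import numpy as np

# Simulate a simple independent Poisson race among choice alternatives.
rng = np.random.default_rng(3)

def race(rates, threshold=5, n_trials=10_000):
    wins = np.zeros(len(rates), dtype=int)
    for _ in range(n_trials):
        # time to accumulate `threshold` events is Gamma(threshold, 1/rate)
        finish = rng.gamma(shape=threshold, scale=1.0 / np.asarray(rates))
        wins[np.argmin(finish)] += 1     # first accumulator to finish is chosen
    return wins / n_trials

print(race([1.0, 1.3, 0.9]))   # choice shares favor the highest-rate option
```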

  9. Hazard Response Modeling Uncertainty (A Quantitative Method). Volume 2. Evaluation of Commonly Used Hazardous Gas Dispersion Models

    Science.gov (United States)

    1993-03-01

    ...the HDA. The model will explicitly account for initial dilution, aerosol evaporation, and entrainment for turbulent jets, which simplifies...D.N., Yohn, J.F., Koopman, R.P. and Brown, T.C., "Conduct of Anhydrous Hydrofluoric Acid Spill Experiments," Proc. Int. Conf. on Vapor Cloud Modeling

  10. PIQMIe: A web server for semi-quantitative proteomics data management and analysis

    NARCIS (Netherlands)

    A. Kuzniar (Arnold); R. Kanaar (Roland)

    2014-01-01

    We present the Proteomics Identifications and Quantitations Data Management and Integration Service, or PIQMIe, that aids in reliable and scalable data management, analysis and visualization of semi-quantitative mass spectrometry-based proteomics experiments. PIQMIe readily integrates

  11. The Context-Dependency of the Experience of Auditory Succession and Prospects for Embodying Philosophical Models of Temporal Experience

    Directory of Open Access Journals (Sweden)

    Maria Kon

    2015-05-01

    Full Text Available Recent philosophical work on temporal experience offers generic models that are often assumed to apply to all sensory modalities. I show that the models serve as broad frameworks in which different aspects of cognitive science can be slotted and, thus, are beneficial to furthering research programs in embodied music cognition. Here I discuss a particular feature of temporal experience that plays a key role in such philosophical work: a distinction between the experience of succession and the mere succession of experiences. I question the presupposition that there is such an evident, clear distinction and suggest that, instead, how the distinction is drawn is context-dependent. After suggesting a way to modify the philosophical models of temporal experience to accommodate this context-dependency, I illustrate that these models can fruitfully incorporate features of research projects in embodied musical cognition. To do so I supplement a modified retentionalist model with aspects of recent work that links bodily movement with musical perception (Godøy, 2006; 2010a; Jensenius, Wanderley, Godøy, and Leman, 2010). The resulting model is shown to facilitate novel hypotheses, refine the notion of context-dependency and point towards means of extending the philosophical model and an existing research program.

  12. Modelling the Grimsel migration field experiments at PSI

    International Nuclear Information System (INIS)

    Heer, W.

    1997-01-01

    For several years tracer migration experiments have been performed at Nagra's Grimsel Test Site as a joint undertaking of Nagra, PNC and PSI. The aims of modelling the migration experiments are (1) to better understand the nuclide transport through crystalline rock; (2) to gain information on validity of methods and correlating parameters; (3) to improve models for safety assessments. The PSI modelling results, presented here, show a consistent picture for the investigated tracers (the non-sorbing uranine, the weakly sorbing sodium, the moderately sorbing strontium and the more strongly sorbing cesium). They represent an important step in building up confidence in safety assessments for radioactive waste repositories. (author) 5 figs., 1 tab., 12 refs

  13. Using ISOS consensus test protocols for development of quantitative life test models in ageing of organic solar cells

    DEFF Research Database (Denmark)

    Kettle, J.; Stoichkov, V.; Kumar, D.

    2017-01-01

    As Organic Photovoltaic (OPV) development matures, the demand grows for rapid characterisation of degradation and application of Quantitative Accelerated Life Tests (QALT) models to predict and improve reliability. To date, most accelerated testing on OPVs has been conducted using ISOS consensus...
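
    A standard ingredient of such QALT models is the Arrhenius acceleration factor relating degradation rates at test and use temperatures; the activation energy below is an assumed, hypothetical value, not one reported for OPVs in the paper.

```python
import numpy as np

# Arrhenius acceleration factor: how much faster a thermally activated
# degradation mechanism runs at an elevated stress temperature.
k_B = 8.617e-5                     # Boltzmann constant, eV/K
Ea = 0.6                           # activation energy, eV (hypothetical)
T_use, T_stress = 298.15, 338.15   # 25 C use vs 65 C thermal stress test

AF = np.exp((Ea / k_B) * (1.0 / T_use - 1.0 / T_stress))
print(f"acceleration factor: {AF:.1f}")   # converts test hours to equivalent use hours
```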

  14. Quantitative evaluation of the risk induced by dominant geomorphological processes on different land uses, based on GIS spatial analysis models

    Science.gov (United States)

    Ştefan, Bilaşco; Sanda, Roşca; Ioan, Fodorean; Iuliu, Vescan; Sorin, Filip; Dănuţ, Petrea

    2017-12-01

    Maramureş Land is mostly characterized by agricultural and forestry land use due to the specific configuration of its topography and its pedoclimatic conditions. Taking into consideration the trend of the last century from the perspective of land management, a decrease in the surface of agricultural lands to the advantage of built-up and grass lands, as well as an accelerated decrease in the forest cover due to uncontrolled and irrational forest exploitation, has become obvious. The field analysis performed on the territory of Maramureş Land has highlighted a high frequency of two geomorphologic processes — landslides and soil erosion — which have a major negative impact on land use due to their rate of occurrence. The main aim of the present study is the GIS modeling of the two geomorphologic processes, determining a state of vulnerability (the USLE model for soil erosion and a quantitative model based on the morphometric characteristics of the territory, derived from HG 447/2003) and their integration in a complex model of cumulated vulnerability identification. The modeling of the risk exposure was performed using a quantitative approach based on models and equations of spatial analysis, which were developed with modeled raster data structures and primary vector data, through a matrix highlighting the correspondence between vulnerability and land use classes. The quantitative analysis of the risk was performed by taking into consideration the exposure classes as modeled databases and the land price as a primary alphanumeric database, using spatial analysis techniques for each class by means of the attribute table. The spatial results highlight the territories with a high risk to present geomorphologic processes that have a high degree of occurrence and represent a useful tool in the process of spatial planning.
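
    The soil-erosion half of such a vulnerability layer follows the multiplicative USLE form A = R * K * LS * C * P; a cell-by-cell sketch on tiny synthetic rasters (all factor values are made up) is shown below.

```python
import numpy as np

# Cell-by-cell USLE estimate, A = R * K * LS * C * P (t/ha/yr), on synthetic rasters.
rng = np.random.default_rng(4)
shape = (4, 4)
R = np.full(shape, 80.0)            # rainfall erosivity
K = rng.uniform(0.2, 0.4, shape)    # soil erodibility
LS = rng.uniform(0.5, 4.0, shape)   # slope length/steepness factor
C = rng.uniform(0.05, 0.5, shape)   # cover management (forest low, arable high)
P = np.ones(shape)                  # support practice factor

A = R * K * LS * C * P
vuln_class = np.digitize(A, bins=[5, 15, 40])   # four vulnerability classes
print(A.round(1))
print(vuln_class)
```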

  15. Quantitative evaluation of the risk induced by dominant geomorphological processes on different land uses, based on GIS spatial analysis models

    Science.gov (United States)

    Ştefan, Bilaşco; Sanda, Roşca; Ioan, Fodorean; Iuliu, Vescan; Sorin, Filip; Dănuţ, Petrea

    2018-06-01

    Maramureş Land is mostly characterized by agricultural and forestry land use due to the specific configuration of its topography and its pedoclimatic conditions. Taking into consideration the trend of the last century from the perspective of land management, a decrease in the surface of agricultural lands to the advantage of built-up and grass lands, as well as an accelerated decrease in the forest cover due to uncontrolled and irrational forest exploitation, has become obvious. The field analysis performed on the territory of Maramureş Land has highlighted a high frequency of two geomorphologic processes — landslides and soil erosion — which have a major negative impact on land use due to their rate of occurrence. The main aim of the present study is the GIS modeling of the two geomorphologic processes, determining a state of vulnerability (the USLE model for soil erosion and a quantitative model based on the morphometric characteristics of the territory, derived from HG 447/2003) and their integration in a complex model of cumulated vulnerability identification. The modeling of the risk exposure was performed using a quantitative approach based on models and equations of spatial analysis, which were developed with modeled raster data structures and primary vector data, through a matrix highlighting the correspondence between vulnerability and land use classes. The quantitative analysis of the risk was performed by taking into consideration the exposure classes as modeled databases and the land price as a primary alphanumeric database, using spatial analysis techniques for each class by means of the attribute table. The spatial results highlight the territories with a high risk to present geomorphologic processes that have a high degree of occurrence and represent a useful tool in the process of spatial planning.

  16. Dynamic modeling of renal blood flow in Dahl hypertensive and normotensive rats

    DEFF Research Database (Denmark)

    Knudsen, Torben; Elmer, Henrik; Knudsen, Morten H

    2004-01-01

    A method is proposed in this paper which allows characterization of renal autoregulatory dynamics and efficiency using quantitative mathematical methods. Based on data from rat experiments, where arterial blood pressure and renal blood flow are measured, a quantitative model for renal blood flow...

  17. Methane emissions from rice paddies. Experiments and modelling

    International Nuclear Information System (INIS)

    Van Bodegom, P.M.

    2000-01-01

    This thesis describes model development and experimentation on the comprehension and prediction of methane (CH4) emissions from rice paddies. The large spatial and temporal variability in CH4 emissions and the dynamic non-linear relationships between the processes underlying CH4 emissions impair the applicability of empirical relations. Mechanistic concepts are therefore the starting point of analysis throughout the thesis. The process of CH4 production was investigated by soil slurry incubation experiments at different temperatures and with additions of different electron donors and acceptors. Temperature influenced conversion rates and the competitiveness of microorganisms. The experiments were used to calibrate and validate a mechanistic model of CH4 production that describes competition for acetate and H2/CO2, inhibition effects and chemolithotrophic reactions. The redox sequence leading eventually to CH4 production was well predicted by the model, calibrating only the maximum conversion rates. Gas transport through paddy soil and rice plants was quantified by experiments in which the transport of SF6 was monitored continuously by photoacoustics. A mechanistic model of gas transport in a flooded rice system based on diffusion equations was validated by these experiments and could explain why most gases are released via plant-mediated transport. Variability in root distribution led to highly variable gas transport. Experiments showed that CH4 oxidation in the rice rhizosphere was oxygen (O2) limited. Rice rhizospheric O2 consumption was dominated by chemical iron oxidation and by heterotrophic and methanotrophic respiration. The most abundant methanotrophs and heterotrophs were isolated and kinetically characterised. Based upon these experiments it was hypothesised that CH4 oxidation mainly occurred under microaerophilic, low-acetate conditions not very close to the root surface. A mechanistic rhizosphere model that combined production and consumption of O2, carbon and iron

  18. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper

    2003-01-01

    Experimental modelling is an important tool for study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied...

  19. Prognostic Value of Quantitative Stress Perfusion Cardiac Magnetic Resonance.

    Science.gov (United States)

    Sammut, Eva C; Villa, Adriana D M; Di Giovine, Gabriella; Dancy, Luke; Bosio, Filippo; Gibbs, Thomas; Jeyabraba, Swarna; Schwenke, Susanne; Williams, Steven E; Marber, Michael; Alfakih, Khaled; Ismail, Tevfik F; Razavi, Reza; Chiribiri, Amedeo

    2018-05-01

    This study sought to evaluate the prognostic usefulness of visual and quantitative perfusion cardiac magnetic resonance (CMR) ischemic burden in an unselected group of patients and to assess the validity of consensus-based ischemic burden thresholds extrapolated from nuclear studies. There are limited data on the prognostic value of assessing myocardial ischemic burden by CMR, and there are none using quantitative perfusion analysis. Patients with suspected coronary artery disease referred for adenosine-stress perfusion CMR were included (n = 395; 70% male; age 58 ± 13 years). The primary endpoint was a composite of cardiovascular death, nonfatal myocardial infarction, aborted sudden death, and revascularization after 90 days. Perfusion scans were assessed visually and with quantitative analysis. Cross-validated Cox regression analysis and net reclassification improvement were used to assess the incremental prognostic value of visual or quantitative perfusion analysis over a baseline clinical model, initially as continuous covariates, then using accepted thresholds of ≥2 segments or ≥10% myocardium. After a median 460 days (interquartile range: 190 to 869 days) follow-up, 52 patients reached the primary endpoint. At 2 years, the addition of ischemic burden was found to increase prognostic value over a baseline model of age, sex, and late gadolinium enhancement (baseline model area under the curve [AUC]: 0.75; visual AUC: 0.84; quantitative AUC: 0.85). Dichotomized quantitative ischemic burden performed better than visual assessment (net reclassification improvement 0.043 vs. 0.003 against baseline model). This study was the first to address the prognostic benefit of quantitative analysis of perfusion CMR and to support the use of consensus-based ischemic burden thresholds by perfusion CMR for prognostic evaluation of patients with suspected coronary artery disease. Quantitative analysis provided incremental prognostic value to visual assessment and
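
    The net reclassification improvement (NRI) quoted above has a simple arithmetic core: count patients whose risk class moves up or down once quantitative ischemic burden is added, separately for patients with and without events. A toy sketch with invented classifications, not the study's data:

        import numpy as np

        # 1 = event; all classifications below are purely illustrative.
        events   = np.array([1, 1, 1, 0, 0, 0, 0, 0])
        old_high = np.array([1, 0, 0, 1, 0, 0, 1, 0])  # baseline model, >=10% myocardium
        new_high = np.array([1, 1, 0, 0, 0, 0, 1, 0])  # with quantitative ischemic burden

        up, down = new_high > old_high, new_high < old_high
        e, ne = events == 1, events == 0
        nri = (up[e].mean() - down[e].mean()) + (down[ne].mean() - up[ne].mean())
        print(f"NRI = {nri:+.3f}")   # positive = net correct reclassification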

  20. A robust quantitative near infrared modeling approach for blend monitoring.

    Science.gov (United States)

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material-sparing near-infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by traditional method development (blender-scale method) were compared by simultaneously monitoring a 1 kg batch-size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop near-infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is feasible, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.

  1. Quantitative traits and diversification.

    Science.gov (United States)

    FitzJohn, Richard G

    2010-12-01

    Quantitative traits have long been hypothesized to affect speciation and extinction rates. For example, smaller body size or increased specialization may be associated with increased rates of diversification. Here, I present a phylogenetic likelihood-based method (quantitative state speciation and extinction [QuaSSE]) that can be used to test such hypotheses using extant character distributions. This approach assumes that diversification follows a birth-death process where speciation and extinction rates may vary with one or more traits that evolve under a diffusion model. Speciation and extinction rates may be arbitrary functions of the character state, allowing much flexibility in testing models of trait-dependent diversification. I test the approach using simulated phylogenies and show that a known relationship between speciation and a quantitative character could be recovered in up to 80% of the cases on large trees (500 species). Consistent with other approaches, detecting shifts in diversification due to differences in extinction rates was harder than when due to differences in speciation rates. Finally, I demonstrate the application of QuaSSE to investigate the correlation between body size and diversification in primates, concluding that clade-specific differences in diversification may be more important than size-dependent diversification in shaping the patterns of diversity within this group.
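
    QuaSSE itself is a likelihood method (implemented in the R package diversitree), but its generating model is straightforward to simulate: each lineage carries a trait that evolves by diffusion, and its speciation rate is a function of that trait. A minimal forward-simulation sketch, assuming a sigmoidal rate function and invented parameters:

        import numpy as np

        rng = np.random.default_rng(0)

        def speciation_rate(x):              # assumed sigmoidal trait dependence
            return 0.05 + 0.15 / (1.0 + np.exp(-x))

        mu, sigma, dt, t_max = 0.03, 0.1, 0.01, 50.0   # extinction, diffusion, step, horizon
        traits = [0.0]                                  # one ancestral lineage

        t = 0.0
        while t < t_max and 0 < len(traits) < 5000:
            t += dt
            nxt = []
            for x in traits:
                u = rng.random()
                if u < speciation_rate(x) * dt:               # speciation: two daughters
                    nxt += [x, x]
                elif u < (speciation_rate(x) + mu) * dt:      # extinction
                    continue
                else:                                         # Brownian trait diffusion
                    nxt.append(x + sigma * np.sqrt(dt) * rng.normal())
            traits = nxt

        print(len(traits), round(float(np.mean(traits)), 3) if traits else "extinct")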

  2. Quantitative self-assembly prediction yields targeted nanomedicines

    Science.gov (United States)

    Shamay, Yosi; Shah, Janki; Işık, Mehtap; Mizrachi, Aviram; Leibold, Josef; Tschaharganeh, Darjus F.; Roxbury, Daniel; Budhathoki-Uprety, Januka; Nawaly, Karla; Sugarman, James L.; Baut, Emily; Neiman, Michelle R.; Dacek, Megan; Ganesh, Kripa S.; Johnson, Darren C.; Sridharan, Ramya; Chu, Karen L.; Rajasekhar, Vinagolu K.; Lowe, Scott W.; Chodera, John D.; Heller, Daniel A.

    2018-02-01

    Development of targeted nanoparticle drug carriers often requires complex synthetic schemes involving both supramolecular self-assembly and chemical modification. These processes are generally difficult to predict, execute, and control. We describe herein a targeted drug delivery system that is accurately and quantitatively predicted to self-assemble into nanoparticles based on the molecular structures of precursor molecules, which are the drugs themselves. The drugs assemble with the aid of sulfated indocyanines into particles with ultrahigh drug loadings of up to 90%. We devised quantitative structure-nanoparticle assembly prediction (QSNAP) models to identify and validate electrotopological molecular descriptors as highly predictive indicators of nano-assembly and nanoparticle size. The resulting nanoparticles selectively targeted kinase inhibitors to caveolin-1-expressing human colon cancer and autochthonous liver cancer models to yield striking therapeutic effects while avoiding pERK inhibition in healthy skin. This finding enables the computational design of nanomedicines based on quantitative models for drug payload selection.

  3. Parts of the Whole: Strategies for the Spread of Quantitative Literacy: What Models Can Tell Us

    Directory of Open Access Journals (Sweden)

    Dorothy Wallace

    2014-07-01

    Full Text Available Two conceptual frameworks, one from graph theory and one from dynamical systems, have been offered as explanations for complex phenomena in biology and also as possible models for the spread of ideas. The two models are based on different assumptions and thus predict quite different outcomes for the fate of either biological species or ideas. We argue that, depending on the culture in which they exist, one can identify which model is more likely to reflect the survival of two competing ideas. Based on this argument we suggest how two strategies for embedding and normalizing quantitative literacy in a given institution are likely to succeed or fail.

  4. Cognitive Modeling of Video Game Player User Experience

    Science.gov (United States)

    Bohil, Corey J.; Biocca, Frank A.

    2010-01-01

    This paper argues for the use of cognitive modeling to gain a detailed and dynamic look into user experience during game play. Applying cognitive models to game play data can help researchers understand a player's attentional focus, memory status, learning state, and decision strategies (among other things) as these cognitive processes occurred throughout game play. This is in stark contrast to the common approach of trying to assess the long-term impact of games on cognitive functioning after game play has ended. We describe what cognitive models are, what they can be used for, and how game researchers could benefit by adopting these methods. We also provide details of a single model - based on decision field theory - that has been successfully applied to data sets from memory, perception, and decision making experiments, and has recently found application in real-world scenarios. We examine possibilities for applying this model to game-play data.

  5. Modeling, analysis and experiments for fusion nuclear technology

    International Nuclear Information System (INIS)

    Abdou, M.A.; Hadid, A.H.; Raffray, A.R.; Tillack, M.S.; Iizuka, T.

    1988-01-01

    Selected issues in the development of fusion nuclear technology (FNT) have been studied. These relate to (1) near-term experiments, modeling, and analysis for several key FNT issues, and (2) FNT testing in future fusion facilities. A key concern for solid breeder blankets is to reduce the number of candidate materials and configurations for advanced experiments to emphasize those with the highest potential. Based on technical analysis, recommendations have been developed for reducing the size of the test matrix and for focusing the testing program on important areas of emphasis. The characteristics of an advanced liquid metal MHD experiment have also been studied. This facility is required in addition to existing facilities in order to address critical uncertainties in MHD fluid flow and heat transfer. In addition to experiments, successful development of FNT will require models for interpreting experimental data, for planning experiments, and for use as a design tool for fusion components. Modeling of liquid metal fluid flows is a particular area of need in which substantial progress is expected, and initial efforts are reported here. Preliminary results on the modeling of tritium transport and inventory in solid breeders are also summarized. Finally, the thermo-mechanical behavior of liquid-metal-cooled limiters is analyzed and the parameter space for feasible designs is explored. Because of the renewed strong interest in a fusion engineering facility, a critical review and analysis of the important FNT testing requirements have been performed. Several areas have been emphasized due to their strong impact on the design and cost of the test facility. These include (1) the length of the plasma burn and the mode of operation (pulsed vs. steady-state), and (2) the need for a tritium-producing blanket and its impact on the availability of the device. (orig.)

  6. A Quantitative Study of Faculty Perceptions and Attitudes on Asynchronous Virtual Teamwork Using the Technology Acceptance Model

    Science.gov (United States)

    Wolusky, G. Anthony

    2016-01-01

    This quantitative study used a web-based questionnaire to assess the attitudes and perceptions of online and hybrid faculty towards student-centered asynchronous virtual teamwork (AVT) using the technology acceptance model (TAM) of Davis (1989). AVT is online student participation in a team approach to problem-solving culminating in a written…

  7. Quantitative Thermochemical Measurements in High-Pressure Gaseous Combustion

    Science.gov (United States)

    Kojima, Jun J.; Fischer, David G.

    2012-01-01

    We present our strategic experiments and thermochemical analyses of combustion flows using subframe burst gating (SBG) Raman spectroscopy. This unconventional laser diagnostic technique has a promising ability to enhance the accuracy of quantitative scalar measurements in a point-wise, single-shot fashion. In the presentation, we briefly describe an experimental methodology that generates a transferable calibration standard for the routine implementation of the diagnostics in hydrocarbon flames. The diagnostic technology was applied to simultaneous measurements of temperature and chemical species in a swirl-stabilized turbulent flame with gaseous methane fuel at elevated pressure (17 atm). Statistical analyses of the space-/time-resolved thermochemical data provide insights into the nature of the mixing process and its impact on the subsequent combustion process in the model combustor.

  8. The 'OMITRON' and 'MODEL OMITRON' proposed experiments

    International Nuclear Information System (INIS)

    Sestero, A.

    1997-12-01

    In the present paper the main features of the OMITRON and MODEL OMITRON proposed high-field tokamaks are illustrated. Of the two, OMITRON is an ambitious experiment aimed at attaining plasma burning conditions. Its key physics issues are discussed, and a comparison is carried out with the corresponding physics features of ignition experiments such as IGNITOR and ITER. The chief asset, and chief challenge, of both OMITRON and MODEL OMITRON is the conspicuous 20 tesla toroidal field on the plasma axis. The advanced engineering features that permit such toroidal-magnet performance are discussed in depth and detail. As for the small, propaedeutic device MODEL OMITRON, its goals include testing in vivo key engineering issues that are vital for the larger and more expensive parent device. Besides that, however, as indicated by scoping studies performed ad hoc, the smaller machine is found capable of a number of quite interesting physics investigations in its own right

  9. Model developments for quantitative estimates of the benefits of the signals on nuclear power plant availability and economics

    International Nuclear Information System (INIS)

    Seong, Poong Hyun

    1993-01-01

    A novel framework for quantitative estimates of the benefits of signals on nuclear power plant availability and economics has been developed in this work. The models developed in this work quantify how the perfect signals affect the human operator's success in restoring the power plant to the desired state when it enters undesirable transients. Also, the models quantify the economic benefits of these perfect signals. The models have been applied to the condensate feedwater system of the nuclear power plant for demonstration. (Author)

  10. Modeling a ponded infiltration experiment at Yucca Mountain, NV

    International Nuclear Information System (INIS)

    Hudson, D.B.; Guertal, W.R.; Flint, A.L.

    1994-01-01

    Yucca Mountain, Nevada is being evaluated as a potential site for a geologic repository for high-level radioactive waste. As part of the site characterization activities at Yucca Mountain, a field-scale ponded infiltration experiment was done to help characterize the hydraulic and infiltration properties of a layered desert alluvium deposit. Calcium carbonate accumulation and cementation, heterogeneous layered profiles, high evapotranspiration, low precipitation, and rocky soil make the surface difficult to characterize. The effects of the strong morphological horizonation on the infiltration processes, the suitability of measured hydraulic properties, and the usefulness of ponded infiltration experiments in site characterization work were of interest. One-dimensional and two-dimensional radial-flow numerical models were used to help interpret the results of the ponding experiment. The objective of this study was to evaluate the results of a ponded infiltration experiment done around borehole UE25 UZN #85 (N85) at Yucca Mountain, NV. The effects of morphological horizons on the infiltration processes, lateral flow, and measured soil hydraulic properties were studied. The evaluation was done by numerically modeling the results of a field ponded infiltration experiment. A comparison of the experimental results and the modeled results was used to indicate qualitatively the degree to which the infiltration processes and the hydraulic properties are understood. Results of the field characterization, soil characterization, borehole geophysics, and the ponding experiment are presented in a companion paper

  11. An experiment on a ball-lightning model

    International Nuclear Information System (INIS)

    Ignatovich, F.V.; Ignatovich, V.K.

    2010-01-01

    We discuss total internal reflection (TIR) from an interface between glass and a gaseous gain medium and propose an experiment on strong light amplification relevant to the investigation of a ball-lightning model

  12. Critical evaluation of the experiments and mathematical models for the determination of fission product release from the spherical fuel elements in cases of core heating accidents in modular HTR's

    International Nuclear Information System (INIS)

    Bailly, H.W.

    1987-01-01

    In this work, the thermal behaviour of modular reactors in cases of core heating accidents and the physical phenomena relevant to the release of radioactive materials from HTR fuel elements are explained as far as is necessary for understanding the work. The present mathematical models that describe the release of radioactive materials from HTR fuel elements, due to diffusion or to particle failure, in cases of core heating accidents are also described, examined and evaluated with regard to their applicability to modular reactors. The experiments used to verify the mathematical models are also evaluated. The mathematical models are in nearly all cases computer programs, which describe the complicated release process quantitatively in mathematical terms. One should point out that these models are constantly being developed further, in line with the increasing amount of knowledge. To conclude the work, proposals are made for improving the certainty of information from experiments and mathematical models used to determine the release behaviour of modular reactors. (orig./GL) [de

  13. Optimized slice-selective 1H NMR experiments combined with highly accurate quantitative 13C NMR using an internal reference method

    Science.gov (United States)

    Jézéquel, Tangi; Silvestre, Virginie; Dinis, Katy; Giraudeau, Patrick; Akoka, Serge

    2018-04-01

    Isotope ratio monitoring by 13C NMR spectrometry (irm-13C NMR) provides the complete 13C intramolecular position-specific composition at natural abundance. It represents a powerful tool to track the (bio)chemical pathway that has led to the synthesis of targeted molecules, since it allows Position-specific Isotope Analysis (PSIA). Because the composition range of 13C natural abundance values, i.e. the range of variation of the isotopic composition of a given nucleus, is very small (50‰), irm-13C NMR requires 1‰ accuracy and thus highly quantitative analysis by 13C NMR. Until now, the conventional strategy to determine the position-specific abundance x_i has relied on the combination of irm-MS (isotope ratio monitoring Mass Spectrometry) and quantitative 13C NMR. However, this approach presents a serious drawback since it relies on two different techniques and requires measuring separately the signals of all the carbons of the analyzed compound, which is not always possible. To circumvent this constraint, we recently proposed a new methodology to perform 13C isotopic analysis using an internal reference method and relying on NMR only. The method combines a highly quantitative 1H NMR pulse sequence (named DWET) with a 13C isotopic NMR measurement. However, the recently published DWET sequence is unsuited for samples with short T1, which forms a serious limitation for irm-13C NMR experiments where a relaxing agent is added. In this context, we suggest two variants of the DWET called Multi-WET and Profiled-WET, developed and optimized to reach the same accuracy of 1‰ with better immunity towards T1 variations. Their performance is evaluated on the determination of the 13C isotopic profile of vanillin. Both pulse sequences show a 1‰ accuracy with increased robustness to pulse miscalibrations compared to the initial DWET method. This constitutes a major advance in the context of irm-13C NMR since it is now possible to perform isotopic analysis with high
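
    The final reduction from quantitative 13C peak areas to per-mil values is plain arithmetic. A sketch under stated assumptions: invented peak areas, a global 13C abundance attributed to the internal reference, and the standard delta notation against VPDB rather than the authors' exact pipeline:

        import numpy as np

        R_VPDB = 0.0111802                 # 13C/12C ratio of the VPDB standard

        areas = np.array([1.003, 0.998, 0.995, 1.001, 1.002, 1.001])  # hypothetical peak areas
        x_global = 0.010756                # assumed global 13C abundance (internal reference)

        # Position-specific abundances: areas scaled so their mean matches the global value.
        x_i = x_global * areas / areas.mean()
        R_i = x_i / (1.0 - x_i)            # position-specific isotope ratios
        delta_i = (R_i / R_VPDB - 1.0) * 1000.0

        for k, d in enumerate(delta_i, start=1):
            print(f"C{k}: {d:+.1f} permil")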

  14. Modeling Hemispheric Detonation Experiments in 2-Dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Howard, W M; Fried, L E; Vitello, P A; Druce, R L; Phillips, D; Lee, R; Mudge, S; Roeske, F

    2006-06-22

    Experiments have been performed with LX-17 (92.5% TATB and 7.5% Kel-F 800 binder) to study scaling of detonation waves using a dimensional scaling in a hemispherical divergent geometry. We model these experiments using an arbitrary Lagrange-Eulerian (ALE3D) hydrodynamics code, with reactive flow models based on the thermo-chemical code Cheetah. Cheetah provides a pressure-dependent kinetic rate law, along with an equation of state based on exponential-6 fluid potentials for individual detonation product species, calibrated to high pressures (approximately a few Mbar) and high temperatures (20000 K). The parameters for these potentials are fit to a wide variety of experimental data, including shock, compression and sound speed data. For the un-reacted high explosive equation of state we use a modified Murnaghan form. We model the detonator (including the flyer plate) and initiation system in detail. The detonator is composed of LX-16, for which we use a program burn model. Steinberg-Guinan models are used for the metal components of the detonator. The booster and high explosive are LX-10 and LX-17, respectively. For both the LX-10 and LX-17, we use a pressure-dependent rate law, coupled with a chemical equilibrium equation of state based on Cheetah. For LX-17, the kinetic model includes carbon clustering on the nanometer size scale.
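
    The Murnaghan form mentioned for the unreacted explosive is a closed-form pressure-volume relation, P = (B0/B0')[(V0/V)^B0' - 1]. A sketch of the standard (unmodified) form with illustrative constants, not the calibrated LX-17 parameters:

        import numpy as np

        def murnaghan_pressure(v, v0=1.0, b0=13.0, b0p=7.0):
            """Murnaghan EOS in GPa; b0 is the bulk modulus, b0p its pressure
            derivative. Values here are placeholders, not fitted data."""
            return (b0 / b0p) * ((v0 / v) ** b0p - 1.0)

        for v in np.linspace(1.0, 0.7, 7):
            print(f"V/V0 = {v:.2f}   P = {murnaghan_pressure(v):7.2f} GPa")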

  15. Aespoe Pillar Stability Experiment. Summary of preparatory work and predictive modelling

    International Nuclear Information System (INIS)

    Andersson, J. Christer

    2004-11-01

    The Aespoe Pillar Stability Experiment, APSE, is a large-scale rock mechanics experiment for research on the spalling process and the possibility of modelling it numerically. The experiment can be summarized in three objectives: demonstrate the current capability to predict spalling in a fractured rock mass; demonstrate the effect of backfill (confining pressure) on the rock mass response; and compare 2D and 3D mechanical and thermal predictive capabilities. This report is a summary of the work performed in the experiment prior to the heating of the rock mass. The major activities that have been performed and are discussed herein are: 1) the geology of the experiment drift in general and of the experiment volume in particular; 2) the design process of the experiment and the thinking behind some of the important decisions; 3) the monitoring programme and the supporting constructions for the instruments; 4) the numerical modelling, the approaches taken and a summary of the predictions. At the end of the report there is a comparison of the results from the different models, including a comparison of the time needed for building, running and modifying the different models

  16. Studying learning in the healthcare setting: the potential of quantitative diary methods

    NARCIS (Netherlands)

    Ciere, Yvette; Jaarsma, Debbie; Visser, Annemieke; Sanderman, Robbert; Snippe, Evelien; Fleer, Joke

    2015-01-01

    Quantitative diary methods are longitudinal approaches that involve the repeated measurement of aspects of peoples’ experience of daily life. In this article, we outline the main characteristics and applications of quantitative diary methods and discuss how their use may further research in the

  17. The developments and verifications of trace model for IIST LOCA experiments

    Energy Technology Data Exchange (ETDEWEB)

    Zhuang, W. X. [Inst. of Nuclear Engineering and Science, National Tsing-Hua Univ., Taiwan, No. 101, Kuang-Fu Road, Hsinchu 30013, Taiwan (China); Wang, J. R.; Lin, H. T. [Inst. of Nuclear Energy Research, Taiwan, No. 1000, Wenhua Rd., Longtan Township, Taoyuan County 32546, Taiwan (China); Shih, C.; Huang, K. C. [Inst. of Nuclear Engineering and Science, National Tsing-Hua Univ., Taiwan, No. 101, Kuang-Fu Road, Hsinchu 30013, Taiwan (China); Dept. of Engineering and System Science, National Tsing-Hua Univ., Taiwan, No. 101, Kuang-Fu Road, Hsinchu 30013, Taiwan (China)

    2012-07-01

    The test facility IIST (INER Integral System Test) is a reduced-height, reduced-pressure (RHRP) integral test loop, which was constructed for the purpose of conducting thermal-hydraulic and safety analyses of Westinghouse three-loop PWR nuclear power plants. The main purpose of this study is to develop and verify TRACE models of IIST through the IIST small-break loss-of-coolant accident (SBLOCA) experiments. First, two different IIST TRACE models, a pipe-vessel model and a 3-D vessel component model, were built. The steady-state and transient calculation results show that both TRACE models are able to simulate the related IIST experiments. Compared with the IIST SBLOCA experimental data, the 3-D vessel component model showed better simulation capability, so it was chosen for all further thermal-hydraulic studies. The second step consisted of sensitivity studies of the two-phase and subcooled-liquid multipliers in the choked-flow model, and of two correlation constants in the CCFL model. As a result, an appropriate set of multipliers and constants could be determined. In summary, a verified IIST TRACE model with a 3-D vessel component and fine-tuned choked-flow and CCFL models has been established for further studies of IIST experiments in the future. (authors)

  18. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)]

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step in the development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.
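
    The core loop of such a method, ranking descriptors by mutual information while skipping or replacing candidates that are collinear with descriptors already kept, can be sketched in a few lines. This is a generic approximation of the idea on synthetic data with an assumed correlation threshold, not the published MIMRCV algorithm:

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 10))                     # synthetic descriptor matrix
        X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=100)    # a collinear descriptor
        y = X[:, 0] ** 2 + X[:, 5] + 0.1 * rng.normal(size=100)

        mi = mutual_info_regression(X, y, random_state=0)  # nonlinear relevance ranking
        order = np.argsort(mi)[::-1]

        selected, r_max = [], 0.95                         # assumed collinearity threshold
        for j in order:
            if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < r_max for k in selected):
                selected.append(j)
            if len(selected) == 4:
                break
        print("selected descriptors:", selected)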

  19. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step in the development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.

  20. Quantitative theory of driven nonlinear brain dynamics.

    Science.gov (United States)

    Roberts, J A; Robinson, P A

    2012-09-01

    Strong periodic stimuli such as bright flashing lights evoke nonlinear responses in the brain and interact nonlinearly with ongoing cortical activity, but the underlying mechanisms for these phenomena are poorly understood at present. The dominant features of these experimentally observed dynamics are reproduced by the dynamics of a quantitative neural field model subject to periodic drive. Model power spectra over a range of drive frequencies show agreement with multiple features of experimental measurements, exhibiting nonlinear effects including entrainment over a range of frequencies around the natural alpha frequency f(α), subharmonic entrainment near 2f(α), and harmonic generation. Further analysis of the driven dynamics as a function of the drive parameters reveals rich nonlinear dynamics that is predicted to be observable in future experiments at high drive amplitude, including period doubling, bistable phase-locking, hysteresis, wave mixing, and chaos indicated by positive Lyapunov exponents. Moreover, photosensitive seizures are predicted for physiologically realistic model parameters yielding bistability between healthy and seizure dynamics. These results demonstrate the applicability of neural field models to the new regime of periodically driven nonlinear dynamics, enabling interpretation of experimental data in terms of specific generating mechanisms and providing new tests of the theory. Copyright © 2012 Elsevier Inc. All rights reserved.
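
    The neural field model itself is a spatially extended system beyond an abstract's scope, but the signatures listed, entrainment, period doubling and chaos, can be illustrated with any periodically driven nonlinear oscillator by sampling the state once per drive period. A toy sketch with a driven Duffing oscillator and invented parameters:

        import numpy as np
        from scipy.integrate import solve_ivp

        gamma, alpha, beta, F, w = 0.2, -1.0, 1.0, 0.3, 1.2   # illustrative values

        def rhs(t, s):
            x, v = s
            return [v, -gamma * v - alpha * x - beta * x**3 + F * np.cos(w * t)]

        T = 2 * np.pi / w
        t_eval = np.arange(0, 400 * T, T)        # stroboscopic (Poincare) sampling
        sol = solve_ivp(rhs, (0, t_eval[-1]), [0.1, 0.0], t_eval=t_eval, rtol=1e-8)

        x_strobe = sol.y[0][200:]                # discard the transient
        n_states = len(np.unique(x_strobe.round(4)))
        print("distinct stroboscopic states:", n_states)
        # 1 = entrained, 2 = period-doubled, many = quasiperiodic or chaotic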

  1. Quantitative acid-base physiology using the Stewart model. Does it improve our understanding of what is really wrong?

    NARCIS (Netherlands)

    Derksen, R.; Scheffer, G.J.; Hoeven, J.G. van der

    2006-01-01

    Traditional theories of acid-base balance are based on the Henderson-Hasselbalch equation to calculate proton concentration. The recent revival of quantitative acid-base physiology using the Stewart model has increased our understanding of complicated acid-base disorders, but has also led to several

  2. Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships (MCBIOS)

    Science.gov (United States)

    Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships. Jie Liu, Richard Judson, Matthew T. Martin, Huixiao Hong, Imran Shah. National Center for Computational Toxicology (NCCT), US EPA, RTP, NC...

  3. Benchmarking the Sandbox: Quantitative Comparisons of Numerical and Analogue Models of Brittle Wedge Dynamics (Invited)

    Science.gov (United States)

    Buiter, S.; Schreurs, G.; Geomod2008 Team

    2010-12-01

    When numerical and analogue models are used to investigate the evolution of deformation processes in the crust and lithosphere, they face specific challenges related to, among other things, large contrasts in material properties, the heterogeneous character of continental lithosphere, the presence of a free surface, the occurrence of large deformations including viscous flow and offset on shear zones, and the observation that several deformation mechanisms may be active simultaneously. These pose specific demands on numerical software and laboratory models. By combining the two techniques, we can utilize the strengths of each individual method and test the model-independence of our results. We can perhaps even consider our findings to be more robust if we find similar-to-same results irrespective of the modeling method that was used. To assess the role of modeling method and to quantify the variability among models with identical setups, we have performed a direct comparison of the results of 11 numerical codes and 15 analogue experiments. We present three experiments that describe shortening of brittle wedges and that resemble setups frequently used especially by analogue modelers. Our first experiment translates a non-accreting wedge with a stable surface slope. In agreement with critical wedge theory, all models maintain their surface slope and do not show internal deformation. This experiment serves as a reference that allows for testing against analytical solutions for taper angle, root-mean-square velocity and gravitational rate of work. The next two experiments investigate an unstable wedge, which deforms by inward translation of a mobile wall. The models accommodate shortening by formation of forward and backward shear zones. We compare surface slope, rate of dissipation of energy, root-mean-square velocity, and the location, dip angle and spacing of shear zones. All models show similar cross-sectional evolutions that demonstrate reproducibility to first order. However

  4. Chemical kinetics and combustion modeling

    Energy Technology Data Exchange (ETDEWEB)

    Miller, J.A. [Sandia National Laboratories, Livermore, CA (United States)

    1993-12-01

    The goal of this program is to gain qualitative insight into how pollutants are formed in combustion systems and to develop quantitative mathematical models to predict their formation rates. The approach is an integrated one, combining low-pressure flame experiments, chemical kinetics modeling, theory, and kinetics experiments to gain as clear a picture as possible of the process in question. These efforts are focused on problems involved with the nitrogen chemistry of combustion systems and on the formation of soot and PAH in flames.
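
    The modeling half of such a program amounts to integrating mass-action rate equations with Arrhenius rate constants. A toy sketch for a two-step A -> B -> C mechanism with invented parameters, nothing like the real nitrogen or soot chemistry:

        import numpy as np
        from scipy.integrate import solve_ivp

        def arrhenius(a, ea, temp, r=8.314):
            return a * np.exp(-ea / (r * temp))       # rate constant k(T)

        def rhs(t, c, temp=1200.0):
            k1 = arrhenius(1.0e6, 8.0e4, temp)        # A -> B
            k2 = arrhenius(5.0e5, 9.0e4, temp)        # B -> C
            a, b, _ = c
            return [-k1 * a, k1 * a - k2 * b, k2 * b]

        sol = solve_ivp(rhs, (0.0, 0.05), [1.0, 0.0, 0.0], dense_output=True)
        print(sol.sol(0.05))   # concentrations of A, B, C after 50 ms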

  5. Dynamically Scaled Model Experiment of a Mooring Cable

    Directory of Open Access Journals (Sweden)

    Lars Bergdahl

    2016-01-01

    Full Text Available The dynamic response of mooring cables for marine structures is scale-dependent, and perfect dynamic similitude between full-scale prototypes and small-scale physical model tests is difficult to achieve. The best possible scaling is here sought by means of a specific set of dimensionless parameters, and the model accuracy is also evaluated by two alternative sets of dimensionless parameters. A special feature of the presented experiment is that a chain was scaled to have the correct propagation celerity for longitudinal elastic waves, thus providing perfect geometrical and dynamic scaling in vacuum, which is unique. The scaling error due to the incorrect Reynolds number seemed to be of minor importance. The 33 m experimental chain could then be considered a scaled 76 mm stud chain with a length of 1240 m, i.e., at the length scale of 1:37.6. Due to the correct elastic scale, the physical model was able to reproduce the effect of snatch loads giving rise to tensional shock waves propagating along the cable. The results from the experiment were used to validate the newly developed cable-dynamics code, MooDy, which utilises a discontinuous Galerkin FEM formulation. The validation of MooDy proved successful for the presented experiments. The experimental data are made available here for validation of other numerical codes by publishing digitised time series of two of the experiments.

  6. Transcriptome discovery in non-model wild fish species for the development of quantitative transcript abundance assays.

    Science.gov (United States)

    Hahn, Cassidy M; Iwanowicz, Luke R; Cornman, Robert S; Mazik, Patricia M; Blazer, Vicki S

    2016-12-01

    Environmental studies increasingly identify the presence of both contaminants of emerging concern (CECs) and legacy contaminants in aquatic environments; however, the biological effects of these compounds on resident fishes remain largely unknown. High throughput methodologies were employed to establish partial transcriptomes for three wild-caught, non-model fish species; smallmouth bass (Micropterus dolomieu), white sucker (Catostomus commersonii) and brown bullhead (Ameiurus nebulosus). Sequences from these transcriptome databases were utilized in the development of a custom nCounter CodeSet that allowed for direct multiplexed measurement of 50 transcript abundance endpoints in liver tissue. Sequence information was also utilized in the development of quantitative real-time PCR (qPCR) primers. Cross-species hybridization allowed the smallmouth bass nCounter CodeSet to be used for quantitative transcript abundance analysis of an additional non-model species, largemouth bass (Micropterus salmoides). We validated the nCounter analysis data system with qPCR for a subset of genes and confirmed concordant results. Changes in transcript abundance biomarkers between sexes and seasons were evaluated to provide baseline data on transcript modulation for each species of interest. Published by Elsevier Inc.

  7. Radon transport in fractured soil. Laboratory experiments and modelling

    International Nuclear Information System (INIS)

    Hoff, A.

    1997-10-01

    Radon (Rn-222) transport in fractured soil has been investigated by laboratory experiments and by modelling. Radon transport experiments have been performed with two sand columns (homogeneous and inhomogeneous) and one undisturbed clayey till column containing a net of preferential flow paths (root holes). A numerical model (the finite-element model FRACTRAN) and an analytic model (a pinhole model) have been applied in simulations of soil gas and radon transport in fractured soil. Experiments and model calculations are included in a discussion of radon entry rates into houses placed on fractured soil. The main conclusion is that fractures do not in general alter the transport of internally generated radon out of soil when the pressure and flow conditions in the soil are comparable to those prevailing under a house. This indicates the important result that fractures in soil have no impact on radon entry into a house beyond that of an increased gas permeability, but a more thorough investigation of this subject is needed. Only where the soil is exposed to large pressure gradients, relative to the gradients induced by a house, may it be possible to observe effects of radon exchange between fractures and matrix. (au) 52 tabs., 60 ill., 5 refs
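
    For the unfractured matrix, the underlying balance is a diffusion equation with generation and radioactive decay, dC/dt = D d2C/dz2 + G - lam*C. A minimal explicit finite-difference sketch of a 1-D soil column with invented coefficients (the cited FRACTRAN and pinhole models are far more complete):

        import numpy as np

        D, G, lam = 2.0e-6, 1.0e-2, 2.1e-6   # diffusivity m2/s, generation Bq/(m3 s), decay 1/s
        nz, dz, dt = 50, 0.02, 50.0          # 1 m column, explicit time step
        C = np.zeros(nz)                     # radon activity concentration, Bq/m3

        assert D * dt / dz**2 < 0.5          # explicit-scheme stability criterion
        for _ in range(20000):               # about 12 days of simulated time
            lap = np.zeros(nz)
            lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dz**2
            C += dt * (D * lap + G - lam * C)
            C[0] = 0.0                       # open soil surface
            C[-1] = C[-2]                    # no-flux bottom boundary

        print(f"deep-soil activity ~ {C[-1]:.0f} Bq/m3")   # approaches G/lam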

  8. Radon transport in fractured soil. Laboratory experiments and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Hoff, A

    1997-10-01

    Radon (Rn-222) transport in fractured soil has been investigated by laboratory experiments and by modelling. Radon transport experiments have been performed with two sand columns (homogeneous and inhomogeneous) and one undisturbed clayey till column containing a net of preferential flow paths (root holes). A numerical model (the finite-element model FRACTRAN) and an analytic model (a pinhole model) have been applied in simulations of soil gas and radon transport in fractured soil. Experiments and model calculations are included in a discussion of radon entry rates into houses placed on fractured soil. The main conclusion is that fractures do not in general alter the transport of internally generated radon out of soil when the pressure and flow conditions in the soil are comparable to those prevailing under a house. This indicates the important result that fractures in soil have no impact on radon entry into a house beyond that of an increased gas permeability, but a more thorough investigation of this subject is needed. Only where the soil is exposed to large pressure gradients, relative to the gradients induced by a house, may it be possible to observe effects of radon exchange between fractures and matrix. (au) 52 tabs., 60 ill., 5 refs.

  9. Thermal-hydraulic Experiments for Advanced Physical Model Development

    International Nuclear Information System (INIS)

    Song, Chul Hwa; Baek, W. P.; Yoon, B. J.

    2010-04-01

    The improvement of prediction models is needed to enhance safety analysis capability through fine measurements of local phenomena. To improve the two-phase interfacial area transport model, various experiments were carried out using SUBO and DOBO. 2x2 and 6x6 rod bundle test facilities were used for the experiments on droplet behavior. The experiments on droplet behavior inside a heated rod bundle focused on the break-up of droplets induced by a spacer grid in a rod bundle geometry. Experiments using GIRLS and JICO, together with CFD analyses, were carried out to understand the local condensation of a steam jet, the turbulent jet induced by condensation, and thermal mixing in a pool. An experimental database of CHF (Critical Heat Flux) and PDO (post-dryout) has been constructed. The mechanism of heat transfer enhancement by surface modifications in nano-fluids was investigated in boiling mode and in rapid quenching mode. Special measurement techniques were developed: a double-sensor optical void probe, an optic rod, a PIV technique and the UBIM system

  10. Model Experiments for the Determination of Airflow in Large Spaces

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    Model experiments are one of the methods used for the determination of airflow in large spaces. This paper will discuss the formation of the governing dimensionless numbers. It is shown that experiments with a reduced scale often will necessitate a fully developed turbulence level of the flow. Details of the flow from supply openings are very important for the determination of room air distribution. It is in some cases possible to make a simplified supply opening for the model experiment.

  11. Background modeling for the GERDA experiment

    Science.gov (United States)

    Becerici-Schmidt, N.; Gerda Collaboration

    2013-08-01

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  12. INTRAVAL Phase 2: Modeling testing at the Las Cruces Trench Site

    International Nuclear Information System (INIS)

    Hills, R.G.; Rockhold, M.; Xiang, J.; Scanlon, B.; Wittmeyer, G.

    1994-01-01

    Several field experiments have been performed by scientists from the University of Arizona and New Mexico State University at the Las Cruces Trench Site to provide data to test deterministic and stochastic models of water flow and solute transport. These experiments were performed in collaboration with INTRAVAL, an international effort toward validation of geosphere models for the transport of radionuclides. During Phase I of INTRAVAL, qualitative comparisons between experimental data and model predictions were made using contour plots of water contents and solute concentrations. Detailed quantitative comparisons were not made. To provide data for more rigorous model testing, a third Las Cruces Trench experiment was designed by scientists from the University of Arizona and New Mexico State University. Modelers from the Center for Nuclear Waste Regulatory Analysis, the Massachusetts Institute of Technology, New Mexico State University, Pacific Northwest Laboratory, and the University of Texas provided predictions of water flow and tritium transport to New Mexico State University for analysis. The corresponding models assumed soil characterizations ranging from uniform to deterministically heterogeneous to stochastic. This report presents detailed quantitative comparisons to the field data

  13. Drilling in tempered glass – modelling and experiments

    DEFF Research Database (Denmark)

    Nielsen, Jens Henrik

    The present paper reports experimentally and numerically obtained results for the process of drilling in tempered glass. The experimental results are drilling depths at the edge of 19 mm tempered glass with a known residual stress state measured by a scattered-light polariscope. The experiments have been modelled using a state-of-the-art model, and the model results compare satisfactorily with the experiments. The numerical model has been used for a parametric study investigating the redistribution of residual stresses during the process of drilling. This is done to investigate the possibility of applying forces in such holes and thereby being able to mechanically assemble tempered glass without the need to drill holes before the tempering process. The paper is the result of ongoing research and the results should be treated as such.

  14. Quantitative evaluation and modeling of two-dimensional neovascular network complexity: the surface fractal dimension

    International Nuclear Information System (INIS)

    Grizzi, Fabio; Russo, Carlo; Colombo, Piergiuseppe; Franceschini, Barbara; Frezza, Eldo E; Cobos, Everardo; Chiriva-Internati, Maurizio

    2005-01-01

    Modeling the complex development and growth of tumor angiogenesis using mathematics and biological data is a burgeoning area of cancer research. Architectural complexity is the main feature of every anatomical system, including organs, tissues, cells and sub-cellular entities. The vascular system is a complex network whose geometrical characteristics cannot be properly defined using the principles of Euclidean geometry, which is only capable of interpreting regular and smooth objects that are almost impossible to find in Nature. However, fractal geometry is a more powerful means of quantifying the spatial complexity of real objects. This paper introduces the surface fractal dimension (D_s) as a numerical index of the two-dimensional (2-D) geometrical complexity of tumor vascular networks, and their behavior during computer-simulated changes in vessel density and distribution. We show that D_s significantly depends on the number of vessels and their pattern of distribution. This demonstrates that the quantitative evaluation of the 2-D geometrical complexity of tumor vascular systems can be useful not only to measure its complex architecture, but also to model its development and growth. Studying the fractal properties of neovascularity induces reflections upon the real significance of the complex form of branched anatomical structures, in an attempt to define more appropriate methods of describing them quantitatively. This knowledge can be used to predict the aggressiveness of malignant tumors and design compounds that can halt the process of angiogenesis and influence tumor growth
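
    A common estimator for such a dimension from a binary image of the vascular network is box counting: cover the pattern with boxes of decreasing size and regress log(count) on log(1/size). A sketch on a toy pixel map; the paper computes D_s on real two-dimensional vessel sections, and its estimator may differ:

        import numpy as np

        def box_counting_dimension(mask):
            """Estimate the fractal dimension of a 2-D binary pattern."""
            n = mask.shape[0]
            sizes = [s for s in (1, 2, 4, 8, 16, 32) if s < n]
            counts = []
            for s in sizes:
                m = n - n % s                                 # crop to a multiple of s
                blocks = mask[:m, :m].reshape(m // s, s, m // s, s)
                counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
            return slope

        rng = np.random.default_rng(0)
        vessels = rng.random((128, 128)) < 0.1                # toy "vascular" pixel map
        print(f"D ~ {box_counting_dimension(vessels):.2f}")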

  15. Quantitative Outline-based Shape Analysis and Classification of Planetary Craterforms using Supervised Learning Models

    Science.gov (United States)

    Slezak, Thomas Joseph; Radebaugh, Jani; Christiansen, Eric

    2017-10-01

    The shapes of craterform morphology on planetary surfaces provide rich information about their origins and evolution. While morphologic information provides rich visual clues to geologic processes and properties, the ability to communicate this information quantitatively is less easily accomplished. This study examines the morphology of craterforms using the quantitative outline-based shape methods of geometric morphometrics, commonly used in biology and paleontology. We examine and compare landforms on planetary surfaces using shape, a property of morphology that is invariant to translation, rotation, and size. We quantify the shapes of paterae on Io, martian calderas, terrestrial basaltic shield calderas, terrestrial ash-flow calderas, and lunar impact craters using elliptic Fourier analysis (EFA) and the Zahn and Roskies (Z-R) shape function, or tangent-angle approach, to produce multivariate shape descriptors. These shape descriptors are subjected to multivariate statistical analysis including canonical variate analysis (CVA), a multiple-comparison variant of discriminant analysis, to investigate the link between craterform shape and classification. Paterae on Io are most similar in shape to terrestrial ash-flow calderas, and the shapes of terrestrial basaltic shield volcanoes are most similar to martian calderas. The shapes of lunar impact craters, including simple, transitional, and complex morphologies, are classified with a 100% rate of success in all models. Multiple CVA models effectively predict and classify different craterforms using shape-based identification and demonstrate significant potential for use in the analysis of planetary surfaces.
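
    Of the two outline descriptors named above, the Zahn and Roskies tangent-angle function is the simpler to sketch: it is the cumulative turning angle along the outline minus the linear trend a circle would produce. A toy version for an equally spaced closed outline; elliptic Fourier analysis and the CVA step are omitted:

        import numpy as np

        def zahn_roskies(x, y):
            """Z-R shape function: tangent-angle deviation from a circle,
            assuming points are equally spaced along the outline."""
            dx = np.diff(x, append=x[0])
            dy = np.diff(y, append=y[0])
            theta = np.unwrap(np.arctan2(dy, dx))      # tangent angle along the outline
            t = np.linspace(0.0, 2 * np.pi, len(theta), endpoint=False)
            return theta - theta[0] - t                # ~0 everywhere for a circle

        s = np.linspace(0, 2 * np.pi, 64, endpoint=False)
        phi = zahn_roskies(2.0 * np.cos(s), np.sin(s))  # a 2:1 ellipse
        print(phi.round(2)[:8])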

  16. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    Science.gov (United States)

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical models based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.
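
    Recast as a regular nonlinear program, each pathway edge becomes a parametric transfer function whose parameters are fitted by minimizing a squared-error objective. A toy single-edge sketch with an assumed Hill-type link and invented data; the paper fits entire signaling topologies, not one curve:

        import numpy as np
        from scipy.optimize import minimize

        stim = np.array([0.0, 0.1, 0.25, 0.5, 1.0])       # stimulus doses
        obs = np.array([0.02, 0.30, 0.55, 0.80, 0.95])    # invented phosphoprotein readout

        def hill(x, k, n):
            return x**n / (k**n + x**n)

        def loss(p):
            k, n = p
            return np.sum((hill(stim, k, n) - obs) ** 2)

        res = minimize(loss, x0=[0.3, 2.0], bounds=[(1e-3, 10.0), (0.5, 6.0)])
        print("fitted k, n:", res.x.round(3), "SSE:", round(res.fun, 5))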

  17. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    Directory of Open Access Journals (Sweden)

    Alexander Mitsos

    Full Text Available Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical models based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.

  18. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee

    2016-01-01

    Full Text Available Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives for achieving operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods to increase supply chain visibility are still ambiguous. Based on the extant research on supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using the Z score from Six Sigma methodology to evaluate and improve the level of supply chain visibility.
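
    In Six Sigma terms, a process capability Z score counts how many standard deviations the specification limit sits above the process mean, so a higher Z means a more predictable, and hence more visible, process. A toy sketch for one assumed visibility metric, order-status update latency, with invented numbers:

        from statistics import mean, stdev
        from scipy.stats import norm

        latencies = [3.1, 4.0, 2.7, 3.6, 4.4, 3.3, 3.9, 2.9, 3.5, 4.1]  # hours
        usl = 6.0                         # upper specification limit (assumed)

        mu, sigma = mean(latencies), stdev(latencies)
        z = (usl - mu) / sigma            # capability Z score
        print(f"Z = {z:.2f}, expected in-spec yield = {norm.cdf(z):.4f}")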

  19. A New Tube Gastrostomy Model in Animal Experiments

    Directory of Open Access Journals (Sweden)

    Atakan Sezer

    2013-01-01

    Full Text Available Aim: The orogastric route is the most preferred application method in the vast majority of animal experiments, in which administration can be achieved by adding the material to the animal's water, through an orogastric tube, or via a surgically created ostomy. Material and Method: The experiment was conducted with twelve male Sprague-Dawley rats randomly assigned to one of two groups: a control group (group C, n: 6) and a tube gastrostomy group (group TG, n: 6). A novel and simple gastrostomy tube was derived from a silicone Foley catheter (6 French). In group TG an incision was performed and the stomach was visualized. A 1 cm midline incision was made, opening the peritoneum. Anchoring sutures were placed and the anterior gastric wall was lifted. The gastric wall was then opened. The apparatus was placed into the stomach, pulled through a tunnel under the skin, and fixed to the lateral abdominal wall with a 2/0 silk suture. Result: The procedure ended on the 10th day of the experiment. No mortality was observed in group C. The rats were monitored daily, and no abnormal behavior such as self-harming at the incision site, resistance to oral intake, or attempts to displace the tube was observed. There was a statistically significant increase in alanine transaminase level (p<0.05) and a decrease in total protein and body weight (p<0.05) in group TG at the end of the experiment. There was a significant increase in urea levels in group C (p<0.05) at the end of the experiment. A statistically significant decrease was observed over the same period in group C in aspartate transaminase, albumin, total protein, and body weight (p<0.05). Glucose (p=0.047) and aspartate transaminase (p=0.050) level changes and weight loss (p=0.034) from the preoperative period to the end of the experiment between the gastrostomy and laparotomy groups were

  20. Creation of a simplified benchmark model for the neptunium sphere experiment

    International Nuclear Information System (INIS)

    Mosteller, Russell D.; Loaiza, David J.; Sanchez, Rene G.

    2004-01-01

    Although neptunium is produced in significant amounts by nuclear power reactors, its critical mass is not well known. In addition, sizeable uncertainties exist for its cross sections. As an important step toward resolution of these issues, a critical experiment was conducted in 2002 at the Los Alamos Critical Experiments Facility. In the experiment, a 6-kg sphere of ²³⁷Np was surrounded by nested hemispherical shells of highly enriched uranium. The shells were required in order to reach a critical condition. Subsequently, a detailed model of the experiment was developed. This model faithfully reproduces the components of the experiment, but it is geometrically complex. Furthermore, the isotopics analysis upon which that model is based omits nearly 1% of the mass of the sphere. A simplified benchmark model has been constructed that retains all of the neutronically important aspects of the detailed model and substantially reduces the computer resources required for the calculation. The reactivity impact of each of the simplifications is quantified, including the effect of the missing mass. A complete set of specifications for the benchmark is included in the full paper. Both the detailed and simplified benchmark models underpredict k_eff by more than 1% Δk. This discrepancy supports the suspicion that better cross sections are needed for ²³⁷Np.
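
    The reactivity impact of a simplification can be expressed with the standard definition rho = (k_eff - 1)/k_eff, so that the difference between two models is delta-rho = (k2 - k1)/(k1*k2). A minimal sketch with hypothetical k_eff values (the benchmark's actual values are in the full paper):

```python
# Sketch: quantifying the reactivity impact of a model simplification.
def reactivity(k_eff):
    return (k_eff - 1.0) / k_eff

k_detailed, k_simplified = 0.9985, 0.9978   # hypothetical k_eff values
delta_rho = reactivity(k_simplified) - reactivity(k_detailed)
print(f"reactivity impact: {delta_rho * 1e5:.1f} pcm")
```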

  1. Bridging experiments, models and simulations

    DEFF Research Database (Denmark)

    Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca

    2012-01-01

    Computational models in physiology often integrate functional and structural information from a large range of spatiotemporal scales from the ionic to the whole organ level. Their sophistication raises both expectations and skepticism concerning how computational methods can improve our...... understanding of living organisms and also how they can reduce, replace, and refine animal experiments. A fundamental requirement to fulfill these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present...... that contributes to defining the specific aspects of cardiac electrophysiology the MSE system targets, rather than being only an external test, and that this is driven by advances in experimental and computational methods and the combination of both....

  2. Quantitative MRI in refractory temporal lobe epilepsy: relationship with surgical outcomes

    Science.gov (United States)

    Bonilha, Leonardo

    2015-01-01

    Medically intractable temporal lobe epilepsy (TLE) remains a serious health problem. Across treatment centers, up to 40% of patients with TLE will continue to experience persistent postoperative seizures at 2-year follow-up. It is unknown why such a large number of patients continue to experience seizures despite being suitable candidates for resective surgery. Preoperative quantitative MRI techniques may provide useful information on why some patients continue to experience disabling seizures, and may have the potential to yield prognostic markers of surgical outcome. In this article, we provide an overview of how quantitative MRI morphometric and diffusion tensor imaging (DTI) data have improved the understanding of brain structural alterations in patients with refractory TLE. We subsequently review the studies that have applied quantitative structural imaging techniques to identify the neuroanatomical factors that are most strongly related to a poor postoperative prognosis. In summary, quantitative imaging studies strongly suggest that TLE is a disorder affecting a network of neurobiological systems, characterized by multiple and inter-related limbic and extra-limbic network abnormalities. The relationship between brain alterations and postoperative outcome is less consistent, but there is emerging evidence suggesting that seizures are less likely to remit with surgery when presurgical abnormalities are observed in the connectivity of brain regions serving as network nodes located outside the resected temporal lobe. Future work, possibly harnessing the potential of multimodal imaging approaches, may further elucidate the etiology of persistent postoperative seizures in patients with refractory TLE. Furthermore, quantitative imaging techniques may be explored to provide individualized measures of postoperative seizure freedom. PMID:25853080

  3. Experiment planning using high-level component models at W7-X

    International Nuclear Information System (INIS)

    Lewerentz, Marc; Spring, Anett; Bluhm, Torsten; Heimann, Peter; Hennig, Christine; Kühner, Georg; Kroiss, Hugo; Krom, Johannes G.; Laqua, Heike; Maier, Josef; Riemann, Heike; Schacht, Jörg; Werner, Andreas; Zilker, Manfred

    2012-01-01

    Highlights: ► Introduction of models for an abstract description of fusion experiments. ► Component models support creating feasible experiment programs at planning time. ► Component models contain knowledge about physical and technical constraints. ► Generated views on models allow crucial information to be presented. - Abstract: The superconducting stellarator Wendelstein 7-X (W7-X) is a fusion device capable of steady state operation, and a very complex technical system. To cope with these requirements a modular and strongly hierarchical component-based control and data acquisition system has been designed. The behavior of W7-X is characterized by thousands of technical parameters of the participating components. The intended sequential change of those parameters during an experiment is defined in an experiment program. Planning such an experiment program is a crucial and complex task. To reduce the complexity an abstract, more physics-oriented high-level layer has been introduced earlier. The so-called high-level (physics) parameters are used to encapsulate technical details. This contribution focuses on the extension of this layer to a high-level component model, which completely describes the behavior of a component for a certain period of time. It allows defining not only simple value ranges but also complex dependencies between physics parameters: dependencies within components, dependencies between components, or temporal dependencies. Component models can now be analyzed to generate various views of an experiment. A first implementation of such an analysis process is already complete. A graphical preview of a planned discharge can be generated from a chronological sequence of component models. This allows physicists to survey complex planned experiment programs at a glance.

  4. First experiments results about the engineering model of Rapsodie

    International Nuclear Information System (INIS)

    Chalot, A.; Ginier, R.; Sauvage, M.

    1964-01-01

    This report deals with the first series of experiments carried out on the engineering model of Rapsodie and on an associated sodium facility set up in a laboratory hall at Cadarache. More precisely, it covers: 1/ the difficulties encountered during the erection and assembly of the engineering model, and a compilation of the results of the first series of experiments and tests carried out on this installation (loading of the subassemblies, preheating, thermal shocks...); 2/ the experiments and tests carried out on the two prototype control rod drive mechanisms, which led to the choice of the design for the definitive drive mechanism. As a whole, the results proved the validity of the general design principles adopted for Rapsodie. (authors) [fr

  5. Quantitative sample preparation of some heavy elements

    International Nuclear Information System (INIS)

    Jaffey, A.H.

    1977-01-01

    A discussion is given of some techniques that have been useful in quantitatively preparing and analyzing samples used in the half-life determinations of some plutonium and uranium isotopes. Application of these methods to the preparation of uranium and plutonium samples used in neutron experiments is discussed

  6. Data assimilation and model evaluation experiment datasets

    Science.gov (United States)

    Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.

    1994-01-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need for data in the four phases of the experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggested uses of the DAMEE data include (1) ocean modeling and data assimilation studies, (2) diagnosis and theoretical studies, and (3) comparisons with locally detailed observations.

  7. [Quantitative models between canopy hyperspectrum and its component features at apple tree prosperous fruit stage].

    Science.gov (United States)

    Wang, Ling; Zhao, Geng-xing; Zhu, Xi-cun; Lei, Tong; Dong, Fang

    2010-10-01

    Hyperspectral technique has become the basis of quantitative remote sensing. The hyperspectrum of an apple tree canopy at the prosperous fruit stage carries the combined information of fruits, leaves, stocks, soil and reflecting films, and is mostly affected by the component features of the canopy at this stage. First, the hyperspectra of 18 sample apple trees with reflecting films were compared with those of 44 trees without reflecting films. The impact of the reflecting films on reflectance was obvious, so sample trees with ground reflecting films were analyzed separately from those without. Secondly, nine indexes of canopy components were built based on classified digital photos of the 44 apple trees without ground films. Thirdly, the correlation between the nine indexes and canopy reflectance, including several kinds of converted data, was analyzed. The results showed that reflectance correlated best with the fruit-to-leaf ratio, with a maximum coefficient of 0.815, and that its correlation with the leaf ratio was slightly better than with fruit density. Models based on correlation analysis, linear regression, BP neural network and support vector regression were then built to describe the quantitative relationship between hyperspectral reflectance and the fruit-to-leaf ratio, using the DPS and LIBSVM software. All four models in the 611-680 nm characteristic band proved feasible for prediction, while the accuracy of the BP neural network and support vector regression models was better than that of one-variable and multi-variable linear regression, and the support vector regression model was the most accurate. This study can serve as a reliable theoretical reference for apple yield estimation based on remote sensing data.
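
    As an illustration of the best-performing model type reported above (support vector regression on band reflectance), here is a minimal sketch with synthetic data; it is not the authors' DPS/LIBSVM workflow, and all values are made up.

```python
# Sketch: SVR regression of fruit-to-leaf ratio on 611-680 nm reflectance.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
band_reflectance = rng.uniform(0.05, 0.35, size=(44, 1))  # 44 sample trees
fruit_leaf_ratio = 2.0 * band_reflectance[:, 0] + rng.normal(0, 0.02, 44)

model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(band_reflectance, fruit_leaf_ratio)
print(model.predict([[0.20]]))  # predicted ratio for a new canopy spectrum
```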

  8. Exploring alternative models for sex-linked quantitative trait loci in outbred populations: application to an Iberian x Landrace pig intercross.

    OpenAIRE

    Pérez-Enciso, Miguel; Clop, Alex; Folch, Josep M; Sánchez, Armand; Oliver, Maria A; Ovilo, Cristina; Barragán, C; Varona, Luis; Noguera, José L

    2002-01-01

    We present a very flexible method that allows us to analyze X-linked quantitative trait loci (QTL) in crosses between outbred lines. The dosage compensation phenomenon is modeled explicitly in an identity-by-descent approach. A variety of models can be fitted, ranging from considering alternative fixed alleles within the founder breeds to a model where the only genetic variation is within breeds, as well as mixed models. Different genetic variances within each founder breed can be estimated. ...

  9. Argonne Bubble Experiment Thermal Model Development III

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-11

    This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development” and “Argonne Bubble Experiment Thermal Model Development II”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at beam power levels between 6 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was recorded. The previous report described the Monte-Carlo N-Particle (MCNP) calculations and Computational Fluid Dynamics (CFD) analysis performed on the as-built solution vessel geometry. The CFD simulations in the current analysis were performed using Ansys Fluent, Ver. 17.2. The same power profiles determined from MCNP calculations in earlier work were used for the 12 and 15 kW simulations. The primary goal of the current work is to calculate the temperature profiles for the 12 and 15 kW cases using reasonable estimates for the gas generation rate, based on images of the bubbles recorded during the irradiations. Temperature profiles resulting from the CFD calculations are compared to experimental measurements.

  10. The quantitative imaging network: the role of quantitative imaging in radiation therapy

    International Nuclear Information System (INIS)

    Tandon, Pushpa; Nordstrom, Robert J.; Clark, Laurence

    2014-01-01

    The potential value of modern medical imaging methods has created a need for mechanisms to develop, translate and disseminate emerging imaging technologies and, ideally, to quantitatively correlate those with other related laboratory methods, such as the genomics and proteomics analyses required to support clinical decisions. One strategy to meet these needs efficiently and cost effectively is to develop an international network to share and reach consensus on best practices, imaging protocols, common databases, and open science strategies, and to collaboratively seek opportunities to leverage resources wherever possible. One such network is the Quantitative Imaging Network (QIN) started by the National Cancer Institute, USA. The mission of the QIN is to improve the role of quantitative imaging for clinical decision making in oncology by the development and validation of data acquisition, analysis methods, and other quantitative imaging tools to predict or monitor the response to drug or radiation therapy. The network currently has 24 teams (two from Canada and 22 from the USA) and several associate members, including one from Tata Memorial Centre, Mumbai, India. Each QIN team collects data from ongoing clinical trials and develops software tools for quantitation and validation to create standards for imaging research, and for use in developing models for therapy response prediction and measurement and tools for clinical decision making. The members of QIN are addressing a wide variety of cancer problems (head and neck, prostate, breast, brain, lung, liver, colon) using multiple imaging modalities (PET, CT, MRI, FMISO PET, DW-MRI, PET-CT). (author)

  11. Computationally efficient and quantitatively accurate multiscale simulation of solid-solution strengthening by ab initio calculation

    International Nuclear Information System (INIS)

    Ma, Duancheng; Friák, Martin; Pezold, Johann von; Raabe, Dierk; Neugebauer, Jörg

    2015-01-01

    We propose an approach for the computationally efficient and quantitatively accurate prediction of solid-solution strengthening. It combines the 2-D Peierls–Nabarro model and a recently developed solid-solution strengthening model. Solid-solution strengthening is examined with Al–Mg and Al–Li as representative alloy systems, demonstrating a good agreement between theory and experiments within the temperature range in which the dislocation motion is overdamped. Through a parametric study, two guideline maps of the misfit parameters against (i) the critical resolved shear stress, τ_0, at 0 K and (ii) the energy barrier, ΔE_b, against dislocation motion in a solid solution with randomly distributed solute atoms are created. With these two guideline maps, τ_0 at finite temperatures is predicted for other Al binary systems, and compared with available experiments, achieving good agreement.

  12. Background modeling for the GERDA experiment

    Energy Technology Data Exchange (ETDEWEB)

    Becerici-Schmidt, N. [Max-Planck-Institut für Physik, München (Germany); Collaboration: GERDA Collaboration

    2013-08-08

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model, such as the expected background and its decomposition in the signal region. According to the model, the main background contributions around Q_ββ come from ²¹⁴Bi, ²²⁸Th, ⁴²K, ⁶⁰Co and α-emitting isotopes in the ²²⁶Ra decay chain, with fractions depending on the assumed source positions.

  13. Quantitative shear wave ultrasound elastography: initial experience in solid breast masses

    OpenAIRE

    Evans, Andrew; Whelehan, Patsy; Thomson, Kim; McLean, Denis; Brauer, Katrin; Purdie, Colin; Jordan, Lee; Baker, Lee; Thompson, Alastair

    2010-01-01

    Introduction Shear wave elastography is a new method of obtaining quantitative tissue elasticity data during breast ultrasound examinations. The aims of this study were (1) to determine the reproducibility of shear wave elastography (2) to correlate the elasticity values of a series of solid breast masses with histological findings and (3) to compare shear wave elastography with greyscale ultrasound for benign/malignant classification. Methods Using the Aixplorer® ultrasound system (SuperSoni...

  14. How predictive quantitative modelling of tissue organisation can inform liver disease pathogenesis.

    Science.gov (United States)

    Drasdo, Dirk; Hoehme, Stefan; Hengstler, Jan G

    2014-10-01

    From the more than 100 liver diseases described, many of those with high incidence rates manifest themselves by histopathological changes, such as hepatitis, alcoholic liver disease, fatty liver disease, fibrosis, and, in its later stages, cirrhosis, hepatocellular carcinoma, primary biliary cirrhosis and other disorders. Studies of disease pathogeneses are largely based on integrating -omics data pooled from cells at different locations with spatial information from stained liver structures in animal models. Even though this has led to significant insights, the complexity of interactions as well as the involvement of processes at many different time and length scales constrains the possibility to condense disease processes in illustrations, schemes and tables. The combination of modern imaging modalities with image processing and analysis, and mathematical models opens up a promising new approach towards a quantitative understanding of pathologies and of disease processes. This strategy is discussed for two examples, ammonia metabolism after drug-induced acute liver damage, and the recovery of liver mass as well as architecture during the subsequent regeneration process. This interdisciplinary approach permits integration of biological mechanisms and models of processes contributing to disease progression at various scales into mathematical models. These can be used to perform in silico simulations that help unravel the relation between architecture and function, as illustrated below for liver regeneration, and to bridge from the in vitro situation and animal models to humans. In the near future novel mechanisms will usually not be directly elucidated by modelling. However, models will falsify hypotheses and guide towards the most informative experimental design. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.

  15. Quantitative Structure-Retention Relationships (QSRR) for Chromatographic Separation of Disazo and Trisazo 4,4'-Diaminobenzanilide-based Dyes

    OpenAIRE

    Funar-Timofei, Simona; Fabian, Walter M. F.; Simu, Georgeta M.; Suzuki, Takahiro

    2006-01-01

    For a series of 23 disazo and trisazo 4,4'-diaminobenzanilide-based direct dye molecules, the chromatographic mobilities, extrapolated to modifier-free conditions (RM0 values), were determined from reverse-phase thin-layer chromatography (RP-TLC) experiments. Traditional and rational QSAR/QSPR modelling techniques have been applied to find a quantitative structure-retention relationship (QSRR) for the dyes. Molecular dye structures were energy minimized by both molecular mechanics and quantum c...

  16. Preclinical Magnetic Resonance Fingerprinting (MRF) at 7 T: Effective Quantitative Imaging for Rodent Disease Models

    Science.gov (United States)

    Gao, Ying; Chen, Yong; Ma, Dan; Jiang, Yun; Herrmann, Kelsey A.; Vincent, Jason A.; Dell, Katherine M.; Drumm, Mitchell L.; Brady-Kalnay, Susann M.; Griswold, Mark A.; Flask, Chris A.; Lu, Lan

    2015-01-01

    High field, preclinical magnetic resonance imaging (MRI) scanners are now commonly used to quantitatively assess disease status and efficacy of novel therapies in a wide variety of rodent models. Unfortunately, conventional MRI methods are highly susceptible to respiratory and cardiac motion artifacts resulting in potentially inaccurate and misleading data. We have developed an initial preclinical, 7.0 T MRI implementation of the highly novel Magnetic Resonance Fingerprinting (MRF) methodology that has been previously described for clinical imaging applications. The MRF technology combines a priori variation in the MRI acquisition parameters with dictionary-based matching of acquired signal evolution profiles to simultaneously generate quantitative maps of T1 and T2 relaxation times and proton density. This preclinical MRF acquisition was constructed from a Fast Imaging with Steady-state Free Precession (FISP) MRI pulse sequence to acquire 600 MRF images with both evolving T1 and T2 weighting in approximately 30 minutes. This initial high field preclinical MRF investigation demonstrated reproducible and differentiated estimates of in vitro phantoms with different relaxation times. In vivo preclinical MRF results in mouse kidneys and brain tumor models demonstrated an inherent resistance to respiratory motion artifacts as well as sensitivity to known pathology. These results suggest that MRF methodology may offer the opportunity for quantification of numerous MRI parameters for a wide variety of preclinical imaging applications. PMID:25639694
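
    The dictionary-matching step at the core of MRF can be sketched in a few lines: each measured signal evolution is matched, by normalised inner product, to the closest pre-simulated dictionary entry, whose (T1, T2) pair is then reported. The signal model below is only a placeholder standing in for a full FISP Bloch simulation, and the (T1, T2) grid is illustrative.

```python
# Sketch: MRF-style dictionary matching on synthetic signal evolutions.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 600
entries = [(800, 60), (1200, 90), (1600, 120)]   # candidate (T1, T2) in ms

def simulate(t1, t2, n):
    # Placeholder signal model standing in for a FISP Bloch simulation.
    t = np.arange(n, dtype=float)
    return np.exp(-t / t2) * (1 - np.exp(-t / t1))

dictionary = np.array([simulate(t1, t2, n_timepoints) for t1, t2 in entries])
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

measured = simulate(1200, 90, n_timepoints) + rng.normal(0, 0.01, n_timepoints)
best = np.argmax(dictionary @ (measured / np.linalg.norm(measured)))
print("matched (T1, T2):", entries[best])
```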

  17. Identifying User Experience Goals for Interactive Climate Management Business Systems

    DEFF Research Database (Denmark)

    Clemmensen, Torkil; Barlow, Stephanie

    2013-01-01

    This paper presents findings from interpretative phenomenological interviews about the user experience of interactive climate management with six growers and crop consultants. The focus of user experience research has been on quantitative studies of consumers' initial usage experiences, for example...... of mobile phones or e-commerce websites. In contrast, this empirical paper provides an example of how to capture user experience in work contexts and with a qualitative methodology. We present a model of the essence of the emotional user experience of interactive climate management. Then we suggest...... of interactive climate management in this and other domains. The overall aim of the paper is to take the concept of user experience into the IS community and to describe and understand individual workers' positive emotional use experiences when interacting with workplace systems....

  18. The Context-Dependency of the Experience of Auditory Succession and Prospects for Embodying Philosophical Models of Temporal Experience

    OpenAIRE

    Maria Kon

    2015-01-01

    Recent philosophical work on temporal experience offers generic models that are often assumed to apply to all sensory modalities. I show that the models serve as broad frameworks in which different aspects of cognitive science can be slotted and, thus, are beneficial to furthering research programs in embodied music cognition. Here I discuss a particular feature of temporal experience that plays a key role in such philosophical work: a distinction between the experience of succession and the ...

  19. [A new method of processing quantitative PCR data].

    Science.gov (United States)

    Ke, Bing-Shen; Li, Guang-Yun; Chen, Shi-Min; Huang, Xiang-Yan; Chen, Ying-Jian; Xu, Jun

    2003-05-01

    Today, standard PCR can no longer satisfy the needs of biotechnology development and clinical research. After numerous studies of PCR dynamics, PE Corporation found a linear relation between the initial template number and the cycle at which the accumulating fluorescent product becomes detectable. On this basis they developed a quantitative PCR technique for the PE7700 and PE5700. However, the error of this technique is too great to satisfy the needs of biotechnology development and clinical research, and a better quantitative PCR technique is needed. The mathematical model submitted here draws on the relevant science and is based on the PCR principle and a careful analysis of the molecular relationships among the main members of the PCR reaction system. The model describes the functional relation between product quantity (or fluorescence intensity) and the initial template number and other reaction conditions, and can accurately reflect the accumulation of PCR product molecules. Accurate quantitative PCR analysis can be performed using this functional relation, and the accumulated PCR product quantity can be obtained from the initial template number. Using this model for quantitative PCR analysis, the error of the result is related only to the accuracy of the fluorescence intensity measurement, i.e., of the instrument used. For example, when the fluorescence intensity is accurate to 6 digits and the template size is between 100 and 1,000,000, the accuracy of the quantitative result will be more than 99%. The error differs markedly when the same conditions and the same instrument are used but the analysis method differs; using this quantitative analysis system to process the data yields results about 80 times more accurate than the CT method.
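
    The amplification arithmetic such models build on is standard: after c cycles, N_c = N_0 (1 + E)^c, where E is the amplification efficiency. A minimal sketch (not the paper's model, whose details are richer) of recovering the initial template number from a signal proportional to N_c, with hypothetical values:

```python
# Sketch: inverting the exponential amplification relation of qPCR.
def initial_template(n_c, cycles, efficiency=0.95):
    """Invert N_c = N_0 * (1 + E)^c for the initial template number N_0."""
    return n_c / (1.0 + efficiency) ** cycles

# Hypothetical detection: signal equivalent to 1.2e9 molecules at cycle 28.
print(f"N0 ~ {initial_template(1.2e9, 28):.3g} molecules")
```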

  20. Physically based dynamic run-out modelling for quantitative debris flow risk assessment: a case study in Tresenda, northern Italy

    Czech Academy of Sciences Publication Activity Database

    Quan Luna, B.; Blahůt, Jan; Camera, C.; Van Westen, C.; Apuani, T.; Jetten, V.; Sterlacchini, S.

    2014-01-01

    Roč. 72, č. 3 (2014), s. 645-661 ISSN 1866-6280 Institutional support: RVO:67985891 Keywords : debris flow * FLO-2D * run-out * quantitative hazard and risk assessment * vulnerability * numerical modelling Subject RIV: DB - Geology ; Mineralogy Impact factor: 1.765, year: 2014

  1. Aspects of quantitative secondary ion mass spectrometry

    International Nuclear Information System (INIS)

    Grauer, R.

    1982-05-01

    Parameters which influence the formation of secondary ions during ion bombardment of a solid matrix are discussed. Quantitative SIMS analysis with the help of calibration standards necessitates stringent control of these parameters. This is particularly true for the oxygen partial pressure, which for metal analysis has to be kept constant even under ultra-high vacuum. The performance of the theoretical LTE (Local Thermal Equilibrium) model using internal standards is compared with analysis using external standards. The LTE model does not satisfy the requirements for quantitative analysis. (Auth.)

  2. Quantitative aspects of the cytochemical demonstration of glucose-6-phosphate dehydrogenase with tetranitro BT studied in a model system of polyacrylamide films

    NARCIS (Netherlands)

    van Noorden, C. J.; Tas, J.

    1980-01-01

    The cytochemical determination of the activity of glucose-6-phosphate dehydrogenase (G6PDH) with tetranitro blue tetrazolium (TNBT) was studied with model films of polyacrylamide gel incorporating purified enzyme. This model system enabled a quantitative study to be made of different parameters

  3. TU-G-303-00: Radiomics: Advances in the Use of Quantitative Imaging Used for Predictive Modeling

    International Nuclear Information System (INIS)

    2015-01-01

    ‘Radiomics’ refers to studies that extract a large amount of quantitative information from medical imaging studies as a basis for characterizing a specific aspect of patient health. Radiomics models can be built to address a wide range of outcome predictions, clinical decisions, basic cancer biology, etc. For example, radiomics models can be built to predict the aggressiveness of an imaged cancer, cancer gene expression characteristics (radiogenomics), radiation therapy treatment response, etc. Technically, radiomics brings together quantitative imaging, computer vision/image processing, and machine learning. In this symposium, speakers will discuss approaches to radiomics investigations, including: longitudinal radiomics, radiomics combined with other biomarkers (‘pan-omics’), radiomics for various imaging modalities (CT, MRI, and PET), and the use of registered multi-modality imaging datasets as a basis for radiomics. There are many challenges to the eventual use of radiomics-derived methods in clinical practice, including: standardization and robustness of selected metrics, accruing the data required, building and validating the resulting models, registering longitudinal data that often involve significant patient changes, reliable automated cancer segmentation tools, etc. Despite the hurdles, results achieved so far indicate the tremendous potential of this general approach to quantifying and using data from medical images. Specific applications of radiomics to be presented in this symposium will include: the longitudinal analysis of patients with low-grade gliomas; automatic detection and assessment of patients with metastatic bone lesions; image-based monitoring of patients with growing lymph nodes; predicting radiotherapy outcomes using multi-modality radiomics; and studies relating radiomics with genomics in lung cancer and glioblastoma. Learning Objectives: Understanding the basic image features that are often used in radiomic models. Understanding
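
    As a minimal sketch of the first step in any radiomics pipeline, simple first-order quantitative features can be extracted from a segmented region of interest; the array below is a synthetic stand-in for real image data, and the feature set is illustrative only.

```python
# Sketch: first-order radiomic features from a synthetic image region.
import numpy as np

rng = np.random.default_rng(5)
roi = rng.normal(loc=100.0, scale=15.0, size=(32, 32))   # voxel intensities

hist, _ = np.histogram(roi, bins=32)
p = hist[hist > 0] / hist.sum()                           # bin probabilities
features = {
    "mean": roi.mean(),
    "std": roi.std(),
    "skewness": ((roi - roi.mean())**3).mean() / roi.std()**3,
    "entropy": float(-(p * np.log2(p)).sum()),
}
print(features)
```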

  4. TU-G-303-00: Radiomics: Advances in the Use of Quantitative Imaging Used for Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2015-06-15

    ‘Radiomics’ refers to studies that extract a large amount of quantitative information from medical imaging studies as a basis for characterizing a specific aspect of patient health. Radiomics models can be built to address a wide range of outcome predictions, clinical decisions, basic cancer biology, etc. For example, radiomics models can be built to predict the aggressiveness of an imaged cancer, cancer gene expression characteristics (radiogenomics), radiation therapy treatment response, etc. Technically, radiomics brings together quantitative imaging, computer vision/image processing, and machine learning. In this symposium, speakers will discuss approaches to radiomics investigations, including: longitudinal radiomics, radiomics combined with other biomarkers (‘pan-omics’), radiomics for various imaging modalities (CT, MRI, and PET), and the use of registered multi-modality imaging datasets as a basis for radiomics. There are many challenges to the eventual use of radiomics-derived methods in clinical practice, including: standardization and robustness of selected metrics, accruing the data required, building and validating the resulting models, registering longitudinal data that often involve significant patient changes, reliable automated cancer segmentation tools, etc. Despite the hurdles, results achieved so far indicate the tremendous potential of this general approach to quantifying and using data from medical images. Specific applications of radiomics to be presented in this symposium will include: the longitudinal analysis of patients with low-grade gliomas; automatic detection and assessment of patients with metastatic bone lesions; image-based monitoring of patients with growing lymph nodes; predicting radiotherapy outcomes using multi-modality radiomics; and studies relating radiomics with genomics in lung cancer and glioblastoma. Learning Objectives: Understanding the basic image features that are often used in radiomic models. Understanding

  5. Integrated Physics-based Modeling and Experiments for Improved Prediction of Combustion Dynamics in Low-Emission Systems

    Science.gov (United States)

    Anderson, William E.; Lucht, Robert P.; Mongia, Hukam

    2015-01-01

    Concurrent simulation and experiment were undertaken to assess the ability of a hybrid RANS-LES model to predict combustion dynamics in a single-element lean direct-inject (LDI) combustor exhibiting self-excited instabilities. High frequency pressure modes produced by Fourier and modal decomposition analysis were compared quantitatively, and trends with equivalence ratio and inlet temperature were compared qualitatively. High frequency OH PLIF and PIV measurements were also taken. Submodels for chemical kinetics and for primary and secondary atomization were also tested against the measured behavior. For a point-wise comparison, the amplitudes matched within a factor of two, and the dependence on equivalence ratio was matched. Preliminary results from a simulation using an 18-reaction kinetics model indicated instability amplitudes closer to measurement. Analysis of the simulations suggested that a band of modes around 1400 Hz was due to a vortex bubble breakdown and a band of modes around 6 kHz was due to a precessing vortex core hydrodynamic instability. The primary needs are directly coupled and validated ab initio models of the atomizer free surface flow and the primary atomization processes, and more detailed study of the coupling between the 3D swirling flow and the local thermoacoustics in the diverging venturi section.

  6. First experiences with model-based iterative reconstruction's influence on quantitative plaque volume and intensity measurements in coronary computed tomography angiography

    International Nuclear Information System (INIS)

    Precht, H.; Kitslaar, P.H.; Broersen, A.; Gerke, O.; Dijkstra, J.; Thygesen, J.; Egstrup, K.; Lambrechtsen, J.

    2017-01-01

    Purpose: To investigate the influence of adaptive statistical iterative reconstruction (ASIR) and the model-based iterative reconstruction algorithm (Veo) on quantitative measurements of plaque volumes and intensities in coronary computed tomography angiography (CCTA) images. Methods: Three patients each had three independent dose-reduced CCTA scans performed and reconstructed with 30% ASIR (CTDIvol at 6.7 mGy), 60% ASIR (CTDIvol at 4.3 mGy) and Veo (CTDIvol at 1.9 mGy). Coronary plaque analysis was performed for each CCTA, measuring volumes, plaque burden and intensities. Results: Plaque volume and plaque burden show a decreasing tendency from ASIR to Veo: the median volume is 314 mm³ and 337 mm³ for ASIR versus 252 mm³ for Veo, and the plaque burden is 42% and 44% for ASIR versus 39% for Veo. The lumen and vessel volumes decrease slightly from 30% ASIR to 60% ASIR, from 498 mm³ to 391 mm³ for lumen volume and from 939 mm³ to 830 mm³ for vessel volume. The intensities did not change overall between the different reconstructions for either lumen or plaque. Conclusion: We found a tendency of decreasing plaque volumes and plaque burden but no change in intensities with the use of low-dose Veo CCTA (1.9 mGy) compared to dose-reduced ASIR CCTA (6.7 mGy and 4.3 mGy), although more studies are warranted. - Highlights: • Veo decreases plaque volumes and plaque burden in low-dose CCTA. • Moving from 30% ASIR and 60% ASIR to Veo did not appear to influence the plaque intensities. • Studies with larger sample sizes are needed to investigate the effect on plaque.

  7. Bayesian model calibration of ramp compression experiments on Z

    Science.gov (United States)

    Brown, Justin; Hund, Lauren

    2017-06-01

    Bayesian model calibration (BMC) is a statistical framework to estimate inputs for a computational model in the presence of multiple uncertainties, making it well suited to dynamic experiments which must be coupled with numerical simulations to interpret the results. Often, dynamic experiments are diagnosed using velocimetry and this output can be modeled using a hydrocode. Several calibration issues unique to this type of scenario including the functional nature of the output, uncertainty of nuisance parameters within the simulation, and model discrepancy identifiability are addressed, and a novel BMC process is proposed. As a proof of concept, we examine experiments conducted on Sandia National Laboratories' Z-machine which ramp compressed tantalum to peak stresses of 250 GPa. The proposed BMC framework is used to calibrate the cold curve of Ta (with uncertainty), and we conclude that the procedure results in simple, fast, and valid inferences. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
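
    A minimal sketch of the BMC idea on a toy problem (a one-parameter stand-in "simulator", a Gaussian likelihood, a flat prior, and a grid posterior); the actual study couples a hydrocode to velocimetry data and handles nuisance parameters and model discrepancy, none of which is reproduced here.

```python
# Sketch: grid-based Bayesian calibration of one model input.
import numpy as np

def simulator(theta, x):
    return theta * x**2          # stand-in for the computational model

x_obs = np.linspace(0.1, 1.0, 10)
y_obs = 2.0 * x_obs**2 + np.random.default_rng(2).normal(0, 0.05, 10)

theta_grid = np.linspace(0.5, 4.0, 400)
log_post = np.array([-0.5 * np.sum((y_obs - simulator(t, x_obs))**2) / 0.05**2
                     for t in theta_grid])            # flat prior assumed
post = np.exp(log_post - log_post.max())
post /= post.sum()
print(f"posterior mean of theta: {(theta_grid * post).sum():.3f}")
```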

  8. Spectroscopic Tools for Quantitative Studies of DNA Structure and Dynamics

    DEFF Research Database (Denmark)

    Preus, Søren

    The main objective of this thesis is to develop quantitative fluorescence-based, spectroscopic tools for probing the 3D structure and dynamics of DNA and RNA. The thesis is founded on six peer-reviewed papers covering mainly the development, characterization and use of fluorescent nucleobase...... analogues. In addition, four software packages are presented for the simulation and quantitative analysis of time-resolved and steady-state UV-Vis absorption and fluorescence experiments....

  9. Thermal experiments in the ADS target model

    International Nuclear Information System (INIS)

    Efanov, A.D.; Orlov, Yu.I.; Sorokin, A.P.; Ivanov, E.F.; Bogoslovskaya, G.P.; Li, N.

    2002-01-01

    Experiments on the development of the target heat model and of a method for investigating heat exchange in the target were conducted with the aim of analyzing the thermomechanical and strength characteristics of the device; experimental data on the temperature distribution in the coolant and the membrane were obtained. The data demonstrate that the temperature heterogeneity of the membrane and coolant is connected with the variability of the temperature distribution near the membrane. Peculiarities of the experiment are noted: the maximal temperature oscillations occur at the high point of the membrane, and the most energetic temperature oscillations lie in the range 0 - 1 Hz [ru

  10. Quantitative analysis of prediction models for hot cracking in ...

    Indian Academy of Sciences (India)

    A Rodríguez-Prieto

    2017-11-16

    ... enhancing safety margins and adding greater precision to quantitative accident prediction [45]. One deterministic methodology is the stringency level (SL) approach, which is recognized as a valuable decision tool in the selection of standardized materials specifications to prevent potential failures [3].

  11. PARAMETRIC MODELING, CREATIVITY, AND DESIGN: TWO EXPERIENCES WITH ARCHITECTURE STUDENTS

    Directory of Open Access Journals (Sweden)

    Wilson Florio

    2012-02-01

    Full Text Available The aim of this article is to reflect on the use of parametric modeling in two didactic experiences. The first experiment involved resources of the Paracloud program and its relation with the Rhinoceros program, and resulted in the production of physical models with the aid of laser cutting. In the second experiment, the students produced algorithms in Grasshopper, resulting in families of structures and coverings. The study objects are both the physical models and the digital algorithms resulting from this experimentation. For the analysis and synthesis of the results, we adopted four important assumptions: 1. the value of attitudes and the working environment; 2. the importance of experimentation and improvisation; 3. the understanding of the design process as a situated act and as an ill-defined problem; 4. the inclusion of creative and critical thought in the disciplines. The results allow us to affirm that parametric modeling stimulates creativity, since it allows the combination of different parameters that result in unexpected discoveries. Keywords: Teaching-Learning, Parametric Modeling, Laser Cutter, Grasshopper, Design Process, Creativity.

  12. Quantitative study of quasi-one-dimensional Bose gas experiments via the stochastic Gross-Pitaevskii equation

    International Nuclear Information System (INIS)

    Cockburn, S. P.; Gallucci, D.; Proukakis, N. P.

    2011-01-01

    The stochastic Gross-Pitaevskii equation is shown to be an excellent model for quasi-one-dimensional Bose gas experiments, accurately reproducing the in situ density profiles recently obtained in the experiments of Trebbia et al.[Phys. Rev. Lett. 97, 250403 (2006)] and van Amerongen et al.[Phys. Rev. Lett. 100, 090402 (2008)] and the density fluctuation data reported by Armijo et al.[Phys. Rev. Lett. 105, 230402 (2010)]. To facilitate such agreement, we propose and implement a quasi-one-dimensional extension to the one-dimensional stochastic Gross-Pitaevskii equation for the low-energy, axial modes, while atoms in excited transverse modes are treated as independent ideal Bose gases.

  13. A synchrotron-based local computed tomography combined with data-constrained modelling approach for quantitative analysis of anthracite coal microstructure

    International Nuclear Information System (INIS)

    Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng

    2014-01-01

    A quantitative local computed tomography method combined with data-constrained modelling has been developed. The method distinctly improves the spatial resolution and the composition resolution in a sample larger than the field of view, for quantitative characterization of three-dimensional distributions of material compositions and voids. Quantifying three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales, and where such compositions have similar X-ray absorption properties, such as in coal. Consequently, obtaining detailed information within sub-regions of a multi-length-scale sample by conventional approaches may not provide the resolution and level of detail one might desire. Herein, an approach for quantitative high-definition determination of material compositions from X-ray local computed tomography combined with a data-constrained modelling method is proposed. The approach is capable of dramatically improving the spatial resolution, enabling finer details to be revealed within a region of interest of a sample larger than the field of view than can be achieved with conventional techniques. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach. The optimal experimental parameters are pre-analyzed. The quantitative results demonstrated that the approach can reveal significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and understanding the transformation of the minerals during coal processing. The method is generic and can be applied for three-dimensional compositional characterization of other materials

  14. Confessions of a Quantitative Educational Researcher Trying to Teach Qualitative Research.

    Science.gov (United States)

    Stallings, William M.

    1995-01-01

    Describes one quantitative educational researcher's experiences teaching qualitative research, the approach used in classes, and the successes and failures. These experiences are examined from the viewpoint of a traditionally trained professor who has now been called upon to master and teach qualitative research. (GR)

  15. Modified Ashworth Scale (MAS) Model based on Clinical Data Measurement towards Quantitative Evaluation of Upper Limb Spasticity

    Science.gov (United States)

    Puzi, A. Ahmad; Sidek, S. N.; Mat Rosly, H.; Daud, N.; Yusof, H. Md

    2017-11-01

    Spasticity is a common symptom presented among people with sensorimotor disabilities. Imbalanced signals from the central nervous system (CNS), which is composed of the brain and spinal cord, to the muscles ultimately lead to the injury and death of motor neurons. In clinical practice, the therapist assesses muscle spasticity using a standard assessment tool such as the Modified Ashworth Scale (MAS), Modified Tardieu Scale (MTS) or Fugl-Meyer Assessment (FMA). This is done subjectively, based on the experience and perception of the therapist and subject to the patient's fatigue level and body posture. Inconsistency in the assessment is therefore prevalent and could affect the efficacy of the rehabilitation process. The aim of this paper is to describe the methodology of data collection and the quantitative model of MAS developed to satisfy its description. Two subjects with MAS spasticity levels of 2 and 3 were involved in the clinical data measurement; their level of spasticity was verified by an expert therapist using current practice. Data collection was established using a mechanical system equipped with a data acquisition system and LABVIEW software. The procedure involved repeated flexion of the affected arm, which was moved against the platform using a lever mechanism operated by the therapist. The data were then analyzed to investigate the characteristics of the spasticity signal in correspondence with the MAS description. Experimental results revealed that the methodology used to quantify spasticity satisfied the MAS tool requirement according to its description. The result is therefore crucial and useful towards the development of a formal spasticity quantification model.
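
    A minimal sketch of the kind of feature extraction such a measurement system supports, using synthetic angle and torque signals in place of the recorded clinical data; the features chosen (peak resistance, the angle of the "catch", mean resistance) are illustrative, not the paper's model.

```python
# Sketch: resistance features from a synthetic passive-flexion trial.
import numpy as np

rng = np.random.default_rng(3)
angle = np.linspace(0, 120, 500)                       # elbow angle, degrees
torque = 0.02 * angle + 1.5 * np.exp(-((angle - 70)**2) / 50) \
         + rng.normal(0, 0.05, 500)                    # resistance, N*m

catch_index = np.argmax(torque)                        # the "catch" peak
features = {
    "peak_torque": torque.max(),
    "catch_angle_deg": angle[catch_index],
    "mean_resistance": torque.mean(),
}
print(features)
```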

  16. BBN based Quantitative Assessment of Software Design Specification

    International Nuclear Information System (INIS)

    Eom, Heung-Seop; Park, Gee-Yong; Kang, Hyun-Gook; Kwon, Kee-Choon; Chang, Seung-Cheol

    2007-01-01

    Probabilistic Safety Assessment (PSA), which is one of the important methods for assessing the overall safety of a nuclear power plant (NPP), requires quantitative reliability information for safety-critical software, but conventional reliability assessment methods cannot provide enough information for the PSA of an NPP. Therefore, current PSA which includes safety-critical software usually either does not consider the reliability of the software or uses arbitrary values for it. To address this situation, this paper proposes a method that can produce quantitative reliability information for safety-critical software for PSA by making use of Bayesian Belief Networks (BBN). BBN has generally been used to model uncertain systems in many research fields, including the safety assessment of software. The proposed method was constructed by utilizing BBN, which can combine the qualitative and the quantitative evidence relevant to the reliability of safety-critical software. The constructed BBN model can infer a conclusion in a formal and quantitative way. A case study was carried out with the proposed method to assess the quality of the software design specification (SDS) of safety-critical software that will be embedded in a reactor protection system. The intermediate V and V results of the software design specification were used as inputs to the BBN model
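
    A minimal sketch of the kind of inference a BBN supports, with two binary evidence nodes (review findings, test coverage) feeding a binary quality node; all probabilities below are illustrative, not taken from the paper.

```python
# Sketch: posterior over SDS quality by direct enumeration.
p_good = 0.7                                     # prior P(quality = good)
p_review = {True: 0.9, False: 0.3}               # P(clean review | quality)
p_cover = {True: 0.8, False: 0.4}                # P(high coverage | quality)

def posterior_good(clean_review, high_coverage):
    def lik(good):
        pr = p_review[good] if clean_review else 1 - p_review[good]
        pc = p_cover[good] if high_coverage else 1 - p_cover[good]
        return pr * pc * (p_good if good else 1 - p_good)
    return lik(True) / (lik(True) + lik(False))

print(f"P(good SDS | clean review, high coverage) = "
      f"{posterior_good(True, True):.3f}")
```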

  17. A Quantitative, Time-Dependent Model of Oxygen Isotopes in the Solar Nebula: Step one

    Science.gov (United States)

    Nuth, J. A.; Paquette, J. A.; Farquhar, A.; Johnson, N. M.

    2011-01-01

    The remarkable discovery that oxygen isotopes in primitive meteorites were fractionated along a line of slope 1 rather than along the typical slope-0.52 terrestrial fractionation line occurred almost 40 years ago. However, a satisfactory, quantitative explanation for this observation has yet to be found, though many different explanations have been proposed. The first of these proposed that the observed line represented the final product produced by mixing molecular cloud dust with a nucleosynthetic component, rich in O-16, possibly resulting from a nearby supernova explosion. Donald Clayton suggested that Galactic Chemical Evolution would gradually change the oxygen isotopic composition of the interstellar grain population by steadily producing O-16 in supernovae, then producing the heavier isotopes as secondary products in lower mass stars. Thiemens and collaborators proposed a chemical mechanism that relied on the availability of additional active rotational and vibrational states in otherwise-symmetric molecules, such as CO2, O3 or SiO2, containing two different oxygen isotopes, and a second, photochemical process in which differential photochemical dissociation could fractionate oxygen. This second line of research has been pursued by several groups, though none of the current models is quantitative.
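
    The three-isotope arithmetic behind the observation is compact: mass-dependent fractionation predicts δ17O ≈ 0.52 δ18O, so the deviation Δ17O = δ17O - 0.52 δ18O is near zero for terrestrial samples but nonzero for material on the slope-1 line. A minimal sketch with illustrative values:

```python
# Sketch: the mass-independent fractionation signature Delta-17O.
def cap_delta_17O(delta17, delta18):
    return delta17 - 0.52 * delta18

print(cap_delta_17O(delta17=2.6, delta18=5.0))      # terrestrial-like: ~0
print(cap_delta_17O(delta17=-40.0, delta18=-40.0))  # slope-1 material: < 0
```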

  18. Deployment of e-health services - a business model engineering strategy.

    Science.gov (United States)

    Kijl, Björn; Nieuwenhuis, Lambert J M; Huis in 't Veld, Rianne M H A; Hermens, Hermie J; Vollenbroek-Hutten, Miriam M R

    2010-01-01

    We designed a business model for deploying a myofeedback-based teletreatment service. An iterative and combined qualitative and quantitative action design approach was used for developing the business model and the related value network. Insights from surveys, desk research, expert interviews, workshops and quantitative modelling were combined to produce the first business model and then to refine it in three design cycles. The business model engineering strategy provided important insights which led to an improved, more viable and feasible business model and related value network design. Based on this experience, we conclude that the process of early stage business model engineering reduces risk and produces substantial savings in costs and resources related to service deployment.

  19. Josephson cross-sectional model experiment

    International Nuclear Information System (INIS)

    Ketchen, M.B.; Herrell, D.J.; Anderson, C.J.

    1985-01-01

    This paper describes the electrical design and evaluation of the Josephson cross-sectional model (CSM) experiment. The experiment served as a test vehicle to verify the operation at liquid-helium temperatures of Josephson circuits integrated in a package environment suitable for high-performance digital applications. The CSM consisted of four circuit chips assembled on two cards in a three-dimensional card-on-board package. The chips (package) were fabricated in a 2.5-μm (5-μm) minimum linewidth Pb-alloy technology. A hierarchy of solder and pluggable connectors was used to attach the parts together and to provide electrical interconnections between parts. A data path which simulated a jump control sequence and a cache access in each machine cycle was successfully operated with cycle times down to 3.7 ns. The CSM incorporated the key components of the logic, power, and package of a prototype Josephson signal processor and demonstrated the feasibility of making such a processor with a sub-4-ns cycle time

  20. Quantitative radiography

    International Nuclear Information System (INIS)

    Brase, J.M.; Martz, H.E.; Waltjen, K.E.; Hurd, R.L.; Wieting, M.G.

    1986-01-01

    Radiographic techniques have been used in nondestructive evaluation primarily to develop qualitative information (i.e., defect detection). This project applies and extends the techniques developed in medical x-ray imaging, particularly computed tomography (CT), to develop quantitative information (both spatial dimensions and material quantities) on the three-dimensional (3D) structure of solids. Accomplishments in FY 86 include (1) improvements in experimental equipment - an improved microfocus system that will give 20-μm resolution and has potential for increased imaging speed, and (2) development of a simple new technique for displaying 3D images so as to clearly show the structure of the object. Image reconstruction and data analysis for a series of synchrotron CT experiments conducted by LLNL's Chemistry Department has begun

  1. Validation of Quantitative Structure-Activity Relationship (QSAR) Model for Photosensitizer Activity Prediction

    Directory of Open Access Journals (Sweden)

    Sharifuddin M. Zain

    2011-11-01

    Photodynamic therapy is a relatively new treatment method for cancer which utilizes a combination of oxygen, a photosensitizer and light to generate reactive singlet oxygen that eradicates tumors via direct cell-killing, vasculature damage and engagement of the immune system. Most of the photosensitizers that are in clinical and pre-clinical assessment, or those that are already approved for clinical use, are mainly based on cyclic tetrapyrroles. In an attempt to discover new effective photosensitizers, we report the use of the quantitative structure-activity relationship (QSAR) method to develop a model that could correlate the structural features of cyclic tetrapyrrole-based compounds with their photodynamic therapy (PDT) activity. In this study, a set of 36 porphyrin derivatives was used in the model development, with 24 of these compounds in the training set and the remaining 12 in the test set. The QSAR model was developed using multiple linear regression analysis (MLRA). Based on this method, an r2 value of 0.87, a cross-validated r2 (CV) of 0.71 and an r2 prediction of 0.70 were obtained. The QSAR model was also employed to predict the experimental compounds in an external test set comprising 20 porphyrin-based compounds with experimental IC50 values ranging from 0.39 µM to 7.04 µM. The model showed good correlative and predictive ability, with a predictive correlation coefficient (r2 prediction) of 0.52 for the external test set, and was used to identify some compounds from this external test set as new lead photosensitizers.
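
    The MLRA step described above amounts to an ordinary least-squares fit of activity against molecular descriptors, followed by evaluation on held-out compounds. A minimal sketch follows; the descriptor matrix and activity values are hypothetical placeholders, not the paper's porphyrin data set.

```python
# Minimal QSAR-by-MLRA sketch (hypothetical descriptors/activities,
# not the paper's 36-compound porphyrin data).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(24, 4))                  # training descriptors (24 compounds)
y = X @ np.array([0.8, -0.5, 0.3, 0.1]) + rng.normal(scale=0.1, size=24)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print("training r2:", r2(y, A @ coef))
```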

  2. Means-End based Functional Modeling for Intelligent Control: Modeling and Experiments with an Industrial Heat Pump System

    DEFF Research Database (Denmark)

    Saleem, Arshad

    2007-01-01

    The purpose of this paper is to present a Multilevel Flow Model (MFM) of an industrial heat pump system and its use for diagnostic reasoning. MFM is a functional modeling language supporting an explicit means-ends intelligent control strategy for large industrial process plants. The model is used in several diagnostic experiments analyzing different fault scenarios. The model and the results of the experiments are explained, and it is shown how MFM-based intelligent modeling and automated reasoning can significantly improve the fault diagnosis process.

  3. Improvement of the ID model for quantitative network data

    DEFF Research Database (Denmark)

    Sørensen, Peter Borgen; Damgaard, Christian Frølund; Dupont, Yoko Luise

    2015-01-01

    Many interactions are often poorly registered or even unobserved in empirical quantitative networks. Hence, the output of the statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks. Such artefacts are addressed by the ID method1. This presentation will illustrate the application of the ID method based on a data set which consists of counts of visits by 152 pollinator species to 16 plant species. The method is based on two definitions of the underlying probabilities for each combination of pollinator and plant species, and is designed to reproduce the high number of zero-valued cells in the data set and to mimic the sampling distribution. 1 Sørensen et al., Journal of Pollination Ecology, 6(18), 2011, pp. 129-139

  4. Quantitative transmission electron microscopy at atomic resolution

    International Nuclear Information System (INIS)

    Allen, L J; D'Alfonso, A J; Forbes, B D; Findlay, S D; LeBeau, J M; Stemmer, S

    2012-01-01

    In scanning transmission electron microscopy (STEM) it is possible to operate the microscope in bright-field mode under conditions which, by the quantum mechanical principle of reciprocity, are equivalent to those in conventional transmission electron microscopy (CTEM). The results of such an experiment, which are in excellent quantitative agreement with theory for specimens up to 25 nm thick, will be presented. This is at variance with the large contrast mismatch (typically a factor of between two and five) noted in equivalent CTEM experiments. The implications of this will be discussed.

  5. INPUT DATA OF BURNING WOOD FOR CFD MODELLING USING SMALL-SCALE EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Petr Hejtmánek

    2017-12-01

    The paper presents an option for acquiring simplified input data for the modelling of burning wood in CFD programmes. The option lies in combining data from small-scale and molecular-scale experiments in order to describe the material with a single-reaction material property. Such a virtual material would spread fire and develop the fire according to the surrounding environment, and it could be extinguished, without using a complex molecular reaction description. A series of experiments including elemental analysis, thermogravimetric analysis, differential thermal analysis and combustion analysis was performed. An FDS model of burning pine wood in a cone calorimeter was then built using those values, and validated against the HRR (Heat Release Rate) from the real cone calorimeter experiment. The results show that for the purpose of CFD modelling the effective heat of combustion, which is one of the basic material properties for fire modelling affecting the total intensity of burning, should be used; using the net heat of combustion in the model leads to higher HRR values in comparison to the real experiment data. Considering all the results shown in this paper, it was shown that it is possible to simulate the burning of wood using the extrapolated data obtained in small-scale experiments.
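
    The quantity singled out above, the effective heat of combustion, is routinely estimated from cone calorimeter data as the time-integrated heat release divided by the total mass lost. A minimal sketch under that standard definition follows; the measurement arrays are hypothetical, not the paper's pine-wood data.

```python
# Minimal sketch: effective heat of combustion from cone calorimeter data.
# t in s, hrr in kW, mass in g; all arrays here are hypothetical.
import numpy as np

t = np.linspace(0.0, 600.0, 601)                     # time, s
hrr = 5.0 * np.exp(-((t - 120.0) / 90.0) ** 2)       # heat release rate, kW
mass = np.linspace(100.0, 40.0, t.size)              # specimen mass, g

q_total = np.trapz(hrr, t)          # total heat released, kJ
dm = mass[0] - mass[-1]             # total mass lost, g
dh_eff = q_total / dm               # effective heat of combustion, kJ/g (= MJ/kg)
print(f"effective heat of combustion: {dh_eff:.1f} MJ/kg")
```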

  6. Comparison of longitudinal excursion of a nerve-phantom model using quantitative ultrasound imaging and motion analysis system methods: A convergent validity study.

    Science.gov (United States)

    Paquette, Philippe; El Khamlichi, Youssef; Lamontagne, Martin; Higgins, Johanne; Gagnon, Dany H

    2017-08-01

    Quantitative ultrasound imaging is gaining popularity in research and clinical settings to measure the neuromechanical properties of the peripheral nerves, such as their capability to glide in response to body segment movement. Increasing evidence suggests that impaired median nerve longitudinal excursion is associated with carpal tunnel syndrome. To date, psychometric properties of longitudinal nerve excursion measurements using quantitative ultrasound imaging have not been extensively investigated. This study investigates the convergent validity of the longitudinal nerve excursion by comparing measures obtained using quantitative ultrasound imaging with those determined with a motion analysis system. A 38-cm long rigid nerve-phantom model was used to assess the longitudinal excursion in a laboratory environment. The nerve-phantom model, immersed in a 20-cm deep container filled with a gelatin-based solution, was moved 20 times using a linear forward and backward motion. Three light-emitting diodes were used to record nerve-phantom excursion with a motion analysis system, while a 5-cm linear transducer allowed simultaneous recording via ultrasound imaging. Both measurement techniques yielded excellent association (r = 0.99) and agreement (mean absolute difference between methods = 0.85 mm; mean relative difference between methods = 7.48%). Small discrepancies were largely found when larger excursions (i.e. > 10 mm) were performed, revealing slight underestimation of the excursion by the ultrasound imaging analysis software. Quantitative ultrasound imaging is an accurate method to assess the longitudinal excursion of an in vitro nerve-phantom model and appears relevant for future research protocols investigating the neuromechanical properties of the peripheral nerves.
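
    The association and agreement figures reported above correspond to a Pearson correlation plus mean absolute and mean relative differences between the two methods. A minimal sketch of those computations on hypothetical paired excursion measurements follows.

```python
# Minimal sketch: convergent-validity statistics for paired excursion
# measurements (mm) from two methods; the data below are hypothetical.
import numpy as np

us = np.array([4.1, 6.0, 8.2, 10.1, 11.8, 14.2])      # ultrasound imaging
mocap = np.array([4.3, 6.2, 8.5, 10.6, 12.5, 15.1])   # motion analysis system

r = np.corrcoef(us, mocap)[0, 1]                   # association
mad = np.mean(np.abs(us - mocap))                  # mean absolute difference
mrd = np.mean(np.abs(us - mocap) / mocap) * 100    # mean relative difference, %

print(f"r = {r:.2f}, MAD = {mad:.2f} mm, MRD = {mrd:.1f} %")
```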

  7. Quantitative Analysis of Intra Urban Growth Modeling using socio economic agents by combining cellular automata model with agent based model

    Science.gov (United States)

    Singh, V. K.; Jha, A. K.; Gupta, K.; Srivastav, S. K.

    2017-12-01

    Recent studies indicate that there is a significant improvement in urban land use dynamics through modeling at finer spatial resolutions. Geo-computational models such as cellular automata and agent based models have provided evidence for quantifying the urban growth pattern within the urban boundary. In recent studies, socio-economic factors such as demography, education rate, household density, parcel price of the current year, and distance to roads, schools, hospitals, commercial centers and police stations are considered the major factors influencing the Land Use Land Cover (LULC) pattern of a city. These factors have a unidirectional approach to the land use pattern, which makes it difficult to analyze the spatial aspects of model results both quantitatively and qualitatively. In this study, a cellular automata model is combined with an agent based model to evaluate the impact of socio-economic factors on the land use pattern. For this purpose, Dehradun, an Indian city, is selected as a case study. Socio-economic factors were collected from field surveys, the Census of India, and the Directorate of Economic Census, Uttarakhand, India. A 3x3 simulation window is used to consider the impact on LULC. The cellular automata model results are examined to identify hot spot areas within the urban area, and the agent based model uses a logistic regression approach to identify the correlation between each factor and LULC and to classify the available area into low density residential, medium density residential, high density residential or commercial area. In the modeling phase, transition rules, neighborhood effects and cell change factors are used to improve the representation of built-up classes. A significant improvement is observed in the built-up classes, from 84% to 89%; after incorporating the agent based model with the cellular automata model, the accuracy improved further from 89% to 94% in three urban classes, i.e. low density, medium density and commercial.
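
    As a minimal illustration of the cellular-automata component described above, the sketch below applies a simple transition rule over a 3x3 neighborhood: a cell becomes built-up when enough neighbors already are and a suitability score allows it. The grid, threshold and suitability values are hypothetical, not the Dehradun data.

```python
# Minimal cellular-automata sketch: 3x3 neighborhood transition rule for
# built-up growth. Grid, suitability and thresholds are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
built = (rng.random((50, 50)) < 0.10).astype(int)   # 1 = built-up
suitability = rng.random((50, 50))                  # socio-economic proxy

def step(built, suitability, k_min=3, s_min=0.5):
    grown = built.copy()
    for i in range(1, built.shape[0] - 1):
        for j in range(1, built.shape[1] - 1):
            neighbors = built[i-1:i+2, j-1:j+2].sum() - built[i, j]
            if built[i, j] == 0 and neighbors >= k_min and suitability[i, j] >= s_min:
                grown[i, j] = 1
    return grown

for _ in range(5):                      # five synchronous updates
    built = step(built, suitability)
print("built-up fraction:", built.mean())
```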

  8. Modelization of ratcheting in biaxial experiments

    International Nuclear Information System (INIS)

    Guionnet, C.

    1989-08-01

    A new unified viscoplastic constitutive equation has been developed in order to interpret ratcheting experiments on mechanical structures of fast reactors. The model is based essentially on a generalized Armstrong-Frederick equation for the kinematic variable; the coefficient of the dynamic recovery term in this equation is a function of both the instantaneous and the accumulated inelastic strain, and is allowed to vary in an appropriate manner in order to reproduce the experimental ratcheting rate. The validity of the model is verified by comparing predictions with experimental results for austenitic stainless steel (17-12 SPH) tubular specimens subjected to cyclic torsional loading under constant tensile stress at 600 °C.
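
    For reference, the classical Armstrong-Frederick evolution law for the back stress X is dX/dt = (2/3) C deps_p/dt - gamma X dp/dt; the generalization described above makes the recovery coefficient gamma depend on the inelastic strain history. The uniaxial sketch below integrates this law with an explicit Euler step; the constants and the assumed form of gamma(p) are illustrative, not the paper's 17-12 SPH fit.

```python
# Minimal uniaxial Armstrong-Frederick sketch with a history-dependent
# recovery coefficient gamma(p). Constants and gamma-form are assumptions.
import numpy as np

C = 60_000.0                                   # hardening modulus, MPa
def gamma(p):                                  # assumed decay of recovery term
    return 300.0 + 200.0 * np.exp(-5.0 * p)   # with accumulated strain p

def eps_p_rate(t):                             # imposed cyclic plastic strain rate
    return 1e-2 * np.sign(np.sin(2 * np.pi * t))

X, p, t, dt = 0.0, 0.0, 0.0, 1e-3
for _ in range(5000):                          # five strain cycles
    rate = eps_p_rate(t)
    X += ((2.0 / 3.0) * C * rate - gamma(p) * X * abs(rate)) * dt
    p += abs(rate) * dt
    t += dt
print(f"back stress X = {X:.1f} MPa after p = {p:.3f}")
```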

  9. The retention characteristics of nonvolatile SNOS memory transistors in a radiation environment: Experiment and model

    International Nuclear Information System (INIS)

    McWhorter, P.J.; Miller, S.L.; Dellin, T.A.; Axness, C.L.

    1987-01-01

    Experimental data, and a model to accurately and quantitatively predict the data, are presented for the retention of SNOS memory devices over a wide range of dose rates. A wide range of SNOS stack geometries is examined. The model is designed to aid in screening nonvolatile memories for use in a radiation environment.

  10. Matrix viscoplasticity and its shielding by active mechanics in microtissue models: experiments and mathematical modeling

    Science.gov (United States)

    Liu, Alan S.; Wang, Hailong; Copeland, Craig R.; Chen, Christopher S.; Shenoy, Vivek B.; Reich, Daniel H.

    2016-01-01

    The biomechanical behavior of tissues under mechanical stimulation is critically important to physiological function. We report a combined experimental and modeling study of bioengineered 3D smooth muscle microtissues that reveals a previously unappreciated interaction between active cell mechanics and the viscoplastic properties of the extracellular matrix. The microtissues' response to stretch/unstretch actuations, as probed by microcantilever force sensors, was dominated by cellular actomyosin dynamics. However, cell lysis revealed a viscoplastic response of the underlying model collagen/fibrin matrix. A model coupling Hill-type actomyosin dynamics with a plastic, perfectly viscoplastic description of the matrix quantitatively accounts for the microtissue dynamics, including notably the cells' shielding of the matrix plasticity. Stretch measurements of single cells confirmed the active cell dynamics, and were well described by a single-cell version of our model. These results reveal the need for new focus on matrix plasticity and its interactions with active cell mechanics in describing tissue dynamics.

  11. Hohlraum modeling for opacity experiments on the National Ignition Facility

    Science.gov (United States)

    Dodd, E. S.; DeVolder, B. G.; Martin, M. E.; Krasheninnikova, N. S.; Tregillis, I. L.; Perry, T. S.; Heeter, R. F.; Opachich, Y. P.; Moore, A. S.; Kline, J. L.; Johns, H. M.; Liedahl, D. A.; Cardenas, T.; Olson, R. E.; Wilde, B. H.; Urbatsch, T. J.

    2018-06-01

    This paper discusses the modeling of experiments that measure iron opacity in local thermodynamic equilibrium (LTE) using laser-driven hohlraums at the National Ignition Facility (NIF). A previous set of experiments fielded at Sandia's Z facility [Bailey et al., Nature 517, 56 (2015)] has shown up to factor-of-two discrepancies between theory and experiment, casting doubt on the validity of the opacity models. The purpose of the new experiments is to make corroborating measurements at the same densities and temperatures, with the initial measurements made at a temperature of 160 eV and an electron density of 0.7 × 10^22 cm^-3. The X-ray hot spots of a laser-driven hohlraum are not in LTE, and the iron must be shielded from a direct line-of-sight to obtain the data [Perry et al., Phys. Rev. B 54, 5617 (1996)]. This shielding is provided either with internal structure (e.g., baffles) or with external wall shapes that divide the hohlraum into a laser-heated portion and an LTE portion. In contrast, most inertial confinement fusion hohlraums are simple cylinders lacking complex gold walls, and the design codes are not typically applied to targets like those for the opacity experiments. We will discuss the initial basis for the modeling using LASNEX, and the subsequent modeling of five different hohlraum geometries that have been fielded on the NIF to date. This includes a comparison of calculated and measured radiation temperatures.

  12. Quantitative MR imaging in fracture dating--Initial results.

    Science.gov (United States)

    Baron, Katharina; Neumayer, Bernhard; Widek, Thomas; Schick, Fritz; Scheicher, Sylvia; Hassler, Eva; Scheurer, Eva

    2016-04-01

    For exact age determinations of bone fractures in a forensic context (e.g. in cases of child abuse), improved knowledge of the time course of the healing process and the use of non-invasive modern imaging technology are of high importance. To date, fracture dating is based on radiographic methods, determining the callus status and thereby relying on an expert's experience. As a novel approach, this study aims to investigate the applicability of magnetic resonance imaging (MRI) for bone fracture dating by systematically investigating time-resolved changes in quantitative MR characteristics after a fracture event. Prior to investigating fracture healing in children, adults were examined for this study in order to test the methodology for this application. Altogether, 31 MR examinations in 17 subjects (♀: 11, ♂: 6; median age 34 ± 15 y, scanned 1-5 times over a period of up to 200 days after the fracture event) were performed on a clinical 3T MR scanner (TimTrio, Siemens AG, Germany). All subjects were treated conservatively for a fracture in either a long bone or the collar bone. Both qualitative and quantitative MR measurements were performed in all subjects. MR sequences for a quantitative measurement of relaxation times T1 and T2 in the fracture gap and musculature were applied. Maps of the quantitative MR parameters T1, T2, and magnetisation transfer ratio (MTR) were calculated and evaluated by investigating changes over time in the fractured area using defined ROIs. Additionally, muscle areas were examined as reference regions to validate this approach. Quantitative evaluation of 23 MR data sets (12 test subjects, ♀: 7, ♂: 5) showed an initial peak in T1 values in the fractured area (T1 = 1895 ± 607 ms), which decreased over time to a value of 1094 ± 182 ms (200 days after the fracture event). T2 values also peaked for early-stage fractures (T2 = 115 ± 80 ms) and decreased to 73 ± 33 ms within 21 days after the fracture event. After that time point, no
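
    Quantitative T2 values of the kind tracked above are typically obtained by fitting a mono-exponential decay, S(TE) = S0 · exp(-TE/T2), to multi-echo signal intensities in each ROI. The sketch below fits that standard model to hypothetical echo-time data with scipy; it is illustrative, not the study's pipeline.

```python
# Minimal sketch: mono-exponential T2 fit, S(TE) = S0 * exp(-TE / T2).
# Echo times and signals are hypothetical, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def s_model(te, s0, t2):
    return s0 * np.exp(-te / t2)

te = np.array([10., 20., 40., 60., 80., 100.])   # echo times, ms
sig = s_model(te, 1000.0, 85.0) + np.random.default_rng(2).normal(0, 5, te.size)

(s0_fit, t2_fit), _ = curve_fit(s_model, te, sig, p0=(900.0, 60.0))
print(f"fitted T2 = {t2_fit:.0f} ms")
```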

  13. Modeling variability in porescale multiphase flow experiments

    Science.gov (United States)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

  14. Energy Dispersive Spectrometry and Quantitative Analysis Short Course. Introduction to X-ray Energy Dispersive Spectrometry and Quantitative Analysis

    Science.gov (United States)

    Carpenter, Paul; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    This course will cover practical applications of the energy-dispersive spectrometer (EDS) to x-ray microanalysis. Topics covered will include detector technology, advances in pulse processing, resolution and performance monitoring, detector modeling, peak deconvolution and fitting, qualitative and quantitative analysis, compositional mapping, and standards. An emphasis will be placed on use of the EDS for quantitative analysis, with discussion of typical problems encountered in the analysis of a wide range of materials and sample geometries.

  15. Quantitative Verification and Synthesis of Attack-Defence Scenarios

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Nielson, Flemming; Parker, David

    2016-01-01

    This work enables the analysis of quantitative properties of complex attack-defence scenarios, using an extension of attack-defence trees which models temporal ordering of actions and allows explicit dependencies in the strategies adopted by attackers and defenders. We adopt a game-theoretic approach, translating attack-defence trees into games and synthesising strategies which guarantee or optimise some quantitative property, such as the probability of a successful attack, the expected cost incurred, or some multi-objective trade-off between the two. We implement our approach, building upon the PRISM-games model checker, and apply it to a case study of an RFID goods...

  16. Role of modeling in the design of experiments in carbohydrate metabolism

    International Nuclear Information System (INIS)

    Foster, D.M.; Hetenyi, G. Jr.

    1991-01-01

    Most publications on modeling present only the final product without describing the details of how the models were developed and tested. It is, however, through model development and testing that the true power of modeling as a research tool reveals itself. The purpose of this paper is to present a behind-the-scenes look at a set of experiments designed to study carbon atom transport in gluconeogenesis. In particular, it will be shown how the development of one model led to hypotheses for which another set of experiments was designed. The model which resulted from the second study contained, in turn, a number of new hypotheses for which further experiments remain to be designed. The second model supported the findings of the first, and yielded deeper insights into the exchange of carbon atoms among three metabolites. It is hoped this illustration will encourage other investigators to take advantage of the utilitarian value of modeling, not only as a parameter-generating tool, but also as a true research tool which can aid significantly in extracting more information from available data.

  17. A method for energy window optimization for quantitative tasks that includes the effects of model-mismatch on bias: application to Y-90 bremsstrahlung SPECT imaging

    International Nuclear Information System (INIS)

    Rong Xing; Du Yong; Frey, Eric C

    2012-01-01

    calculating the bias due to model-mismatch and the variance of the VOI activity estimates, respectively. To obtain the optimal acquisition energy window for general situations of interest in clinical 90Y microsphere imaging, we generated phantoms with multiple tumors of various sizes and various tumor-to-normal activity concentration ratios using a digital phantom that realistically simulates human anatomy, simulated 90Y microsphere imaging with a clinical SPECT system and typical imaging parameters using a previously validated Monte Carlo simulation code, and used a previously proposed method for modeling the image degrading effects in quantitative SPECT reconstruction. The obtained optimal acquisition energy window was 100-160 keV. The values of the proposed FOM were much larger than the FOM taking into account only the variance of the activity estimates, thus demonstrating in our experiment that the bias of the activity estimates due to model-mismatch was a more important factor than the variance in terms of limiting the reliability of activity estimates. (paper)

  18. A method for energy window optimization for quantitative tasks that includes the effects of model-mismatch on bias: application to Y-90 bremsstrahlung SPECT imaging.

    Science.gov (United States)

    Rong, Xing; Du, Yong; Frey, Eric C

    2012-06-21

    derived for calculating the bias due to model-mismatch and the variance of the VOI activity estimates, respectively. To obtain the optimal acquisition energy window for general situations of interest in clinical (90)Y microsphere imaging, we generated phantoms with multiple tumors of various sizes and various tumor-to-normal activity concentration ratios using a digital phantom that realistically simulates human anatomy, simulated (90)Y microsphere imaging with a clinical SPECT system and typical imaging parameters using a previously validated Monte Carlo simulation code, and used a previously proposed method for modeling the image degrading effects in quantitative SPECT reconstruction. The obtained optimal acquisition energy window was 100-160 keV. The values of the proposed FOM were much larger than the FOM taking into account only the variance of the activity estimates, thus demonstrating in our experiment that the bias of the activity estimates due to model-mismatch was a more important factor than the variance in terms of limiting the reliability of activity estimates.
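
    The figure of merit described in these two records combines estimate bias (from model-mismatch) with estimate variance; the paper's exact expression is not reproduced in the record, so the sketch below uses a generic mean-squared-error style combination, FOM = 1 / (bias^2 + variance), purely as an assumed illustration of why a bias-dominated window can lose to one with slightly higher variance.

```python
# Assumed illustration: window selection by a bias+variance figure of merit,
# FOM = 1 / (bias^2 + variance). Numbers are hypothetical, not the paper's.
windows = {
    "60-120 keV":  {"bias": 0.25, "var": 0.004},
    "100-160 keV": {"bias": 0.05, "var": 0.006},
    "140-200 keV": {"bias": 0.10, "var": 0.012},
}

def fom(bias, var):
    return 1.0 / (bias ** 2 + var)

best = max(windows, key=lambda w: fom(**windows[w]))
for w, p in windows.items():
    print(w, f"FOM = {fom(**p):.1f}")
print("selected window:", best)
```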

  19. Stratospheric controlled perturbation experiment (SCoPEx): overview, status, and results from related laboratory experiments

    Science.gov (United States)

    Keith, D.; Dykema, J. A.; Keutsch, F. N.

    2017-12-01

    The Stratospheric Controlled Perturbation Experiment (SCoPEx) is a scientific experiment to advance understanding of stratospheric aerosols. It aims to make quantitative measurements of aerosol microphysics and atmospheric chemistry to improve the large-scale models used to assess the risks and benefits of solar geoengineering. A perturbative experiment requires (a) means to create a well-mixed, small perturbed volume, and (b) observation of the time evolution of chemistry and aerosols in that volume. SCoPEx will use a propelled balloon gondola containing all instruments and the drive system. The propeller wake forms a well-mixed volume (roughly 1 km long and 100 meters in diameter) that serves as an experimental 'beaker' into which aerosols are injected. We will present results from CFD simulation of the propeller wake and from simulation of chemistry and aerosol microphysics, together with the proposed concept of operations and schedule. We will also provide an overview of the plans for governance, including management of health, safety and environmental risks, transparency, public engagement, and larger questions about the governance of solar geoengineering experiments. Finally, we will briefly present results of laboratory experiments on the interaction of chemicals such as ClONO2 and HCl with particle surfaces relevant for stratospheric solar geoengineering.

  20. Validation of single-fluid and two-fluid magnetohydrodynamic models of the helicity injected torus spheromak experiment with the NIMROD code

    International Nuclear Information System (INIS)

    Akcay, Cihan; Victor, Brian S.; Jarboe, Thomas R.; Kim, Charlson C.

    2013-01-01

    We present a comparison study of 3-D pressureless resistive MHD (rMHD) and 3-D pressureless two-fluid MHD models of the Helicity Injected Torus with Steady Inductive helicity injection (HIT-SI). HIT-SI is a current drive experiment that uses two geometrically asymmetric helicity injectors to generate and sustain toroidal plasmas. The comparable size of the collisionless ion skin depth d_i to the resistive skin depth predicates the importance of the Hall term for HIT-SI. The simulations are run with NIMROD, an initial-value, 3-D extended MHD code. The modeled plasma density and temperature are assumed uniform and constant. The helicity injectors are modeled as oscillating normal magnetic and parallel electric field boundary conditions. The simulations use parameters that closely match those of the experiment. The simulation output is compared to the formation time, plasma current, and internal and surface magnetic fields. Results of the study indicate that 2fl-MHD shows quantitative agreement with the experiment while rMHD only captures the qualitative features. The validity of each model is assessed based on how accurately it reproduces the global quantities as well as the temporal and spatial dependence of the measured magnetic fields. 2fl-MHD produces the current amplification (I_tor/I_inj) and formation time τ_f demonstrated by HIT-SI, with similar internal magnetic fields. rMHD underestimates I_tor/I_inj and exhibits a much longer τ_f. Biorthogonal decomposition (BD), a powerful mathematical tool for reducing large data sets, is employed to quantify how well the simulations reproduce the measured surface magnetic fields without resorting to a probe-by-probe comparison. BD shows that 2fl-MHD captures the dominant surface magnetic structures and the temporal behavior of these features better than rMHD.
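
    Biorthogonal decomposition of a probe array, as used above, is in practice a singular value decomposition of the space-time data matrix into orthogonal spatial structures and temporal modes. The sketch below applies numpy's SVD to a synthetic probe data set (not HIT-SI measurements) and reports the energy captured by the leading modes.

```python
# Minimal biorthogonal-decomposition sketch: SVD of a (time x probe) surface
# magnetic field matrix. Synthetic data, not HIT-SI measurements.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 400)[:, None]            # time samples
theta = np.linspace(0.0, 2*np.pi, 32)[None, :]     # probe positions

# Two coherent structures plus noise.
B = (np.sin(2*np.pi*15*t) * np.cos(theta)
     + 0.4 * np.cos(2*np.pi*30*t) * np.cos(2*theta)
     + 0.05 * rng.normal(size=(400, 32)))

U, s, Vt = np.linalg.svd(B, full_matrices=False)   # temporal/spatial modes
energy = s**2 / np.sum(s**2)
print("energy fraction of first two modes:", energy[:2].round(3))
```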

  1. Quantitative correlations between collision induced dissociation mass spectrometry coupled with electrospray ionization or atmospheric pressure chemical ionization mass spectrometry - Experiment and theory

    Science.gov (United States)

    Ivanova, Bojidarka; Spiteller, Michael

    2018-04-01

    The problem considered in this paper concerns quantitative correlation model equations between experimental kinetic and thermodynamic parameters of electrospray ionization (ESI) mass spectrometry (MS) or atmospheric pressure chemical ionization (APCI) mass spectrometry coupled with collision induced dissociation mass spectrometry, accounting for the fact that the physical phenomena and mechanisms of ESI- and APCI-ion formation are completely different. Forty-two fragment reactions of three analytes under independent ESI- and APCI-measurements are described. The newly developed quantitative models allow us to study reaction kinetics and thermodynamics correlatively using the methods of mass spectrometry, whose complementary application with the methods of quantum chemistry provides 3D structural information on the analytes. Both static and dynamic quantum chemical computations are carried out. The objects of analysis are [2,3-dimethyl-4-(4-methyl-benzoyl)-2,3-di-p-tolyl-cyclobutyl]-p-tolyl-methanone (1) and the polycyclic aromatic hydrocarbon derivatives of dibenzoperylen (2) and tetrabenzo [a,c,fg,op]naphthacene (3), respectively. Since (1) is known to be a product of [2π+2π] cycloaddition reactions of chalcone (1,3-di-p-tolyl-propenone), which produce cyclic derivatives with different stereoselectivity, the study provides crucial data on the capability of mass spectrometry to determine the stereoselectivity of the analytes. This work also provides the first quantitative treatment of the relations '3D molecular/electronic structures'-'quantum chemical diffusion coefficient'-'mass spectrometric diffusion coefficient', thus extending the capability of mass spectrometry for determination of the exact 3D structure of the analytes using independent measurements and computations of the diffusion coefficients. The determination of the experimental diffusion parameters is carried out within the 'current monitoring method

  2. The APOSTEL recommendations for reporting quantitative optical coherence tomography studies

    DEFF Research Database (Denmark)

    Cruz-Herranz, Andrés; Balk, Lisanne J; Oberwahrenbrock, Timm

    2016-01-01

    OBJECTIVE: To develop consensus recommendations for reporting of quantitative optical coherence tomography (OCT) study results. METHODS: A panel of experienced OCT researchers (including 11 neurologists, 2 ophthalmologists, and 2 neuroscientists) discussed requirements for performing and reporting quantitative analyses of retinal morphology and developed a list of initial recommendations based on experience and previous studies. The list of recommendations was subsequently revised during several meetings of the coordinating group. RESULTS: We provide a 9-point checklist encompassing aspects deemed relevant when reporting quantitative OCT studies. The areas covered are study protocol, acquisition device, acquisition settings, scanning protocol, funduscopic imaging, postacquisition data selection, postacquisition data analysis, recommended nomenclature, and statistical analysis. CONCLUSIONS...

  3. Laboratory astrophysics. Model experiments of astrophysics with large-scale lasers

    International Nuclear Information System (INIS)

    Takabe, Hideaki

    2012-01-01

    I would like to review model experiments of astrophysics performed with the high-power, large-scale lasers constructed mainly for laser nuclear fusion research. The four research directions of this new field, named 'Laser Astrophysics', are described with four examples mainly promoted in our institute. The description is in magazine style so as to be easily understood by non-specialists. A new theory and its model experiment on the collisionless shock and particle acceleration observed in supernova remnants (SNRs) are explained in detail, and the results and coming research directions are clarified. In addition, the vacuum breakdown experiment to be realized with near-future ultra-intense lasers is also introduced. (author)

  4. A quantitative modeling of the contributions of localized surface plasmon resonance and interband transitions to absorbance of gold nanoparticles

    International Nuclear Information System (INIS)

    Zhu, S.; Chen, T. P.; Liu, Y. C.; Liu, Y.; Fung, S.

    2012-01-01

    A quantitative modeling of the contributions of localized surface plasmon resonance (LSPR) and interband transitions to the absorbance of gold nanoparticles has been achieved based on the Lorentz-Drude dispersion function and the Maxwell-Garnett effective medium approximation. The contributions are well modeled with three Lorentz oscillators. The influence of the structural properties of the gold nanoparticles on the LSPR and interband transitions has been examined. In addition, the dielectric function of the gold nanoparticles has been extracted from the modeling of the absorbance, and it is found to be consistent with the result yielded by the spectroscopic ellipsometric analysis.
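
    For context, the Maxwell-Garnett approximation used above relates the effective dielectric function eps_eff of a dilute composite to the inclusion (eps_i) and matrix (eps_m) functions via (eps_eff - eps_m)/(eps_eff + 2*eps_m) = f * (eps_i - eps_m)/(eps_i + 2*eps_m), with f the fill fraction. The sketch below solves this for eps_eff with a bare Drude term standing in for gold; the parameters are textbook-style assumptions, not the paper's fitted three-oscillator Lorentz-Drude model.

```python
# Minimal Maxwell-Garnett sketch with a Drude inclusion (illustrative
# parameters, not the paper's fitted Lorentz-Drude dispersion).
import numpy as np

def eps_drude(omega_ev, eps_inf=9.0, omega_p=9.0, gamma=0.07):
    return eps_inf - omega_p**2 / (omega_ev**2 + 1j * gamma * omega_ev)

def maxwell_garnett(eps_i, eps_m, f):
    """Solve (e_eff-e_m)/(e_eff+2e_m) = f*(e_i-e_m)/(e_i+2e_m) for e_eff."""
    beta = f * (eps_i - eps_m) / (eps_i + 2 * eps_m)
    return eps_m * (1 + 2 * beta) / (1 - beta)

omega = np.linspace(1.0, 4.0, 301)                   # photon energy, eV
eps_eff = maxwell_garnett(eps_drude(omega), 2.25, f=0.05)
absorption = omega * np.imag(eps_eff)                # ~ absorbance
print("LSPR peak near", omega[np.argmax(absorption)].round(2), "eV")
```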

  5. Toward a quantitative theory of food consumption choices and body weight.

    Science.gov (United States)

    Buttet, Sebastien; Dolar, Veronika

    2015-04-01

    We propose a calibrated dynamic model of food consumption choices and body weight to study changes in daily caloric intake, weight, and the away-from-home share of calories consumed by adult men and women in the U.S. during the period between 1971 and 2006. Calibration reveals substantial preference heterogeneity between men and women. For example, utility losses stemming from weight gains are ten times greater for women compared to men. Counterfactual experiments show that changes in food prices and household income account for half of the increase in weight of adult men, but only a small fraction of women's weight. We argue that quantitative models of food consumption choices and body weight have a unique role to play in future research in the economics of obesity. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. PIQMIe: a web server for semi-quantitative proteomics data management and analysis.

    Science.gov (United States)

    Kuzniar, Arnold; Kanaar, Roland

    2014-07-01

    We present the Proteomics Identifications and Quantitations Data Management and Integration Service or PIQMIe that aids in reliable and scalable data management, analysis and visualization of semi-quantitative mass spectrometry based proteomics experiments. PIQMIe readily integrates peptide and (non-redundant) protein identifications and quantitations from multiple experiments with additional biological information on the protein entries, and makes the linked data available in the form of a light-weight relational database, which enables dedicated data analyses (e.g. in R) and user-driven queries. Using the web interface, users are presented with a concise summary of their proteomics experiments in numerical and graphical forms, as well as with a searchable protein grid and interactive visualization tools to aid in the rapid assessment of the experiments and in the identification of proteins of interest. The web server not only provides data access through a web interface but also supports programmatic access through RESTful web service. The web server is available at http://piqmie.semiqprot-emc.cloudlet.sara.nl or http://www.bioinformatics.nl/piqmie. This website is free and open to all users and there is no login requirement. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Model of CSR Induced Bursts in Slicing Experiments

    International Nuclear Information System (INIS)

    Stupakov, G.; Heifets, S.

    2006-01-01

    We suggest a model describing the CSR bursts observed in recent experiments at the Advanced Light Source at LBL. The model is based on the linear theory of the CSR instability in electron rings. We describe how an initial perturbation of the beam generated by the laser pulse evolves in time when the beam is unstable due to the CSR wakefield.

  8. Quantitative image processing in fluid mechanics

    Science.gov (United States)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  9. Brand Experience in Banking Industry: Direct and Indirect Relationship to Loyalty

    Directory of Open Access Journals (Sweden)

    Nuri WULANDARI

    2016-02-01

    In marketing, the meaning of value is rapidly shifting from service and relationships to experiences. It is believed that the traditional value proposition is no longer effective for competing in the market and gaining customer loyalty. By adapting the brand experience model, this study tries to validate the model in the banking industry, which is currently facing intense competition to retain customers. The brand experience construct is tested for its direct and indirect relationship to loyalty. It is postulated that satisfaction and brand authenticity might be instrumental in mediating brand experience to loyalty. Research was conducted via in-depth interviews and a quantitative survey targeting bank customers in Jakarta. The results confirmed that brand experience contributes to loyalty, both directly and indirectly, in a significant and positive manner. The research contributes by validating previous studies with a rare emphasis on the banking sector. The result implies that brand experience is an important value leading to customer loyalty in this area, and is subject to growing research on experience marketing.

  10. Applied quantitative finance

    CERN Document Server

    Chen, Cathy; Overbeck, Ludger

    2017-01-01

    This volume provides practical solutions and introduces recent theoretical developments in risk management, pricing of credit derivatives, quantification of volatility and copula modeling. This third edition is devoted to modern risk analysis based on quantitative methods and textual analytics to meet the current challenges in banking and finance. It includes 14 new contributions and presents a comprehensive, state-of-the-art treatment of cutting-edge methods and topics, such as collateralized debt obligations, the high-frequency analysis of market liquidity, and realized volatility. The book is divided into three parts: Part 1 revisits important market risk issues, while Part 2 introduces novel concepts in credit risk and its management along with updated quantitative methods. The third part discusses the dynamics of risk management and includes risk analysis of energy markets and for cryptocurrencies. Digital assets, such as blockchain-based currencies, have become popular but are theoretically challenging...

  11. Real Patient and its Virtual Twin: Application of Quantitative Systems Toxicology Modelling in the Cardiac Safety Assessment of Citalopram.

    Science.gov (United States)

    Patel, Nikunjkumar; Wiśniowska, Barbara; Jamei, Masoud; Polak, Sebastian

    2017-11-27

    A quantitative systems toxicology (QST) model for citalopram was established to simulate, in silico, a 'virtual twin' of a real patient to predict the occurrence of cardiotoxic events previously reported in patients under various clinical conditions. The QST model considers the effects of citalopram and its most notable electrophysiologically active primary (desmethylcitalopram) and secondary (didesmethylcitalopram) metabolites on cardiac electrophysiology. The in vitro cardiac ion channel current inhibition data were coupled with the biophysically detailed model of human cardiac electrophysiology to investigate the impact of (i) the inhibition of multiple ion currents (I_Kr, I_Ks, I_CaL); (ii) the inclusion of metabolites in the QST model; and (iii) unbound or total plasma as the operating drug concentration, in predicting clinically observed QT prolongation. The inclusion of multiple ion channel current inhibition and metabolites in the simulation with unbound plasma citalopram concentration provided the lowest prediction error. The predictive performance of the model was verified with three additional therapeutic and supra-therapeutic drug exposure clinical cases. The results indicate that considering hERG ion channel inhibition of the parent drug alone is potentially misleading, and the inclusion of active metabolite data and the influence of other ion channel currents should be considered to improve the prediction of potential cardiac toxicity. Mechanistic modelling can help bridge the gaps existing in the quantitative translation from preclinical cardiac safety assessment to clinical toxicology. Moreover, this study shows that QST models, in combination with appropriate drug and systems parameters, can pave the way towards personalised safety assessment.
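
    Ion-channel inhibition data of the kind fed into such QST models are commonly converted to a fractional current block through a Hill-type relation, block = 1 / (1 + (IC50/C)^h). The sketch below applies this standard relation for a parent drug and a metabolite with hypothetical IC50 values, not the citalopram measurements used in the study.

```python
# Minimal sketch: fractional ion-channel block from IC50 via a Hill relation.
# IC50 and Hill values below are hypothetical, not the study's citalopram data.
def fractional_block(conc_um, ic50_um, hill=1.0):
    """block = 1 / (1 + (IC50 / C)^h), in [0, 1]."""
    return 1.0 / (1.0 + (ic50_um / conc_um) ** hill)

compounds = {
    "parent drug":       {"IKr": 3.0, "ICaL": 10.0},   # hypothetical IC50s, uM
    "active metabolite": {"IKr": 6.0, "ICaL": 20.0},
}
c_unbound = 0.5  # assumed unbound plasma concentration, uM
for name, ic50s in compounds.items():
    blocks = {ch: round(fractional_block(c_unbound, v), 3) for ch, v in ic50s.items()}
    print(name, blocks)
```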

  12. Numerical modeling of the 2017 active seismic infrasound balloon experiment

    Science.gov (United States)

    Brissaud, Q.; Komjathy, A.; Garcia, R.; Cutts, J. A.; Pauken, M.; Krishnamoorthy, S.; Mimoun, D.; Jackson, J. M.; Lai, V. H.; Kedar, S.; Levillain, E.

    2017-12-01

    We have developed a numerical tool to propagate acoustic and gravity waves in a coupled solid-fluid medium with topography. It is a hybrid method between a continuous Galerkin and a discontinuous Galerkin method that accounts for non-linear atmospheric waves, visco-elastic waves and topography. We apply this method to a recent experiment that took place in the Nevada desert to study acoustic waves from seismic events. This experiment, developed by JPL and its partners, aims to demonstrate the viability of a new approach to probing seismic-induced acoustic waves from a balloon platform. To the best of our knowledge, this could be the only way for planetary missions to perform tomography when one faces challenging surface conditions with high pressure and temperature (e.g. Venus), and thus when it is impossible to use the conventional electronics routinely employed on Earth. To fully demonstrate the effectiveness of such a technique, one should also be able to reconstruct the observed signals from numerical modeling. To model the seismic hammer experiment and the subsequent acoustic wave propagation, we rely on a subsurface seismic model constructed from the seismometer measurements during the 2017 Nevada experiment and an atmospheric model built from meteorological data. The source is modeled as a Gaussian point source located at the surface. Comparison between the numerical modeling and the experimental data could help future mission designs and provide great insights into the planet's interior structure.

  13. Implementation of an object oriented track reconstruction model into multiple LHC experiments*

    Science.gov (United States)

    Gaines, Irwin; Gonzalez, Saul; Qian, Sijin

    2001-10-01

    An Object Oriented (OO) model (Gaines et al., 1996; 1997; Gaines and Qian, 1998; 1999) for track reconstruction by the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. The model has been coded in the C++ programming language and has been successfully implemented into the OO computing environments of both the CMS (1994) and ATLAS (1994) experiments at the future Large Hadron Collider (LHC) at CERN. We shall report: how the OO model was adapted, with largely the same code, to different scenarios and serves different reconstruction aims in different experiments (i.e. the level-2 trigger software for ATLAS and the offline software for CMS); how the OO model has been incorporated into different OO environments with a similar integration structure (demonstrating the ease of re-use of OO programs); the OO model's performance, including execution time, memory usage, track finding efficiency and ghost rate; and additional physics performance based on use of the OO tracking model. We shall also mention the experience and lessons learned from the implementation of the OO model into the general OO software framework of the experiments. In summary, our practice shows that OO technology makes the software development and integration issues straightforward and convenient; this may be particularly beneficial for non-computer-professional physicists.
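
    Track reconstruction by Kalman filtering, as referenced above, alternates a prediction step with a measurement update as the track candidate is propagated layer by layer. The sketch below shows the standard predict/update equations in one dimension on hypothetical hit data, far simpler than the multi-parameter track state used in CMS or ATLAS.

```python
# Minimal 1-D Kalman-filter sketch of the predict/update cycle used in track
# fitting. State and hits are hypothetical; real trackers use 5-parameter states.
x, P = 0.0, 1.0          # state estimate (e.g. local position) and covariance
Q, R = 0.01, 0.25        # process (scattering) and measurement noise variances

hits = [0.1, 0.22, 0.29, 0.41, 0.52]   # measured positions, layer by layer
for z in hits:
    # Predict: propagate to the next detector layer (identity transport here).
    P = P + Q
    # Update: blend prediction with the new hit via the Kalman gain.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1.0 - K) * P
print(f"fitted position {x:.3f}, variance {P:.4f}")
```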

  14. Studying learning in the healthcare setting: the potential of quantitative diary methods.

    Science.gov (United States)

    Ciere, Yvette; Jaarsma, Debbie; Visser, Annemieke; Sanderman, Robbert; Snippe, Evelien; Fleer, Joke

    2015-08-01

    Quantitative diary methods are longitudinal approaches that involve the repeated measurement of aspects of peoples' experience of daily life. In this article, we outline the main characteristics and applications of quantitative diary methods and discuss how their use may further research in the field of medical education. Quantitative diary methods offer several methodological advantages, such as measuring aspects of learning with great detail, accuracy and authenticity. Moreover, they enable researchers to study how and under which conditions learning in the health care setting occurs and in which way learning can be promoted. Hence, quantitative diary methods may contribute to theory development and the optimization of teaching methods in medical education.

  15. Some observations concerning blade-element-momentum (BEM) methods and vortex wake methods, including numerical experiments with a simple vortex model

    Energy Technology Data Exchange (ETDEWEB)

    Snel, H. [Netherlands Energy Research Foundation ECN, Renewable Energy, Wind Energy (Netherlands)

    1997-08-01

    Recently the Blade Element Momentum (BEM) method has been made more versatile. Inclusion of rotational effects on time-averaged profile coefficients has improved its accuracy for performance calculations in stalled flow. Time dependence as a result of turbulent inflow, pitching actions and yawed operation is now treated more correctly (although more improvement is needed) than before. It is of interest to note that adaptations in the modelling of unsteady or periodic induction stem from qualitative and quantitative insights obtained from free vortex models. Free vortex methods and, further into the future, Navier-Stokes (NS) calculations, together with wind tunnel and field experiments, can be very useful in enhancing the potential of BEM for aero-elastic response calculations. It must be kept in mind, however, that extreme caution must be used with free vortex methods, as will be discussed in the following chapters. A discussion of the shortcomings and the strengths of BEM and of vortex wake models is given. Some ideas are presented on how BEM might be improved without too much loss of efficiency. (EG)
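
    At the core of any BEM code is a fixed-point iteration for the axial and tangential induction factors of each blade element, equating the momentum-theory and blade-element expressions for the loads. The sketch below shows the textbook iteration in its simplest form; the airfoil coefficients and operating point are illustrative, and tip-loss and high-induction corrections are deliberately omitted.

```python
# Minimal BEM sketch: fixed-point iteration for axial (a) and tangential (ap)
# induction factors of one blade element. Textbook form, illustrative inputs;
# no tip-loss or turbulent-wake corrections.
import numpy as np

U, omega, r = 8.0, 2.5, 20.0          # wind speed m/s, rotor speed rad/s, radius m
B, chord = 3, 1.5                     # blade count, chord m
sigma = B * chord / (2 * np.pi * r)   # local solidity

a, ap = 0.3, 0.0
for _ in range(100):
    phi = np.arctan2((1 - a) * U, (1 + ap) * omega * r)   # inflow angle
    alpha = phi - np.radians(2.0)                         # assumed twist + pitch
    cl, cd = 2 * np.pi * alpha, 0.01                      # thin-airfoil guess
    cn = cl * np.cos(phi) + cd * np.sin(phi)
    ct = cl * np.sin(phi) - cd * np.cos(phi)
    a = 1.0 / (4 * np.sin(phi) ** 2 / (sigma * cn) + 1.0)
    ap = 1.0 / (4 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1.0)
print(f"a = {a:.3f}, a' = {ap:.4f}")
```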

  16. Tropospheric jet response to Antarctic ozone depletion: An update with Chemistry-Climate Model Initiative (CCMI) models

    Science.gov (United States)

    Son, Seok-Woo; Han, Bo-Reum; Garfinkel, Chaim I.; Kim, Seo-Yeon; Park, Rokjin; Abraham, N. Luke; Akiyoshi, Hideharu; Archibald, Alexander T.; Butchart, N.; Chipperfield, Martyn P.; Dameris, Martin; Deushi, Makoto; Dhomse, Sandip S.; Hardiman, Steven C.; Jöckel, Patrick; Kinnison, Douglas; Michou, Martine; Morgenstern, Olaf; O’Connor, Fiona M.; Oman, Luke D.; Plummer, David A.; Pozzer, Andrea; Revell, Laura E.; Rozanov, Eugene; Stenke, Andrea; Stone, Kane; Tilmes, Simone; Yamashita, Yousuke; Zeng, Guang

    2018-05-01

    The Southern Hemisphere (SH) zonal-mean circulation change in response to Antarctic ozone depletion is re-visited by examining a set of the latest model simulations archived for the Chemistry-Climate Model Initiative (CCMI) project. All models reasonably well reproduce Antarctic ozone depletion in the late 20th century. The related SH-summer circulation changes, such as a poleward intensification of westerly jet and a poleward expansion of the Hadley cell, are also well captured. All experiments exhibit quantitatively the same multi-model mean trend, irrespective of whether the ocean is coupled or prescribed. Results are also quantitatively similar to those derived from the Coupled Model Intercomparison Project phase 5 (CMIP5) high-top model simulations in which the stratospheric ozone is mostly prescribed with monthly- and zonally-averaged values. These results suggest that the ozone-hole-induced SH-summer circulation changes are robust across the models irrespective of the specific chemistry-atmosphere-ocean coupling.

  17. Total protein analysis as a reliable loading control for quantitative fluorescent Western blotting.

    Directory of Open Access Journals (Sweden)

    Samantha L Eaton

    Western blotting has been a key technique for determining the relative expression of proteins within complex biological samples since the first publications in 1979. Recent developments in sensitive fluorescent labels, with truly quantifiable linear ranges and greater limits of detection, have allowed biologists to probe tissue-specific pathways and processes with higher resolution than ever before. However, the application of quantitative Western blotting (QWB) to a range of healthy tissues and those from degenerative models has highlighted a problem with significant consequences for quantitative protein analysis: how can researchers conduct comparative expression analyses when many of the commonly used reference proteins (e.g. loading controls) are differentially expressed? Here we demonstrate that common controls, including actin and tubulin, are differentially expressed in tissues from a wide range of animal models of neurodegeneration. We highlight the prevalence of such alterations through examination of published "-omics" data, and demonstrate similar responses in sensitive QWB experiments. For example, QWB analysis of spinal cord from a murine model of Spinal Muscular Atrophy using an Odyssey scanner revealed that beta-actin expression was decreased by 19.3±2% compared to healthy littermate controls. Thus, normalising QWB data to β-actin in these circumstances could result in 'skewing' of all data by ∼20%. We further demonstrate that differential expression of commonly used loading controls was not restricted to the nervous system, but was also detectable across multiple tissues, including bone, fat and internal organs. Moreover, expression of these "control" proteins was not consistent between different portions of the same tissue, highlighting the importance of careful and consistent tissue sampling for QWB experiments. Finally, having illustrated the problem of selecting appropriate single protein loading controls, we demonstrate

  18. Application of the Reina Trust and Betrayal Model to the experience of pediatric critical care clinicians.

    Science.gov (United States)

    Rushton, Cynda Hylton; Reina, Michelle L; Francovich, Christopher; Naumann, Phyllis; Reina, Dennis S

    2010-07-01

    Trust is essential in the workplace, yet no systematic studies of trust among pediatric critical care professionals have been done. To determine the feasibility of measuring trust in a pediatric intensive care unit by using established scales from the corporate world and to determine what behaviors build, break, and rebuild trust. The Reina Trust and Betrayal Model was used to explore contractual, competence, and communication trust. Nurses and physicians in a pediatric intensive care unit completed online surveys to measure organizational, team, and patient trust. Quantitative data from 3 standard survey instruments and qualitative responses to 3 open-ended questions were analyzed and compared. Quantitative data from all 3 instruments indicated moderate to high levels of trust; scores for competence and contractual trust were higher than scores for communication trust. Scores indicated agreement on behaviors that build trust, such as pointing out risky situations to each other, actively striving to build supportive and productive relationships, and giving and receiving constructive feedback. Foremost among trust-breaking behaviors was gossip, which was more troublesome to respondents with longer experience in critical care. Responses to the open-ended questions underscored these themes. The most frequently cited items included encouraging mutually serving intentions, sharing information, and involving and seeking the input of others. The Reina trust scales and open-ended questions are feasible and applicable to pediatric critical care units, and data collected with these instruments are useful in determining what behaviors build, break, and rebuild trust among staff.

  19. Improved quantitative 90 Y bremsstrahlung SPECT/CT reconstruction with Monte Carlo scatter modeling.

    Science.gov (United States)

    Dewaraja, Yuni K; Chun, Se Young; Srinivasa, Ravi N; Kaza, Ravi K; Cuneo, Kyle C; Majdalany, Bill S; Novelli, Paula M; Ljungberg, Michael; Fessler, Jeffrey A

    2017-12-01

    In 90Y microsphere radioembolization (RE), accurate post-therapy imaging-based dosimetry is important for establishing absorbed dose versus outcome relationships for developing future treatment planning strategies. Additionally, accurately assessing microsphere distributions is important because of concerns for unexpected activity deposition outside the liver. Quantitative 90Y imaging by either SPECT or PET is challenging. In 90Y SPECT, model-based methods are necessary for scatter correction because energy window-based methods are not feasible with the continuous bremsstrahlung energy spectrum. The objective of this work was to implement and evaluate a scatter estimation method for accurate 90Y bremsstrahlung SPECT/CT imaging. Since a fully Monte Carlo (MC) approach to 90Y SPECT reconstruction is computationally very demanding, in the present study the scatter estimate generated by a MC simulator was combined with an analytical projector in the 3D OS-EM reconstruction model. A single window (105-195 keV) was used for both the acquisition and the projector modeling. A liver/lung torso phantom with intrahepatic lesions and low-uptake extrahepatic objects was imaged to evaluate SPECT/CT reconstruction without and with scatter correction. Clinical application was demonstrated by applying the reconstruction approach to five patients treated with RE to determine lesion and normal liver activity concentrations using a (liver) relative calibration. There was convergence of the scatter estimate after just two updates, greatly reducing computational requirements. In the phantom study, compared with reconstruction without scatter correction, with MC scatter modeling there was substantial improvement in activity recovery in intrahepatic lesions (from > 55% to > 86%), normal liver (from 113% to 104%), and lungs (from 227% to 104%) with only a small degradation in noise (13% vs. 17%). Similarly, with scatter modeling, contrast improved substantially both visually and in
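
    The reconstruction described above folds a Monte Carlo scatter estimate into the forward model of an EM-style update, i.e. the expected projection becomes A x + s with s the fixed, MC-generated scatter term. A minimal ML-EM sketch under that assumption is shown below; it uses a toy system matrix rather than a SPECT projector and a single subset, so it is ML-EM rather than OS-EM proper.

```python
# Minimal ML-EM sketch with an additive (Monte Carlo style) scatter estimate s
# in the forward model: expected data = A @ x + s. Toy matrices, not a SPECT
# projector.
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((40, 10))          # toy system matrix (detector bin x voxel)
x_true = rng.random(10) * 10.0
s = 0.2 * A @ x_true              # assumed scatter estimate (held fixed)
y = rng.poisson(A @ x_true + s)   # measured counts

x = np.ones(10)                   # initial estimate
sens = A.sum(axis=0)              # sensitivity image
for _ in range(200):
    ratio = y / (A @ x + s)       # measured vs expected projections
    x *= (A.T @ ratio) / sens     # multiplicative EM update
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```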

  20. Progress towards in vitro quantitative imaging of human femur using compound quantitative ultrasonic tomography

    International Nuclear Information System (INIS)

    Lasaygues, Philippe; Ouedraogo, Edgard; Lefebvre, Jean-Pierre; Gindre, Marcel; Talmant, Marilyne; Laugier, Pascal

    2005-01-01

    The objective of this study is to perform cross-sectional quantitative ultrasonic tomography of the diaphysis of long bones. Ultrasonic propagation in bones is affected by the severe mismatch between the acoustic properties of this biological solid and those of the surrounding soft medium, namely the soft tissues in vivo or water in vitro. Bone imaging is thus a nonlinear inverse-scattering problem. In this paper, we show that in vitro quantitative images of sound velocities in a human femur cross section can be reconstructed by combining ultrasonic reflection tomography (URT), which provides images of the macroscopic structure of the bone, and ultrasonic transmission tomography (UTT), which provides quantitative images of the sound velocity. For the shape, we developed an image-processing tool to extract the external and internal boundaries and cortical thickness measurements. For velocity mapping, we used a wavelet analysis tool adapted to ultrasound, which allowed us to detect precisely the time of flight from the transmitted signals. A brief review of the ultrasonic tomography that we developed, using wavepath correction algorithms and compensation procedures, is presented. Also shown are the first results of our analyses on models and specimens of long bone using our new iterative quantitative protocol.