WorldWideScience

Sample records for quantitative dft modeling

  1. Quantitative DFT modeling of product concentration in organometallic reactions: Cu-mediated pentafluoroethylation of benzoic acid chlorides as a case study.

    Science.gov (United States)

    Jover, Jesús

    2017-11-08

    DFT calculations are widely used for computing properties, reaction mechanisms and energy profiles in organometallic reactions. A qualitative agreement between the experimental and the calculated results usually seems to be enough to validate a computational methodology, but recent advances in computation indicate that a nearly quantitative agreement should be possible if an appropriate DFT study is carried out. Final percent product concentrations, often reported as yields, are by far the most commonly reported properties in experimental metal-mediated synthesis studies, but reported DFT studies have not focused on predicting absolute product amounts. The recently reported stoichiometric pentafluoroethylation of benzoic acid chlorides (R-C6H4COCl) with [(phen)Cu(PPh3)C2F5] (phen = 1,10-phenanthroline, PPh3 = triphenylphosphine) has been used as a case study to check whether the experimental product concentrations can be reproduced by any of the most popular DFT approaches with high enough accuracy. To this end, the Gibbs energy profile for the pentafluoroethylation of benzoic acid chloride has been computed using 14 different DFT methods. These computed Gibbs energy profiles have been employed to build kinetic models predicting the final product concentration in solution. The best results are obtained with the D3-dispersion-corrected B3LYP functional, which has subsequently been used to model the reaction outcomes of other simple (R = o-Me, p-Me, p-Cl, p-F, etc.) benzoic acid chlorides. The product concentrations of more complex reaction networks, in which more than one position of the substrate may be activated by the copper catalyst (R = o-Br and p-I), are also predicted appropriately.
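
    To make the pipeline described above concrete (barriers from a computed Gibbs energy profile converted into rate constants, then integrated to a final concentration), here is a minimal Python sketch; the two-step A -> Int -> Product network, the barrier heights and the initial concentration are invented placeholders, not values from the paper:

        import numpy as np
        from scipy.integrate import solve_ivp

        R = 8.314462618          # gas constant, J/(mol K)
        KB_OVER_H = 2.083664e10  # Boltzmann/Planck ratio, 1/(s K)

        def eyring(dg_kJ, T=298.15):
            """Eyring equation: rate constant from an activation Gibbs energy."""
            return KB_OVER_H * T * np.exp(-dg_kJ * 1e3 / (R * T))

        k1 = eyring(85.0)  # barrier of step 1 in kJ/mol (illustrative)
        k2 = eyring(70.0)  # barrier of step 2 in kJ/mol (illustrative)

        def rhs(t, y):     # d[A]/dt, d[Int]/dt, d[Product]/dt
            a, i, p = y
            return [-k1 * a, k1 * a - k2 * i, k2 * i]

        sol = solve_ivp(rhs, (0.0, 24 * 3600.0), [0.10, 0.0, 0.0], method="LSODA")
        print(f"predicted final product concentration: {sol.y[2, -1]:.4f} M")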

  2. Quantitative structure-activity relationships of the antimalarial agent artemisinin and some of its derivatives - a DFT approach.

    Science.gov (United States)

    Rajkhowa, Sanchaita; Hussain, Iftikar; Hazarika, Kalyan K; Sarmah, Pubalee; Deka, Ramesh Chandra

    2013-09-01

    Artemisinins form the most important class of antimalarial agents currently available; artemisinin itself is a unique sesquiterpene peroxide occurring as a constituent of Artemisia annua. Artemisinin is effectively used in the treatment of drug-resistant Plasmodium falciparum, and because of its rapid clearance of cerebral malaria, many clinically useful semisynthetic drugs for severe and complicated malaria have been developed from it. However, one of the major disadvantages of the artemisinins is their poor solubility in both oil and water; to overcome this difficulty, many derivatives of artemisinin have been prepared. A comparative study on the chemical reactivity of artemisinin and some of its derivatives is performed using density functional theory (DFT) calculations. DFT-based global and local reactivity descriptors, such as hardness, chemical potential, electrophilicity index, Fukui function, and local philicity, calculated at the optimized geometries, are used to investigate the usefulness of these descriptors for understanding the reactive nature and reactive sites of the molecules. Multiple regression analysis is applied to build a quantitative structure-activity relationship (QSAR) model based on the DFT descriptors against the chloroquine-resistant, mefloquine-sensitive Plasmodium falciparum W-2 clone.
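
    As an illustration of the multiple-regression step described above, a minimal sketch with scikit-learn; the descriptor values (hardness, chemical potential, electrophilicity index) and the activities are fabricated placeholders, not the paper's data:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Rows: artemisinin derivatives; columns: DFT-based global descriptors
        # [hardness (eV), chemical potential (eV), electrophilicity index (eV)]
        X = np.array([[3.1, -4.2, 2.85],
                      [2.9, -4.0, 2.76],
                      [3.3, -4.5, 3.07],
                      [2.7, -3.9, 2.82],
                      [3.0, -4.1, 2.80]])
        y = np.array([6.1, 5.8, 6.5, 5.5, 6.0])  # activity, e.g. pIC50 vs. the W-2 clone

        qsar = LinearRegression().fit(X, y)
        print("regression coefficients:", qsar.coef_)
        print("R^2 on training data:   ", qsar.score(X, y))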

  3. The power of joint application of LEED and DFT in quantitative surface structure determination

    International Nuclear Information System (INIS)

    Heinz, K; Hammer, L; Mueller, S

    2008-01-01

    It is demonstrated for several cases that the joint application of low-energy electron diffraction (LEED) and structural calculations using density functional theory (DFT) can retrieve the correct surface structure even when either method applied alone fails. On the experimental side (LEED), the failure can be due to the simultaneous presence of weak and very strong scatterers, or to an insufficient database leaving different structures with the same quality of fit between experimental data and calculated model intensities. On the theory side (DFT), it can be difficult to predict the coverage of an adsorbate, or two different structures may have almost the same total energy while only one of them is realized in experiment owing to formation kinetics. It is demonstrated how, in the different cases, the joint application of both methods (which yield about the same structural precision) offers a way out of the dilemma.
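
    The quality of fit between experimental and calculated LEED I(V) curves mentioned here is conventionally quantified by Pendry's R-factor; the sketch below shows that comparison on synthetic curves (V0i, the imaginary part of the inner potential, is set to an assumed 5 eV):

        import numpy as np

        def pendry_r(E, I_exp, I_calc, v0i=5.0):
            """Pendry R-factor between two I(V) curves on a common energy grid."""
            def y_of(I):
                L = np.gradient(I, E) / I          # logarithmic derivative of the intensity
                return L / (1.0 + (L * v0i) ** 2)  # Pendry Y function
            y1, y2 = y_of(I_exp), y_of(I_calc)
            return np.sum((y1 - y2) ** 2) / np.sum(y1 ** 2 + y2 ** 2)

        E = np.linspace(50.0, 300.0, 251)                   # energy grid, eV
        I_exp = 1.0 + 0.8 * np.sin(E / 20.0) ** 2           # synthetic "experimental" curve
        I_calc = 1.0 + 0.8 * np.sin((E - 3.0) / 20.0) ** 2  # slightly shifted "model" curve
        print(f"R_P = {pendry_r(E, I_exp, I_calc):.3f}")    # 0 = perfect agreement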

  4. Investigating actinide compounds within a hybrid MCSCF-DFT model

    International Nuclear Information System (INIS)

    Fromager, E.; Jensen, H.J.A.; Wahlin, P.; Real, F.; Wahlgren, U.

    2007-01-01

    Complete text of publication follows: Investigations of actinide chemistry with quantum chemical methods still remain a complicated task, since they require an accurate and efficient treatment of the environment (crystal or solvent) as well as of relativistic and electron correlation effects. Concerning the latter, the current correlated methods, based on either Density-Functional Theory (DFT) or Wave-Function Theory (WFT), have their advantages and drawbacks. On the one hand, Kohn-Sham DFT (KS-DFT) calculates the dynamic correlation quite accurately and at a fairly low computational cost. However, it does not treat adequately the static correlation, which is significant in some actinide compounds because of the near-degeneracy of the 5f orbitals: a first example is the bent geometry obtained in KS-DFT(B3LYP) for the neptunyl ion NpO2(3+), which is found to be linear within a Multi-Configurational Self-Consistent Field (MCSCF) model [1]. A second one is the stable, bent geometry obtained in KS-DFT(B3LYP) for the plutonyl ion PuO2(4+), which disintegrates at the MCSCF level [1]. On the other hand, WFT can describe the static correlation, using for example an MCSCF model, but then an important part of the dynamic correlation has to be neglected. This part can be recovered with perturbation-theory-based methods such as CASPT2 or NEVPT2, but their computational complexity prevents large-scale calculations. It is therefore of great interest to develop a hybrid MCSCF-DFT model which combines the best of both the WFT and DFT approaches. The merging of WFT and DFT can be achieved by splitting the two-electron interaction into long-range and short-range parts [2]. The long-range part is then treated by WFT and the short-range part by DFT. We use the so-called 'erf' long-range interaction erf(μr12)/r12, which is based on the standard error function, and where μ is a free parameter which controls the long/short-range decomposition. The newly proposed recipe for the ...
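
    For reference, the 'erf' decomposition referred to above splits the Coulomb operator exactly, since erf(x) + erfc(x) = 1 for any x:

        1/r12 = erf(μ r12)/r12 + erfc(μ r12)/r12
                (long range: WFT)  (short range: DFT)

    In the limit μ → 0 the whole interaction is assigned to DFT (pure Kohn-Sham), while μ → ∞ recovers a pure wave-function treatment.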

  5. DFT application for chlorin derivatives photosensitizer drugs modeling

    Science.gov (United States)

    Machado, Neila; Carvalho, B. G.; Téllez Soto, C. A.; Martin, A. A.; Favero, P. P.

    2018-04-01

    Photodynamic therapy is an alternative form of cancer treatment that meets the desire for a less aggressive approach to the body. It is based on the interaction between a photosensitizer, activating light, and molecular oxygen. This interaction results in a cascade of reactions that leads to localized cell death. Many studies have been conducted to discover an ideal photosensitizer, which combines all the desirable characteristics of a potent cell killer while generating minimal side effects. Using Density Functional Theory (DFT) as implemented in the Vienna Ab-initio Simulation Package (VASP), new chlorin derivatives with different functional groups were simulated to evaluate their absorption wavelengths so as to permit resonant absorption of the incident laser. The Gaussian 09 program was used to determine vibrational wavenumbers and Natural Bond Orbitals. The drug with the best characteristics for a photosensitizer was a modified model of the original chlorin, which was named Thiol chlorin. According to our calculations it is stable, and its optical absorption at 708 nm is 19.6% more efficient than that of the conventional chlorin e6. Vibrational modes and optical and electronic properties were predicted. In conclusion, this study is an attempt to improve the development of new photosensitizer drugs through computational methods that save time and help to reduce the number of animals used in experimental testing.

  6. A quantitative analysis of weak intermolecular interactions & quantum chemical calculations (DFT) of novel chalcone derivatives

    Energy Technology Data Exchange (ETDEWEB)

    Chavda, Bhavin R., E-mail: chavdabhavin9@gmail.com; Dubey, Rahul P.; Patel, Urmila H. [Department of Physics, Sardar Patel University, Vallabh Vidyanagar-388120, Gujarat (India); Gandhi, Sahaj A. [Bhavan’s Shri I.L. Pandya Arts-Science and Smt. J.M. Shah Commerce College, Dakar, Anand-388001, Gujarat (India); Barot, Vijay M. [P. G. Center in Chemistry, Smt. S. M. Panchal Science College, Talod, Gujarat 383 215 (India)

    2016-05-06

    Novel chalcone derivatives have widespread applications in materials science and the medicinal industry. Density functional theory (DFT) is used to optimize the molecular structures of the three chalcone derivatives (M-I, II, III). The observed discrepancies between the theoretical and experimental (X-ray) results are attributed to the different environments of the molecules: the experimental values refer to molecules in the solid state, where they are subject to intermolecular forces such as non-bonded hydrogen-bond interactions, whereas the theoretical studies treat an isolated molecule in the gas phase. The lattice energies of all the molecules have been calculated using the PIXELC module in the Coulomb-London-Pauli (CLP) package and partitioned into the corresponding coulombic, polarization, dispersion and repulsion contributions. The lattice energy data confirm and strengthen the X-ray finding that weak but significant intermolecular interactions such as C-H…O, π-π and C-H…π play an important role in the stabilization of the crystal packing.

  7. Chlorophenols Sorption on Multi-Walled Carbon Nanotubes: DFT Modeling and Structure-Property Relationship Analysis

    OpenAIRE

    Watkins, Marquita; Sizochenko, Natalia; Moore, Quentarius; Golebiowski, Marek; Leszczynska, Danuta; Leszczynski, Jerzy

    2017-01-01

    The presence of chlorophenols in drinking water can be hazardous to human health. Optimization and computational modeling of the experimental conditions of adsorption lead to an understanding of the mechanisms of this process and to the design of efficient experimental equipment. In the current study, we investigated multi-walled carbon nanotubes by means of a density functional theory (DFT) approach. This was applied to study selected types of interactions between six solvents, five types of nanotubes, and ...

  8. Compositional and Quantitative Model Checking

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2010-01-01

    This paper gives a survey of a compositional model checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based ...

  9. Electromechanical and Chemical Sensing at the Nanoscale: DFT and Transport Modeling

    Science.gov (United States)

    Maiti, Amitesh

    Of the many nanoelectronic applications proposed for near- to medium-term commercial deployment, sensors based on carbon nanotubes (CNTs) and metal-oxide nanowires are receiving significant attention from researchers. Such devices typically operate on the basis of changes in the electrical response characteristics of the active component (CNT or nanowire) when it is subjected to an externally applied mechanical stress or the adsorption of a chemical or bio-molecule. Practical development of such technologies can greatly benefit from quantum chemical modeling based on density functional theory (DFT), and from electronic transport modeling based on the non-equilibrium Green's function (NEGF) approach. DFT can compute useful quantities like possible bond rearrangements, binding energies, charge transfer, and changes to the electronic structure, while NEGF can predict changes in electronic transport behavior and contact resistance. Effects of the surrounding medium and intrinsic structural defects can also be taken into account. In this work we review some recent DFT and transport investigations on (1) CNT-based nano-electromechanical sensors (NEMS) and (2) gas-sensing properties of CNTs and metal-oxide nanowires. We also briefly discuss our current understanding of CNT-metal contacts which, depending upon the metal, the deposition technique, and the masking method, can have a significant effect on device performance.
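
    A minimal sketch of the Landauer picture that underlies such NEGF transport modeling, with the current obtained by integrating a transmission function over the bias window; the Gaussian dip mimicking an adsorbate-induced scattering feature is an invented placeholder:

        import numpy as np

        G0 = 7.748091729e-5  # conductance quantum 2e^2/h, siemens

        def transmission(E, dip=0.0):
            """Placeholder T(E): one open channel with an adsorbate-induced dip."""
            return 1.0 - dip * np.exp(-((E - 0.1) / 0.05) ** 2)

        def current(V, dip=0.0, n=2001):
            """Zero-temperature Landauer current: I = G0 * integral of T(E) over the bias window."""
            E = np.linspace(-V / 2.0, V / 2.0, n)  # bias window (eV), Fermi level at 0
            dE = E[1] - E[0]
            return G0 * transmission(E, dip).sum() * dE  # amperes

        print("pristine device :", current(0.4), "A")
        print("with adsorbate  :", current(0.4, dip=0.6), "A")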

  10. Tetramer model of leukoemeraldine-emeraldine electrochemistry in the presence of trihalogenoacetic acids. DFT approach.

    Science.gov (United States)

    Barbosa, Nuno Almeida; Grzeszczuk, Maria; Wieczorek, Robert

    2015-01-15

    First results of the application of the DFT computational approach to the reversible electrochemistry of polyaniline are presented. A tetrameric chain was used as the simplest model of the polyaniline polymer species. The system under theoretical investigation involved six tetramer species, two electrons, and two protons, taking part in 14 elementary reactions. Moreover, the tetramer species were interacting with two trihalogenoacetic acid molecules. Trifluoroacetic, trichloroacetic, and tribromoacetic acids were found to impact the redox transformation of polyaniline as shown by cyclic voltammetry. The theoretical approach was considered as a powerful tool for investigating the main factors of importance for the experimental behavior. The DFT method provided molecular structures, interaction energies, and equilibrium energies of all of the tetramer-acid complexes. Differences between the energies of the isolated tetramer species and their complexes with acids are discussed in terms of the elementary reactions, that is, ionization potentials and electron affinities, equilibrium constants, electrode potentials, and reorganization energies. The DFT results indicate a high impact of the acid on the reorganization energy of a particular elementary electron-transfer reaction. The ECEC oxidation path was predicted by the calculations. The model of the reacting system must be extended to octamer species and/or dimeric oligomer species to better approximate the real polymer situation.

  11. Combining DFT, Cluster Expansions, and KMC to Model Point Defects in Alloys

    Science.gov (United States)

    Modine, N. A.; Wright, A. F.; Lee, S. R.; Foiles, S. M.; Battaile, C. C.; Thomas, J. C.; van der Ven, A.

    In an alloy, defect energies are sensitive to the occupations of nearby atomic sites, which leads to a distribution of defect properties. When radiation-induced defects diffuse from their initially non-equilibrium locations, this distribution becomes time-dependent. The defects can become trapped in energetically favorable regions of the alloy, leading to a diffusion rate that slows dramatically with time. Density Functional Theory (DFT) allows the accurate determination of ground-state and transition-state energies for a defect in a particular alloy environment but requires thousands of processing hours for each such calculation. Kinetic Monte Carlo (KMC) can be used to model defect diffusion and the changing distribution of defect properties but requires energy evaluations for millions of local environments. We have used the Cluster Expansion (CE) formalism to 'glue' together these seemingly incompatible methods. The occupation of each alloy site is represented by an Ising-like variable, and products of these variables are used to expand quantities of interest. Once a CE is fit to a training set of DFT energies, it allows very rapid evaluation of the energy for an arbitrary configuration, while maintaining the accuracy of the underlying DFT calculations. These energy evaluations are then used to drive our KMC simulations. We will demonstrate the application of our DFT/CE/KMC approach to model thermal and carrier-induced diffusion of intrinsic point defects in III-V alloys. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE ...
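
    A minimal sketch of the 'glue' described above: an Ising-like cluster expansion supplies configuration energies, which a Metropolis-style Monte Carlo loop (standing in for the full kinetic Monte Carlo machinery) consumes; the lattice, effective cluster interactions and temperature are invented placeholders:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 20                               # 1-D toy lattice of alloy sites
        sigma = rng.choice([-1, 1], size=N)  # Ising-like occupation variables

        V0, V1 = 0.0, 0.02                   # point and nearest-neighbor ECIs, eV (placeholders)

        def ce_energy(s):
            """Cluster-expansion energy: point term plus nearest-neighbor pair term."""
            return V0 * s.sum() + V1 * np.sum(s * np.roll(s, 1))

        kT = 0.025                           # eV, roughly room temperature
        for _ in range(1000):                # Metropolis sampling of alloy configurations
            i = rng.integers(N)
            trial = sigma.copy()
            trial[i] *= -1                   # flip the occupation of one site
            dE = ce_energy(trial) - ce_energy(sigma)
            if dE <= 0 or rng.random() < np.exp(-dE / kT):
                sigma = trial
        print("final configuration energy (eV):", ce_energy(sigma))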

  12. Prediction Errors of Molecular Machine Learning Models Lower than Hybrid DFT Error.

    Science.gov (United States)

    Faber, Felix A; Hutchison, Luke; Huang, Bing; Gilmer, Justin; Schoenholz, Samuel S; Dahl, George E; Vinyals, Oriol; Kearnes, Steven; Riley, Patrick F; von Lilienfeld, O Anatole

    2017-11-14

    We investigate the impact of choosing regressors and molecular representations for the construction of fast machine learning (ML) models of 13 electronic ground-state properties of organic molecules. The performance of each regressor/representation/property combination is assessed using learning curves which report out-of-sample errors as a function of training set size, with up to ∼118k distinct molecules. Molecular structures and properties at the hybrid density functional theory (DFT) level of theory come from the QM9 database [Ramakrishnan et al. Sci. Data 2014, 1, 140022] and include enthalpies and free energies of atomization, HOMO/LUMO energies and gap, dipole moment, polarizability, zero-point vibrational energy, heat capacity, and the highest fundamental vibrational frequency. Various molecular representations have been studied (Coulomb matrix, bag of bonds, BAML and ECFP4, molecular graphs (MG)), as well as newly developed distribution-based variants including histograms of distances (HD), angles (HDA/MARAD), and dihedrals (HDAD). Regressors include linear models (Bayesian ridge regression (BR) and linear regression with elastic net regularization (EN)), random forest (RF), kernel ridge regression (KRR), and two types of neural networks, graph convolutions (GC) and gated graph networks (GG). Out-of-sample errors are strongly dependent on the choice of representation, regressor and molecular property. Electronic properties are typically best accounted for by MG and GC, while energetic properties are better described by HDAD and KRR. The specific combinations with the lowest out-of-sample errors in the ∼118k training set size limit are (free) energies and enthalpies of atomization (HDAD/KRR), HOMO/LUMO eigenvalue and gap (MG/GC), dipole moment (MG/GC), static polarizability (MG/GG), zero-point vibrational energy (HDAD/KRR), heat capacity at room temperature (HDAD/KRR), and highest fundamental vibrational frequency (BAML/RF). We present numerical ...
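
    A minimal sketch of producing such a learning curve with kernel ridge regression; random synthetic data stands in for the QM9 representations and properties, and the hyperparameters are arbitrary:

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(1)
        X = rng.normal(size=(2000, 30))  # stand-in molecular representations
        y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.05 * rng.normal(size=2000)  # stand-in property

        X_test, y_test = X[1500:], y[1500:]   # fixed out-of-sample set
        for n in (100, 200, 400, 800, 1500):  # growing training-set sizes
            krr = KernelRidge(kernel="laplacian", alpha=1e-6, gamma=0.05)
            krr.fit(X[:n], y[:n])
            mae = mean_absolute_error(y_test, krr.predict(X_test))
            print(f"N = {n:5d}   out-of-sample MAE = {mae:.3f}")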

  13. Effective one-body potential of DFT plus correlated kinetic energy density for two-electron spherical model atoms

    International Nuclear Information System (INIS)

    March, N.H.; Ludena, Eduardo V.

    2004-01-01

    For three model problems concerning two-electron spin-compensated ground states with spherical density, the third-order linear homogeneous differential equation constructed for the determination of ρ(r) is used here in conjunction with the von Weizsacker functional to characterize the one-body potential of density functional theory (DFT). Correlated von Weizsacker-type terms are compared to the exact DFT functional.

  14. Quantitative reactive modeling and verification.

    Science.gov (United States)

    Henzinger, Thomas A

    Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.

  15. DFT molecular modeling and NMR conformational analysis of a new longipinenetriolone diester

    Science.gov (United States)

    Cerda-García-Rojas, Carlos M.; Guerra-Ramírez, Diana; Román-Marín, Luisa U.; Hernández-Hernández, Juan D.; Joseph-Nathan, Pedro

    2006-05-01

    The structure and conformational behavior of the new natural compound (4R,5S,7S,8R,9S,10R,11R)-longipin-2-en-7,8,9-triol-1-one 7-angelate-9-isovalerate (1), isolated from Stevia eupatoria, were studied by molecular modeling and NMR spectroscopy. A Monte Carlo search followed by DFT calculations at the B3LYP/6-31G* level provided the theoretical conformations of the sesquiterpene framework, which were in full agreement with results derived from the 1H-1H coupling constant analysis.
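
    Conformational analysis from 1H-1H couplings of this kind typically rests on a Karplus-type relation between a vicinal 3J coupling and the intervening dihedral angle; a minimal sketch (the A, B, C coefficients are generic literature-style values, not those used in the paper):

        import numpy as np

        def karplus_3j(theta_deg, A=7.76, B=-1.10, C=1.40):
            """Karplus relation: 3J(H,H) = A cos^2(theta) + B cos(theta) + C, in Hz."""
            t = np.radians(theta_deg)
            return A * np.cos(t) ** 2 + B * np.cos(t) + C

        for theta in (0, 60, 90, 120, 180):
            print(f"dihedral {theta:3d} deg -> 3J = {karplus_3j(theta):5.2f} Hz")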

  16. Operations management research methodologies using quantitative modeling

    NARCIS (Netherlands)

    Bertrand, J.W.M.; Fransoo, J.C.

    2002-01-01

    Gives an overview of quantitative model-based research in operations management, focusing on research methodology. Distinguishes between empirical and axiomatic research, and furthermore between descriptive and normative research. Presents guidelines for doing quantitative model-based research in ...

  17. De(side chain) model of epothilone: bioconformer interconversions DFT study.

    Science.gov (United States)

    Rusinska-Roszak, Danuta; Lozynski, Marek

    2009-07-01

    Using ab initio methods, we have studied conformations of the de(sidechain)de(dioxy)difluoroepothilone model to quantify the stability change between the exo and endo conformers of the epoxy ring. The DFT minimization of the macrolactone ring reveals four low-energy conformers, although MP2 predicts five stable structures. For this model, the DFT hybrid functional (B3LYP/6-31+G(d,p)) places the global minimum at one of the exo forms (C), experimentally observed in the solid state, whereas the MP2 electron correlation method unexpectedly places it at the virtual endo form (W). Using the QST3 technique, several pathways were found for the conversion of the low-energy conformers to the other low-energy exo representatives, as well as within the endo analog subset. The potential energy relationships obtained for several exo forms suggest a high conformational mobility between the three experimentally observed conformers. The high rotational barrier, however, excludes direct equilibrium with the experimental EC-derived endo form S. The highest calculated transition state for the conversion of the most stable exo interligand form M to the endo form S lies approximately 28 kcal/mol above the energy of the former. The two-step interconversion of the exo H conformer to the endo S likewise requires at least 28 kcal/mol. Surprisingly, we found that the transition state from the H form to the virtual endo W has an acceptable energy of about 9 kcal/mol, and the next barrier, for free interconversion of endo W to endo S, is 13 kcal/mol.

  18. A real-time fluorescent sensor specific to Mg2+: crystallographic evidence, DFT calculation and its use for quantitative determination of magnesium in drinking water.

    Science.gov (United States)

    Men, Guangwen; Chen, Chunrong; Zhang, Shitong; Liang, Chunshuang; Wang, Ying; Deng, Mengyu; Shang, Hongxing; Yang, Bing; Jiang, Shimei

    2015-02-14

    An "off-the-shelf" fluorescence "turn-on" Mg(2+) chemosensor 3,5-dichlorosalicylaldehyde (BCSA) was rationally designed and developed. This proposed sensor works based on Mg(2+)-induced formation of the 2 : 1 BCSA-Mg(2+) complex. The coordination of BSCA to Mg(2+) increases its structural rigidity generating a chelation-enhanced fluorescence (CHEF) effect which was confirmed by single crystal XRD studies of the BSCA-Mg(2+) complex and TD/DFT calculations. This sensor exhibits high sensitivity and selectivity for the quantitative monitoring of Mg(2+) with a wide detection range (0-40 μM), a low detection limit (2.89 × 10(-7) mol L(-1)) and a short response time (sensor can be utilized to monitor Mg(2+) in real time within actual samples from drinking water.

  19. DFT Modeling of Cross-Linked Polyethylene: Role of Gold Atoms and Dispersion Interactions.

    Science.gov (United States)

    Blaško, Martin; Mach, Pavel; Antušek, Andrej; Urban, Miroslav

    2018-02-08

    Using DFT modeling, we analyze the concerted action of gold atoms and dispersion interactions in cross-linked polyethylene. Our model consists of two oligomer chains (PEn) with 7, 11, 15, 19, or 23 carbon atoms in each oligomer cross-linked with one to three Au atoms through C-Au-C bonds. In structures with a single gold atom the C-Au-C bond is located in the central position of the oligomer. Binding energies (BEs) with respect to two oligomer radical fragments and Au are as high as 362-489 kJ/mol depending on the length of the oligomer chain. When the dispersion contribution in PEn-Au-PEn oligomers is omitted, BE is almost independent of the number of carbon atoms, lying between 293 and 296 kJ/mol. The dispersion energy contributions to BEs in PEn-Au-PEn rise nearly linearly with the number of carbon atoms in the PEn chain. The carbon-carbon distance in the C-Au-C moiety is around 4.1 Å, similar to the bond distance between saturated closed shell chains in the polyethylene crystal. BEs of pure saturated closed shell PEn-PEn oligomers are 51-187 kJ/mol. Both Au atoms and dispersion interactions contribute considerably to the creation of nearly parallel chains of oligomers with reasonably high binding energies.

  20. Water adsorption on a copper formate paddlewheel model of CuBTC: A comparative MP2 and DFT study

    Science.gov (United States)

    Toda, Jordi; Fischer, Michael; Jorge, Miguel; Gomes, José R. B.

    2013-11-01

    Simultaneous adsorption of two water molecules on the open metal sites of the HKUST-1 metal-organic framework (MOF), modeled with a Cu2(HCOO)4 cluster, was studied by means of density functional theory (DFT) and second-order Moller-Plesset (MP2) approaches together with correlation-consistent basis sets. Experimental geometries and MP2 energetic data extrapolated to the complete basis set limit were used as benchmarks for testing the accuracy of several different exchange-correlation functionals in correctly describing the water-MOF interaction. M06-L and some LC-DFT methods emerge as the most appropriate in terms of the quality of the geometrical data, the energetic data and the computational resources needed.
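
    A minimal sketch of the kind of two-point complete-basis-set (CBS) extrapolation commonly applied to MP2 correlation energies with correlation-consistent basis sets (Helgaker-type 1/X^3 form); the input energies are placeholders:

        def cbs_two_point(e_x, e_y, x, y):
            """Two-point 1/X^3 extrapolation of correlation energies: E_X = E_CBS + A/X^3."""
            return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)

        # Placeholder MP2 correlation energies (hartree), aug-cc-pVTZ (X=3) and aug-cc-pVQZ (X=4)
        e_tz, e_qz = -0.52310, -0.53175
        print(f"E_corr(CBS) = {cbs_two_point(e_qz, e_tz, 4, 3):.5f} hartree")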

  1. A DFT study on benzene adsorption over tungsten sulfides: surface model and adsorption geometries

    NARCIS (Netherlands)

    Koide, R.; Hensen, E.J.M.; Paul, J.F.; Cristol, S.; Payen, E.; Nakamura, H.; Santen, van R.A.

    2007-01-01

    Benzene adsorption on a WS2(100) surface was studied by ab initio periodic DFT computations. Benzene adsorption is facile on the bridge site of the bare W edge via η2 or η3 coordination. Taking into account the stable configuration at the W edge under typical hydrotreating reaction conditions (623 K) ...

  2. The qualitative and quantitative accuracy of DFT methods in computing 1J(C–F), 1J(C–N) and nJ(F–F) spin–spin coupling of fluorobenzene and fluoropyridine molecules

    International Nuclear Information System (INIS)

    Adeniyi, Adebayo A.; Ajibade, Peter A.

    2015-01-01

    The qualitative and quantitative accuracy of DFT methods combined with different basis sets in computing J-couplings of the types 1J(C–F) and nJ(F–F) is investigated for the fluorobenzene and fluoropyridine derivatives. Interestingly, all of the computational methods perfectly reproduced the experimental ordering of nJ(F–F), but many failed to reproduce the experimental ordering of the 1J(C–F) coupling. The functional PBEPBE gives the best quantitative values, closest to the experimental spin–spin couplings, when combined with the basis sets aug-cc-pVDZ and DGDZVP, but it is also among the methods that fail to reproduce the experimental ordering of the 1J(C–F) coupling. The basis set DGDZVP combined with all the methods except PBEPBE perfectly reproduces the experimental 1J(C–F) ordering. All the methods reproduce either the positive or the negative sign of the experimental spin–spin coupling, except for the basis set 6-31+G(d,p), which fails to reproduce the experimental positive value of 3J(F–F) regardless of which DFT method is used. The value of the FC term is far higher than that of all other Ramsey terms in the one-bond 1J(C–F) coupling, but in the two-, three- and four-bond nJ(F–F) couplings the values of the PSO and SD terms are higher. - Graphical abstract: DFT methods were used to compute the J-couplings of the molecules benf, benf2, benf2c, benf2c2, pyrf, pyrfc and pyrfc2, and the results are presented. The right combination of DFT functional and basis set can reproduce high-level EOM-CCSD and experimental J-coupling results. All the methods reproduce the qualitative ordering of the experimental J-couplings, but not all reproduce the quantitative values. The best quantitative results were obtained from PBEPBE combined with the large basis set aug-cc-pVDZ; PBEPBE combined with the smaller basis set DGDZVP gives highly similar values. - Highlights: • DFT methods were used to compute the J-couplings of the molecules. • The right combination of DFT functional with basis ...

  3. Quantitative structure - mesothelioma potency model ...

    Science.gov (United States)

    Cancer potencies of mineral and synthetic elongated particle (EP) mixtures, including asbestos fibers, are influenced by changes in fiber dose composition, bioavailability, and biodurability in combination with relevant cytotoxic dose-response relationships. A unique and comprehensive rat intra-pleural (IP) dose characterization data set, with a wide variety of EP size, shape, crystallographic, chemical, and bio-durability properties, facilitated extensive statistical analyses of 50 rat IP exposure test results for evaluation of alternative dose pleural mesothelioma response models. Utilizing logistic regression, maximum likelihood evaluations of thousands of alternative dose metrics based on hundreds of individual EP dimensional variations within each test sample yielded four major findings: (1) data for simulations of short-term EP dose changes in vivo (mild acid leaching) provide superior predictions of tumor incidence compared to non-acid-leached data; (2) the sum of the EP surface areas (ΣSA) from these mildly acid-leached samples provides the optimum holistic dose-response model; (3) progressive removal of dose associated with very short and/or thin EPs significantly degrades the resultant ΣEP- or ΣSA-based predictive model fits, as judged by Akaike's Information Criterion (AIC); and (4) alternative, biologically plausible model adjustments provide evidence for reduced potency of EPs with length/width (aspect) ratios and/or lengths beyond critical values (80 µm) ...
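
    A minimal sketch of the logistic dose-response fitting and AIC comparison described above; the dose metric values and tumor counts are fabricated placeholders:

        import numpy as np
        import statsmodels.api as sm

        # Placeholder data: log10 of a summed EP surface-area dose vs. mesothelioma incidence
        log_dose = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0])
        n_rats   = np.full(7, 50)
        n_tumor  = np.array([1, 3, 8, 17, 29, 40, 46])

        X = sm.add_constant(log_dose)
        fit = sm.GLM(np.column_stack([n_tumor, n_rats - n_tumor]), X,
                     family=sm.families.Binomial()).fit()
        print("intercept, slope:", fit.params)
        print("AIC =", fit.aic)  # lower AIC favors this dose metric over alternatives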

  4. Tuning of nanodiamond particles’ optical properties by structural defects and surface modifications: DFT modelling

    Czech Academy of Sciences Publication Activity Database

    Kratochvílová, Irena; Kovalenko, Alexander; Fendrych, František; Petráková, Vladimíra; Záliš, Stanislav; Nesladek, M.

    2011-01-01

    Vol. 21, No. 45 (2011), pp. 18248-18255. ISSN 0959-9428. R&D Projects: GA ČR(CZ) GAP304/10/1951; GA AV ČR KAN200100801. Institutional research plan: CEZ:AV0Z10100520; CEZ:AV0Z40400503. Keywords: nanodiamond * luminescence * DFT. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 5.968, year: 2011. http://pubs.rsc.org/en/content/articlelanding/2011/JM/C1JM13525B

  5. Quantitative interface models for simulating microstructure evolution

    International Nuclear Information System (INIS)

    Zhu, J.Z.; Wang, T.; Zhou, S.H.; Liu, Z.K.; Chen, L.Q.

    2004-01-01

    To quantitatively simulate microstructural evolution in real systems, we investigated three different interface models: a sharp-interface model implemented by the software DICTRA, and two diffuse-interface models which use either physical order parameters or artificial order parameters. A particular example is considered, the diffusion-controlled growth of a γ' precipitate in a supersaturated γ matrix in Ni-Al binary alloys. All three models use the thermodynamic and kinetic parameters from the same databases. The temporal evolution profiles of composition from the different models are shown to agree with each other. The focus is on examining the advantages and disadvantages of each model as applied to microstructure evolution in alloys.

  6. Quantitative Modeling of Earth Surface Processes

    Science.gov (United States)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
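
    As a taste of the 'core set of equations' such a text builds on, a minimal sketch of 1-D hillslope diffusion, dz/dt = D d2z/dx2, solved with an explicit finite-difference scheme; the grid, diffusivity and initial scarp are arbitrary:

        import numpy as np

        D = 0.01                 # hillslope diffusivity, m^2/yr (illustrative)
        dx = 1.0                 # grid spacing, m
        dt = 0.2 * dx ** 2 / D   # time step satisfying the explicit stability limit

        x = np.arange(0.0, 100.0, dx)
        z = np.where(x < 50.0, 10.0, 0.0)  # initial 10 m fault scarp

        for _ in range(5000):    # evolve the profile (here ~100 kyr)
            z[1:-1] += dt * D * (z[2:] - 2.0 * z[1:-1] + z[:-2]) / dx ** 2

        print("maximum hillslope gradient after evolution:", np.abs(np.diff(z) / dx).max())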

  7. Dry (CO2) reforming of methane over Pt catalysts studied by DFT and kinetic modeling

    International Nuclear Information System (INIS)

    Niu, Juntian; Du, Xuesen; Ran, Jingyu; Wang, Ruirui

    2016-01-01

    Graphical abstract: - Highlights: • CH appears to be the most abundant species on the Pt(111) surface in CH4 dissociation. • CO2* + H* → COOH* + * → CO* + OH* is the dominant reaction pathway in CO2 activation. • Major reaction pathway in CH oxidation: CH* + OH* → CHOH* + * → CHO* + H* → CO* + 2H*. • C* + OH* → COH* + * → CO* + H* is the dominant reaction pathway in C oxidation. - Abstract: Dry reforming of methane (DRM) is a well-studied reaction that is of both scientific and industrial importance. In order to design catalysts that minimize deactivation and improve the selectivity and activity for a high H2/CO yield, it is necessary to understand the elementary reaction steps involved in the activation and conversion of CO2 and CH4. In the present work, a microkinetic model based on density functional theory (DFT) calculations is applied to explore the reaction mechanism of methane dry reforming on Pt catalysts. The adsorption energies of the reactants, intermediates and products, and the activation barriers for the elementary reactions involved in the DRM process, are calculated on the Pt(111) surface. For CH4 direct dissociation, the kinetic results show that CH dissociative adsorption on the Pt(111) surface is the rate-determining step. CH appears to be the most abundant species on the Pt(111) surface, suggesting that carbon deposits do not form easily during CH4 dehydrogenation on Pt(111). For CO2 activation, three possible reaction pathways are considered to contribute to CO2 decomposition: (I) CO2* + * → CO* + O*; (II) CO2* + H* → COOH* + * → CO* + OH*; (III) CO2* + H* → mono-HCOO* + * → bi-HCOO* + * [CO2* + H* → bi-HCOO* + *] → CHO* + O*. Path I requires overcoming an activation barrier of 1.809 eV, and the forward reaction is calculated to be strongly endothermic by 1.430 eV. In addition, the kinetic results also indicate this process is not easy to ...
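
    To put the quoted barriers on a kinetic footing, a minimal transition-state-theory sketch converting activation energies into rate constants; the temperature and the Path II/III barriers are illustrative assumptions (only the 1.809 eV Path I barrier appears above):

        import numpy as np

        KB_EV = 8.617333262e-5       # Boltzmann constant, eV/K
        T = 1073.0                   # K, a typical dry-reforming temperature (assumed)
        prefactor = 2.083664e10 * T  # k_B*T/h, 1/s

        barriers = {"Path I   (direct CO2* dissociation)": 1.809,  # eV, from the abstract
                    "Path II  (COOH*-mediated)":            1.00,  # eV, placeholder
                    "Path III (formate-mediated)":          1.20}  # eV, placeholder

        for path, ea in barriers.items():
            k = prefactor * np.exp(-ea / (KB_EV * T))
            print(f"{path}: k ~ {k:.2e} 1/s")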

  8. Quantitative system validation in model driven design

    DEFF Research Database (Denmark)

    Hermanns, Hilger; Larsen, Kim Guldstrand; Raskin, Jean-Francois

    2010-01-01

    The European STREP project Quasimodo develops theory, techniques and tool components for handling quantitative constraints in model-driven development of real-time embedded systems, covering in particular real-time, hybrid and stochastic aspects. This tutorial highlights the advances made, focus...

  9. Quantitative Models and Analysis for Reactive Systems

    DEFF Research Database (Denmark)

    Thrane, Claus

    ... phones and websites. Acknowledging that now more than ever, systems come in contact with the physical world, we need to revise the way we construct models and verification algorithms, to take into account the behavior of systems in the presence of approximate, or quantitative, information, provided...

  10. Recent trends in social systems quantitative theories and quantitative models

    CERN Document Server

    Hošková-Mayerová, Šárka; Soitu, Daniela-Tatiana; Kacprzyk, Janusz

    2017-01-01

    The papers collected in this volume focus on new perspectives on individuals, society, and science, specifically in the field of socio-economic systems. The book is the result of a scientific collaboration among experts from “Alexandru Ioan Cuza” University of Iaşi (Romania), “G. d’Annunzio” University of Chieti-Pescara (Italy), "University of Defence" in Brno (Czech Republic), and "Pablo de Olavide" University of Sevilla (Spain). The heterogeneity of the contributions presented in this volume reflects the variety and complexity of social phenomena. The book is divided into four Sections, as follows. The first Section deals with recent trends in social decisions; specifically, it aims to understand the driving forces of social decisions. The second Section focuses on the social and public sphere; it is oriented toward recent developments in social systems and control. Trends in quantitative theories and models are described in Section 3, where many new formal, mathematical-statistical to...

  11. Surface structure and electronic states of epitaxial β-FeSi2(100)/Si(001) thin films: Combined quantitative LEED, ab initio DFT, and STM study

    Czech Academy of Sciences Publication Activity Database

    Romanyuk, Olexandr; Hattori, K.; Someta, M.; Daimon, H.

    2014-01-01

    Vol. 90, No. 15 (2014), 155305-1 to 155305-9. ISSN 1098-0121. Grants - others: AVČR(CZ) M100101201; Murata Science Foundation(JP) Project no. 00295. Institutional support: RVO:68378271. Keywords: iron silicide * LEED I-V * DFT * STM * surface reconstruction * surface states. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 3.736, year: 2014

  12. Quantitative models for sustainable supply chain management

    DEFF Research Database (Denmark)

    Brandenburg, M.; Govindan, Kannan; Sarkis, J.

    2014-01-01

    Sustainability, the consideration of environmental factors and social aspects, in supply chain management (SCM) has become a highly relevant topic for researchers and practitioners. The application of operations research methods and related models, i.e. formal modeling, for closed-loop SCM and reverse logistics has been effectively reviewed in previously published research. This situation is in contrast to the understanding and review of mathematical models that focus on environmental or social factors in forward supply chains (SC), which has seen less investigation. To evaluate developments and directions of this research area, this paper provides a content analysis of 134 carefully identified papers on quantitative, formal models that address sustainability aspects in the forward SC. It was found that a preponderance of the publications and models appeared in a limited set of six journals ...

  13. DFT modeling of the electronic and magnetic structures and chemical bonding properties of intermetallic hydrides

    International Nuclear Information System (INIS)

    Al Alam, A.F.

    2009-06-01

    This thesis presents an ab initio study of several classes of intermetallics and their hydrides. These compounds are interesting from both fundamental and applied points of view. To achieve this aim, two complementary methods, formulated within DFT, were chosen: (i) the pseudopotential-based VASP code for geometry optimization, structural investigations and electron localization function (ELF) mapping, and (ii) the all-electron ASW method for a detailed description of the electronic structure and the chemical bonding properties following different schemes, as well as of quantities depending on core electrons such as the hyperfine field. Special interest is devoted to the interplay between magneto-volume effects and chemical (metal-H) interactions within the following hydrided systems: binary Laves (e.g. ScFe2) and Haucke (e.g. LaNi5) phases on the one hand, and ternary cerium-based (e.g. CeRhSn) and uranium-based (e.g. U2Ni2Sn) alloys on the other hand. (author)

  14. DFT +U Modeling of Hole Polarons in Organic Lead Halide Perovskites

    Science.gov (United States)

    Welch, Eric; Erhart, Paul; Scolfaro, Luisa; Zakhidov, Alex

    Due to the ever-present drive towards improved efficiencies in solar cell technology, new and improved materials are emerging rapidly. Organic halide perovskites are a promising prospect, yet a fundamental understanding of the organic perovskite structure and electronic properties is missing. In particular, explanations of certain physical phenomena, specifically the low recombination rate and high mobility of charge carriers, remain controversial. We theoretically investigate the possible formation of hole polarons, adopting methodology used for oxide perovskites. The perovskites studied here have the ABX3 structure, with A an organic cation, B lead, and X a halogen; the combinations studied allow for (A1)x(A2)1-xB(X1)x(X2)3-x, where the alloy convention is used to denote mixtures of the organic cations and/or the halogens. Two organic cations, methylammonium and formamidinium, and three halogens, iodine, chlorine and bromine, are studied. Electronic structures and polaron behavior are studied through first-principles density functional theory (DFT) calculations using the Vienna Ab Initio Simulation Package (VASP). Local density approximation (LDA) pseudopotentials are used and a +U Hubbard correction of 8 eV is added; this method was shown to work for oxide perovskites. It is shown that a localized state, residing in the band gap of each structure, is realized with the Hubbard correction in systems with an electron removed. Thus, hole polarons are expected to be seen in these perovskites.

  15. Thz Spectroscopy and DFT Modeling of Intermolecular Vibrations in Hydrophobic Amino Acids

    Science.gov (United States)

    Williams, Michael R. C.; Aschaffenburg, Daniel J.; Schmuttenmaer, Charles A.

    2013-06-01

    Vibrations that involve intermolecular displacements occur in molecular crystals at frequencies in the 0.5-5 THz range (~15-165 cm⁻¹), and these motions are direct indicators of the interaction potential between the molecules. The intermolecular potential energy surface of crystalline hydrophobic amino acids is inherently interesting simply because of the wide variety of forces (electrostatic, dipole-dipole, hydrogen-bonding, van der Waals) that are present. Furthermore, an understanding of these particular interactions is immediately relevant to important topics like protein conformation and pharmaceutical polymorphism. We measured the low-frequency absorption spectra of several polycrystalline hydrophobic amino acids using THz time-domain spectroscopy, and in addition we carried out DFT calculations using periodic boundary conditions and an exchange-correlation functional that accounts for van der Waals dispersion forces. We chose to investigate a series of similar amino acids with closely analogous unit cells (leucine, isoleucine, and allo-isoleucine, in racemic or pseudo-racemic mixtures). This allows us to consider trends in the vibrational spectra as a function of small changes in molecular arrangement and/or crystal geometry. In this way, we gain confidence that peak assignments are not based on serendipitous similarities between calculated and observed features.

  16. Expert judgement models in quantitative risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Rosqvist, T. [VTT Automation, Helsinki (Finland); Tuominen, R. [VTT Automation, Tampere (Finland)

    1999-12-01

    Expert judgement is a valuable source of information in risk management. Especially, risk-based decision making relies significantly on quantitative risk assessment, which requires numerical data describing the initiator event frequencies and conditional probabilities in the risk model. This data is seldom found in databases and has to be elicited from qualified experts. In this report, we discuss some modelling approaches to expert judgement in risk modelling. A classical and a Bayesian expert model is presented and applied to real case expert judgement data. The cornerstone in the models is the log-normal distribution, which is argued to be a satisfactory choice for modelling degree-of-belief type probability distributions with respect to the unknown parameters in a risk model. Expert judgements are qualified according to bias, dispersion, and dependency, which are treated differently in the classical and Bayesian approaches. The differences are pointed out and related to the application task. Differences in the results obtained from the different approaches, as applied to real case expert judgement data, are discussed. Also, the role of a degree-of-belief type probability in risk decision making is discussed.
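
    A minimal sketch of the log-normal treatment described here, pooling expert estimates of an initiator-event frequency into a degree-of-belief distribution; the three expert values are invented:

        import numpy as np

        # Three experts' best estimates of an initiator-event frequency (events/year)
        estimates = np.array([1e-4, 3e-4, 5e-5])

        mu = np.log(estimates).mean()          # location of the pooled log-normal
        sigma = np.log(estimates).std(ddof=1)  # dispersion across experts

        p05, p95 = np.exp(mu + sigma * np.array([-1.645, 1.645]))
        print(f"pooled median = {np.exp(mu):.2e} /yr")
        print(f"90% credible interval = [{p05:.2e}, {p95:.2e}] /yr")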

  17. Expert judgement models in quantitative risk assessment

    International Nuclear Information System (INIS)

    Rosqvist, T.; Tuominen, R.

    1999-01-01

    Expert judgement is a valuable source of information in risk management. Especially, risk-based decision making relies significantly on quantitative risk assessment, which requires numerical data describing the initiator event frequencies and conditional probabilities in the risk model. This data is seldom found in databases and has to be elicited from qualified experts. In this report, we discuss some modelling approaches to expert judgement in risk modelling. A classical and a Bayesian expert model is presented and applied to real case expert judgement data. The cornerstone in the models is the log-normal distribution, which is argued to be a satisfactory choice for modelling degree-of-belief type probability distributions with respect to the unknown parameters in a risk model. Expert judgements are qualified according to bias, dispersion, and dependency, which are treated differently in the classical and Bayesian approaches. The differences are pointed out and related to the application task. Differences in the results obtained from the different approaches, as applied to real case expert judgement data, are discussed. Also, the role of a degree-of-belief type probability in risk decision making is discussed

  18. Quantitative Modeling of Landscape Evolution, Treatise on Geomorphology

    NARCIS (Netherlands)

    Temme, A.J.A.M.; Schoorl, J.M.; Claessens, L.F.G.; Veldkamp, A.; Shroder, F.S.

    2013-01-01

    This chapter reviews quantitative modeling of landscape evolution – which means that not just model studies but also modeling concepts are discussed. Quantitative modeling is contrasted with conceptual or physical modeling, and four categories of model studies are presented. Procedural studies focus ...

  19. CO2 adsorption-assisted CH4 desorption on carbon models of coal surface: A DFT study

    Science.gov (United States)

    Xu, He; Chu, Wei; Huang, Xia; Sun, Wenjing; Jiang, Chengfa; Liu, Zhongqing

    2016-07-01

    Injection of CO2 into coal seams is known to improve the yield of coal-bed methane gas. However, the technology of CO2 injection-enhanced coal-bed methane (CO2-ECBM) recovery is still in its infancy, and its mechanism remains unclear. Density functional theory (DFT) calculations were performed to elucidate the mechanism of CO2 adsorption-assisted CH4 desorption (AAD). To simulate coal surfaces, different six-ring aromatic clusters (2 × 2, 3 × 3, 4 × 4, 5 × 5, 6 × 6, and 7 × 7) were used as simplified graphene (Gr) carbon models. The adsorption and desorption of CH4 and/or CO2 on these carbon models were assessed. The results showed that a six-ring aromatic cluster model (4 × 4) can simulate the coal surface with acceptable approximation. The adsorption of CO2 onto these carbon models was more stable than that of CH4. Further, the adsorption energies of single CH4 and CO2 at the more stable site were -15.58 and -18.16 kJ/mol, respectively. When the two molecules (CO2 and CH4) interact with the surface together, CO2 compels CH4 to adsorb at the less stable site, with a resulting significant decrease in the adsorption energy of CH4 on the surface of the carbon model with pre-adsorbed CO2. The Mulliken charges and electrostatic potentials of CH4 and CO2 adsorbed on the surface of the carbon model were compared to determine their respective adsorption activities and changes. At the molecular level, our results show that the adsorption of the injected CO2 promotes the desorption of CH4, which is the underlying mechanism of CO2-ECBM.
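
    A minimal sketch of the adsorption-energy bookkeeping used in such cluster studies, E_ads = E(model+molecule) - E(model) - E(molecule); the total energies below are placeholders chosen only to land near the abstract's -15.58 kJ/mol value:

        # Placeholder DFT total energies in eV (not the paper's raw values)
        E_MODEL     = -1250.000  # bare 4x4 aromatic-cluster carbon model
        E_CH4       =   -24.050  # isolated CH4 molecule
        E_MODEL_CH4 = -1274.211  # CH4 adsorbed on the model

        EV_TO_KJ_PER_MOL = 96.485

        e_ads = E_MODEL_CH4 - E_MODEL - E_CH4  # negative = favorable adsorption
        print(f"E_ads(CH4) = {e_ads:.3f} eV = {e_ads * EV_TO_KJ_PER_MOL:.1f} kJ/mol")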

  20. The quantitative modelling of human spatial habitability

    Science.gov (United States)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the use of space as a medium of social communication).

  1. Noble gas encapsulation into carbon nanotubes: Predictions from analytical model and DFT studies

    Energy Technology Data Exchange (ETDEWEB)

    Balasubramani, Sree Ganesh; Singh, Devendra; Swathi, R. S., E-mail: swathi@iisertvm.ac.in [School of Chemistry, Indian Institute of Science Education and Research Thiruvananthapuram (IISER-TVM), Kerala 695016 (India)

    2014-11-14

    The energetics of the interaction of noble gas atoms with carbon nanotubes (CNTs) are investigated using an analytical model and density functional theory calculations. Encapsulation of the noble gas atoms He, Ne, Ar, Kr, and Xe into CNTs of various chiralities is studied in detail using an analytical model developed earlier by Hill and co-workers. The constrained motion of the noble gas atoms along the axes of the CNTs, as well as the off-axis motion, are discussed. Analyses of the forces, interaction energies, and acceptance and suction energies for encapsulation enable us to predict the optimal CNTs that can encapsulate each of the noble gas atoms. We find that CNTs of radii 2.98-4.20 Å (chiral indices (5,4), (6,4), (9,1), (6,6), and (9,3)) can efficiently encapsulate the He, Ne, Ar, Kr, and Xe atoms, respectively. Endohedral adsorption of all the noble gas atoms is preferred over exohedral adsorption on the various CNTs. The results obtained using the analytical model are subsequently compared with calculations performed with dispersion-including density functional theory at the M06-2X level using a triple-zeta basis set, and good qualitative agreement is found. The analytical model is, however, computationally cheap, as its equations can be programmed numerically and the results obtained in considerably less time.

  2. Quantitative histological models suggest endothermy in plesiosaurs

    Directory of Open Access Journals (Sweden)

    Corinna V. Fleischle

    2018-06-01

    Background: Plesiosaurs are marine reptiles that arose in the Late Triassic and survived to the Late Cretaceous. They have a unique and uniform bauplan and are known for their very long neck and hydrofoil-like flippers. Plesiosaurs are among the most successful vertebrate clades in Earth’s history. Based on bone mass decrease and cosmopolitan distribution, both of which affect lifestyle, indications of parental care, and oxygen isotope analyses, evidence for endothermy in plesiosaurs has accumulated. Recent bone histological investigations also provide evidence of fast growth and elevated metabolic rates. However, quantitative estimates of metabolic rates and bone growth rates in plesiosaurs have not been attempted before. Methods: Phylogenetic eigenvector maps is a method for estimating trait values from a predictor variable while taking into account phylogenetic relationships. As the predictor variable, this study employs vascular density, measured in bone histological sections of fossil eosauropterygians and extant comparative taxa. We quantified vascular density as primary osteon density, that is, the proportion of vascular area (including lamellar infillings of primary osteons) to total bone area. Our response variables are bone growth rate (expressed as local bone apposition rate) and resting metabolic rate (RMR). Results: Our models reveal bone growth rates and RMRs for plesiosaurs that are in the range of birds, suggesting that plesiosaurs were endothermic. Even for basal eosauropterygians we estimate values in the range of mammals or higher. Discussion: Our models are influenced by the availability of comparative data, which are lacking for large marine amniotes, potentially skewing our results. However, our statistically robust inference of fast growth and fast metabolism is in accordance with other evidence for plesiosaurian endothermy. Endothermy may explain the success of plesiosaurs, including their survival of the end-Triassic extinction ...

  3. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    Energy Technology Data Exchange (ETDEWEB)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both retrospectively and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempts at modeling conflict as a result of system-level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  4. Quantitative modelling of the biomechanics of the avian syrinx

    DEFF Research Database (Denmark)

    Elemans, Coen P. H.; Larsen, Ole Næsbye; Hoffmann, Marc R.

    2003-01-01

    We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e. the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts...

  5. Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization

    Directory of Open Access Journals (Sweden)

    Xuefeng Yan

    2013-01-01

    The simulation and optimization of an actual physical system are usually constructed on the basis of stochastic models, which inherently have both qualitative and quantitative characteristics. Most modeling specifications and frameworks find it difficult to describe a qualitative model directly. In order to deal with expert knowledge, uncertain reasoning, and other qualitative information, a combined qualitative and quantitative modeling specification is proposed, based on a hierarchical model structure framework comprising the meta-meta model, the meta-model, and the high-level model. A description logic system is defined for the formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The results show that the proposed method can describe a complex system more comprehensively, and that the survival probability of the target is higher when qualitative models are introduced into the quantitative simulation.

  6. A new DFT approach to model small polarons in oxides with proper account for long-range polarization

    Science.gov (United States)

    Kokott, Sebastian; Levchenko, Sergey V.; Scheffler, Matthias; Theory Department Team

    In this work, we address two important challenges in the DFT description of small polarons (excess charges localized within one unit cell): sensitivity to the errors in the exchange-correlation (XC) treatment and finite-size effects in supercell calculations. The polaron properties are obtained using a modified neutral potential-energy surface (PES), employing the hybrid HSE functional and considering the whole range of the exact-exchange mixing parameter. (Supported by the Deutsche Forschungsgemeinschaft.)

  7. Modeling of charge-transfer transitions and excited states in d6 transition metal complexes by DFT techniques

    Czech Academy of Sciences Publication Activity Database

    Vlček, Antonín; Záliš, Stanislav

    2007-01-01

    Vol. 251, no. 3-4 (2007), pp. 258-287. ISSN 0010-8545. R&D Projects: GA MŠk 1P05OC068; GA MŠk OC 139. Institutional research plan: CEZ:AV0Z40400503. Keywords: charge-transfer transition; DFT technique; excited states; spectroscopy. Subject RIV: CG - Electrochemistry. Impact factor: 8.568, year: 2007

  8. Dry (CO2) reforming of methane over Pt catalysts studied by DFT and kinetic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Niu, Juntian [Key Laboratory of Low-grade Energy Utilization Technologies and Systems, Ministry of Education of PRC, Chongqing University, Chongqing, 400044 (China); College of Power Engineering, Chongqing University, Chongqing, 400044 (China); Du, Xuesen, E-mail: xuesendu@cqu.edu.cn [Key Laboratory of Low-grade Energy Utilization Technologies and Systems, Ministry of Education of PRC, Chongqing University, Chongqing, 400044 (China); College of Power Engineering, Chongqing University, Chongqing, 400044 (China); Ran, Jingyu, E-mail: jyran@189.cn [Key Laboratory of Low-grade Energy Utilization Technologies and Systems, Ministry of Education of PRC, Chongqing University, Chongqing, 400044 (China); College of Power Engineering, Chongqing University, Chongqing, 400044 (China); Wang, Ruirui [Key Laboratory of Low-grade Energy Utilization Technologies and Systems, Ministry of Education of PRC, Chongqing University, Chongqing, 400044 (China); College of Power Engineering, Chongqing University, Chongqing, 400044 (China)

    2016-07-15

    Graphical abstract: - Highlights: • CH appears to be the most abundant species on the Pt(111) surface in CH4 dissociation. • CO2* + H* → COOH* + * → CO* + OH* is the dominant reaction pathway in CO2 activation. • Major reaction pathway in CH oxidation: CH* + OH* → CHOH* + * → CHO* + H* → CO* + 2H*. • C* + OH* → COH* + * → CO* + H* is the dominant reaction pathway in C oxidation. - Abstract: Dry reforming of methane (DRM) is a well-studied reaction that is of both scientific and industrial importance. In order to design catalysts that minimize deactivation and improve the selectivity and activity for a high H2/CO yield, it is necessary to understand the elementary reaction steps involved in the activation and conversion of CO2 and CH4. In the present work, a microkinetic model based on density functional theory (DFT) calculations is applied to explore the reaction mechanism of methane dry reforming on Pt catalysts. The adsorption energies of the reactants, intermediates and products, and the activation barriers for the elementary reactions involved in the DRM process, are calculated for the Pt(111) surface. For direct CH4 dissociation, the kinetic results show that CH dissociative adsorption on the Pt(111) surface is the rate-determining step. CH appears to be the most abundant species on the Pt(111) surface, suggesting that carbon deposits do not form easily during CH4 dehydrogenation on Pt(111). For CO2 activation, three possible reaction pathways are considered to contribute to CO2 decomposition: (I) CO2* + * → CO* + O*; (II) CO2* + H* → COOH* + * → CO* + OH*; (III) CO2* + H* → mono-HCOO* + * → bi-HCOO* + * [CO2* + H* → bi-HCOO* + *] → CHO* + O*. Path I requires overcoming an activation barrier of 1.809 eV, and the forward reaction is calculated to be strongly endothermic by 1.430 eV. …
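
    Microkinetic models of this kind convert DFT activation barriers into elementary rate constants, typically via transition state theory. A minimal sketch using the Eyring equation is given below; the 1.809 eV barrier for path I is taken from the abstract, while the path II barrier and the temperature are illustrative placeholders:

```python
import numpy as np

KB = 8.617333e-5   # Boltzmann constant, eV/K
H  = 4.135668e-15  # Planck constant, eV*s

def eyring_rate(barrier_ev, T=1023.0):
    """Transition-state-theory rate constant from a DFT activation barrier."""
    return (KB * T / H) * np.exp(-barrier_ev / (KB * T))

# Path I barrier (CO2* + * -> CO* + O*) is 1.809 eV per the abstract;
# the H-assisted path II barrier below is an assumed placeholder.
for label, ea in [("path I", 1.809), ("path II (assumed)", 1.0)]:
    print(f"{label}: k = {eyring_rate(ea):.3e} 1/s at 1023 K")
```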

  9. Conformational isomers of dichloro bis(1,3-diaminopropane) copper(II): Synthesis, characterization and DFT modeling

    Directory of Open Access Journals (Sweden)

    Seema Yadav

    2017-02-01

    Three isomers of [Cu(pn)2Cl2] in the solid state have been synthesized, isolated and characterized by elemental analysis, molar conductance, FTIR, TGA, EPR, electronic spectra and DFT calculations. The molar conductance of 1 mM solutions of the complexes measured in DMSO falls in the range 40–44 S cm2 mol−1. All the isomers in aqueous medium show a similar absorption pattern in the UV–visible region of the spectra. They are nearly identical in solution, although in the solid state they exist in three distinct colors. The change in the color of the complexes is due to a change in the conformations of the propanediamine molecule. Cu(II) in [Cu(pn)2Cl2] lies on an inversion center. It is octahedrally coordinated to four nitrogens of 1,3-diaminopropane (pn) and two chloride ions, displaying three different spatial conformations, namely chair–chair, chair–boat and boat–boat. The TGA of the complexes suggests that Cu is left as the final residue at 600 °C. The entire dataset has been supported by DFT calculations.

  10. Human eyeball model reconstruction and quantitative analysis.

    Science.gov (United States)

    Xing, Qi; Wei, Qi

    2014-01-01

    Determining the shape of the eyeball is important for diagnosing eyeball diseases such as myopia. In this paper, we present an automatic approach to precisely reconstruct the three-dimensional geometric shape of the eyeball from MR images. The model development pipeline involves image segmentation, registration, B-spline surface fitting and subdivision surface fitting, none of which requires manual interaction. From the resulting high-resolution models, geometric characteristics of the eyeball can be accurately quantified and analyzed. In addition to the eight metrics commonly used in existing studies, we propose two novel metrics, Gaussian curvature analysis and sphere distance deviation, to quantify the cornea shape and the whole eyeball surface, respectively. The experimental results show that the reconstructed eyeball models accurately represent the complex morphology of the eye. The ten metrics parameterize the eyeball among different subjects and can potentially be used for eye disease diagnosis.

  11. Modeling with Young Students--Quantitative and Qualitative.

    Science.gov (United States)

    Bliss, Joan; Ogborn, Jon; Boohan, Richard; Brosnan, Tim; Mellar, Harvey; Sakonidis, Babis

    1999-01-01

    A project created tasks and tools to investigate the quality and nature of 11- to 14-year-old pupils' reasoning with quantitative and qualitative computer-based modeling tools. The tasks and tools were used in two innovative modes of learning: expressive, where pupils created their own models, and exploratory, where pupils investigated an expert's model. …

  12. Quantitative occupational risk model: Single hazard

    International Nuclear Information System (INIS)

    Papazoglou, I.A.; Aneziris, O.N.; Bellamy, L.J.; Ale, B.J.M.; Oh, J.

    2017-01-01

    A model for the quantification of the occupational risk of a worker exposed to a single hazard is presented. The model connects the working conditions and worker behaviour to the probability of an accident resulting in one of three types of consequence: recoverable injury, permanent injury and death. Working conditions and the safety barriers in place to reduce the likelihood of an accident are included. Logical connections are modelled through an influence diagram. Quantification of the model is based on two sources of information: (a) the number of accidents observed over a period of time and (b) an assessment of exposure data for activities and working conditions over the same period of time and the same working population. The effectiveness of risk-reducing measures affecting the working conditions, worker behaviour and/or safety barriers can be quantified through the effect of these measures on occupational risk. - Highlights: • Quantification of occupational risk from a single hazard. • An influence diagram connects working conditions, worker behaviour and safety barriers. • The necessary data comprise the number of accidents and the total exposure of workers. • The effectiveness of risk-reducing measures is quantified through its impact on the risk. • An example illustrates the methodology.
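
    The quantification logic can be sketched in a few lines: an hourly accident probability estimated from observed accidents and exposure, reduced by a barrier, and split over the three consequence types. All numbers below are invented placeholders, not the model's calibrated values:

```python
# Minimal sketch of single-hazard occupational risk quantification:
# observed accident counts and exposure give P(accident | hour worked),
# which is split over three consequence classes. Numbers are invented.
exposure_hours = 1600.0                # worker exposure per year
p_accident_per_hour = 2.0e-6           # accidents / total observed exposure
consequence_split = {"recoverable injury": 0.90,
                     "permanent injury": 0.08,
                     "death": 0.02}
barrier_effect = 0.5                   # assumed halving of accident likelihood

p_hour = p_accident_per_hour * (1.0 - barrier_effect)
for outcome, frac in consequence_split.items():
    print(f"{outcome}: {exposure_hours * p_hour * frac:.2e} per worker-year")
```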

  13. The quantitative modelling of human spatial habitability

    Science.gov (United States)

    Wise, James A.

    1988-01-01

    A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to any size or shape of interior, at any scale of consideration, from the Space Station as a whole down to an individual enclosure or workstation. Using as a point of departure the isovist model developed by Dr. Michael Benedikt of the University of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life-support concerns.

  14. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models as special cases…

  15. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

    Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  16. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for the prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters (1992–2012). There are six …
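
    The core of any such model is the HMM forward recursion, which propagates state probabilities through the transition and emission matrices. A minimal discrete sketch with invented states and probabilities (not the nine-variable model of the article) is:

```python
import numpy as np

# Minimal discrete-HMM forward pass; states and probabilities are invented.
states = ["no-snow", "light", "heavy"]
start = np.array([0.6, 0.3, 0.1])
trans = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.2, 0.4, 0.4]])
emit = np.array([[0.8, 0.2],   # P(obs | state), obs in {dry, wet}
                 [0.4, 0.6],
                 [0.1, 0.9]])

obs = [0, 1, 1]                # a short observation sequence
alpha = start * emit[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ trans) * emit[:, o]
print("sequence likelihood:", alpha.sum())
print("most probable current state:", states[int(np.argmax(alpha))])
```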

  17. Methodologies for quantitative systems pharmacology (QSP) models : Design and Estimation

    NARCIS (Netherlands)

    Ribba, B.; Grimm, Hp; Agoram, B.; Davies, M.R.; Gadkar, K.; Niederer, S.; van Riel, N.; Timmis, J.; van der Graaf, Ph.

    2017-01-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development…

  18. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation

    NARCIS (Netherlands)

    Ribba, B.; Grimm, H. P.; Agoram, B.; Davies, M. R.; Gadkar, K.; Niederer, S.; van Riel, N.; Timmis, J.; van der Graaf, P. H.

    2017-01-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development…

  19. Generalized PSF modeling for optimized quantitation in PET imaging.

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for potentially enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution-degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients as well as noise-bias characteristics (including both image roughness and coefficient of variability) for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that an exactly matched PSF…
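
    The effect of an under- or over-estimated PSF can be illustrated with a one-dimensional MLEM toy reconstruction in which the modeled blur width differs from the true one. All widths, counts and the background term below are invented, and the 1-D Gaussian blur is a stand-in for a real PET system model:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# 1-D toy of PSF-in-reconstruction: MLEM where the modeled PSF width may
# under- or over-estimate the true PSF. All numbers are illustrative.
rng = np.random.default_rng(1)
truth = np.zeros(128)
truth[60:68] = 10.0                                 # a small "tumour"
true_sigma = 2.0
data = rng.poisson(gaussian_filter1d(truth, true_sigma) + 0.5)

def mlem(y, sigma_model, n_iter=50):
    x = np.ones_like(y, dtype=float)
    for _ in range(n_iter):
        fwd = gaussian_filter1d(x, sigma_model) + 0.5   # include background
        x *= gaussian_filter1d(y / fwd, sigma_model)
    return x

for s in (1.0, 2.0, 3.0):   # under-estimated, matched, over-estimated PSF
    rec = mlem(data.astype(float), s)
    print(f"sigma_model={s}: peak={rec.max():.1f}, ROI mean={rec[60:68].mean():.1f}")
```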

  20. DFT investigations of phosphotriesters hydrolysis in aqueous solution: a model for DNA single strand scission induced by N-nitrosoureas.

    Science.gov (United States)

    Liu, Tingting; Zhao, Lijiao; Zhong, Rugang

    2013-02-01

    DNA phosphotriester adducts are common alkylation products of the DNA phosphodiester moiety induced by N-nitrosoureas. The 2-hydroxyethyl phosphotriester has been reported to hydrolyze more rapidly than other alkyl phosphotriesters under both neutral and alkaline conditions, which can cause DNA single-strand scission. In this work, DFT calculations have been employed to map out the four lowest activation free-energy profiles for neutral and alkaline hydrolysis of triethyl phosphate (TEP) and diethyl 2-hydroxyethyl phosphate (DEHEP). All the hydrolysis pathways were shown to be stepwise, involving an acyclic or cyclic phosphorane intermediate for TEP or DEHEP, respectively. The rate-limiting step for all the hydrolysis reactions was found to be the formation of the phosphorane intermediate, with the exception of DEHEP hydrolysis under alkaline conditions, for which the decomposition process turned out to be rate-limiting owing to the extraordinarily low formation barrier of the cyclic phosphorane intermediate catalyzed by hydroxide. The rate-limiting barriers obtained for the four reactions are all consistent with the available experimental information concerning the corresponding hydrolysis reactions of phosphotriesters. Our calculations on phosphotriester hydrolysis predict that the lower formation barriers of cyclic phosphorane intermediates compared to their acyclic counterparts should be the dominant factor governing the hydrolysis rate enhancement of DEHEP relative to TEP under both neutral and alkaline conditions.

  1. Evaluation of recent quantitative magnetospheric magnetic field models

    International Nuclear Information System (INIS)

    Walker, R.J.

    1976-01-01

    Recent quantitative magnetospheric field models contain many features not found in earlier models. Magnetopause models which include the effects of the dipole tilt have been presented. More realistic models of the tail field include tail currents which close on the magnetopause, cross-tail currents of finite thickness, and cross-tail current models which model the position of the neutral sheet as a function of tilt. Finally, models have attempted to calculate the field of currents distributed in the inner magnetosphere. As the purpose of a magnetospheric model is to provide a mathematical description of the field that reasonably reproduces the observed magnetospheric field, several recent models were compared with the observed ΔB (B_observed − B_main_field) contours. Models containing only contributions from magnetopause and tail current systems are able to reproduce the observed quiet-time field only in an extremely qualitative way. The best quantitative agreement between models and observations occurs when currents distributed in the inner magnetosphere are added to the magnetopause and tail current systems. However, the distributed current models are valid only for zero tilt. Even the models which reproduce the average observed field reasonably well may not give physically reasonable field gradients. Three of the models evaluated contain regions in the near tail in which the field gradient reverses direction. One region in which all the models fall short is that around the polar cusp, though most can be used to calculate the position of the last closed field line reasonably well.

  2. Performance Theories for Sentence Coding: Some Quantitative Models

    Science.gov (United States)

    Aaronson, Doris; And Others

    1977-01-01

    This study deals with the patterns of word-by-word reading times over a sentence when the subject must code the linguistic information sufficiently for immediate verbatim recall. A class of quantitative models is considered that would account for reading times at phrase breaks. (Author/RM)

  3. Modeling Logistic Performance in Quantitative Microbial Risk Assessment

    NARCIS (Netherlands)

    Rijgersberg, H.; Tromp, S.O.; Jacxsens, L.; Uyttendaele, M.

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage …

  4. VLSI Architectures for Computing DFT's

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

    Simplifications result from the use of residue Fermat number systems. A system of finite arithmetic over residue Fermat number systems enables calculation of the discrete Fourier transform (DFT) of a series of complex numbers with a reduced number of multiplications. Computer architectures based on this approach are suitable for the design of very-large-scale integrated (VLSI) circuits for computing DFTs. The general approach is not limited to DFTs; it is applicable to the decoding of error-correcting codes and other transform calculations. The system is readily implemented in VLSI.
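
    The idea of computing DFT-like transforms with integer-only arithmetic can be illustrated with a naive number-theoretic transform over Z_257 (257 = 2^8 + 1 is the Fermat prime F3, in which 4 is a primitive 8th root of unity). This O(N^2) sketch is only a toy, not the VLSI-oriented architecture of the article:

```python
# Toy number-theoretic transform over Z_257: a DFT analogue with
# integer-only arithmetic. Naive O(N^2), for illustration only.
P = 257          # Fermat prime F3 = 2^8 + 1
N = 8
W = 4            # 4 has multiplicative order 8 mod 257 (4^4 = 256 = -1)

def ntt(a, root=W):
    return [sum(a[n] * pow(root, k * n, P) for n in range(N)) % P
            for k in range(N)]

def intt(A):
    inv_root = pow(W, -1, P)   # modular inverse (Python >= 3.8)
    inv_n = pow(N, -1, P)
    return [(inv_n * s) % P for s in ntt(A, inv_root)]

x = [1, 2, 3, 4, 0, 0, 0, 0]
assert intt(ntt(x)) == x       # the round trip recovers the input exactly
print(ntt(x))
```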

  5. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.

    Science.gov (United States)

    Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H

    2017-08-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. We here report in a perspective article a summary of presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  6. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, the environmental magnetic field, and measurement noise make MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens were tested along longitudinal and horizontal lines with a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing was carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. K_vs is therefore a suitable MMM parameter for establishing a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented for the first time, based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases as the residual life ratio T decreases, and the maximal error between the predicted reliability degree R1 and the verification reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
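
    The stress-strength interference step has a closed form when both variables are Gaussian: R = Φ((μ_strength − μ_stress) / sqrt(σ_strength² + σ_stress²)). A short sketch with invented numbers standing in for the K_vs-derived quantities:

```python
from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Stress-strength interference with Gaussian variables: reliability is
# R = P(strength > stress). The values below are invented placeholders
# for the Gaussian-distributed K_vs parameter, not the paper's data.
mu_strength, sd_strength = 120.0, 15.0   # "strength" of the joint
mu_stress, sd_stress = 80.0, 20.0        # K_vs-derived "stress"

z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
print(f"reliability R = {normal_cdf(z):.4f}")
```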

  7. A Review on Quantitative Models for Sustainable Food Logistics Management

    Directory of Open Access Journals (Sweden)

    M. Soysal

    2012-12-01

    Over the last two decades, food logistics systems have seen a transition from a focus on traditional supply chain management to food supply chain management, and successively to sustainable food supply chain management. The main aim of this study is to identify the key logistical aims in these three phases and to analyse currently available quantitative models in order to point out modelling challenges in sustainable food logistics management (SFLM). A literature review of quantitative studies is conducted, and qualitative studies are also consulted to understand the key logistical aims more clearly and to identify relevant system-scope issues. The results show that research on SFLM has been developing progressively according to the needs of the food industry. However, the intrinsic characteristics of food products and processes have not yet been handled properly in the identified studies. The majority of the works reviewed have not considered sustainability problems, apart from a few recent studies. The study therefore concludes that new and advanced quantitative models are needed that take specific SFLM requirements from practice into consideration in order to support business decisions and capture food supply chain dynamics.

  8. Genomic value prediction for quantitative traits under the epistatic model

    Directory of Open Access Journals (Sweden)

    Xu Shizhong

    2011-01-01

    Background: Most quantitative traits are controlled by multiple quantitative trait loci (QTL). The contribution of each locus may be negligible, but the collective contribution of all loci is usually significant. Genome selection, which uses markers of the entire genome to predict the genomic values of individual plants or animals, can be more efficient than selection on phenotypic values and pedigree information alone for genetic improvement. When a quantitative trait is influenced by epistatic effects, using all markers (main effects) and marker pairs (epistatic effects) to predict the genomic values of plants can achieve the maximum efficiency for genetic improvement. Results: In this study, we created 126 recombinant inbred lines of soybean and genotyped 80 markers across the genome. We applied the genome selection technique to predict the genomic value of somatic embryo number (a quantitative trait) for each line. Cross-validation analysis showed that the squared correlation coefficient between the observed and predicted embryo numbers was 0.33 when only main (additive) effects were used for prediction. When the interaction (epistatic) effects were also included in the model, the squared correlation coefficient reached 0.78. Conclusions: This study provides an excellent example of the application of genome selection to plant breeding.
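
    Genomic prediction with additive plus epistatic terms can be sketched as ridge regression on the marker matrix augmented with all pairwise marker products. The sketch below uses simulated genotypes and effects (not the soybean data) and compares cross-validated accuracy with and without the epistatic columns:

```python
import numpy as np
from itertools import combinations

# Simulated genomic prediction with main (additive) and pairwise (epistatic)
# marker effects via ridge regression. Data and effect sizes are invented.
rng = np.random.default_rng(2)
n_lines, n_markers = 126, 20                 # small marker set for speed
G = rng.choice([-1.0, 1.0], size=(n_lines, n_markers))
pairs = list(combinations(range(n_markers), 2))
E = np.column_stack([G[:, i] * G[:, j] for i, j in pairs])

beta_main = rng.normal(0, 0.3, n_markers)
beta_epi = np.zeros(len(pairs))
beta_epi[:15] = rng.normal(0, 0.4, 15)       # a few true epistatic effects
y = G @ beta_main + E @ beta_epi + rng.normal(0, 1.0, n_lines)

def ridge_fit_predict(Xtr, ytr, Xte, lam=10.0):
    A = Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1])
    return Xte @ np.linalg.solve(A, Xtr.T @ ytr)

train = np.arange(n_lines) % 2 == 0          # crude 2-fold split
for label, X in [("additive only", G),
                 ("additive + epistatic", np.hstack([G, E]))]:
    pred = ridge_fit_predict(X[train], y[train], X[~train])
    r2 = np.corrcoef(pred, y[~train])[0, 1] ** 2
    print(f"{label}: squared correlation = {r2:.2f}")
```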

  9. Wires in the soup: quantitative models of cell signaling

    Science.gov (United States)

    Cheong, Raymond; Levchenko, Andre

    2014-01-01

    Living cells are capable of extracting information from their environments and mounting appropriate responses to a variety of associated challenges. The underlying signal transduction networks enabling this can be quite complex, necessitating their unraveling by sophisticated computational modeling coupled with precise experimentation. Although we are still at the beginning of this process, some recent examples of integrative analysis of cell signaling are very encouraging. This review highlights the case of the NF-κB pathway in order to illustrate how a quantitative model of a signaling pathway can be gradually constructed through continuous experimental validation, and what lessons one might learn from such exercises. PMID:18291655

  10. Quantitative analysis of a wind energy conversion model

    International Nuclear Information System (INIS)

    Zucker, Florian; Gräbner, Anna; Strunz, Andreas; Meyn, Jan-Peter

    2015-01-01

    A rotor of 12 cm diameter is attached to a precision electric motor, used as a generator, to make a model wind turbine. The output power of the generator is measured in a wind tunnel at air velocities up to 15 m s−1. The maximum power is 3.4 W; the power conversion factor from kinetic to electric energy is c_p = 0.15. The v³ power law is confirmed. The model quantitatively illustrates several technically important features of industrial wind turbines. (paper)
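
    The reported conversion factor can be checked directly from the power balance c_p = P_el / (½ρAv³). The sketch below uses the abstract's numbers and an assumed air density of 1.2 kg/m³:

```python
from math import pi

# Check the reported conversion factor using the abstract's numbers
# (rotor diameter 12 cm, 3.4 W at 15 m/s); air density is assumed.
rho, d, v, p_el = 1.2, 0.12, 15.0, 3.4
area = pi * (d / 2.0) ** 2
p_wind = 0.5 * rho * area * v ** 3
print(f"kinetic power in the wind: {p_wind:.1f} W")
print(f"c_p = {p_el / p_wind:.2f}")   # ~0.15, matching the abstract
```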

  11. Quantitative Systems Pharmacology: A Case for Disease Models.

    Science.gov (United States)

    Musante, C J; Ramanujan, S; Schmidt, B J; Ghobrial, O G; Lu, J; Heatherington, A C

    2017-01-01

    Quantitative systems pharmacology (QSP) has emerged as an innovative approach in model-informed drug discovery and development, supporting program decisions from exploratory research through late-stage clinical trials. In this commentary, we discuss the unique value of disease-scale "platform" QSP models that are amenable to reuse and repurposing to support diverse clinical decisions in ways distinct from other pharmacometrics strategies. © 2016 The Authors Clinical Pharmacology & Therapeutics published by Wiley Periodicals, Inc. on behalf of The American Society for Clinical Pharmacology and Therapeutics.

  12. Quantitative model of New Zealand's energy supply industry

    Energy Technology Data Exchange (ETDEWEB)

    Smith, B. R. [Victoria Univ., Wellington, (New Zealand); Lucas, P. D. [Ministry of Energy Resources (New Zealand)

    1977-10-15

    A mathematical model is presented to assist in the analysis of available energy policy options. The model is based on an engineering-oriented description of New Zealand's energy supply and distribution system. The system is cast as a linear program in which energy demand is satisfied at least cost. The capacities and operating modes of process plants (such as power stations, oil refinery units, and LP-gas extraction plants) are determined by the model, as well as the optimal mix of fuels supplied to final consumers. Policy analysis with the model enables a wide-ranging assessment of alternatives and uncertainties within a consistent quantitative framework. The model is intended to be used as a tool to investigate the relative effects of various policy options, rather than to present a definitive plan for satisfying the nation's energy requirements.

  13. Generalized gravity from modified DFT

    International Nuclear Information System (INIS)

    Sakatani, Yuho; Uehara, Shozo; Yoshida, Kentaroh

    2017-01-01

    Recently, generalized equations of type IIB supergravity have been derived from the requirement of classical kappa-symmetry of type IIB superstring theory in the Green-Schwarz formulation. These equations are covariant under generalized T-duality transformations and hence one may expect a formulation similar to double field theory (DFT). In this paper, we consider a modification of the DFT equations of motion by relaxing a condition for the generalized covariant derivative with an extra generalized vector. In this modified double field theory (mDFT), we show that the flatness condition of the modified generalized Ricci tensor leads to the NS-NS part of the generalized equations of type IIB supergravity. In particular, the extra vector fields appearing in the generalized equations correspond to the extra generalized vector in mDFT. We also discuss duality symmetries and a modification of the string charge in mDFT.

  14. Generalized gravity from modified DFT

    Energy Technology Data Exchange (ETDEWEB)

    Sakatani, Yuho [Department of Physics, Kyoto Prefectural University of Medicine,Kyoto 606-0823 (Japan); Fields, Gravity and Strings, CTPU,Institute for Basic Sciences, Daejeon 34047 (Korea, Republic of); Uehara, Shozo [Department of Physics, Kyoto Prefectural University of Medicine,Kyoto 606-0823 (Japan); Yoshida, Kentaroh [Department of Physics, Kyoto University,Kitashirakawa Oiwake-cho, Kyoto 606-8502 (Japan)

    2017-04-20

    Recently, generalized equations of type IIB supergravity have been derived from the requirement of classical kappa-symmetry of type IIB superstring theory in the Green-Schwarz formulation. These equations are covariant under generalized T-duality transformations and hence one may expect a formulation similar to double field theory (DFT). In this paper, we consider a modification of the DFT equations of motion by relaxing a condition for the generalized covariant derivative with an extra generalized vector. In this modified double field theory (mDFT), we show that the flatness condition of the modified generalized Ricci tensor leads to the NS-NS part of the generalized equations of type IIB supergravity. In particular, the extra vector fields appearing in the generalized equations correspond to the extra generalized vector in mDFT. We also discuss duality symmetries and a modification of the string charge in mDFT.

  15. Kaxiras’s Porphyrin: DFT Modeling of Redox-Tuned Optical and Electronic Properties in a Theoretically Designed Catechol-Based Bioinspired Platform

    Directory of Open Access Journals (Sweden)

    Orlando Crescenzi

    2017-11-01

    A detailed computational investigation of the 5,6-dihydroxyindole (DHI)-based porphyrin-type tetramer first described by Kaxiras as a theoretical structural model for eumelanin biopolymers is reported herein, with a view to predicting the technological potential of this unique bioinspired tetracatechol system. All possible tautomers/conformers, as well as alternative protonation states, were explored for the species at various degrees of oxidation, and all structures were geometry-optimized at the density functional theory (DFT) level. Comparison of energy levels for each oxidized species indicated a marked instability of most oxidation states except the six-electron level, and an unexpected resilience to disproportionation of the one-electron-oxidized free radical species. Changes in the highest occupied molecular orbital (HOMO)–lowest unoccupied molecular orbital (LUMO) gaps with oxidation state and tautomerism were determined along with the main electronic transitions: more or less intense absorption in the visible region is predicted for most oxidized species. The data indicate that the peculiar symmetry of the oxygenation pattern of the four catechol/quinone/quinone methide moieties, in concert with the NH centers, fine-tunes the optical and electronic properties of the porphyrin system. For several oxidation levels, conjugated systems extending over two or more indole units play a major role in determining the preferred tautomeric state: thus, the highest stability of the six-electron oxidation state reflects porphyrin-type aromaticity. These results provide new clues for the design of innovative bioinspired optoelectronic materials.

  16. A DFT-Based Model on the Adsorption Behavior of H2O, H+, Cl−, and OH− on Clean and Cr-Doped Fe(110) Planes

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2018-01-01

    The impact of four typical adsorbates, namely H2O, H+, Cl−, and OH−, on three different planes, namely Fe(110), Cr(110), and Cr-doped Fe(110), was investigated using a density functional theory (DFT)-based model. The adsorption behaviour of the four adsorbates confirms that the Cr-doped Fe(110) plane is the most stable facet of the three. As shown by the adsorption energies and electronic structure, Cr doping greatly enhances the electron-donor ability of neighboring Fe atoms, which in turn promotes the adsorption of the positively charged H+. Meanwhile, the affinity of Cr for negatively charged adsorbates (e.g., Cl−, OH−, and the O of H2O) is improved due to the weakening of its electron-donor ability. On the other hand, a strong bond between surface atoms and the adsorbates can also weaken the bonds between metal atoms, resulting in structural deformation and charge redistribution in the native crystal structure. In this way, the crystal becomes more vulnerable to corrosion.

  17. Synthesis of 3-cyano-4-hydroxycoumarin and analysis of its tautomeric equilibrium using Density Functional Theory (DFT) and the polarizable continuum model (PCM)

    Directory of Open Access Journals (Sweden)

    Gustavo Cabrera

    2017-11-01

    The procedure for the synthesis of 3-cyano-4-hydroxycoumarin is presented, along with the results of an analysis of its tautomeric equilibrium using density functional theory (DFT) and the polarizable continuum model (PCM). The geometries of the compounds were optimized with Gaussian 03, and from the resulting structures a set of thermodynamic and kinetic parameters was determined. 3-Cyano-4-hydroxycoumarin was found to be the most stable tautomer, as was also shown by spectroscopic techniques. Other parameters, such as the transition-state energy, equilibrium constant, kinetic constant, bond orders and bond angles, were also calculated.

  18. A website evaluation model by integration of previous evaluation models using a quantitative approach

    Directory of Open Access Journals (Sweden)

    Ali Moeini

    2015-01-01

    Given the growth of e-commerce, websites play an essential role in business success. Accordingly, many authors have proposed website evaluation models since 1995. However, the multiplicity and diversity of these evaluation models make it difficult to integrate them into a single comprehensive model. In this paper, a quantitative method is used to integrate previous models into a comprehensive model that is compatible with them. In this approach, the researcher's judgment plays no role in the integration of the models, and the new model derives its validity from the 93 previous models and the systematic quantitative approach.

  19. Functional linear models for association analysis of quantitative traits.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants, common variants, or a combination of the two. By treating the multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. Using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants, adjusting for covariates. Extensive simulation analysis shows that the F-distributed tests of the proposed fixed effect functional linear models have higher power than the sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) in most cases for three scenarios: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to their optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher-order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small-sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY…
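
    A fixed-effect functional linear model can be sketched by projecting the genotype matrix onto a small functional basis evaluated at the marker positions and testing the basis coefficients with an F-test. The sketch below uses a simple polynomial basis and simulated data rather than B-splines and the study's genotypes:

```python
import numpy as np
from scipy import stats

# Toy fixed-effect functional linear model: genotypes across a region are
# projected onto a functional basis (simple polynomials in marker position),
# and an F-test compares the genetic model against the null. Data simulated.
rng = np.random.default_rng(3)
n, m, k = 500, 30, 4                       # subjects, markers, basis functions
pos = np.sort(rng.uniform(0, 1, m))        # marker positions scaled to [0, 1]
G = rng.binomial(2, 0.3, size=(n, m)).astype(float)
y = G[:, 10] * 0.4 + rng.normal(size=n)    # one causal variant

B = np.vander(pos, k, increasing=True)     # basis evaluated at positions
X = np.column_stack([np.ones(n), G @ B])   # intercept + functional scores

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rss1 = np.sum((y - X @ beta) ** 2)
rss0 = np.sum((y - y.mean()) ** 2)
f = ((rss0 - rss1) / k) / (rss1 / (n - k - 1))
print("F =", round(f, 2), " p =", stats.f.sf(f, k, n - k - 1))
```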

  20. Quantitative aspects and dynamic modelling of glucosinolate metabolism

    DEFF Research Database (Denmark)

    Vik, Daniel

    … This enables comparison of transcript and protein levels across mutants and upon induction. Unchallenged plants show good correspondence between protein and transcript, but treatment with methyl jasmonate results in significant differences (chapter 1). Functional genomics are used to study … The construction of a dynamic quantitative model of GLS hydrolysis is described; simulations reveal potential effects on auxin signalling that could reflect defensive strategies (chapter 4). The results presented grant insights not only into the dynamics of GLS biosynthesis and hydrolysis, but also into the relationship …

  1. Model for Quantitative Evaluation of Enzyme Replacement Treatment

    Directory of Open Access Journals (Sweden)

    Radeva B.

    2009-12-01

    Gaucher disease is the most frequent lysosomal disorder. Its enzyme replacement treatment is an advance of modern biotechnology that has been used successfully in recent years. Determining the optimal dose for each patient is important for both health and economic reasons: enzyme replacement is a very expensive treatment that must be administered continuously and without interruption. Enzyme replacement therapy with Cerezyme (Genzyme) was formally introduced in Bulgaria in 2001, but was subsequently interrupted for periods of 1-2 months, and patient doses were not optimal. The aim of our work is to develop a mathematical model for the quantitative evaluation of enzyme replacement treatment of Gaucher disease. The model was implemented in the software package "Statistika 6" using individual data from 5-year-old children with Gaucher disease treated with Cerezyme. The output of the model allows quantitative evaluation of the individual trends in the development of each child's disease and their correlations. On the basis of these results, we can recommend suitable changes in the enzyme replacement therapy.

  2. The influence of the dispersion corrections on the performance of DFT method in modeling HNgY noble gas molecules and their complexes

    Science.gov (United States)

    Cukras, Janusz; Sadlej, Joanna

    2018-01-01

    This letter reports a comparative assessment of the usefulness of two different Grimme corrections for evaluating the dispersion interaction (DFT-D3 and DFT-D3BJ) for representative molecules of the family of noble-gas hydrides HXeY and their complexes with HZ molecules, where Y and Z are F/Cl/OH/SH, with special regard to the dispersion term calculated by means of symmetry-adapted perturbation theory (at the SAPT0 level). The results indicate that, despite differences in the total interaction energies (DFT + corrections) versus the SAPT0 results, the ordering of the individual dispersion contributions is maintained. Both dispersion corrections perform similarly, and they improve the results, suggesting that it is worthwhile to include them in calculations.

  3. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  4. Quantifying Zika: Advancing the Epidemiology of Zika With Quantitative Models.

    Science.gov (United States)

    Keegan, Lindsay T; Lessler, Justin; Johansson, Michael A

    2017-12-16

    When Zika virus (ZIKV) emerged in the Americas, little was known about its biology, pathogenesis, and transmission potential, and the scope of the epidemic was largely hidden, owing to generally mild infections and no established surveillance systems. Surges in congenital defects and Guillain-Barré syndrome alerted the world to the danger of ZIKV. In the context of limited data, quantitative models were critical in reducing uncertainties and guiding the global ZIKV response. Here, we review some of the models used to assess the risk of ZIKV-associated severe outcomes, the potential speed and size of ZIKV epidemics, and the geographic distribution of ZIKV risk. These models provide important insights and highlight significant unresolved questions related to ZIKV and other emerging pathogens. Published by Oxford University Press for the Infectious Diseases Society of America 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  5. DFT Study of Azole Corrosion Inhibitors on Cu2O Model of Oxidized Copper Surfaces: II. Lateral Interactions and Thermodynamic Stability

    Directory of Open Access Journals (Sweden)

    Dunja Gustinčič

    2018-05-01

    The adsorption of imidazole, triazole, and tetrazole (used as simple models of azole corrosion inhibitors) on various Cu2O(111)- and Cu2O(110)-type surfaces was characterized using density functional theory (DFT) calculations, with a focus on lateral intermolecular interactions and the thermodynamic stability of various adsorption structures. To this end, an ab initio thermodynamics approach was used to construct two-dimensional phase diagrams for all three molecules. The impact of van der Waals dispersion interactions on molecular adsorption bonding was also addressed. Lateral intermolecular interactions were found to be the most repulsive for imidazole and the least repulsive for tetrazole, for which they are usually even slightly attractive. Both non-dissociative and dissociative adsorption modes were considered, and although dissociated molecules bind to surfaces more strongly, none of the considered structures involving dissociated molecules appear on the phase diagrams. Our results show that the three azole molecules display a strong tendency to adsorb preferentially at reactive coordinatively unsaturated (CUS) Cu surface sites and to stabilize them. According to the calculated phase diagrams for Cu2O(111)-type surfaces, the three azole molecules adsorb at specific CUS sites, designated Cu_CUS, under all conditions at which molecular adsorption is stable. This tentatively suggests that their corrosion-inhibition capability may stem, at least in part, from their ability to passivate reactive surface sites. We further comment on a specific drawback due to the neglect of configurational entropy that is usual within the ab initio thermodynamics approach. We analyze the issue for the Langmuir and Frumkin adsorption models and show that, when configurational entropy is neglected, the ab initio thermodynamics approach is too hasty in predicting phase-transition-like behavior.
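
    The ab initio thermodynamics construction itself is straightforward to sketch: each candidate adsorption structure contributes a surface free energy that is linear in the adsorbate chemical potential, and the phase diagram is the lower envelope of these lines. All energies, coverages, and the cell area below are invented:

```python
import numpy as np

# Sketch of an ab initio thermodynamics phase diagram: each phase has
# gamma(dmu) = (E_ads_total - N * dmu) / A. All numbers are invented.
phases = ["clean", "low coverage", "high coverage"]
e_ads = np.array([0.0, -1.2, -2.0])    # total adsorption energies, eV
n_mol = np.array([0, 1, 2])            # adsorbed molecules per surface cell
area = 50.0                            # surface cell area, angstrom^2

dmu = np.linspace(-1.5, 0.0, 301)      # adsorbate chemical potential, eV
gamma = (e_ads[:, None] - n_mol[:, None] * dmu[None, :]) / area
stable = np.argmin(gamma, axis=0)      # lowest-gamma phase at each dmu

for i, name in enumerate(phases):
    sel = dmu[stable == i]
    if sel.size:
        print(f"{name}: stable for dmu in [{sel.min():.2f}, {sel.max():.2f}] eV")
```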

  6. Data characterizing the energetics of enzyme-catalyzed hydrolysis and transglycosylation reactions by DFT cluster model calculations

    Directory of Open Access Journals (Sweden)

    Jitrayut Jitonnom

    2018-04-01

    The data presented in this paper are related to the research article entitled "QM/MM modeling of the hydrolysis and transfructosylation reactions of fructosyltransferase from Aspergillus japonicus, an enzyme that produces prebiotic fructooligosaccharide" (Jitonnom et al., 2018 [1]). This paper presents the procedure and data for characterizing the whole relative energy profiles of hydrolysis and transglycosylation reactions whose elementary steps differ in chemical composition. The data also reflect the choices of the QM cluster model, the functional/basis-set method, and the equations used in determining the reaction energetics.

  7. Evaluating the Performance of DFT Functionals in Assessing the Interaction Energy and Ground-State Charge Transfer of Donor/Acceptor Complexes: Tetrathiafulvalene−Tetracyanoquinodimethane (TTF−TCNQ) as a Model Case

    KAUST Repository

    Sini, Gjergji; Sears, John S.; Brédas, Jean-Luc

    2011-01-01

    We have evaluated the performance of several density functional theory (DFT) functionals for the description of the ground-state electronic structure and charge transfer in donor/acceptor complexes. The tetrathiafulvalene-tetracyanoquinodimethane (TTF-TCNQ) complex has been considered as a model test case. Hybrid functionals have been chosen together with recently proposed long-range corrected functionals (ωB97X, ωB97X-D, LRC-ωPBEh, and LC-ωPBE) in order to assess the sensitivity of the results to the treatment and magnitude of exact exchange. The results show an approximately linear dependence of the ground-state charge transfer on the HOMO(TTF)-LUMO(TCNQ) energy gap, which in turn depends linearly on the percentage of exact exchange in the functional. The reliability of ground-state charge-transfer values calculated in the framework of a monodeterminantal DFT approach was also examined. © 2011 American Chemical Society.

  8. Evaluating the Performance of DFT Functionals in Assessing the Interaction Energy and Ground-State Charge Transfer of Donor/Acceptor Complexes: Tetrathiafulvalene−Tetracyanoquinodimethane (TTF−TCNQ) as a Model Case

    KAUST Repository

    Sini, Gjergji

    2011-03-08

    We have evaluated the performance of several density functional theory (DFT) functionals for the description of the ground-state electronic structure and charge transfer in donor/acceptor complexes. The tetrathiafulvalene-tetracyanoquinodimethane (TTF-TCNQ) complex has been considered as a model test case. Hybrid functionals have been chosen together with recently proposed long-range corrected functionals (ωB97X, ωB97X-D, LRC-ωPBEh, and LC-ωPBE) in order to assess the sensitivity of the results to the treatment and magnitude of exact exchange. The results show an approximately linear dependence of the ground-state charge transfer on the HOMO(TTF)-LUMO(TCNQ) energy gap, which in turn depends linearly on the percentage of exact exchange in the functional. The reliability of ground-state charge-transfer values calculated in the framework of a monodeterminantal DFT approach was also examined. © 2011 American Chemical Society.

  9. Tip-Enhanced Raman Voltammetry: Coverage Dependence and Quantitative Modeling.

    Science.gov (United States)

    Mattei, Michael; Kang, Gyeongwon; Goubert, Guillaume; Chulhai, Dhabih V; Schatz, George C; Jensen, Lasse; Van Duyne, Richard P

    2017-01-11

    Electrochemical atomic force microscopy tip-enhanced Raman spectroscopy (EC-AFM-TERS) was employed for the first time to observe nanoscale spatial variations in the formal potential, E0', of a surface-bound redox couple. TERS cyclic voltammograms (TERS CVs) of single Nile Blue (NB) molecules were acquired at different locations spaced 5-10 nm apart on an indium tin oxide (ITO) electrode. Analysis of TERS CVs at different coverages was used to verify the observation of single-molecule electrochemistry. The resulting TERS CVs were fit to the Laviron model for surface-bound electroactive species to quantitatively extract the formal potential E0' at each spatial location. Histograms of single-molecule E0' at each coverage indicate that the electrochemical behavior of the cationic oxidized species is less sensitive to the local environment than that of the neutral reduced species. This information is not accessible using purely electrochemical methods or ensemble spectroelectrochemical measurements. We anticipate that quantitative modeling and measurement of site-specific electrochemistry with EC-AFM-TERS will have a profound impact on our understanding of the role of nanoscale electrode heterogeneity in applications such as electrocatalysis, biological electron transfer, and energy production and storage.
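
    For a surface-confined redox couple at equilibrium, the oxidized fraction follows a Nernstian sigmoid in the applied potential, from which E0' can be extracted by curve fitting. The sketch below fits this sigmoid to synthetic noisy data; it is a simplified stand-in for the full Laviron analysis used in the paper, and the true E0' value is invented:

```python
import numpy as np
from scipy.optimize import curve_fit

F_CONST, R_GAS, T = 96485.0, 8.314, 298.0

def frac_oxidized(E, E0, n=1.0):
    """Nernstian surface-bound couple: fraction oxidized vs. potential."""
    return 1.0 / (1.0 + np.exp(-n * F_CONST * (E - E0) / (R_GAS * T)))

# Synthetic "TERS CV" response (intensity tracking the oxidized fraction)
# with noise; the true E0' here is an invented -0.35 V.
rng = np.random.default_rng(4)
E = np.linspace(-0.6, -0.1, 60)
signal = frac_oxidized(E, -0.35) + rng.normal(0, 0.03, E.size)

(E0_fit,), _ = curve_fit(lambda e, e0: frac_oxidized(e, e0), E, signal,
                         p0=[-0.3])
print(f"fitted formal potential E0' = {E0_fit:.3f} V")
```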

  10. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

    Directory of Open Access Journals (Sweden)

    Cobbs Gary

    2012-08-01

    Background: Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a selected part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for the interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Results: Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes that annealing of complementary target strands and annealing of target and primers are both reversible reactions that reach a dynamic equilibrium. Model 2 assumes all annealing reactions are non-reversible and the equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of the kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate-constant values applying to all curves and each curve having a unique value for the initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give a better fit to observed qPCR data than other kinetic models present in the literature…

  11. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    Science.gov (United States)

    Cobbs, Gary

    2012-08-16

    Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of
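
    As a rough illustration of the stepwise idea, the sketch below lets the per-cycle efficiency fall out of a primer-consuming annealing step instead of assuming a constant rate. The saturation rule and all concentrations are invented stand-ins, not the paper's analytic equilibrium formulae.

```python
# Illustrative only: a stepwise qPCR simulation in which the per-cycle
# efficiency follows from a primer-consuming annealing step rather than
# being held constant.
import numpy as np

def simulate_qpcr(T0, P0=5e-7, K=5e-8, cycles=40):
    """Per-cycle target concentration (molar); P0 and K are assumed values."""
    T, P, trace = T0, P0, []
    for _ in range(cycles):
        eff = P / (P + K)          # efficiency decays as primers deplete
        new_strands = eff * T      # strands extended in this cycle
        T += new_strands
        P = max(P - new_strands, 0.0)
        trace.append(T)
    return np.array(trace)

# Identical chemistry, different dilutions: only T0 changes between curves.
for T0 in (1e-15, 1e-13):
    cycle = int(np.argmax(simulate_qpcr(T0) > 1e-8)) + 1
    print(f"T0 = {T0:.0e} M crosses a 1e-8 M threshold at cycle {cycle}")
```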

  12. Ion-ion and ion-solvent interactions in lithium imidazolide electrolytes studied by Raman spectroscopy and DFT models.

    Science.gov (United States)

    Scheers, Johan; Niedzicki, Leszek; Zukowska, Grażyna Z; Johansson, Patrik; Wieczorek, Władysław; Jacobsson, Per

    2011-06-21

    Molecular level interactions are of crucial importance for the transport properties and overall performance of ion conducting electrolytes. In this work we explore ion-ion and ion-solvent interactions in liquid and solid polymer electrolytes of lithium 4,5-dicyano-(2-trifluoromethyl)imidazolide (LiTDI)-a promising salt for lithium battery applications-using Raman spectroscopy and density functional theory calculations. High concentrations of ion associates are found in LiTDI:acetonitrile electrolytes, the vibrational signatures of which are transferable to PEO-based LiTDI electrolytes. The origins of the spectroscopic changes are interpreted by comparing experimental spectra with simulated Raman spectra of model structures. Simple ion pair models in vacuum identify the imidazole nitrogen atom of the TDI anion as the most important coordination site for Li(+); however, including implicit or explicit solvent effects leads to qualitative changes in the coordination geometry and improved correlation between experimental and simulated Raman spectra. For modeling larger aggregates, solvent effects are found to be crucial, and we finally suggest possible triplet and dimer ionic structures in the investigated electrolytes. In addition, the effects of introducing water into the electrolytes-via a hydrate form of LiTDI-are discussed.

  13. Modeling logistic performance in quantitative microbial risk assessment.

    Science.gov (United States)

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain with its queues (storages, shelves) and mechanisms for ordering products is usually not taken into account. As a consequence, storage times-mutually dependent in successive steps in the chain-cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
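
    A toy discrete-event flavor of this idea can be sketched in a few lines: a FIFO shelf replenished by an order-up-to policy, with the storage time of every sold item recorded so that the tail of the storage-time distribution can be examined. The policy, demand model, and parameters are invented for illustration and are not from the lettuce case study.

```python
# Illustrative only: a FIFO shelf with a daily order-up-to policy. Tracking
# how long each sold item sat in storage exposes the tail of the
# storage-time distribution that feeds a QMRA.
import random
from collections import deque

random.seed(7)
shelf = deque()                     # ages (days) of items currently shelved
order_up_to, storage_times = 30, []

for day in range(365):
    shelf = deque(age + 1 for age in shelf)                   # one day older
    shelf.extend(0 for _ in range(order_up_to - len(shelf)))  # replenish
    for _ in range(min(random.randint(5, 25), len(shelf))):   # today's demand
        storage_times.append(shelf.popleft())                 # oldest sold first

storage_times.sort()
n = len(storage_times)
print(f"mean storage time: {sum(storage_times) / n:.1f} days")
print(f"95th percentile  : {storage_times[int(0.95 * n)]} days")
```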

  14. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  15. Quantitative modeling of the ionospheric response to geomagnetic activity

    Directory of Open Access Journals (Sweden)

    T. J. Fuller-Rowell

    2000-07-01

    Full Text Available A physical model of the coupled thermosphere and ionosphere has been used to determine the accuracy of model predictions of the ionospheric response to geomagnetic activity, and assess our understanding of the physical processes. The physical model is driven by empirical descriptions of the high-latitude electric field and auroral precipitation, as measures of the strength of the magnetospheric sources of energy and momentum to the upper atmosphere. Both sources are keyed to the time-dependent TIROS/NOAA auroral power index. The output of the model is the departure of the ionospheric F region from the normal climatological mean. A 50-day interval towards the end of 1997 has been simulated with the model for two cases. The first simulation uses only the electric fields and auroral forcing from the empirical models, and the second has an additional source of random electric field variability. In both cases, output from the physical model is compared with F-region data from ionosonde stations. Quantitative model/data comparisons have been performed to move beyond the conventional "visual" scientific assessment, in order to determine the value of the predictions for operational use. For this study, the ionosphere at two ionosonde stations has been studied in depth, one each from the northern and southern mid-latitudes. The model clearly captures the seasonal dependence in the ionospheric response to geomagnetic activity at mid-latitude, reproducing the tendency for decreased ion density in the summer hemisphere and increased densities in winter. In contrast to the "visual" success of the model, the detailed quantitative comparisons, which are necessary for space weather applications, are less impressive. The accuracy, or value, of the model has been quantified by evaluating the daily standard deviation, the root-mean-square error, and the correlation coefficient between the data and model predictions. The modeled quiet-time variability, or standard
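
    The skill measures named at the end of the abstract are straightforward to compute; the sketch below evaluates them for placeholder arrays standing in for daily ionosonde observations and model predictions (none of the numbers are from the study).

```python
# Illustrative only: daily standard deviation, root-mean-square error and
# correlation coefficient between observed and modeled departures of the
# F region from the climatological mean.
import numpy as np

def skill(obs, model):
    err = model - obs
    return {
        "std_obs  ": np.std(obs),            # observed day-to-day variability
        "std_model": np.std(model),          # modeled variability
        "rmse     ": np.sqrt(np.mean(err ** 2)),
        "corr     ": np.corrcoef(obs, model)[0, 1],
    }

rng = np.random.default_rng(0)
obs = rng.normal(0.0, 0.15, 50)                # 50 days of observations
model = 0.6 * obs + rng.normal(0.0, 0.10, 50)  # an imperfect prediction
for name, value in skill(obs, model).items():
    print(f"{name} = {value: .3f}")
```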

  16. Antioxidant activity of selenenamide-based mimic as a function of the aromatic thiols nucleophilicity, a DFT-SAPE model.

    Science.gov (United States)

    Kheirabadi, Ramesh; Izadyar, Mohammad

    2018-05-18

    The mechanism of action of the selenenamide 1 as a mimic of the glutathione peroxidase (GPx) was investigated by density functional theory. The solvent-assisted proton exchange procedure was applied to model the catalytic behavior and antioxidant activity of this mimic. To gain insight into the charge transfer effect, different aromatic thiols, including electron-donating substituents on the phenyl ring, were considered. The catalytic behavior of the selenenamide was modeled as a four-step mechanism, described by the oxidation of the mimic, the reduction of the obtained product, selenoxide, the reduction of the selenenylsulfide, and dehydration of selenenic acid. On the basis of the activation parameters, the final step of the proposed mechanism is the rate-determining step of the catalytic cycle. Turnover frequency (TOF) analysis showed that electron-donating groups at the para-position of the phenyl ring of PhSH do not affect the catalytic activity of the selenenamide, in contrast to p-methylthiophenol, which shows the highest nucleophilicity. Evaluation of the electronic contribution of the various donating groups on the phenyl ring of the aromatic thiols shows that the antioxidant activity of the selenenamide increases considerably in the presence of electron-donating substituents. Finally, the charge transfer process at the rate-determining step was investigated by natural bond orbital analysis. Copyright © 2018 Elsevier Ltd. All rights reserved.
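
    For readers unfamiliar with TOF estimates, the back-of-the-envelope sketch below converts a Gibbs activation energy into a turnover frequency with the Eyring relation TOF = (kB*T/h)*exp(-dG/(R*T)); the barrier values are arbitrary placeholders, not results from this study.

```python
# Illustrative only: turnover frequency from a Gibbs activation energy via
# the Eyring relation. Barriers below are hypothetical.
import math

kB, h, R = 1.380649e-23, 6.62607015e-34, 8.314  # SI units

def tof(dg_kcal_per_mol, T=298.15):
    dg = dg_kcal_per_mol * 4184.0               # kcal/mol -> J/mol
    return (kB * T / h) * math.exp(-dg / (R * T))

for dg in (18.0, 20.0, 22.0):                   # hypothetical barriers
    print(f"dG = {dg:.0f} kcal/mol -> TOF = {tof(dg):.2e} s^-1")
```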

  17. Macrocyclic ligand decorated ordered mesoporous silica with large-pore and short-channel characteristics for effective separation of lithium isotopes: synthesis, adsorptive behavior study and DFT modeling.

    Science.gov (United States)

    Liu, Yuekun; Liu, Fei; Ye, Gang; Pu, Ning; Wu, Fengcheng; Wang, Zhe; Huo, Xiaomei; Xu, Jian; Chen, Jing

    2016-10-18

    Effective separation of lithium isotopes is of strategic value and attracts growing attention worldwide. This study reports a new class of macrocyclic ligand decorated ordered mesoporous silica (OMS) with large-pore and short-channel characteristics, which holds the potential to effectively separate lithium isotopes in aqueous solutions. Initially, a series of benzo-15-crown-5 (B15C5) derivatives containing different electron-donating or -withdrawing substituents were synthesized. Extractive separation of lithium isotopes in a liquid-liquid system was comparatively studied, highlighting the effect of the substituent, solvent, counter anion and temperature. The optimal NH 2 -B15C5 ligands were then covalently anchored to a short-channel SBA-15 OMS precursor bearing alkyl halides via a post-modification protocol. Adsorptive separation of the lithium isotopes was fully investigated, combined with kinetics and thermodynamics analysis, and simulation by using classic adsorption isotherm models. The NH 2 -B15C5 ligand functionalized OMSs exhibited selectivity to lithium ions against other alkali metal ions including K(I). Additionally, a more efficient separation of lithium isotopes could be obtained at a lower temperature in systems with softer counter anions and solvents with a lower dielectric constant. The highest separation factor (α = 1.049 ± 0.002) was obtained in CF 3 COOLi aqueous solution at 288.15 K. Moreover, theoretical computation based on the density functional theory (DFT) was performed to elucidate the complexation interactions between the macrocyclic ligands and lithium ions. A suggested mechanism involving an isotopic exchange equilibrium was proposed to describe the lithium isotope separation by the functionalized OMSs.
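
    Two of the quantitative steps mentioned above (fitting a classic adsorption isotherm and computing an isotope separation factor) can be sketched as follows; the uptake data and isotope ratios are illustrative placeholders, with the ratios chosen only so that alpha lands near the reported 1.049.

```python
# Illustrative only: Langmuir isotherm fit on synthetic uptake data, plus a
# separation factor from assumed phase isotope ratios.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, K):
    """Equilibrium uptake q for solution concentration c."""
    return q_max * K * c / (1.0 + K * c)

c = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])    # mmol/L (synthetic)
q = np.array([0.9, 1.6, 2.5, 3.8, 4.6, 5.1])      # mmol/g (synthetic)
(q_max, K), _ = curve_fit(langmuir, c, q, p0=(5.0, 0.5))
print(f"q_max = {q_max:.2f} mmol/g, K = {K:.2f} L/mmol")

# Separation factor between phases: alpha = (6Li/7Li)_ads / (6Li/7Li)_soln
r_ads, r_soln = 0.0840, 0.0801                    # assumed isotope ratios
print(f"alpha = {r_ads / r_soln:.3f}")
```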

  18. Design, Synthesis and DFT/DNP Modeling Study of New 2-Amino-5-arylazothiazole Derivatives as Potential Antibacterial Agents

    Directory of Open Access Journals (Sweden)

    Sraa Abu-Melha

    2018-02-01

    Full Text Available A new series of 2-amino-5-arylazothiazole derivatives has been designed and synthesized in 61–78% yields and screened as potential antibacterial drug candidates against the Gram-negative bacterium Escherichia coli. The geometries of the title compounds were studied using the Material Studio package; semi-core pseudopotential calculations (dspp) were performed with the double numerical basis set plus polarization (DNP) to predict the properties of the materials using the hybrid DFT/B3LYP method. Modeling calculations, especially the (EH-EL) difference and the energetic parameters, revealed that some of the title compounds may be promising tools for further research work and that the activity is structure dependent.
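
    The (EH-EL) difference feeds the standard conceptual-DFT descriptors; as a generic illustration (not the paper's procedure), the sketch below derives the gap, electronegativity, hardness, and electrophilicity from placeholder frontier-orbital energies via Koopmans-type approximations.

```python
# Illustrative only: global reactivity descriptors from frontier orbital
# energies EH (HOMO) and EL (LUMO); input values are placeholders.
def reactivity_descriptors(e_homo, e_lumo):
    """Global descriptors (eV) via Koopmans-type approximations."""
    I, A = -e_homo, -e_lumo              # ionization energy, electron affinity
    gap = e_lumo - e_homo                # the EH-EL difference (magnitude)
    chi = (I + A) / 2.0                  # absolute electronegativity
    eta = (I - A) / 2.0                  # chemical hardness
    omega = chi ** 2 / (2.0 * eta)       # electrophilicity index
    return {"gap": gap, "chi": chi, "eta": eta, "omega": omega}

print(reactivity_descriptors(e_homo=-5.8, e_lumo=-2.3))  # hypothetical values
```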

  19. The impact of model peptides on structural and dynamic properties of egg yolk lecithin liposomes - experimental and DFT studies.

    Science.gov (United States)

    Wałęsa, Roksana; Man, Dariusz; Engel, Grzegorz; Siodłak, Dawid; Kupka, Teobald; Ptak, Tomasz; Broda, Małgorzata A

    2015-07-01

    Electron spin resonance (ESR), 1H-NMR, voltage and resistance experiments were performed to explore structural and dynamic changes of the Egg Yolk Lecithin (EYL) bilayer upon addition of model peptides. Two of them are phenylalanine (Phe) derivatives, Ac-Phe-NHMe (1) and Ac-Phe-NMe2 (2), and the third one, Ac-(Z)-ΔPhe-NMe2 (3), is a derivative of (Z)-α,β-dehydrophenylalanine. The ESR results revealed that all compounds reduced the fluidity of the liposome membrane, and the highest activity was observed for compound 2 with the N-methylated C-terminal amide bond (Ac-Phe-NMe2). This compound, being the most hydrophobic, penetrates easily through biological membranes. This was also observed in the voltage and resistance studies. 1H-NMR studies provided sound evidence of H-bond interactions between the studied diamides and the lecithin polar head. The most significant changes in H-atom chemical shifts and spin-lattice relaxation times T1 were observed for compound 1. Our experimental studies were supported by theoretical calculations. Complexes of EYL with Ac-Phe-NMe2 and Ac-(Z)-ΔPhe-NMe2, stabilized by NH···O and/or CH···O H-bonds, were created and optimized at the M06-2X/6-31G(d) level of theory in vacuo and in an H2O environment. According to our molecular-modeling studies, the most probable lecithin site of H-bond interaction with the studied diamides is the negatively charged O-atom in the phosphate group, which acts as an H-atom acceptor. Moreover, the highest binding energy to hydrocarbon chains was observed in the case of Ac-Phe-NMe2 (2). Copyright © 2015 Verlag Helvetica Chimica Acta AG, Zürich.

  1. Quantitative and Functional Requirements for Bioluminescent Cancer Models.

    Science.gov (United States)

    Feys, Lynn; Descamps, Benedicte; Vanhove, Christian; Vermeulen, Stefan; Vandesompele, Jo; Vanderheyden, Katrien; Messens, Kathy; Bracke, Marc; De Wever, Olivier

    2016-01-01

    Bioluminescent cancer models are widely used but detailed quantification of the luciferase signal and functional comparison with a non-transfected control cell line are generally lacking. In the present study, we provide quantitative and functional tests for luciferase-transfected cells. We quantified the luciferase expression in BLM and HCT8/E11 transfected cancer cells, and examined the effect of long-term luciferin exposure. The present study also investigated functional differences between parental and transfected cancer cells. Our results showed that quantification of different single-cell-derived populations is superior with droplet digital polymerase chain reaction. Quantification of luciferase protein level and luciferase bioluminescent activity is only useful when there is a significant difference in copy number. Continuous exposure of cell cultures to luciferin leads to inhibitory effects on mitochondrial activity, cell growth and bioluminescence. These inhibitory effects correlate with luciferase copy number. Cell culture and mouse xenograft assays showed no significant functional differences between luciferase-transfected and parental cells. Luciferase-transfected cells should be validated by quantitative and functional assays before starting large-scale experiments. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  2. Towards Quantitative Spatial Models of Seabed Sediment Composition.

    Directory of Open Access Journals (Sweden)

    David Stephens

    Full Text Available There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom's parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method.
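
    A minimal version of this pipeline (additive log-ratio transform, random forest regression, back-transform to the simplex) can be sketched with scikit-learn; the predictors and grain-size fractions below are randomly generated stand-ins for the real layers.

```python
# Illustrative only: predict mud/sand/gravel fractions on the additive
# log-ratio scale with a random forest, then map predictions back to
# the simplex. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))    # stand-ins for bathymetry, energy, optics...
logits = np.column_stack([1.5 * X[:, 0], 0.8 * X[:, 1], np.zeros(500)])
comp = np.exp(logits + rng.normal(0, 0.3, logits.shape))
comp /= comp.sum(axis=1, keepdims=True)        # mud, sand, gravel fractions

alr = np.log(comp[:, :2] / comp[:, 2:3])       # additive log-ratio scale
Xtr, Xte, ytr, yte = train_test_split(X, alr, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xtr, ytr)

pred = rf.predict(Xte)
back = np.column_stack([np.exp(pred), np.ones(len(pred))])
back /= back.sum(axis=1, keepdims=True)        # predictions back on the simplex
print("average R^2 over the two log-ratios:", round(rf.score(Xte, yte), 2))
```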

  3. A quantitative phase field model for hydride precipitation in zirconium alloys: Part I. Development of quantitative free energy functional

    International Nuclear Information System (INIS)

    Shi, San-Qiang; Xiao, Zhihua

    2015-01-01

    A temperature dependent, quantitative free energy functional was developed for the modeling of hydride precipitation in zirconium alloys within a phase field scheme. The model takes into account crystallographic variants of hydrides, interfacial energy between hydride and matrix, interfacial energy between hydrides, elastoplastic hydride precipitation and interaction with externally applied stress. The model is fully quantitative in real time and real length scale, and simulation results were compared with the limited experimental data available in the literature, with reasonable agreement. The work calls for experimental and/or theoretical investigations of some of the key material properties that are not yet available in the literature.

  4. Modeling Cancer Metastasis using Global, Quantitative and Integrative Network Biology

    DEFF Research Database (Denmark)

    Schoof, Erwin; Erler, Janine

    understanding of molecular processes which are fundamental to tumorigenesis. In Article 1, we propose a novel framework for how cancer mutations can be studied by taking into account their effect at the protein network level. In Article 2, we demonstrate how global, quantitative data on phosphorylation dynamics can be generated using MS, and how this can be modeled using a computational framework for deciphering kinase-substrate dynamics. This framework is described in depth in Article 3, and covers the design of KinomeXplorer, which allows the prediction of kinases responsible for modulating observed phosphorylation dynamics in a given biological sample. In Chapter III, we move into Integrative Network Biology, where, by combining two fundamental technologies (MS & NGS), we can obtain more in-depth insights into the links between cellular phenotype and genotype. Article 4 describes the proof...

  5. Quantitative genetic models of sexual selection by male choice.

    Science.gov (United States)

    Nakahashi, Wataru

    2008-09-01

    There are many examples of male mate choice for female traits that tend to be associated with high fertility. I develop quantitative genetic models of a female trait and a male preference to show when such a male preference can evolve. I find that a disagreement between the fertility maximum and the viability maximum of the female trait is necessary for directional male preference (preference for extreme female trait values) to evolve. Moreover, when there is a shortage of available male partners or variance in male nongenetic quality, strong male preference can evolve. Furthermore, I also show that males evolve to exhibit a stronger preference for females that are more feminine (less resemblance to males) than the average female when there is a sexual dimorphism caused by fertility selection which acts only on females.

  6. Quantitative Modelling of Trace Elements in Hard Coal.

    Science.gov (United States)

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models constructed. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools for coal quality assessment.
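
    The core of such a calibration is compact; the sketch below fits a PLS model to synthetic stand-ins for the 132-sample, 24-parameter data set and picks the number of latent variables by cross-validated RMSE. It illustrates the generic workflow only, not the robust variant used in the study.

```python
# Illustrative only: PLS calibration of one trace-element concentration
# against coal/ash parameters, with the latent-variable count chosen by
# 10-fold cross-validated RMSE. Data are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
X = rng.normal(size=(132, 24))                    # physico-chemical parameters
y = X[:, :5] @ rng.normal(size=5) + rng.normal(0, 0.5, 132)  # e.g. Pb content

best = None
for n_comp in range(1, 11):
    pred = cross_val_predict(PLSRegression(n_components=n_comp), X, y, cv=10)
    rmsecv = np.sqrt(np.mean((pred.ravel() - y) ** 2))
    if best is None or rmsecv < best[1]:
        best = (n_comp, rmsecv)
print(f"best model: {best[0]} latent variables, RMSECV = {best[1]:.3f}")
```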

  7. Melanoma screening: Informing public health policy with quantitative modelling.

    Directory of Open Access Journals (Sweden)

    Stephen Gilmore

    Full Text Available Australia and New Zealand share the highest incidence rates of melanoma worldwide. Despite the substantial increase in public and physician awareness of melanoma in Australia over the last 30 years (a result of the publicly funded mass media campaigns that began in the early 1980s), mortality has steadily increased during this period. This increased mortality has led investigators to question the relative merits of primary versus secondary prevention; that is, sensible sun exposure practices versus early detection. Increased melanoma vigilance on the part of the public and among physicians has resulted in large increases in public health expenditure, primarily from screening costs and increased rates of office surgery. Has this attempt at secondary prevention been effective? Unfortunately, epidemiologic studies addressing the causal relationship between the level of secondary prevention and mortality are prohibitively difficult to implement; it is currently unknown whether increased melanoma surveillance reduces mortality, and if so, whether such an approach is cost-effective. Here I address the issue of secondary prevention of melanoma with respect to incidence and mortality (and cost per life saved) by developing a Markov model of melanoma epidemiology based on Australian incidence and mortality data. The advantages of developing a methodology that can determine constraint-based surveillance outcomes are twofold: first, it can address the issue of effectiveness; and second, it can quantify the trade-off between cost and utilisation of medical resources on one hand, and reduced morbidity and lives saved on the other. With respect to melanoma, implementing the model facilitates the quantitative determination of the relative effectiveness and trade-offs associated with different levels of secondary and tertiary prevention, both retrospectively and prospectively. For example, I show that the surveillance enhancement that began in

  8. Combined spectroscopic, DFT, TD-DFT and MD study of newly synthesized thiourea derivative

    Science.gov (United States)

    Menon, Vidya V.; Sheena Mary, Y.; Shyma Mary, Y.; Panicker, C. Yohannan; Bielenica, Anna; Armaković, Stevan; Armaković, Sanja J.; Van Alsenoy, Christian

    2018-03-01

    A novel thiourea derivative, 1-(3-bromophenyl)-3-[3-(trifluoromethyl)phenyl]thiourea (ANF-22), is synthesized and characterized by FTIR, FT-Raman and NMR spectroscopy, both experimentally and theoretically. A detailed conformational analysis of the title molecule has been conducted in order to locate the lowest energy geometry, which was further subjected to detailed investigation of spectroscopic, reactivity, degradation and docking properties by density functional theory (DFT) calculations and molecular dynamics (MD) simulations. Time-dependent DFT (TD-DFT) calculations have also been used to simulate UV spectra and investigate charge transfer within the molecule. Natural bond orbital analysis has been performed to analyze charge delocalization, and the electronic properties are analyzed using HOMO and LUMO energies. The molecular electrostatic potential map is used for the quantitative measurement of active sites in the molecule. In order to determine the locations possibly prone to electrophilic attacks, we have calculated average local ionization energies and mapped them to the electron density surface. Further insight into the local reactivity properties has been obtained by calculation of Fukui functions, also mapped to the electron density surface. Possible degradation by the autoxidation mechanism has been assessed by calculations of bond dissociation energies for hydrogen abstraction. Atoms of the title molecule with significant interactions with water molecules have been determined by calculations of radial distribution functions. The title compound can serve as a lead compound for developing new analgesic drugs.
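
    The condensed Fukui functions mentioned above are often obtained by a finite-difference recipe over atomic partial charges of the neutral molecule and its ions; the sketch below shows that arithmetic with invented charges for three atoms.

```python
# Illustrative only: condensed Fukui functions from atomic partial charges
# of the N-electron molecule and its (N-1)- and (N+1)-electron ions.
# All charge values are invented placeholders.
import numpy as np

q_N = np.array([-0.32, 0.15, -0.48])      # charges in the neutral molecule
q_Nm1 = np.array([-0.10, 0.35, -0.30])    # cation (N-1 electrons)
q_Np1 = np.array([-0.45, -0.02, -0.60])   # anion (N+1 electrons)

f_minus = q_Nm1 - q_N   # electrophilic attack: f-(k) = q_k(N-1) - q_k(N)
f_plus = q_N - q_Np1    # nucleophilic attack:  f+(k) = q_k(N) - q_k(N+1)

for k, (fm, fp) in enumerate(zip(f_minus, f_plus)):
    print(f"atom {k}: f- = {fm:+.2f}, f+ = {fp:+.2f}")
```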

  9. Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    DEFF Research Database (Denmark)

    ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting pa...

  10. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee

    2016-01-01

    Full Text Available Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives to achieve operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods to increase supply chain visibility are still ambiguous. Based on the extant research in supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using the Z score from Six Sigma methodology to evaluate and improve the level of supply chain visibility.
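
    The process-capability reading of visibility can be illustrated with a Z score computed against a specification limit; the metric, limit, and sample values below are invented, and the sketch is not the paper's full model.

```python
# Illustrative only: express a visibility metric's performance against a
# lower specification limit as a Six Sigma capability Z score.
import statistics
from statistics import NormalDist

visibility = [82, 78, 91, 85, 74, 88, 80, 86, 79, 84]  # e.g. % orders traced
lsl = 70.0                                             # lower spec limit

mu = statistics.mean(visibility)
sigma = statistics.stdev(visibility)
z = (mu - lsl) / sigma                                 # capability Z score
defect_rate = NormalDist().cdf(-z)                     # P(below the limit)
print(f"Z = {z:.2f}, expected shortfall rate = {defect_rate:.4%}")
```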

  11. Testing Process Predictions of Models of Risky Choice: A Quantitative Model Comparison Approach

    Directory of Open Access Journals (Sweden)

    Thorsten Pachur

    2013-09-01

    Full Text Available This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or nonlinear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter, Gigerenzer, & Hertwig, 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called similarity. In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies.

  12. Testing process predictions of models of risky choice: a quantitative model comparison approach

    Science.gov (United States)

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472
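
    For concreteness, a simplified version of the priority heuristic for two-outcome gain gambles can be written down directly (examine minimum gains, then their probabilities, then maximum gains, stopping once a difference exceeds the aspiration level); this paraphrase and the example gambles are illustrative, not the authors' full specification.

```python
# Illustrative only: a simplified priority heuristic for two-outcome
# gain gambles, with aspiration levels of 1/10 of the maximum gain and
# 0.1 on the probability scale.
def priority_heuristic(a, b):
    """a, b: gambles as lists of (outcome, probability), outcomes >= 0."""
    max_gain = max(o for g in (a, b) for o, _ in g)
    (min_a, p_a), (min_b, p_b) = min(a), min(b)       # minimum outcomes
    if abs(min_a - min_b) >= 0.1 * max_gain:          # reason 1: minimum gain
        return "A" if min_a > min_b else "B"
    if abs(p_a - p_b) >= 0.1:                         # reason 2: its probability
        return "A" if p_a < p_b else "B"
    return "A" if max(a)[0] > max(b)[0] else "B"      # reason 3: maximum gain

A = [(2000, 0.6), (500, 0.4)]     # hypothetical gambles
B = [(2500, 0.4), (200, 0.6)]
print("choose:", priority_heuristic(A, B))
```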

  13. Simulations of the azimuthal distribution of low-energy H atoms scattered off Ag(1 1 0) at grazing incidence: DFT many-body versus model pair potentials

    CERN Document Server

    Cafarelli, P; Benazeth, C; Nieuwjaer, N; Lorente, N

    2003-01-01

    We compare the azimuthal distribution of H atoms after scattering off Ag(1 1 0) obtained by molecular dynamics with different H-Ag(1 1 0) potential energy surfaces (PES) and experimental results. We use grazing incident H atoms and low energies (up to 4 keV). Density functional theory (DFT) calculations are performed for the static case of an H atom in front of an Ag(1 1 0) surface. The surface is represented by an 8-atom slab, and the H atoms form 1x1 and 2x2 supercells. The generalized gradient approximation is used. Classical trajectories are evaluated on the obtained PES, and the azimuthal distribution of the scattered atoms is calculated. Good agreement with experiment is obtained which gives us some confidence in the correct description of the system at low energies by the static DFT calculations. These results are also compared with pair-potential calculations. The accuracy of trajectories may be important for the correct evaluation of charge transfer, energy loss and straggling during ion-surface coll...

  14. Avoiding fractional electrons in subsystem DFT based ab-initio molecular dynamics yields accurate models for liquid water and solvated OH radical.

    Science.gov (United States)

    Genova, Alessandro; Ceresoli, Davide; Pavanello, Michele

    2016-06-21

    In this work we achieve three milestones: (1) we present a subsystem DFT method capable of running ab-initio molecular dynamics simulations accurately and efficiently. (2) In order to rid the simulations of inter-molecular self-interaction error, we exploit the ability of semilocal frozen density embedding formulation of subsystem DFT to represent the total electron density as a sum of localized subsystem electron densities that are constrained to integrate to a preset, constant number of electrons; the success of the method relies on the fact that employed semilocal nonadditive kinetic energy functionals effectively cancel out errors in semilocal exchange-correlation potentials that are linked to static correlation effects and self-interaction. (3) We demonstrate this concept by simulating liquid water and solvated OH(•) radical. While the bulk of our simulations have been performed on a periodic box containing 64 independent water molecules for 52 ps, we also simulated a box containing 256 water molecules for 22 ps. The results show that, provided one employs an accurate nonadditive kinetic energy functional, the dynamics of liquid water and OH(•) radical are in semiquantitative agreement with experimental results or higher-level electronic structure calculations. Our assessments are based upon comparisons of radial and angular distribution functions as well as the diffusion coefficient of the liquid.

  15. Avoiding fractional electrons in subsystem DFT based ab-initio molecular dynamics yields accurate models for liquid water and solvated OH radical

    International Nuclear Information System (INIS)

    Genova, Alessandro; Pavanello, Michele; Ceresoli, Davide

    2016-01-01

    In this work we achieve three milestones: (1) we present a subsystem DFT method capable of running ab-initio molecular dynamics simulations accurately and efficiently. (2) In order to rid the simulations of inter-molecular self-interaction error, we exploit the ability of semilocal frozen density embedding formulation of subsystem DFT to represent the total electron density as a sum of localized subsystem electron densities that are constrained to integrate to a preset, constant number of electrons; the success of the method relies on the fact that employed semilocal nonadditive kinetic energy functionals effectively cancel out errors in semilocal exchange–correlation potentials that are linked to static correlation effects and self-interaction. (3) We demonstrate this concept by simulating liquid water and solvated OH• radical. While the bulk of our simulations have been performed on a periodic box containing 64 independent water molecules for 52 ps, we also simulated a box containing 256 water molecules for 22 ps. The results show that, provided one employs an accurate nonadditive kinetic energy functional, the dynamics of liquid water and the OH• radical are in semiquantitative agreement with experimental results or higher-level electronic structure calculations. Our assessments are based upon comparisons of radial and angular distribution functions as well as the diffusion coefficient of the liquid.

  16. Interpretation of protein quantitation using the Bradford assay: comparison with two calculation models.

    Science.gov (United States)

    Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung

    2013-03-01

    The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards. Copyright © 2012 Elsevier Inc. All rights reserved.
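
    The arithmetic behind the two models is simply residue counting; the sketch below estimates relative dye binding from a sequence under M1 (Arg + Lys) and M2 (Arg + Lys + His), with toy sequences standing in for real proteins.

```python
# Illustrative only: Bradford-assay calculation models M1 (Arg+Lys) and
# M2 (Arg+Lys+His) as residue counts, normalized against a standard.
# Both sequences are hypothetical placeholders.
def binding_residues(seq, model="M2"):
    residues = "RK" if model == "M1" else "RKH"
    return sum(seq.count(aa) for aa in residues)

sample = "MKHRRLLAKGHKERM"        # hypothetical protein sequence
standard = "MRRKKAAGLK"           # hypothetical calibration standard

for m in ("M1", "M2"):
    ratio = binding_residues(sample, m) / binding_residues(standard, m)
    print(f"{m}: sample binds {ratio:.2f}x the dye of the standard")
```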

  17. Quantitative analysis of prediction models for hot cracking in ...

    Indian Academy of Sciences (India)

    A Rodríguez-Prieto

    2017-11-16

    Nov 16, 2017 ... enhancing safety margins and adding greater precision to quantitative accident prediction [45]. One deterministic methodology is the stringency level (SL) approach, which is recognized as a valuable decision tool in the selection of standardized materials specifications to prevent potential failures [3].

  18. Quantitative modelling and analysis of a Chinese smart grid: a stochastic model checking case study

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    2014-01-01

    Cyber-physical systems integrate information and communication technology with the physical elements of a system, mainly for monitoring and controlling purposes. The conversion of the traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues that require novel methods and applications. One of the important issues in this context is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese smart grid implementation as a case study and address the verification problem for performance and energy consumption. We employ a stochastic model checking approach and present our modelling and analysis study using the PRISM model checker.

  19. Externally predictive quantitative modeling of supercooled liquid vapor pressure of polychlorinated-naphthalenes through electron-correlation based quantum-mechanical descriptors.

    Science.gov (United States)

    Vikas; Chayawan

    2014-01-01

    For predicting physico-chemical properties related to environmental fate of molecules, quantitative structure-property relationships (QSPRs) are valuable tools in environmental chemistry. For developing a QSPR, molecular descriptors computed through quantum-mechanical methods are generally employed. The accuracy of a quantum-mechanical method, however, rests on the amount of electron-correlation estimated by the method. In this work, single-descriptor QSPRs for supercooled liquid vapor pressure of chloronaphthalenes and polychlorinated-naphthalenes are developed using molecular descriptors based on the electron-correlation contribution of the quantum-mechanical descriptor. The quantum-mechanical descriptors for which the electron-correlation contribution is analyzed include total-energy, mean polarizability, dipole moment, frontier orbital (HOMO/LUMO) energy, and density-functional theory (DFT) based descriptors, namely, absolute electronegativity, chemical hardness, and electrophilicity index. A total of 40 single-descriptor QSPRs were developed using molecular descriptors computed with advanced semi-empirical (SE) methods, namely, RM1, PM7, and ab initio methods, namely, Hartree-Fock and DFT. The developed QSPRs are validated using state-of-the-art external validation procedures employing an external prediction set. From the comparison of external predictivity of the models, it is observed that the single-descriptor QSPRs developed using total energy and correlation energy are found to be far more robust and predictive than those developed using commonly employed descriptors such as HOMO/LUMO energy and dipole moment. The work proposes that if real external predictivity of a QSPR model is desired to be explored, particularly, in terms of intra-molecular interactions, correlation-energy serves as a more appropriate descriptor than the polarizability. However, for developing QSPRs, computationally inexpensive advanced SE methods such as PM7 can be more reliable than
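
    The essence of a single-descriptor QSPR with external validation fits in a few lines; the sketch below regresses a synthetic log vapor pressure on one made-up descriptor column and scores the model on a held-out external prediction set.

```python
# Illustrative only: a single-descriptor QSPR (here a fabricated
# "correlation energy" column) validated on an external prediction set.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
descriptor = rng.uniform(-8.0, -2.0, 40)             # e.g. correlation energy
log_vp = 1.1 * descriptor + 0.5 + rng.normal(0, 0.3, 40)

train, test = np.arange(30), np.arange(30, 40)       # external prediction set
X = descriptor.reshape(-1, 1)
qspr = LinearRegression().fit(X[train], log_vp[train])

r2_ext = qspr.score(X[test], log_vp[test])
print(f"slope = {qspr.coef_[0]:.2f}, external R^2 = {r2_ext:.2f}")
```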

  20. Predictions of Physicochemical Properties of Ionic Liquids with DFT

    Directory of Open Access Journals (Sweden)

    Karl Karu

    2016-07-01

    Full Text Available Nowadays, the density functional theory (DFT)-based high-throughput computational approach is becoming more efficient and, thus, attractive for finding advanced materials for electrochemical applications. In this work, we illustrate how theoretical models, computational methods, and informatics techniques can be put together to form a simple DFT-based throughput computational workflow for predicting physicochemical properties of room-temperature ionic liquids. The developed workflow has been used for screening a set of 48 ionic pairs and for analyzing the gathered data. The predicted relative electrochemical stabilities, ionic charges and dynamic properties of the investigated ionic liquids are discussed in the light of their potential practical applications.

  1. DFT/PCM, QTAIM, 1H NMR conformational studies and QSAR modeling of thirty-two anti-Leishmania amazonensis Morita-Baylis-Hillman Adducts

    Science.gov (United States)

    Filho, Edilson B. A.; Moraes, Ingrid A.; Weber, Karen C.; Rocha, Gerd B.; Vasconcellos, Mário L. A. A.

    2012-08-01

    Morita-Baylis-Hillman adducts (MBHA) have recently been synthesized and bio-evaluated by our research group against Leishmania amazonensis, a parasite that causes cutaneous and mucocutaneous leishmaniasis. We present here a theoretical conformational study of thirty-two leishmanicidal MBHA by B3LYP/6-31+g(d) calculations with the Polarized Continuum Model (PCM) to simulate the influence of water. Intramolecular hydrogen bonds (IHBs) were indicated to control most of the conformational preferences of the MBHA. Quantum Theory of Atoms in Molecules (QTAIM) calculations were able to characterize these interactions at the Bond Critical Point level. Compounds presenting an unusual seven-membered IHB between the NO2 group and the hydroxyl moiety, supported by experimental spectroscopic data, showed a considerable improvement in biological activity (lower IC50 values). These results are in accordance with the redox NO2 mechanism of action. Based on structural observations, some molecular descriptors were calculated and submitted to Quantitative Structure-Activity Relationship (QSAR) studies through the PLS Regression Method. These studies provided a model with good validation parameter values (R2 = 0.71, Q2 = 0.61 and Qext2 = 0.92).

  2. DFT reactivity indices in confined many-electron atoms

    Indian Academy of Sciences (India)

    Unknown

    Functional Theory (DFT) based global descriptors of chemical reactivity for atoms .... interesting due to its utility as a model in the wide variety of applications ... hydrogen atom at Rc = 2⋅0 au is expected to correspond to the energy value of ...

  3. A Review of Quantitative Situation Assessment Models for Nuclear Power Plant Operators

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Seong, Poong Hyun

    2009-01-01

    Situation assessment is the process of developing situation awareness and situation awareness is defined as 'the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future.' Situation awareness is an important element influencing human actions because human decision making is based on the result of situation assessment or situation awareness. There are many models for situation awareness and those models can be categorized into qualitative or quantitative. As the effects of some input factors on situation awareness can be investigated through the quantitative models, the quantitative models are more useful for the design of operator interfaces, automation strategies, training program, and so on, than the qualitative models. This study presents the review of two quantitative models of situation assessment (SA) for nuclear power plant operators

  4. Improvement of the ID model for quantitative network data

    DEFF Research Database (Denmark)

    Sørensen, Peter Borgen; Damgaard, Christian Frølund; Dupont, Yoko Luise

    2015-01-01

    Many interactions are often poorly registered or even unobserved in empirical quantitative networks. Hence, the output of the statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks. Such artefacts ... This presentation will illustrate the application of the ID method based on a data set which consists of counts of visits by 152 pollinator species to 16 plant species. The method is based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1), pi ... reproduce the high number of zero valued cells in the data set and mimic the sampling distribution. 1 Sørensen et al, Journal of Pollination Ecology, 6(18), 2011, pp. 129-139

  5. A general mixture model for mapping quantitative trait loci by using molecular markers

    NARCIS (Netherlands)

    Jansen, R.C.

    1992-01-01

    In a segregating population a quantitative trait may be considered to follow a mixture of (normal) distributions, the mixing proportions being based on Mendelian segregation rules. A general and flexible mixture model is proposed for mapping quantitative trait loci (QTLs) by using molecular markers.

  6. Synthesis, spectral, DFT modeling, cytotoxicity and microbial studies of novel Zr(IV), Ce(IV) and U(VI) piroxicam complexes

    Science.gov (United States)

    El-Shwiniy, Walaa H.; Zordok, Wael A.

    2018-06-01

    The Zr(IV), Ce(IV) and U(VI) complexes of the anti-inflammatory drug piroxicam were prepared and characterized using elemental analyses, conductance, IR, UV-Vis, magnetic moment, 1H NMR and thermal analysis. The metal:Pir ratio is found to be 1:2 in all complexes, as estimated using the molar ratio method. The conductance data reveal that the Zr(IV) and U(VI) chelates are non-electrolytes, whereas the Ce(IV) complex is an electrolyte. Infrared spectra confirm that Pir behaves as a bidentate ligand coordinated to the metal ions via the oxygen and nitrogen atoms of ν(C=O)carbonyl and ν(C=N)pyridyl, respectively. The kinetic parameters of the thermogravimetric curves and their differentials, such as activation energy, entropy of activation, enthalpy of activation, and Gibbs free energy, were evaluated using the Coats-Redfern and Horowitz-Metzger equations for Pir and its complexes. The geometry of the piroxicam drug in the free state differs significantly from that in the metal complex: upon metal ion-drug bond formation, the drug switches from the closed structure (equilibrium geometry) to the open one. Antimicrobial activity was assessed against several types of bacteria and fungi. The in vitro cytotoxicity of the complexes, in comparison with Pir, against the colon carcinoma (HCT-116) cell line was measured. The geometrical structure of the piroxicam ligand was optimized using DFT calculations.
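
    The Coats-Redfern evaluation reduces, for first-order decomposition, to a linear regression of ln[-ln(1-alpha)/T^2] on 1/T whose slope gives -Ea/R; the sketch below runs that regression on synthetic conversion data, not the study's thermograms.

```python
# Illustrative only: Coats-Redfern linearization for a first-order step.
# Regress ln[-ln(1 - alpha) / T^2] against 1/T; the slope equals -Ea/R.
import numpy as np

R = 8.314                                                        # J/(mol K)
T = np.array([500.0, 510.0, 520.0, 530.0, 540.0, 550.0, 560.0])  # K
alpha = np.array([0.05, 0.09, 0.16, 0.27, 0.42, 0.60, 0.78])     # conversion

y = np.log(-np.log(1.0 - alpha) / T ** 2)   # g(alpha) = -ln(1 - alpha)
slope, intercept = np.polyfit(1.0 / T, y, 1)
print(f"Ea = {-slope * R / 1000:.0f} kJ/mol")
```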

  7. A quantitative risk-based model for reasoning over critical system properties

    Science.gov (United States)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many of the critical properties such as security, safety, survivability, fault tolerance, and real-time performance.

  8. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  9. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  10. Linear regression models for quantitative assessment of left ...

    African Journals Online (AJOL)

    Changes in left ventricular structures and function have been reported in cardiomyopathies. No prediction models have been established in this environment. This study established regression models for prediction of left ventricular structures in normal subjects. A sample of normal subjects was drawn from a large urban ...

  11. Towards the quantitative evaluation of visual attention models.

    Science.gov (United States)

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    J. Earth Syst. Sci. (2017) 126: 33 ... ogy, climate change, glaciology and crop models in agriculture. Different ... In areas where local topography strongly influences precipitation .... (vii) cloud amount, (viii) cloud type and (ix) sun shine hours.

  13. A robust quantitative near infrared modeling approach for blend monitoring.

    Science.gov (United States)

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material-sparing near-infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development approach (blender-scale method) were compared by simultaneously monitoring a 1 kg batch-size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop near-infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.

  14. 77 FR 41985 - Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A...

    Science.gov (United States)

    2012-07-17

    Federal Register notice announcing a public technical workshop on the use of influenza disease models to generate quantitative estimates of the benefits and risks of influenza vaccination (docket title: Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A Technical Workshop).

  15. Exploiting linkage disequilibrium in statistical modelling in quantitative genomics

    DEFF Research Database (Denmark)

    Wang, Lei

    Alleles at two loci are said to be in linkage disequilibrium (LD) when they are correlated or statistically dependent. Genomic prediction and gene mapping rely on the existence of LD between genetic markers and causal variants of complex traits. In the first part of the thesis, a novel method...... to quantify and visualize local variation in LD along chromosomes is described, and applied to characterize LD patterns at the local and genome-wide scale in three Danish pig breeds. In the second part, different ways of taking LD into account in genomic prediction models are studied. One approach is to use...... the recently proposed antedependence models, which treat neighbouring marker effects as correlated; another approach involves use of haplotype block information derived using the program Beagle. The overall conclusion is that taking LD information into account in genomic prediction models potentially improves...
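
    As a minimal illustration of the LD statistic such models rest on, the common r^2 measure can be computed as the squared correlation of allele dosages at two loci. This is a generic sketch; the genotype coding and data below are hypothetical, not from the thesis:

        # r^2 linkage disequilibrium between two biallelic loci,
        # computed as the squared Pearson correlation of 0/1/2 dosages.
        import numpy as np

        def ld_r2(g1: np.ndarray, g2: np.ndarray) -> float:
            r = np.corrcoef(g1, g2)[0, 1]
            return r * r

        # Hypothetical genotypes for six individuals at two nearby markers.
        g1 = np.array([0, 1, 2, 1, 0, 2])
        g2 = np.array([0, 1, 2, 2, 0, 2])
        print(ld_r2(g1, g2))  # a value close to 1 indicates strong LD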

  16. The place of quantitative energy models in a prospective approach

    International Nuclear Information System (INIS)

    Taverdet-Popiolek, N.

    2009-01-01

    Futurology above all depends on having the right mind set. Gaston Berger summarizes the prospective approach in five main thrusts: prepare for the distant future, be open-minded (have a systems and multidisciplinary approach), carry out in-depth analyses (draw out the actors which are really determinant for the future, as well as established trends), take risks (imagine risky but flexible projects) and finally think about humanity, futurology being a technique at the service of man to help him build a desirable future. On the other hand, forecasting is based on quantified models so as to deduce 'conclusions' about the future. In the field of energy, models are used to draw up scenarios which allow, for instance, measuring the medium or long term effects of energy policies on greenhouse gas emissions or global welfare. Scenarios are shaped by the model's inputs (parameters, sets of assumptions) and outputs. Resorting to a model or projecting by scenario is useful in a prospective approach as it ensures coherence for most of the variables that have been identified through systems analysis and that the mind on its own has difficulty grasping. Interpretation of each scenario must be carried out in the light of the underlying framework of assumptions (the backdrop), developed during the prospective stage. When the horizon is far away (very long-term), the worlds imagined by the futurologist contain breaks (technological, behavioural and organizational) which are hard to integrate into the models. It is here that the main limit on the use of models in futurology lies. (author)

  17. Improved Mental Acuity Forecasting with an Individualized Quantitative Sleep Model

    Directory of Open Access Journals (Sweden)

    Brent D. Winslow

    2017-04-01

    Full Text Available Sleep impairment significantly alters human brain structure and cognitive function, but available evidence suggests that adults in developed nations are sleeping less. A growing body of research has sought to use sleep to forecast cognitive performance by modeling the relationship between the two, but has generally focused on vigilance rather than other cognitive constructs affected by sleep, such as reaction time, executive function, and working memory. Previous modeling efforts have also utilized subjective, self-reported sleep durations and were restricted to laboratory environments. In the current effort, we addressed these limitations by employing wearable systems and mobile applications to gather objective sleep information, assess multi-construct cognitive performance, and model/predict changes to mental acuity. Thirty participants were recruited for participation in the study, which lasted 1 week. Using the Fitbit Charge HR and a mobile version of the automated neuropsychological assessment metric called CogGauge, we gathered a series of features and utilized the unified model of performance to predict mental acuity based on sleep records. Our results suggest that individuals poorly rate their sleep duration, supporting the need for objective sleep metrics to model circadian changes to mental acuity. Participant compliance in using the wearable throughout the week and responding to the CogGauge assessments was 80%. Specific biases were identified in temporal metrics across mobile devices and operating systems and were excluded from the mental acuity metric development. Individualized prediction of mental acuity consistently outperformed group modeling. This effort indicates the feasibility of creating an individualized, mobile assessment and prediction of mental acuity, compatible with the majority of current mobile devices.

  18. First principles pharmacokinetic modeling: A quantitative study on Cyclosporin

    DEFF Research Database (Denmark)

    Mošat', Andrej; Lueshen, Eric; Heitzig, Martina

    2013-01-01

    renal and hepatic clearances, elimination half-life, and mass transfer coefficients, to establish drug biodistribution dynamics in all organs and tissues. This multi-scale model satisfies first principles and conservation of mass, species and momentum.Prediction of organ drug bioaccumulation...... as a function of cardiac output, physiology, pathology or administration route may be possible with the proposed PBPK framework. Successful application of our model-based drug development method may lead to more efficient preclinical trials, accelerated knowledge gain from animal experiments, and shortened time-to-market...

  19. On the usability of quantitative modelling in operations strategy decision making

    NARCIS (Netherlands)

    Akkermans, H.A.; Bertrand, J.W.M.

    1997-01-01

    Quantitative modelling seems admirably suited to help managers in their strategic decision making on operations management issues, but in practice models are rarely used for this purpose. Investigates the reasons why, based on a detailed cross-case analysis of six cases of modelling-supported

  20. Quantitative model analysis with diverse biological data: applications in developmental pattern formation.

    Science.gov (United States)

    Pargett, Michael; Umulis, David M

    2013-07-15

    Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data, however, much of the data available or even acquirable are not quantitative. Data that is not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended to both inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method to use for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. A suite of models to support the quantitative assessment of spread in pest risk analysis

    NARCIS (Netherlands)

    Robinet, C.; Kehlenbeck, H.; Werf, van der W.

    2012-01-01

    In the frame of the EU project PRATIQUE (KBBE-2007-212459 Enhancements of pest risk analysis techniques) a suite of models was developed to support the quantitative assessment of spread in pest risk analysis. This dataset contains the model codes (R language) for the four models in the suite. Three

  2. Evaluating quantitative and qualitative models: An application for nationwide water erosion assessment in Ethiopia

    NARCIS (Netherlands)

    Sonneveld, B.G.J.S.; Keyzer, M.A.; Stroosnijder, L

    2011-01-01

    This paper tests the candidacy of one qualitative response model and two quantitative models for a nationwide water erosion hazard assessment in Ethiopia. After a descriptive comparison of model characteristics the study conducts a statistical comparison to evaluate the explanatory power of the

  4. Quantitative experimental modelling of fragmentation during explosive volcanism

    Science.gov (United States)

    Thordén Haug, Ø.; Galland, O.; Gisler, G.

    2012-04-01

    Phreatomagmatic eruptions result from the violent interaction between magma and an external source of water, such as ground water or a lake. This interaction causes fragmentation of the magma and/or the host rock, resulting in coarse-grained (lapilli) to very fine-grained (ash) material. The products of phreatomagmatic explosions are classically described by their fragment size distribution, which commonly follows power laws of exponent D. Such a descriptive approach, however, considers the final products only and does not provide information on the dynamics of fragmentation. The aim of this contribution is thus to address the following fundamental questions. What physics governs fragmentation processes? How does fragmentation occur through time? What mechanisms produce power law fragment size distributions? And what scaling laws control the exponent D? To address these questions, we performed a quantitative experimental study. The setup consists of a Hele-Shaw cell filled with a layer of cohesive silica flour, at the base of which a pulse of pressurized air is injected, leading to fragmentation of the layer of flour. The fragmentation process is monitored through time using a high-speed camera. By varying systematically the air pressure (P) and the thickness of the flour layer (h) we observed two morphologies of fragmentation: "lift off", where the silica flour above the injection inlet is ejected upwards, and "channeling", where the air pierces through the layer along a sub-vertical conduit. By building a phase diagram, we show that the morphology is controlled by P/dgh, where d is the density of the flour and g is the gravitational acceleration. To quantify the fragmentation process, we developed a Matlab image analysis program, which calculates the number and sizes of the fragments, and so the fragment size distribution, during the experiments. The fragment size distributions are in general described by power law distributions of
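
    The power-law exponent D discussed above can be estimated from a set of fragment sizes by a log-log fit of the complementary cumulative count N(>s) ~ s^-D. The sketch below uses synthetic data and is not the authors' Matlab program:

        # Estimate the power-law exponent D of a fragment size distribution.
        import numpy as np

        def powerlaw_exponent(sizes: np.ndarray) -> float:
            s = np.sort(sizes)
            n_greater = np.arange(len(s), 0, -1)  # rank from the top: count of fragments >= s
            slope, _ = np.polyfit(np.log(s), np.log(n_greater), 1)
            return -slope  # exponent D

        sizes = np.random.pareto(1.5, 5000) + 1.0  # synthetic fragments with tail exponent ~1.5
        print(powerlaw_exponent(sizes))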

  5. Quantitative Comparison Between Crowd Models for Evacuation Planning and Evaluation

    NARCIS (Netherlands)

    Viswanathan, V.; Lee, C.E.; Lees, M.H.; Cheong, S.A.; Sloot, P.M.A.

    2014-01-01

    Crowd simulation is rapidly becoming a standard tool for evacuation planning and evaluation. However, the many crowd models in the literature are structurally different, and few have been rigorously calibrated against real-world egress data, especially in emergency situations. In this paper we

  6. Quantitative Research: A Dispute Resolution Model for FTC Advertising Regulation.

    Science.gov (United States)

    Richards, Jef I.; Preston, Ivan L.

    Noting the lack of a dispute mechanism for determining whether an advertising practice is truly deceptive without generating the costs and negative publicity produced by traditional Federal Trade Commission (FTC) procedures, this paper proposes a model based upon early termination of the issues through jointly commissioned behavioral research. The…

  7. Quantitative properties of clustering within modern microscopic nuclear models

    International Nuclear Information System (INIS)

    Volya, A.; Tchuvil’sky, Yu. M.

    2016-01-01

    A method for studying cluster spectroscopic properties of nuclear fragmentation, such as spectroscopic amplitudes, cluster form factors, and spectroscopic factors, is developed on the basis of modern precision nuclear models that take into account the mixing of large-scale shell-model configurations. Alpha-cluster channels are considered as an example. A mathematical proof of the need for taking into account the channel-wave-function renormalization generated by exchange terms of the antisymmetrization operator (Fliessbach effect) is given. Examples where this effect is confirmed by a high quality of the description of experimental data are presented. By and large, the method in question extends substantially the possibilities for studying clustering phenomena in nuclei and for improving the quality of their description.

  8. Quantitative Risk Modeling of Fire on the International Space Station

    Science.gov (United States)

    Castillo, Theresa; Haught, Megan

    2014-01-01

    The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.

  9. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    Science.gov (United States)

    Wu, Zujian; Pang, Wei; Coghill, George M

    2015-01-01

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework can learn the relationships between biochemical reactants qualitatively and make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental studies in a wet laboratory. In this way, natural biochemical systems can be better understood.

  10. Quantitative modeling of selective lysosomal targeting for drug design

    DEFF Research Database (Denmark)

    Trapp, Stefan; Rosania, G.; Horobin, R.W.

    2008-01-01

    log K ow. These findings were validated with experimental results and by a comparison to the properties of antimalarial drugs in clinical use. For ten active compounds, nine were predicted to accumulate to a greater extent in lysosomes than in other organelles, six of these were in the optimum range...... predicted by the model and three were close. Five of the antimalarial drugs were lipophilic weak dibasic compounds. The predicted optimum properties for a selective accumulation of weak bivalent bases in lysosomes are consistent with experimental values and are more accurate than any prior calculation...

  11. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

    Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and are frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/

  13. A quantitative and dynamic model for plant stem cell regulation.

    Directory of Open Access Journals (Sweden)

    Florian Geier

    Full Text Available Plants maintain pools of totipotent stem cells throughout their entire life. These stem cells are embedded within specialized tissues called meristems, which form the growing points of the organism. The shoot apical meristem of the reference plant Arabidopsis thaliana is subdivided into several distinct domains, which execute diverse biological functions, such as tissue organization, cell proliferation and differentiation. The number of cells required for growth and organ formation changes over the course of a plant's life, while the structure of the meristem remains remarkably constant. Thus, regulatory systems must be in place, which allow for an adaptation of cell proliferation within the shoot apical meristem, while maintaining the organization at the tissue level. To advance our understanding of this dynamic tissue behavior, we measured domain sizes as well as cell division rates of the shoot apical meristem under various environmental conditions, which cause adaptations in meristem size. Based on our results we developed a mathematical model to explain the observed changes by a cell pool size dependent regulation of cell proliferation and differentiation, which is able to correctly predict CLV3 and WUS over-expression phenotypes. While the model shows stem cell homeostasis under constant growth conditions, it predicts a variation in stem cell number under changing conditions. Consistent with our experimental data this behavior is correlated with variations in cell proliferation. Therefore, we investigate different signaling mechanisms, which could stabilize stem cell number despite variations in cell proliferation. Our results shed light on the dynamic constraints of stem cell pool maintenance in the shoot apical meristem of Arabidopsis in different environmental conditions and developmental states.

  14. Mapping quantitative trait loci in a selectively genotyped outbred population using a mixture model approach

    NARCIS (Netherlands)

    Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van

    1999-01-01

    A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by

  15. Quantitative modelling of HDPE spurt experiments using wall slip and generalised Newtonian flow

    NARCIS (Netherlands)

    Doelder, den C.F.J.; Koopmans, R.J.; Molenaar, J.

    1998-01-01

    A quantitative model to describe capillary rheometer experiments is presented. The model can generate 'two-branched' discontinuous flow curves and the associated pressure oscillations. Polymer compressibility in the barrel, incompressible axisymmetric generalised Newtonian flow in the die, and a

  16. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
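
    A toy example of the time-step sensitivity described above: in overdamped mechanics, forward-Euler integration of a single spring-like edge becomes unstable when the step exceeds 2/k. This is a generic illustration, not the authors' vertex-model code:

        # Relax one edge toward rest length l0 with two different time steps.
        def relax_edge(dt: float, steps: int = 200, k: float = 1.0, l0: float = 1.0) -> float:
            x = 2.0  # initial edge length
            for _ in range(steps):
                x += dt * (-k * (x - l0))  # overdamped force integration
            return x

        print(relax_edge(dt=0.1))  # converges to ~1.0
        print(relax_edge(dt=2.5))  # diverges: forward Euler is unstable for dt > 2/k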

  17. Communication: Recovering the flat-plane condition in electronic structure theory at semi-local DFT cost

    Science.gov (United States)

    Bajaj, Akash; Janet, Jon Paul; Kulik, Heather J.

    2017-11-01

    The flat-plane condition is the union of two exact constraints in electronic structure theory: (i) energetic piecewise linearity with fractional electron removal or addition and (ii) invariant energetics with change in electron spin in a half filled orbital. Semi-local density functional theory (DFT) fails to recover the flat plane, exhibiting convex fractional charge errors (FCE) and concave fractional spin errors (FSE) that are related to delocalization and static correlation errors. We previously showed that DFT+U eliminates FCE but now demonstrate that, like other widely employed corrections (i.e., Hartree-Fock exchange), it worsens FSE. To find an alternative strategy, we examine the shape of semi-local DFT deviations from the exact flat plane and we find this shape to be remarkably consistent across ions and molecules. We introduce the judiciously modified DFT (jmDFT) approach, wherein corrections are constructed from few-parameter, low-order functional forms that fit the shape of semi-local DFT errors. We select one such physically intuitive form and incorporate it self-consistently to correct semi-local DFT. We demonstrate on model systems that jmDFT represents the first easy-to-implement, no-overhead approach to recovering the flat plane from semi-local DFT.
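
    Written out, the two exact constraints whose union forms the flat plane are, in generic notation paraphrasing the abstract:

        % Flat-plane constraints: (i) piecewise linearity in fractional
        % electron number, (ii) constancy with fractional spin at half filling.
        \begin{align}
          E(N_0+\delta) &= (1-\delta)\,E(N_0) + \delta\,E(N_0+1), & 0 \le \delta \le 1,\\
          E(n_\uparrow, n_\downarrow) &= \mathrm{const} & \text{for } n_\uparrow + n_\downarrow = 1.
        \end{align}

    Convex deviations from the first condition are the fractional charge errors (FCE) and concave deviations from the second are the fractional spin errors (FSE) mentioned above.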

  18. Quantitative Model for Economic Analyses of Information Security Investment in an Enterprise Information System

    Directory of Open Access Journals (Sweden)

    Bojanc Rok

    2012-11-01

    Full Text Available The paper presents a mathematical model for the optimal security-technology investment evaluation and decision-making processes based on the quantitative analysis of security risks and digital asset assessments in an enterprise. The model makes use of the quantitative analysis of different security measures that counteract individual risks by identifying the information system processes in an enterprise and the potential threats. The model comprises the target security levels for all identified business processes and the probability of a security accident together with the possible loss the enterprise may suffer. The selection of security technology is based on the efficiency of selected security measures. Economic metrics are applied for the efficiency assessment and comparative analysis of different protection technologies. Unlike the existing models for evaluation of the security investment, the proposed model allows direct comparison and quantitative assessment of different security measures. The model allows deep analyses and computations providing quantitative assessments of different options for investments, which translate into recommendations facilitating the selection of the best solution and the decision-making thereof. The model was tested using empirical examples with data from real business environment.

  19. DFT/TDDFT study on the electronic structure and spectral properties in annulated analogue of phenyl heteroazulene derivative

    International Nuclear Information System (INIS)

    Gasiorski, P.; Danel, K.S.; Matusiewicz, M.; Uchacz, T.; Kuźnik, W.; Piatek, Ł.; Kityk, A.V.

    2012-01-01

    Highlights: ► Cyclic voltammetry study of heteroazulene derivative PTNA. ► DFT/TDDFT/PCM calculations of molecular geometry and electronic states in PTNA. ► TDDFT/PCM calculations of the absorption and fluorescence spectra in PTNA. ► Comparison between TDDFT/PCM calculated and measured optical spectra. - Abstract: The paper reports a DFT/TDDFT study on the electronic structure and spectral properties of the seven-membered annulated heteroazulene derivative 6-phenyl-6H-5,6,7-triazadibenzo[f,h]naphtho[3,2,1-cd]azulene (PTNA) by means of the polarizable continuum model (PCM) and the Lippert–Mataga–Onsager reaction field (LM-ORF) model at the B3LYP/6-31+G(d,p) level of theory. The results of the calculations are compared with the measured optical absorption and fluorescence spectra as well as with the cyclic voltammetry data. The DFT/TDDFT methods exhibit rather good quantitative agreement regarding the spectral position of the first absorption band; the discrepancy between experiment and theory is less than 0.1 eV. For the fluorescence emission, the TDDFT calculations underestimate the transition energy by about 0.45 eV. The discrepancy should be attributed to insufficient accuracy of the TDDFT optimization in the excited state. In the polar solvent environment, all the TDDFT/PCM approaches give the bathochromic (red) shift for the fluorescence emission and the hypsochromic (blue) shift for the optical absorption, in accordance with the experimental observation. For the fluorescence emission, fairly good agreement with experiment is provided by the hybrid approach combining the TDDFT/PCM optimization with semiempirical electronic structure calculations by the PM3 method and the LM-ORF solvation model, which predicts the emission energy in different solvents with an accuracy better than 0.06 eV.

  20. Quantitative modeling of gene networks of biological systems using fuzzy Petri nets and fuzzy sets

    Directory of Open Access Journals (Sweden)

    Raed I. Hamed

    2018-01-01

    Full Text Available Quantitative modeling of biological systems has become an essential computational approach in the design of novel, and the analysis of existing, biological systems. However, kinetic data describing the system's dynamics need to be known in order to obtain relevant results with conventional modeling techniques. Such data are frequently hard or even impossible to obtain. Here, we present a quantitative fuzzy logical modeling approach that can cope with unknown kinetic data and thus produce relevant results even though the kinetic data are incomplete or only vaguely defined. Moreover, the approach can be used in combination with existing state-of-the-art quantitative modeling methods in just certain parts of the system, i.e., where the data are missing. The case study of the approach proposed in this paper is performed on a model of a nine-gene network. We propose a type of FPN model based on fuzzy sets to handle the quantitative modeling of biological systems. The tests of our model show that it is feasible and quite powerful for data simulation and reasoning with fuzzy expert systems.
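
    A toy firing rule of the kind used in fuzzy Petri nets, offered only to illustrate the formalism named above (a generic textbook form, not the paper's model): the degree of the output token is the minimum of the input degrees scaled by the rule's certainty factor.

        # Generic fuzzy-Petri-net transition firing (illustrative only).
        def fire(input_degrees, certainty):
            return min(input_degrees) * certainty

        print(fire([0.8, 0.6], certainty=0.9))  # -> 0.54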

  1. A quantitative model of the cardiac ventricular cell incorporating the transverse-axial tubular system

    Czech Academy of Sciences Publication Activity Database

    Pásek, Michal; Christé, G.; Šimurda, J.

    2003-01-01

    Roč. 22, č. 3 (2003), s. 355-368 ISSN 0231-5882 R&D Projects: GA ČR GP204/02/D129 Institutional research plan: CEZ:AV0Z2076919 Keywords : cardiac cell * tubular system * quantitative modelling Subject RIV: BO - Biophysics Impact factor: 0.794, year: 2003

  2. Systematic Analysis of Quantitative Logic Model Ensembles Predicts Drug Combination Effects on Cell Signaling Networks

    Science.gov (United States)

    2016-08-27

    bovine serum albumin (BSA) diluted to the amount corresponding to that in the media of the stimulated cells. Phospho-JNK comprises two isoforms whose... Supplementary information accompanies this paper on the CPT: Pharmacometrics & Systems Pharmacology website (http://www.wileyonlinelibrary.com/psp4).

  3. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    International Nuclear Information System (INIS)

    Bindschadler, Michael; Alessio, Adam M; Modgil, Dimple; La Riviere, Patrick J; Branch, Kelley R

    2014-01-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml min^-1 g^-1; cardiac output = 3, 5, 8 L min^-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5%, and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This
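
    For contrast with the model-based estimators, the qualitative slope method mentioned above can be sketched in a few lines: MBF is approximated by the maximum upslope of the myocardial time-attenuation curve divided by the peak of the arterial input function. The curves and sampling below are synthetic assumptions, not the study's data:

        import numpy as np

        def slope_mbf(t, tissue_hu, aif_hu):
            upslope = np.max(np.gradient(tissue_hu, t))  # HU per second
            return upslope / np.max(aif_hu)              # proportional to flow

        t = np.linspace(0, 30, 31)                                 # 1 s sampling, 30 s scan
        aif = 400 * np.exp(-0.5 * (t - 10) ** 2 / 4)               # synthetic arterial bolus
        tissue = 40 * (1 - np.exp(-np.clip(t - 8, 0, None) / 6))   # synthetic tissue uptake
        print(slope_mbf(t, tissue, aif))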

  4. Statistical analysis of probabilistic models of software product lines with quantitative constraints

    DEFF Research Database (Denmark)

    Beek, M.H. ter; Legay, A.; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra...... of certain behaviour to the expected average cost of products. This is supported by a Maude implementation of QFLan, integrated with the SMT solver Z3 and the distributed statistical model checker MultiVeStA. Our approach is illustrated with a bikes product line case study....

  5. Quantitative analysis of CT brain images: a statistical model incorporating partial volume and beam hardening effects

    International Nuclear Information System (INIS)

    McLoughlin, R.F.; Ryan, M.V.; Heuston, P.M.; McCoy, C.T.; Masterson, J.B.

    1992-01-01

    The purpose of this study was to construct and evaluate a statistical model for the quantitative analysis of computed tomographic brain images. Data were derived from standard sections in 34 normal studies. A model representing the intracranial pure tissue and partial volume areas, with allowance for beam hardening, was developed. The average percentage error in estimation of areas, derived from phantom tests using the model, was 28.47%. We conclude that our model is not sufficiently accurate to be of clinical use, even though allowance was made for partial volume and beam hardening effects. (author)

  6. QSPR models of n-octanol/water partition coefficients and aqueous solubility of halogenated methyl-phenyl ethers by DFT method.

    Science.gov (United States)

    Zeng, Xiao-Lan; Wang, Hong-Jun; Wang, Yan

    2012-02-01

    The possible molecular geometries of 134 halogenated methyl-phenyl ethers were optimized at the B3LYP/6-31G(*) level with the Gaussian 98 program. The calculated structural parameters were taken as theoretical descriptors to establish two novel QSPR models for predicting the aqueous solubility (-lgS(w,l)) and n-octanol/water partition coefficient (lgK(ow)) of halogenated methyl-phenyl ethers. The two models achieved in this work both contain three variables: energy of the lowest unoccupied molecular orbital (E(LUMO)), most positive atomic partial charge in the molecule (q(+)), and quadrupole moment (Q(yy) or Q(zz)), whose R values are 0.992 and 0.970, respectively; their standard errors of estimate in modeling (SD) are 0.132 and 0.178, respectively. The results of leave-one-out (LOO) cross-validation for the training set and validation with external test sets both show that the models obtained exhibit optimum stability and good predictive power. We suggest that the two QSPR models derived here can be used to predict S(w,l) and K(ow) accurately for non-tested halogenated methyl-phenyl ether congeners. Copyright © 2011 Elsevier Ltd. All rights reserved.
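
    The form of such a three-descriptor model can be reproduced with an ordinary least-squares fit; the descriptor and property values below are made-up placeholders, not the paper's data or coefficients:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Columns: E_LUMO (a.u.), q+ (e), Q_yy (a.u.) -- hypothetical values.
        X = np.array([
            [-0.021, 0.284, -62.1],
            [-0.034, 0.301, -64.8],
            [-0.047, 0.317, -67.9],
            [-0.058, 0.330, -70.2],
        ])
        y = np.array([3.1, 3.6, 4.2, 4.7])  # hypothetical lgKow values

        model = LinearRegression().fit(X, y)
        print(model.coef_, model.intercept_)
        print(model.predict([[-0.040, 0.310, -66.0]]))  # hypothetical new congener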

  7. Discussions on the non-equilibrium effects in the quantitative phase field model of binary alloys

    International Nuclear Information System (INIS)

    Zhi-Jun, Wang; Jin-Cheng, Wang; Gen-Cang, Yang

    2010-01-01

    All the quantitative phase field models try to get rid of the artificial factors of solutal drag, interface diffusion and interface stretch in the diffuse interface. These artificial non-equilibrium effects, due to the introduction of the diffuse interface, are analysed based on the thermodynamic status across the diffuse interface in the quantitative phase field model of binary alloys. Results indicate that the non-equilibrium effects are related to the negative driving force in the local region of the solid side across the diffuse interface. The negative driving force results from the fact that the phase field model is derived from an equilibrium condition but used to simulate the non-equilibrium solidification process. The interface thickness dependence of the non-equilibrium effects and its restriction on large scale simulation are also discussed. (cross-disciplinary physics and related areas of science and technology)

  8. Development of quantitative atomic modeling for tungsten transport study Using LHD plasma with tungsten pellet injection

    International Nuclear Information System (INIS)

    Murakami, I.; Sakaue, H.A.; Suzuki, C.; Kato, D.; Goto, M.; Tamura, N.; Sudo, S.; Morita, S.

    2014-10-01

    Quantitative tungsten study with reliable atomic modeling is important for the successful achievement of ITER and fusion reactors. We have developed tungsten atomic modeling for understanding the tungsten behavior in fusion plasmas. The modeling is applied to the analysis of tungsten spectra observed from currentless plasmas of the Large Helical Device (LHD) with tungsten pellet injection. We found that extreme ultraviolet (EUV) lines of W24+ to W33+ ions are very sensitive to electron temperature (Te) and useful for examining the tungsten behavior in edge plasmas. Based on the first quantitative analysis of the measured spatial profile of the W44+ ion, the tungsten concentration is determined to be n(W44+)/ne = 1.4x10^-4 and the total radiation loss is estimated as ~4 MW, roughly half the total NBI power. (author)

  9. An analytical study of the improved nonlinear tolerance of DFT-spread OFDM and its unitary-spread OFDM generalization.

    Science.gov (United States)

    Shulkind, Gal; Nazarathy, Moshe

    2012-11-05

    DFT-spread (DFT-S) coherent optical OFDM was numerically and experimentally shown to provide improved nonlinear tolerance over an optically amplified dispersion uncompensated fiber link, relative to both conventional coherent OFDM and single-carrier transmission. Here we provide an analytic model rigorously accounting for this numerical result and precisely predicting the optimal bandwidth per DFT-S sub-band (or equivalently the optimal number of sub-bands per optical channel) required in order to maximize the link non-linear tolerance (NLT). The NLT advantage of DFT-S OFDM is traced to the particular statistical dependency introduced among the OFDM sub-carriers by means of the DFT spreading operation. We further extend DFT-S to a unitary-spread generalized modulation format which includes as special cases the DFT-S scheme as well as a new format which we refer to as wavelet-spread (WAV-S) OFDM, replacing the spreading DFTs by Hadamard matrices which have elements +/-1 and hence are multiplier-free. The extra complexity incurred in the spreading operation is almost negligible; however, the performance improvement with WAV-S relative to plain OFDM is more modest than that achieved by DFT-S, which remains the preferred format for nonlinear tolerance improvement, outperforming both plain OFDM and single-carrier schemes.
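
    The spreading operation itself is a small unitary pre-transform: M QAM symbols are passed through an M-point DFT before the usual N-point IFFT, so each sub-carrier carries a mixture of all M symbols. A minimal numpy sketch, with an illustrative sub-band size and carrier mapping (not the paper's system parameters):

        import numpy as np

        N, M = 64, 16  # total sub-carriers, symbols per sub-band
        qam = (np.random.choice([-1, 1], M) + 1j * np.random.choice([-1, 1], M)) / np.sqrt(2)

        spread = np.fft.fft(qam) / np.sqrt(M)  # unitary DFT spreading
        grid = np.zeros(N, dtype=complex)
        grid[:M] = spread                      # map the sub-band onto the carrier grid
        tx = np.fft.ifft(grid) * np.sqrt(N)    # conventional OFDM modulation

        # The receiver inverts both transforms over an ideal channel.
        rx_grid = np.fft.fft(tx) / np.sqrt(N)
        rx_qam = np.fft.ifft(rx_grid[:M]) * np.sqrt(M)
        print(np.allclose(rx_qam, qam))  # True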

  10. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
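
    Conceptually, the sub-model method trains separate regressions on restricted composition ranges and blends their predictions. The sketch below uses PLSRegression on synthetic data with a naive 50/50 blend standing in for the paper's blending scheme; all data and the split point are illustrative assumptions:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 20))                     # synthetic "spectra"
        y = 5 * X[:, 0] + 10 + 0.1 * rng.normal(size=100)  # synthetic composition

        low, high = y < 10, y >= 10                        # limited-range training sets
        m_low = PLSRegression(n_components=3).fit(X[low], y[low])
        m_high = PLSRegression(n_components=3).fit(X[high], y[high])

        x_new = X[:5]
        blended = 0.5 * m_low.predict(x_new) + 0.5 * m_high.predict(x_new)
        print(blended.ravel())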

  11. Benchmarking the DFT+U method for thermochemical calculations of uranium molecular compounds and solids.

    Science.gov (United States)

    Beridze, George; Kowalski, Piotr M

    2014-12-18

    Ability to perform a feasible and reliable computation of thermochemical properties of chemically complex actinide-bearing materials would be of great importance for nuclear engineering. Unfortunately, density functional theory (DFT), which in many instances is the only affordable ab initio method, often fails for actinides. Among various shortcomings, it leads to wrong estimates of enthalpies of reactions between actinide-bearing compounds, putting the applicability of the DFT approach to the modeling of thermochemical properties of actinide-bearing materials into question. Here we test the performance of the DFT+U method - a computationally affordable extension of DFT that explicitly accounts for the correlations between f-electrons - for the prediction of the thermochemical properties of simple uranium-bearing molecular compounds and solids. We demonstrate that the DFT+U approach significantly improves the description of reaction enthalpies for the uranium-bearing gas-phase molecular compounds and solids, and the deviations from the experimental values are comparable to those obtained with much more computationally demanding methods. Good results are obtained with Hubbard U parameter values derived using the linear response method of Cococcioni and de Gironcoli. We found that the value of the Coulomb on-site repulsion, represented by the Hubbard U parameter, strongly depends on the oxidation state of the uranium atom. Last, but not least, we demonstrate that the thermochemistry data can be successfully used to estimate the value of the Hubbard U parameter needed for DFT+U calculations.
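
    The linear-response recipe referred to above can be stated compactly: U is obtained from the bare and self-consistent responses of the localized-shell occupation n to a local potential shift alpha (a schematic single-site form of the method, not the paper's full derivation):

        % Linear-response Hubbard U (Cococcioni and de Gironcoli), schematic:
        U = \chi_0^{-1} - \chi^{-1}, \qquad
        \chi_0 = \left.\frac{\partial n}{\partial \alpha}\right|_{\mathrm{bare}}, \quad
        \chi = \left.\frac{\partial n}{\partial \alpha}\right|_{\mathrm{SCF}}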

  12. Quantitative chemogenomics: machine-learning models of protein-ligand interaction.

    Science.gov (United States)

    Andersson, Claes R; Gustafsson, Mats G; Strömbergsson, Helena

    2011-01-01

    Chemogenomics is an emerging interdisciplinary field that lies in the interface of biology, chemistry, and informatics. Most of the currently used drugs are small molecules that interact with proteins. Understanding protein-ligand interaction is therefore central to drug discovery and design. In the subfield of chemogenomics known as proteochemometrics, protein-ligand-interaction models are induced from data matrices that consist of both protein and ligand information along with some experimentally measured variable. The two general aims of this quantitative multi-structure-property-relationship modeling (QMSPR) approach are to exploit sparse/incomplete information sources and to obtain more general models covering larger parts of the protein-ligand space than traditional approaches that focus mainly on specific targets or ligands. The data matrices, usually obtained from multiple sparse/incomplete sources, typically contain series of proteins and ligands together with quantitative information about their interactions. A useful model should ideally be easy to interpret and generalize well to new unseen protein-ligand combinations. Resolving this requires sophisticated machine-learning methods for model induction, combined with adequate validation. This review is intended to provide a guide to methods and data sources suitable for this kind of protein-ligand-interaction modeling. An overview of the modeling process is presented including data collection, protein and ligand descriptor computation, data preprocessing, machine-learning-model induction and validation. Concerns and issues specific for each step in this kind of data-driven modeling will be discussed. © 2011 Bentham Science Publishers

  13. Rate constants of hydroxyl radical oxidation of polychlorinated biphenyls in the gas phase: A single−descriptor based QSAR and DFT study

    International Nuclear Information System (INIS)

    Yang, Zhihui; Luo, Shuang; Wei, Zongsu; Ye, Tiantian; Spinney, Richard; Chen, Dong; Xiao, Ruiyang

    2016-01-01

    The second-order rate constants (k) of hydroxyl radical (·OH) with polychlorinated biphenyls (PCBs) in the gas phase are of scientific and regulatory importance for assessing their global distribution and fate in the atmosphere. Due to the limited number of measured k values, there is a need to model the k values for unknown PCB congeners. In the present study, we developed a quantitative structure-activity relationship (QSAR) model with quantum chemical descriptors using a sequential approach, including correlation analysis, principal component analysis, multi-linear regression, validation, and estimation of the applicability domain. The result indicates that a single descriptor, polarizability (α), plays an important role in determining the reactivity, with a global standardized function of ln k = -0.054 × α - 19.49 at 298 K. In order to validate the QSAR-predicted k values and expand the current k value database for PCB congeners, an independent method, density functional theory (DFT), was employed to calculate the kinetics and thermodynamics of the gas-phase ·OH oxidation of 2,4′,5-trichlorobiphenyl (PCB31), 2,2′,4,4′-tetrachlorobiphenyl (PCB47), 2,3,4,5,6-pentachlorobiphenyl (PCB116), 3,3′,4,4′,5,5′-hexachlorobiphenyl (PCB169), and 2,3,3′,4,5,5′,6-heptachlorobiphenyl (PCB192) at 298 K at the B3LYP/6-311++G**//B3LYP/6-31+G** level of theory. The QSAR-predicted and DFT-calculated k values for ·OH oxidation of these PCB congeners exhibit excellent agreement with the experimental k values, indicating the robustness and predictive power of the single-descriptor based QSAR model we developed. - Highlights: • We developed a single-descriptor based QSAR model for ·OH oxidation of PCBs. • We independently validated the QSAR predicted k values of five PCB congeners with the DFT method. • The QSAR predicted and DFT calculated k for the five PCB congeners exhibit excellent agreement.
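
    Using the global function quoted above is a one-line computation. The polarizability value below is a made-up illustration, and the units of k follow those of the fitted experimental data:

        import math

        def rate_constant(alpha: float) -> float:
            """ln k = -0.054 * alpha - 19.49 at 298 K (regression quoted in the abstract)."""
            return math.exp(-0.054 * alpha - 19.49)

        print(rate_constant(200.0))  # hypothetical polarizability value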

  14. DFT and TD-DFT calculations of metallotetraphenylporphyrin and metallotetraphenylporphyrin fullerene complexes as potential dye sensitizers for solar cells

    Science.gov (United States)

    El Mahdy, A. M.; Halim, Shimaa Abdel; Taha, H. O.

    2018-05-01

    Density functional theory (DFT) and time-dependent DFT calculations have been employed to model metallotetraphenylporphyrin dyes and metallotetraphenylporphyrin-fullerene complexes in order to investigate the geometries, electronic structures, density of states, non-linear optical properties (NLO), IR-vis spectra, molecular electrostatic potential contours, and electrophilicity. To calculate the excited states of the tetraphenylporphyrin analogs, time-dependent density functional theory (TD-DFT) is used. Their UV-vis spectra were also obtained and a comparison with available experimental and theoretical results is included. The results reveal that the metal and the tertiary butyl groups of the dyes are electron donors, and the tetraphenylporphyrin rings are electron acceptors. The HOMOs of the dyes fall within the (TiO2)60 and Ti38O76 band gaps and support the picture of a typical interfacial electron transfer reaction. The resulting potential drop of Mn-TPP-C60 increased by ca. 3.50% under the effect of the tertiary butyl groups. The increase in the potential drop indicates that the tertiary butyl complexes could be a better choice for the strong operation of molecular rectifiers. The introduction of a metal atom and tertiary butyl groups to the tetraphenylporphyrin moiety leads to a stronger response to the external electric field and induces higher photo-to-current conversion efficiency. This also shifts the absorption in the dyes and makes them potential candidates for harvesting light in the entire visible and near IR region for photovoltaic applications.

  15. Quantitative Analysis of Intra Urban Growth Modeling using socio economic agents by combining cellular automata model with agent based model

    Science.gov (United States)

    Singh, V. K.; Jha, A. K.; Gupta, K.; Srivastav, S. K.

    2017-12-01

    Recent studies indicate that modeling at finer spatial resolutions significantly improves the representation of urban land use dynamics. Geo-computational models such as cellular automata and agent-based models have provided clear evidence for quantifying urban growth patterns within the urban boundary. In recent studies, socio-economic factors such as demography, education rate, household density, parcel price of the current year, distance to road, school, hospital, commercial centers and police station are considered to be the major factors influencing the Land Use Land Cover (LULC) pattern of the city. These factors take a unidirectional approach to the land use pattern, which makes it difficult to analyze the spatial aspects of model results both quantitatively and qualitatively. In this study, a cellular automata model is combined with a generic agent-based model to evaluate the impact of socio-economic factors on the land use pattern. For this purpose, Dehradun, an Indian city, is selected as a case study. Socio-economic factors were collected from a field survey, the Census of India, and the Directorate of Economic Census, Uttarakhand, India. A 3x3 simulation window is used to consider the impact on LULC. The cellular automata model results are examined to identify hot spot areas within the urban area, and the agent-based model uses a logistic regression approach to identify the correlation between each factor and LULC and to classify the available area into low-density, medium-density, or high-density residential or commercial area. In the modeling phase, transition rules, neighborhood effects and cell change factors are used to improve the representation of built-up classes. Significant improvement is observed in the built-up classes, from 84% to 89%. After incorporating the agent-based model with the cellular automata model, the accuracy improved further, from 89% to 94%, in three urban classes, i.e., low density, medium density and commercial classes

  16. A quantitative analysis of instabilities in the linear chiral sigma model

    International Nuclear Information System (INIS)

    Nemes, M.C.; Nielsen, M.; Oliveira, M.M. de; Providencia, J. da

    1990-08-01

    We present a method to construct a complete set of stationary states corresponding to small amplitude motion which naturally includes the continuum solution. The energy-weighted sum rule (EWSR) is shown to provide a quantitative criterion on the importance of instabilities, which are known to occur in nonasymptotically free theories. Our results for the linear σ model should be valid for a large class of models. A unified description of baryon and meson properties in terms of the linear σ model is also given. (author)

  17. Polymorphic ethyl alcohol as a model system for the quantitative study of glassy behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, H E; Schober, H; Gonzalez, M A [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France); Bermejo, F J; Fayos, R; Dawidowski, J [Consejo Superior de Investigaciones Cientificas, Madrid (Spain); Ramos, M A; Vieira, S [Universidad Autonoma de Madrid (Spain)

    1997-04-01

    The nearly universal transport and dynamical properties of amorphous materials or glasses are investigated. Reasonably successful phenomenological models have been developed to account for these properties as well as the behaviour near the glass transition, but quantitative microscopic models have had limited success. One hindrance to these investigations has been the lack of a material which exhibits glass-like properties in more than one phase at a given temperature. This report presents results of neutron-scattering experiments for one such material, ordinary ethyl alcohol, which promises to be a model system for future investigations of glassy behaviour. (author). 8 refs.

  18. A quantitative approach to modeling the information processing of NPP operators under input information overload

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task under input information overload. We first develop an information processing model having multiple stages, which contains the information flow. The uncertainty of the information is then quantified using Conant's model, a kind of information theory. We also investigate the applicability of this approach to quantifying the information reduction of operators under input information overload
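
    The kind of quantity such a model rests on can be illustrated with plain Shannon entropy (generic information theory, not the paper's full multi-stage operator model):

        import math
        from collections import Counter

        def entropy_bits(symbols) -> float:
            """H(X) = -sum p log2 p over the empirical symbol distribution."""
            counts = Counter(symbols)
            n = len(symbols)
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        # Hypothetical stream of alarm states observed by an operator.
        stream = ["ok", "ok", "warn", "ok", "alarm", "warn", "ok", "ok"]
        print(entropy_bits(stream), "bits per observation")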

  19. Parts of the Whole: Strategies for the Spread of Quantitative Literacy: What Models Can Tell Us

    Directory of Open Access Journals (Sweden)

    Dorothy Wallace

    2014-07-01

    Full Text Available Two conceptual frameworks, one from graph theory and one from dynamical systems, have been offered as explanations for complex phenomena in biology and also as possible models for the spread of ideas. The two models are based on different assumptions and thus predict quite different outcomes for the fate of either biological species or ideas. We argue that, depending on the culture in which they exist, one can identify which model is more likely to reflect the survival of two competing ideas. Based on this argument we suggest how two strategies for embedding and normalizing quantitative literacy in a given institution are likely to succeed or fail.

  20. A DFT and semiempirical model-based study of opioid receptor affinity and selectivity in a group of molecules with a morphine structural core.

    Science.gov (United States)

    Bruna-Larenas, Tamara; Gómez-Jeria, Juan S

    2012-01-01

    We report the results of a search for model-based relationships between mu, delta, and kappa opioid receptor binding affinity and molecular structure for a group of molecules having in common a morphine structural core. The wave functions and local reactivity indices were obtained at the ZINDO/1 and B3LYP/6-31G(∗∗) levels of theory for comparison. New developments in the expression for the drug-receptor interaction energy allowed several local atomic reactivity indices to be included, such as local electronic chemical potential, local hardness, and local electrophilicity. These indices, together with a new proposal for the ordering of the independent variables, were incorporated in the statistical study. We found and discussed several statistically significant relationships for mu, delta, and kappa opioid receptor binding affinity at both levels of theory. Some of the new local reactivity indices incorporated in the theory appear in several equations for the first time in the history of model-based equations. Interaction pharmacophores were generated for mu, delta, and kappa receptors. We discuss possible differences regulating binding and selectivity in opioid receptor subtypes. In contrast to purely statistical studies, this work is able to provide microscopic insight into the mechanisms involved in the binding process.

  1. A DFT and Semiempirical Model-Based Study of Opioid Receptor Affinity and Selectivity in a Group of Molecules with a Morphine Structural Core

    Directory of Open Access Journals (Sweden)

    Tamara Bruna-Larenas

    2012-01-01

    Full Text Available We report the results of a search for model-based relationships between mu, delta, and kappa opioid receptor binding affinity and molecular structure for a group of molecules having in common a morphine structural core. The wave functions and local reactivity indices were obtained at the ZINDO/1 and B3LYP/6-31G(∗∗) levels of theory for comparison. New developments in the expression for the drug-receptor interaction energy allowed several local atomic reactivity indices to be included, such as local electronic chemical potential, local hardness, and local electrophilicity. These indices, together with a new proposal for the ordering of the independent variables, were incorporated in the statistical study. We found and discussed several statistically significant relationships for mu, delta, and kappa opioid receptor binding affinity at both levels of theory. Some of the new local reactivity indices incorporated in the theory appear in several equations for the first time in the history of model-based equations. Interaction pharmacophores were generated for mu, delta, and kappa receptors. We discuss possible differences regulating binding and selectivity in opioid receptor subtypes. In contrast to purely statistical studies, this work is able to provide microscopic insight into the mechanisms involved in the binding process.

  2. A DFT + DMFT approach for nanosystems

    Energy Technology Data Exchange (ETDEWEB)

    Turkowski, Volodymyr; Kabir, Alamgir; Nayyar, Neha; Rahman, Talat S, E-mail: vturkows@mail.ucf.ed [Department of Physics, University of Central Florida, Orlando, FL 32816 (United States)

    2010-11-24

    We propose a combined density-functional-theory-dynamical-mean-field-theory (DFT + DMFT) approach for reliable inclusion of electron-electron correlation effects in nanosystems. Compared with the widely used DFT + U approach, this method has several advantages, the most important of which is that it takes into account dynamical correlation effects. The formalism is illustrated through different calculations of the magnetic properties of a set of small iron clusters (number of atoms 2 ≤ N ≤ 5). It is shown that the inclusion of dynamical effects leads to a reduction in the cluster magnetization (as compared to results from DFT + U) and that, even for such small clusters, the magnetization values agree well with experimental estimations. These results justify confidence in the ability of the method to accurately describe the magnetic properties of clusters of interest to nanoscience. (fast track communication)

  3. A DFT + DMFT approach for nanosystems

    International Nuclear Information System (INIS)

    Turkowski, Volodymyr; Kabir, Alamgir; Nayyar, Neha; Rahman, Talat S

    2010-01-01

    We propose a combined density-functional-theory-dynamical-mean-field-theory (DFT + DMFT) approach for reliable inclusion of electron-electron correlation effects in nanosystems. Compared with the widely used DFT + U approach, this method has several advantages, the most important of which is that it takes into account dynamical correlation effects. The formalism is illustrated through different calculations of the magnetic properties of a set of small iron clusters (number of atoms 2 ≤ N ≤ 5). It is shown that the inclusion of dynamical effects leads to a reduction in the cluster magnetization (as compared to results from DFT + U) and that, even for such small clusters, the magnetization values agree well with experimental estimations. These results justify confidence in the ability of the method to accurately describe the magnetic properties of clusters of interest to nanoscience. (fast track communication)

  4. A quantitative and dynamic model of the Arabidopsis flowering time gene regulatory network.

    Directory of Open Access Journals (Sweden)

    Felipe Leal Valentim

    Full Text Available Various environmental signals integrate into a network of floral regulatory genes leading to the final decision on when to flower. Although a wealth of qualitative knowledge is available on how flowering time genes regulate each other, only a few studies have incorporated this knowledge into predictive models. Such models are invaluable as they make it possible to investigate how various types of inputs are combined to give a quantitative readout. To investigate the effect of gene expression disturbances on flowering time, we developed a dynamic model for the regulation of flowering time in Arabidopsis thaliana. Model parameters were estimated based on expression time-courses for relevant genes, and a consistent set of flowering times for plants of various genetic backgrounds. Validation was performed by predicting changes in expression level in mutant backgrounds and comparing these predictions with independent expression data, and by comparison of predicted and experimental flowering times for several double mutants. Remarkably, the model predicts that a disturbance in a particular gene does not necessarily have the largest impact on directly connected genes. For example, the model predicts that a SUPPRESSOR OF OVEREXPRESSION OF CONSTANS 1 (SOC1) mutation has a larger impact on APETALA1 (AP1), which is not directly regulated by SOC1, than on LEAFY (LFY), which is under direct control of SOC1. This was confirmed by expression data. Another model prediction involves the importance of cooperativity in the regulation of AP1 by LFY, a prediction supported by experimental evidence. In conclusion, our model for flowering time gene regulation makes it possible to address how different quantitative inputs are combined into one quantitative output, flowering time.
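
    The cooperativity prediction mentioned above is the kind of mechanism naturally expressed with Hill kinetics. The sketch below integrates a two-gene fragment (LFY cooperatively activating AP1); the reduced network, parameter values and inputs are invented for illustration and are not the published model.

```python
# Two-gene ODE fragment with Hill-type cooperative activation of AP1 by LFY.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, n=4, K=0.5, k_prod=1.0, k_deg=0.5):
    lfy, ap1 = y
    dlfy = 0.2 - k_deg * lfy                                  # constant input, linear decay
    dap1 = k_prod * lfy**n / (K**n + lfy**n) - k_deg * ap1    # cooperative activation (Hill n)
    return [dlfy, dap1]

sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0])
print("steady-state (LFY, AP1):", np.round(sol.y[:, -1], 3))
```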

  5. Tannin structural elucidation and quantitative ³¹P NMR analysis. 1. Model compounds.

    Science.gov (United States)

    Melone, Federica; Saladino, Raffaele; Lange, Heiko; Crestini, Claudia

    2013-10-02

    Tannins and flavonoids are secondary metabolites of plants that display a wide array of biological activities. This peculiarity is related to the inhibition of extracellular enzymes that occurs through the complexation of peptides by tannins. Neither the nature of these interactions nor, more fundamentally, the structure of these heterogeneous polyphenolic molecules is completely clear. This first paper describes the development of a new analytical method for the structural characterization of tannins on the basis of tannin model compounds, employing in situ labeling of all labile H groups (aliphatic OH, phenolic OH, and carboxylic acids) with a phosphorus reagent. The ³¹P NMR analysis of ³¹P-labeled samples allowed the unprecedented quantitative and qualitative structural characterization of hydrolyzable tannins, proanthocyanidins, and catechin tannin model compounds, forming the foundations for the quantitative structural elucidation of a variety of actual tannin samples described in part 2 of this series.

  6. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China*

    Science.gov (United States)

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-01-01

    Ecological compensation is becoming one of the key multidisciplinary issues in the field of resources and environmental management. Considering the changing relationship between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors. PMID:19353749

  7. Refining the statistical model for quantitative immunostaining of surface-functionalized nanoparticles by AFM.

    Science.gov (United States)

    MacCuspie, Robert I; Gorka, Danielle E

    2013-10-01

    Recently, an atomic force microscopy (AFM)-based approach for quantifying the number of biological molecules conjugated to a nanoparticle surface at low number densities was reported. The number of target molecules conjugated to the analyte nanoparticle can be determined with single nanoparticle fidelity using antibody-mediated self-assembly to decorate the analyte nanoparticles with probe nanoparticles (i.e., quantitative immunostaining). This work refines the statistical models used to quantitatively interpret the observations when AFM is used to image the resulting structures. The refinements add terms to the previous statistical models to account for the physical sizes of the analyte nanoparticles, conjugated molecules, antibodies, and probe nanoparticles. Thus, a more physically realistic statistical computation can be implemented for a given sample of known qualitative composition, using the software scripts provided. Example AFM data sets, using horseradish peroxidase conjugated to gold nanoparticles, are presented to illustrate how to implement this method successfully.

  8. Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model

    Science.gov (United States)

    Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi

    2017-09-01

    Residues of harmful dyes such as Auramine O (AO) in herb and food products threaten people's health, so fast and sensitive techniques for detecting these residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used in this paper for the quantitative determination of AO in combination with an improved partial least-squares regression (PLSR) model. The absorbance of herbal samples with different concentrations was obtained by THz-TDS in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences between different concentrations, provided a clear criterion for input interval selection, and improved the accuracy of the detection results. The experimental results indicated that the combination of THz spectroscopy and 2DCOS-PLSR is an excellent quantitative analysis method.
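
    A minimal sketch of the regression step, using synthetic THz absorbance spectra (the 2DCOS interval-selection step of the record is not reproduced here; the band position and noise level are invented):

```python
# PLSR calibration of concentration against synthetic THz absorbance spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
freqs = np.linspace(0.2, 1.6, 200)                  # THz axis
conc = rng.uniform(0.0, 1.0, size=30)               # sample concentrations (a.u.)
band = np.exp(-((freqs - 0.9) ** 2) / 0.01)         # hypothetical absorption band
X = conc[:, None] * band + 0.01 * rng.normal(size=(30, 200))

pls = PLSRegression(n_components=3).fit(X, conc)
print("R^2 on calibration spectra:", round(pls.score(X, conc), 4))
```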

  9. Integration of CFD codes and advanced combustion models for quantitative burnout determination

    Energy Technology Data Exchange (ETDEWEB)

    Javier Pallares; Inmaculada Arauzo; Alan Williams [University of Zaragoza, Zaragoza (Spain). Centre of Research for Energy Resources and Consumption (CIRCE)

    2007-10-15

    CFD codes and advanced kinetics combustion models are extensively used to predict coal burnout in large utility boilers. Modelling approaches based on CFD codes can accurately solve the fluid dynamics equations involved in the problem, but this is usually achieved by including simple combustion models. On the other hand, advanced kinetics combustion models can give a detailed description of the coal combustion behaviour by using a simplified description of the flow field, usually obtained from a zone-method approach. Both approaches correctly describe general trends in coal burnout but fail to predict quantitative values. In this paper a new methodology which takes advantage of both approaches is described. First, CFD solutions were obtained for the combustion conditions in the furnace of the Lamarmora power plant (ASM Brescia, Italy) for a number of different conditions and for three coals. These furnace conditions were then used as inputs for a more detailed chemical combustion model to predict coal burnout. In this model, devolatilization was modelled using a commercial macromolecular network pyrolysis model (FG-DVC). For char oxidation, an intrinsic reactivity approach including thermal annealing, ash inhibition and maceral effects was used. Results from the simulations were compared against plant experimental values, showing a reasonable agreement in trends and quantitative values. 28 refs., 4 figs., 4 tabs.

  10. Predictive value of EEG in postanoxic encephalopathy: A quantitative model-based approach.

    Science.gov (United States)

    Efthymiou, Evdokia; Renzel, Roland; Baumann, Christian R; Poryazova, Rositsa; Imbach, Lukas L

    2017-10-01

    The majority of comatose patients after cardiac arrest do not regain consciousness due to severe postanoxic encephalopathy. Early and accurate outcome prediction is therefore essential in determining further therapeutic interventions. The electroencephalogram is a standardized and commonly available tool used to estimate prognosis in postanoxic patients. The identification of pathological EEG patterns with poor prognosis relies however primarily on visual EEG scoring by experts. We introduced a model-based approach of EEG analysis (state space model) that allows for an objective and quantitative description of spectral EEG variability. We retrospectively analyzed standard EEG recordings in 83 comatose patients after cardiac arrest between 2005 and 2013 in the intensive care unit of the University Hospital Zürich. Neurological outcome was assessed one month after cardiac arrest using the Cerebral Performance Category. For a dynamic and quantitative EEG analysis, we implemented a model-based approach (state space analysis) to quantify EEG background variability independent from visual scoring of EEG epochs. Spectral variability was compared between groups and correlated with clinical outcome parameters and visual EEG patterns. Quantitative assessment of spectral EEG variability (state space velocity) revealed significant differences between patients with poor and good outcome after cardiac arrest: lower mean velocity in temporal electrodes (T4 and T5) was significantly associated with poor prognostic outcome and with pathological visual EEG patterns such as generalized periodic discharges. Model-based quantitative EEG analysis (state space analysis) thus provides a novel, complementary marker for prognosis in postanoxic encephalopathy. Copyright © 2017 Elsevier B.V. All rights reserved.
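
    One generic way to operationalize a "state space velocity" (inspired by, but not identical to, the authors' model) is to embed successive log-power spectra in a low-dimensional space and average the step-to-step distance:

```python
# Mean state-space velocity of successive EEG power spectra (generic construction).
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(7)
fs = 250.0
eeg = rng.normal(size=int(fs * 60))             # hypothetical 1-minute EEG trace

f, t, Sxx = spectrogram(eeg, fs=fs, nperseg=int(2 * fs))
logS = np.log(Sxx.T + 1e-12)                    # epochs x frequency bins
logS -= logS.mean(axis=0)

# Crude 2-D "state space": project epochs onto the two leading principal axes
U, s, Vt = np.linalg.svd(logS, full_matrices=False)
state = logS @ Vt[:2].T

velocity = np.linalg.norm(np.diff(state, axis=0), axis=1).mean()
print(f"mean state-space velocity: {velocity:.3f} (arbitrary units)")
```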

  11. Variable selection in near infrared spectroscopy for quantitative models of homologous analogs of cephalosporins

    Directory of Open Access Journals (Sweden)

    Yan-Chun Feng

    2014-07-01

    Full Text Available Two universal spectral ranges (4550–4100 cm-1 and 6190–5510 cm-1) for the construction of quantitative models of homologous analogs of cephalosporins were proposed by evaluating the performance of five spectral ranges and their combinations, using three data sets of cephalosporins for injection, i.e., cefuroxime sodium, ceftriaxone sodium and cefoperazone sodium. Subsequently, the proposed ranges were validated by using eight calibration sets of other homologous analogs of cephalosporins for injection, namely cefmenoxime hydrochloride, ceftezole sodium, cefmetazole, cefoxitin sodium, cefotaxime sodium, cefradine, cephazolin sodium and ceftizoxime sodium. All the constructed quantitative models for the eight kinds of cephalosporins using these universal ranges could fulfill the requirements for quick quantification. After that, the competitive adaptive reweighted sampling (CARS) algorithm and infrared (IR)–near infrared (NIR) two-dimensional (2D) correlation spectral analysis were used to determine the scientific basis of these two spectral ranges as the universal regions for the construction of quantitative models of cephalosporins. The CARS algorithm demonstrated that the ranges of 4550–4100 cm-1 and 6190–5510 cm-1 included some key wavenumbers which could be attributed to content changes of cephalosporins. The IR–NIR 2D spectral analysis showed that certain wavenumbers in these two regions have strong correlations to the structures of those cephalosporins that were easy to degrade.

  12. Spectroscopy and DFT studies of uranyl carbonate, rutherfordine, UO2CO3: a model for uranium transport, carbon dioxide sequestration, and seawater species

    Science.gov (United States)

    Kalashnyk, N.; Perry, D. L.; Massuyeau, F.; Faulques, E.

    2017-12-01

    Several optical microprobe experiments of the anhydrous uranium carbonate—rutherfordine—are presented in this work and compared to periodic density functional theory results. Rutherfordine is the simplest uranyl carbonate and constitutes an ideal model system for the study of the rich uranium carbonate family relevant for environmental sustainability. Micro-Raman, micro-reflectance, and micro-photoluminescence (PL) spectroscopy studies have been carried out in situ on native, micrometer-sized crystals. The sensitivity of these techniques is sufficient to analyze minute amounts of samples in natural environments without using x-ray analysis. In addition, very intense micro-PL and micro-reflectance spectra that were not reported before add new results on the ground and excited states of this mineral. The optical gap value determined experimentally is found at about 2.6-2.8 eV. Optimized geometry, band structure, and phonon spectra have been calculated. The main vibrational lines are identified and predicted by this theoretical study. This work is pertinent for optical spectroscopy, for identification of uranyl species in various environmental settings, and for nuclear forensic analysis.

  13. Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome.

    Science.gov (United States)

    Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina

    2018-01-01

    The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents the cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer; and (iii) risk assessment for breast cancer.

  14. Qualitative and quantitative guidelines for the comparison of environmental model predictions

    International Nuclear Information System (INIS)

    Scott, M.

    1995-03-01

    The question of how to assess or compare predictions from a number of models is one of concern in the validation of models, in understanding the effects of different models and model parameterizations on model output, and ultimately in assessing model reliability. Comparison of model predictions with observed data is the basic tool of model validation, while comparison of predictions amongst different models provides one measure of model credibility. The guidance provided here offers qualitative and quantitative approaches (including graphical and statistical techniques) to such comparisons for use within the BIOMOVS II project. It is hoped that others may find it useful. It contains little technical information on the actual methods, but several references are provided for the interested reader. The guidelines are illustrated on data from the VAMP CB scenario. Unfortunately, these data do not permit all of the possible approaches to be demonstrated since predicted uncertainties were not provided. The questions considered are concerned with (a) intercomparison of model predictions and (b) comparison of model predictions with the observed data. A series of examples illustrating some of the different types of data structure and some possible analyses has been constructed. A bibliography of references on model validation is provided. It is important to note that the results of the various techniques discussed here, whether qualitative or quantitative, should not be considered in isolation. Overall model performance must also include an evaluation of model structure and formulation, i.e. conceptual model uncertainties, and results for performance measures must be interpreted in this context. Consider a number of models which are used to provide predictions of a number of quantities at a number of time points. In the case of the VAMP CB scenario, the results include predictions of total deposition of Cs-137 and time-dependent concentrations in various

  15. A quantitative dynamic systems model of health-related quality of life among older adults

    Science.gov (United States)

    Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela

    2015-01-01

    Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which is generally suffering from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722

  16. Fixing the cracks in the crystal ball: A maturity model for quantitative risk assessment

    International Nuclear Information System (INIS)

    Rae, Andrew; Alexander, Rob; McDermid, John

    2014-01-01

    Quantitative risk assessment (QRA) is widely practiced in system safety, but there is insufficient evidence that QRA in general is fit for purpose. Defenders of QRA draw a distinction between poor or misused QRA and correct, appropriately used QRA, but this distinction is only useful if we have robust ways to identify the flaws in an individual QRA. In this paper we present a comprehensive maturity model for QRA which covers all the potential flaws discussed in the risk assessment literature and in a collection of risk assessment peer reviews. We provide initial validation of the completeness and realism of the model. Our risk assessment maturity model provides a way to prioritise both process development within an organisation and empirical research within the QRA community. - Highlights: • Quantitative risk assessment (QRA) is widely practiced, but there is insufficient evidence that it is fit for purpose. • A given QRA may be good, or it may not – we need systematic ways to distinguish this. • We have created a maturity model for QRA which covers all the potential flaws discussed in the risk assessment literature. • We have provided initial validation of the completeness and realism of the model. • The maturity model can also be used to prioritise QRA research discipline-wide

  17. Synthesis, Spectroscopic Properties and DFT Calculation of Novel ...

    Indian Academy of Sciences (India)

    Fragmentary abstract: the record reports density functional theory (DFT) and time-dependent density functional theory (TD-DFT) calculations; the pH of the reaction solution was adjusted to 7; an ORTEP diagram for L1 is shown with 30% probability ellipsoids.

  18. Comparison of semi-quantitative and quantitative dynamic contrast-enhanced MRI evaluations of vertebral marrow perfusion in a rat osteoporosis model.

    Science.gov (United States)

    Zhu, Jingqi; Xiong, Zuogang; Zhang, Jiulong; Qiu, Yuyou; Hua, Ting; Tang, Guangyu

    2017-11-14

    This study aims to investigate the technical feasibility of semi-quantitative and quantitative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in the assessment of longitudinal changes of marrow perfusion in a rat osteoporosis model, using bone mineral density (BMD) measured by micro-computed tomography (micro-CT) and histopathology as the gold standards. Fifty rats were randomly assigned to the control group (n=25) and the ovariectomy (OVX) group, whose bilateral ovaries were excised (n=25). Semi-quantitative and quantitative DCE-MRI, micro-CT, and histopathological examinations were performed on lumbar vertebrae at baseline and 3, 6, 9, and 12 weeks after operation. The differences between the two groups in terms of the semi-quantitative DCE-MRI parameter (maximum enhancement, Emax), quantitative DCE-MRI parameters (volume transfer constant, Ktrans; interstitial volume, Ve; and efflux rate constant, Kep), micro-CT parameter (BMD), and histopathological parameter (microvessel density, MVD) were compared at each of the time points using an independent-sample t test. The differences in these parameters between baseline and other time points in each group were assessed via Bonferroni's multiple comparison test. A Pearson correlation analysis was applied to assess the relationships between DCE-MRI, micro-CT, and histopathological parameters. In the OVX group, the Emax values decreased significantly compared with those of the control group at weeks 6 and 9 (p=0.003 and 0.004, respectively). The Ktrans values decreased significantly compared with those of the control group from week 3 onward. Compared with semi-quantitative DCE-MRI, the quantitative DCE-MRI parameter Ktrans is a more sensitive and accurate index for detecting early reduced perfusion in osteoporotic bone.

  19. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA.

    Science.gov (United States)

    Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method

  20. Enhanced free energy of extraction of Eu3+ and Am3+ ions towards diglycolamide appended calix[4]arene: insights from DFT-D3 and COSMO-RS solvation models.

    Science.gov (United States)

    Ali, Sk Musharaf

    2017-08-22

    Density functional theory in conjunction with COSMO and COSMO-RS solvation models employing dispersion correction (DFT-D3) has been applied to gain insight into the complexation of Eu3+/Am3+ with diglycolamide (DGA) and calix[4]arene-appended diglycolamide (CAL4DGA) in ionic liquids by studying structures, energetics, thermodynamics and population analysis. The calculated Gibbs free energy for both Eu3+ and Am3+ ions with DGA was found to be smaller than that with CAL4DGA. The entropy of complexation was also found to be reduced to a large extent with DGA compared to complexation with CAL4DGA. The solution-phase free energy was found to be negative and was higher for the Eu3+ ion. The entropy of complexation was not only found to be further reduced but also became negative in the case of DGA alone. Though the entropy was found to be negative, it could not outweigh the large negative enthalpic contribution. The same trend was observed in solution, where the free energy of extraction, ΔG, for Eu3+ ions was shown to be higher than that for Am3+ ions towards free DGA. But the values of ΔG and ΔΔG (= ΔG(Eu) − ΔG(Am)) were found to be much higher with CAL4DGA (−12.58 kcal mol-1) in the presence of nitrate ions compared to DGA (−1.69 kcal mol-1) due to enhanced electronic interaction and a positive entropic contribution. Furthermore, both the COSMO and COSMO-RS models predict very close values of ΔΔΔG (= ΔΔG(CAL4DGA) − ΔΔG(nDGA)), indicating that both solvation models could be applied for evaluating the metal ion selectivity. The value of the reaction free energy was found to be higher after dispersion correction. The charge on the Eu and Am atoms for the complexes with DGA and CAL4DGA indicates a charge-dipole type interaction leading to strong binding energy. The present theoretical results support the experimental findings and thus might be of importance in the design of functionalized ligands.
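
    Written out compactly, the selectivity measures quoted above are nested free-energy differences (notation as in the record):

$$
\Delta\Delta G \;=\; \Delta G_{\mathrm{Eu}} - \Delta G_{\mathrm{Am}}, \qquad
\Delta\Delta\Delta G \;=\; \Delta\Delta G_{\mathrm{CAL4DGA}} - \Delta\Delta G_{n\mathrm{DGA}} .
$$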

  1. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as cyber-physical systems, which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.

  2. Comparison of conventional, model-based quantitative planar, and quantitative SPECT image processing methods for organ activity estimation using In-111 agents

    International Nuclear Information System (INIS)

    He, Bin; Frey, Eric C

    2006-01-01

    Accurate quantification of organ radionuclide uptake is important for patient-specific dosimetry. The quantitative accuracy of conventional conjugate view methods is limited by overlap of projections from different organs and background activity, and by attenuation and scatter. In this work, we propose and validate a quantitative planar (QPlanar) processing method based on maximum likelihood (ML) estimation of organ activities using 3D organ VOIs and a projector that models the image-degrading effects. Both a physical phantom experiment and Monte Carlo simulation (MCS) studies were used to evaluate the new method. In these studies, the accuracies and precisions of organ activity estimates for the QPlanar method were compared with those from conventional planar (CPlanar) processing methods with various corrections for scatter, attenuation and organ overlap, and a quantitative SPECT (QSPECT) processing method. Experimental planar and SPECT projections and registered CT data from an RSD Torso phantom were obtained using a GE Millenium VH/Hawkeye system. The MCS data were obtained from the 3D NCAT phantom with organ activity distributions that modelled the uptake of ¹¹¹In ibritumomab tiuxetan. The simulations were performed using parameters appropriate for the same system used in the RSD torso phantom experiment. The organ activity estimates obtained from the CPlanar, QPlanar and QSPECT methods from both experiments were compared. From the results of the MCS experiment, even with ideal organ overlap correction and background subtraction, CPlanar methods provided limited quantitative accuracy. The QPlanar method with accurate modelling of the physical factors increased the quantitative accuracy at the cost of requiring estimates of the organ VOIs in 3D. The accuracy of QPlanar approached that of QSPECT, but required much less acquisition and computation time. Similar results were obtained from the physical phantom experiment. We conclude that the QPlanar method, based
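
    The general idea behind ML estimation of organ activities can be sketched with a standard ML-EM iteration for Poisson counts (this is the textbook algorithm, not the authors' implementation; the projector and organ count are invented):

```python
# ML-EM estimation of organ activities a from counts y ~ Poisson(A a).
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, size=(100, 3))    # hypothetical projector: 100 pixels, 3 organ VOIs
a_true = np.array([50.0, 20.0, 5.0])
y = rng.poisson(A @ a_true).astype(float)

a = np.ones(3)
for _ in range(200):                         # multiplicative ML-EM updates
    a *= (A.T @ (y / np.maximum(A @ a, 1e-12))) / A.sum(axis=0)
print("estimated organ activities:", np.round(a, 2))
```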

  3. Multicomponent quantitative spectroscopic analysis without reference substances based on ICA modelling.

    Science.gov (United States)

    Monakhova, Yulia B; Mushtakova, Svetlana P

    2017-05-01

    A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and the ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries, generally between 95% and 105%, were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on the analysis of vitamins and caffeine in energy drinks and of aromatic hydrocarbons in motor fuel, with errors within 10%. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence or inaccessibility of reference materials.
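
    A minimal sketch of the idea, assuming synthetic mixture spectra that obey linear (Beer-Lambert) mixing: resolve component contributions with ICA, then calibrate the recovered scores against known concentrations (the paper's actual calibration details differ):

```python
# ICA resolution of mixture spectra followed by score-to-concentration calibration.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
wl = np.linspace(200, 400, 300)                     # wavelength axis, nm
s1 = np.exp(-((wl - 260) ** 2) / 200)               # hypothetical pure spectrum 1
s2 = np.exp(-((wl - 320) ** 2) / 400)               # hypothetical pure spectrum 2
C = rng.uniform(0.1, 1.0, size=(20, 2))             # known calibration concentrations
X = C @ np.vstack([s1, s2]) + 0.005 * rng.normal(size=(20, 300))

scores = FastICA(n_components=2, random_state=0).fit_transform(X)
cal = LinearRegression().fit(scores, C)             # map ICA scores -> concentrations
print("calibration R^2:", round(cal.score(scores, C), 4))
```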

  4. Downscaling SSPs in the GBM Delta - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    Science.gov (United States)

    Allan, Andrew; Barbour, Emily; Salehin, Mashfiqus; Munsur Rahman, Md.; Hutton, Craig; Lazar, Attila

    2016-04-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to the incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations, and recommendations on SSP refinement at local levels.

  5. Downscaling SSPs in Bangladesh - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    Science.gov (United States)

    Allan, A.; Barbour, E.; Salehin, M.; Hutton, C.; Lázár, A. N.; Nicholls, R. J.; Rahman, M. M.

    2015-12-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to the incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations, and recommendations on SSP refinement at local levels.

  6. Quantitative Modeling of Acid Wormholing in Carbonates- What Are the Gaps to Bridge

    KAUST Repository

    Qiu, Xiangdong; Zhao, Weishu; Chang, Frank; Dyer, Steve

    2013-01-01

    Conventional wormhole propagation models largely ignore the impact of reaction products. When such models are implemented in a job design, significant errors in treatment fluid schedule, rate, and volume can result. A more accurate method to simulate carbonate matrix acid treatments would accommodate the effect of reaction products on reaction kinetics. It is the purpose of this work to properly account for these effects. This is an important step in achieving quantitative predictability of wormhole penetration during an acidizing treatment. This paper describes the laboratory procedures taken to obtain the reaction-product-impacted kinetics at downhole conditions using a rotating disk apparatus, and how this new set of kinetics data was implemented in a 3D wormholing model to predict wormhole morphology and penetration velocity. The model explains some of the differences in wormhole morphology observed in limestone core flow experiments where injection pressure impacts the mass transfer of hydrogen ions to the rock surface. The model uses a CT-scan-rendered porosity field to capture the finer details of the rock fabric and then simulates the fluid flow through the rock coupled with reactions. Such a validated model can serve as a base to scale up to the near-wellbore reservoir and 3D radial flow geometry, allowing a more quantitative acid treatment design.

  7. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    Science.gov (United States)

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation and assessment are nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to environmental research centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article. © 2013 Elsevier B.V. All rights reserved.
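
    A toy sketch of such a hierarchical scoring model (all indicator names, weights and the combination rule are invented; the actual model defines five hierarchies of indicators):

```python
# Qualitative ratings -> numbers -> weighted aggregate; self + external evaluation.
RATING = {"poor": 0.0, "fair": 0.5, "good": 1.0}

indicators = {                     # indicator -> (weight, qualitative rating)
    "environmental impact": (0.4, "good"),
    "stakeholder engagement": (0.3, "fair"),
    "transparency": (0.3, "good"),
}

def score(evaluation: dict) -> float:
    """Weighted sum of numerically mapped qualitative ratings (weights sum to 1)."""
    return sum(w * RATING[r] for w, r in evaluation.values())

self_eval = score(indicators)
external_eval = 0.8                              # hypothetical external assessor's score
final = 0.5 * self_eval + 0.5 * external_eval    # dual self/external combination
print(f"SR score: {final:.2f} (scale 0..1)")
```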

  8. Quantitative structure-activity relationship (QSAR) for insecticides: development of predictive in vivo insecticide activity models.

    Science.gov (United States)

    Naik, P K; Singh, T; Singh, H

    2009-07-01

    Quantitative structure-activity relationship (QSAR) analyses were performed independently on data sets belonging to two groups of insecticides, namely the organophosphates and carbamates. Several types of descriptors, including topological, spatial, thermodynamic, information-content, lead-likeness and E-state indices, were used to derive quantitative relationships between insecticide activities and structural properties of chemicals. A systematic search approach based on missing-value, zero-value, simple-correlation and multi-collinearity tests, as well as the use of a genetic algorithm, allowed the optimal selection of the descriptors used to generate the models. The QSAR models developed for both the organophosphate and carbamate groups revealed good predictability, with r(2) values of 0.949 and 0.838 as well as cross-validated q(2) values of 0.890 and 0.765, respectively. In addition, a linear correlation was observed between the predicted and experimental LD(50) values for the test set data, with r(2) of 0.871 and 0.788 for the organophosphate and carbamate groups respectively, indicating that the prediction accuracy of the QSAR models was acceptable. The models were also tested successfully against external validation criteria. QSAR models developed in this study should help the further design of novel potent insecticides.
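
    A minimal QSAR sketch with invented descriptors: apply a crude collinearity filter, then fit a multiple linear regression (the published work used many descriptor classes and a genetic algorithm for selection):

```python
# Descriptor filtering + multiple linear regression on synthetic QSAR data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 8))                     # 40 compounds, 8 hypothetical descriptors
y = 1.5 * X[:, 0] - 0.8 * X[:, 3] + 0.2 * rng.normal(size=40)   # toy activity

# Drop any descriptor highly correlated with an earlier one (crude collinearity test)
corr = np.corrcoef(X, rowvar=False)
keep = [j for j in range(X.shape[1]) if all(abs(corr[j, k]) < 0.9 for k in range(j))]

model = LinearRegression().fit(X[:, keep], y)
print("kept descriptors:", keep, "| r^2:", round(model.score(X[:, keep], y), 3))
```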

  9. A semi-quantitative model for risk appreciation and risk weighing

    DEFF Research Database (Denmark)

    Bos, Peter M.J.; Boon, Polly E.; van der Voet, Hilko

    2009-01-01

    Risk managers need detailed information on (1) the type of effect, (2) the size (severity) of the expected effect(s) and (3) the fraction of the population at risk to decide on well-balanced risk reduction measures. A previously developed integrated probabilistic risk assessment (IPRA) model provides quantitative information on these three parameters. A semi-quantitative tool is presented that combines information on these parameters into easy-to-read charts that will facilitate risk evaluations of exposure situations and decisions on risk reduction measures. This tool is based on a concept … detailed information on the estimated health impact in a given exposure situation. These graphs will facilitate the discussions on appropriate risk reduction measures to be taken.

  10. An ab initio and TD DFT

    Indian Academy of Sciences (India)

    The photophysical behaviour of N-(2-hydroxybenzylidene)aniline, commonly known as salicylideneaniline (SA), has been investigated using the ab initio and DFT levels of theory. The quantum chemical calculations show that the optimized non-planar enol (1) form of the SA molecule is the most stable conformer ...

  11. Fourier Series, the DFT and Shape Modelling

    DEFF Research Database (Denmark)

    Skoglund, Karl

    2004-01-01

    This report provides an introduction to Fourier series, the discrete Fourier transform, complex geometry and Fourier descriptors for shape analysis. The content is aimed at undergraduate and graduate students who wish to learn about Fourier analysis in general, as well as its application to shape...

  12. Reaction pathways of the dissociation of methylal: A DFT study

    Energy Technology Data Exchange (ETDEWEB)

    Frey, H -M; Beaud, P; Gerber, T; Mischler, B; Radi, P P; Tzannis, A -P [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    Schemata for modelling combustion processes do not yet include reaction rates for oxygenated fuels like methylal (DMM), which is considered as an additive or replacement for diesel due to its low sooting propensity. Density functional theory (DFT) studies of the possible reaction pathways for different dissociation steps of methylal are presented. Cleavage of a hydrogen bond to the methoxy group or the central carbon atom was simulated at the BLYP/6-311++G** level of theory. The results are compared to experiment when dissociating and/or ionising DMM with femtosecond pulses. (author) 1 fig., 1 tab., 1 ref.

  13. Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models.

    Science.gov (United States)

    Allen, R J; Rieger, T R; Musante, C J

    2016-03-01

    Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models are rarely identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations.
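
    The flavour of the approach can be sketched with a simple accept/reject selection: sample candidate virtual patients from plausible parameter ranges, simulate a model output, and keep a subset whose outputs approximate a target ("observed") distribution, with no weighting. The toy model, ranges and target statistics below are invented:

```python
# Generate candidate virtual patients, then select (not weight) a virtual population.
import numpy as np

rng = np.random.default_rng(5)
theta = rng.uniform([0.1, 0.5], [2.0, 3.0], size=(20000, 2))   # candidate parameters
output = theta[:, 0] * theta[:, 1]                             # toy model readout

mu, sigma = 1.5, 0.4                                           # "observed" statistics
target = np.exp(-0.5 * ((output - mu) / sigma) ** 2)           # unnormalised target density
accept = rng.random(len(output)) < target / target.max()       # accept/reject selection
vpop = theta[accept]

print(f"virtual population: {len(vpop)} patients; "
      f"output mean {output[accept].mean():.2f}, sd {output[accept].std():.2f}")
```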

  14. Quantitative assessment of manual and robotic microcannulation for eye surgery using new eye model.

    Science.gov (United States)

    Tanaka, Shinichi; Harada, Kanako; Ida, Yoshiki; Tomita, Kyohei; Kato, Ippei; Arai, Fumihito; Ueta, Takashi; Noda, Yasuo; Sugita, Naohiko; Mitsuishi, Mamoru

    2015-06-01

    Microcannulation, a surgical procedure for the eye that requires drug injection into a 60-90 µm retinal vein, is difficult to perform manually. Robotic assistance has been proposed; however, its effectiveness in comparison to manual operation has not been quantified. An eye model has been developed to quantify the performance of manual and robotic microcannulation. The eye model, which is implemented with a force sensor and microchannels, also simulates the mechanical constraints of the instrument's movement. Ten subjects performed microcannulation using the model, with and without robotic assistance. The results showed that the robotic assistance was useful for motion stability when the drug was injected, whereas its positioning accuracy offered no advantage. An eye model was used to quantitatively assess the robotic microcannulation performance in comparison to manual operation. This approach could be valid for a better evaluation of surgical robotic assistance. Copyright © 2014 John Wiley & Sons, Ltd.

  15. Quantitative Decision Making Model for Carbon Reduction in Road Construction Projects Using Green Technologies

    Directory of Open Access Journals (Sweden)

    Woosik Jang

    2015-08-01

    Full Text Available Numerous countries have established policies for reducing greenhouse gas emissions and have suggested goals pertaining to these reductions. To reach the target reduction amounts, studies on the reduction of carbon emissions have been conducted with regard to all stages and processes in construction projects. According to a study on carbon emissions, the carbon emissions generated during the construction stage of road projects account for approximately 76 to 86% of the total carbon emissions, far exceeding those of the other stages, such as maintenance or demolition. Therefore, this study aims to develop a quantitative decision making model that supports the application of green technologies (GTs) to reduce carbon emissions during the construction stage of road construction projects. First, the authors selected environmental soundness, economic feasibility and constructability as the key assessment indices for evaluating 20 GTs. Second, a fuzzy set/qualitative comparative analysis (FS/QCA) was used to establish an objective decision-making model for the assessment of both the quantitative and qualitative characteristics of the key indices. To support the developed model, an expert survey was performed to assess the applicability of each GT from a practical perspective, which was verified with a case study using two additional GTs. The proposed model is expected to support practitioners in the application of suitable GTs to road projects and reduce carbon emissions, resulting in better decision making during road construction projects.

  16. [A quantitative risk assessment model of salmonella on carcass in poultry slaughterhouse].

    Science.gov (United States)

    Zhang, Yu; Chen, Yuzhen; Hu, Chunguang; Zhang, Huaning; Bi, Zhenwang; Bi, Zhenqiang

    2015-05-01

    To construct a quantitative risk assessment model of Salmonella on carcasses in a poultry slaughterhouse and to find effective interventions to reduce Salmonella contamination, we constructed a modular process risk model (MPRM) from evisceration to chilling in an Excel spreadsheet, using the process parameters in poultry and the Salmonella concentration surveillance data of Jinan in 2012. The MPRM was simulated with the @Risk software. The concentration of Salmonella on carcasses after chilling, as calculated by the model, was 1.96 MPN/g. The sensitivity analysis indicated that the correlation coefficients of the Salmonella concentration after defeathering and in the chilling pool were 0.84 and 0.34, respectively; these were therefore the primary factors determining the concentration of Salmonella on carcasses after chilling. The study provides a quantitative assessment model structure for Salmonella on carcasses in a poultry slaughterhouse. Risk managers could control the contamination of Salmonella on carcasses after chilling by reducing the Salmonella concentration after defeathering and in the chilling pool.
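
    The modular structure can be illustrated with a Monte Carlo sketch in which each processing module multiplies the concentration by a random transfer factor (all distributions and parameters below are invented, not the fitted surveillance data):

```python
# Monte Carlo modular process risk model: concentration propagated through modules.
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
conc = rng.lognormal(mean=np.log(1.0), sigma=0.8, size=n)   # entering concentration, MPN/g

# Three generic modules between evisceration and chilling, each a random transfer factor
for sigma in (0.5, 0.4, 0.6):
    conc *= rng.lognormal(mean=-0.3, sigma=sigma, size=n)

print(f"after chilling: mean {conc.mean():.2f} MPN/g, "
      f"95th percentile {np.quantile(conc, 0.95):.2f} MPN/g")
```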

  17. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    Full Text Available The SDN-based controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on the security mechanism, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. Based on an analysis of the controller threat model, we give formal models of the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same kind of controller but also different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN-based controllers: POX, OpenDaylight, Floodlight, and Ryu. The tests, whose outcomes agree with those of traditional qualitative analysis, demonstrate that with our approach we are able to obtain specific security values for different controllers and to present more accurate results.

  18. Quantitative stress measurement of elastic deformation using mechanoluminescent sensor: An intensity ratio model

    Science.gov (United States)

    Cai, Tao; Guo, Songtao; Li, Yongzeng; Peng, Di; Zhao, Xiaofeng; Liu, Yingzheng

    2018-04-01

    The mechanoluminescent (ML) sensor is a newly developed non-invasive technique for stress/strain measurement. However, its application has mostly been restricted to qualitative measurement due to the lack of a well-defined relationship between ML intensity and stress. To achieve accurate stress measurement, an intensity ratio model was proposed in this study to establish a quantitative relationship between the stress condition and the ML intensity in elastic deformation. To verify the proposed model, experiments were carried out on a ML measurement system using resin samples mixed with the sensor material SrAl2O4:Eu2+, Dy3+. The ML intensity ratio was found to depend on the applied stress and strain rate, and the relationship acquired from the experimental results agreed well with the proposed model. The current study provides a physical explanation for the relationship between ML intensity and the stress condition. The proposed model is applicable to various SrAl2O4:Eu2+, Dy3+-based ML measurements in elastic deformation and can provide a useful reference for quantitative stress measurement using ML sensors in general.

  19. Facilitating arrhythmia simulation: the method of quantitative cellular automata modeling and parallel running

    Directory of Open Access Journals (Sweden)

    Mondry Adrian

    2004-08-01

    Full Text Available Abstract Background Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at organ level. Methods We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results We demonstrate that electrical activities at channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of some ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method to weave experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulation feasible. Arrhythmias, as a typical case, can be effectively simulated with the methods
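
    For readers unfamiliar with cellular-automaton models of excitable media, the generic Greenberg-Hastings-style sketch below shows the kind of cell-level rule such a system extends. The paper's model adds quantitative state variables and runs on 3D Visible Human geometry, so this 2D toy is only illustrative.

    ```python
    # Excitable-medium cellular automaton: rest -> excited (if a neighbour
    # is excited) -> refractory -> rest, with periodic boundaries via roll.
    import numpy as np

    REST, EXCITED, REFRACTORY = 0, 1, 2

    def step(grid):
        new = grid.copy()
        excited = (grid == EXCITED).astype(int)
        nbrs = (np.roll(excited, 1, 0) + np.roll(excited, -1, 0) +
                np.roll(excited, 1, 1) + np.roll(excited, -1, 1))
        new[(grid == REST) & (nbrs > 0)] = EXCITED   # excitation spreads
        new[grid == EXCITED] = REFRACTORY            # excited cells recover
        new[grid == REFRACTORY] = REST               # refractory -> rest
        return new

    grid = np.zeros((50, 50), dtype=int)
    grid[25, 25] = EXCITED                           # point stimulus
    for _ in range(10):
        grid = step(grid)
    print((grid == EXCITED).sum(), "cells excited after 10 steps")
    ```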

  20. Nonparametric modeling of longitudinal covariance structure in functional mapping of quantitative trait loci.

    Science.gov (United States)

    Yap, John Stephen; Fan, Jianqing; Wu, Rongling

    2009-12-01

    Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L2 penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
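
    The core of the modified Cholesky idea can be sketched in a few lines: regress each time point on its predecessors (a plain ridge penalty below stands in for the paper's penalized normal likelihood), collect the negated coefficients into a unit lower-triangular T and the residual variances into a diagonal D, and reconstruct the covariance as T^{-1} D T^{-T}. Data and penalty strength are placeholders.

    ```python
    import numpy as np

    def chol_cov_estimate(Y, lam=0.1):
        """Regularized covariance via the modified Cholesky decomposition."""
        n, p = Y.shape
        Yc = Y - Y.mean(axis=0)
        T = np.eye(p)
        d = np.empty(p)
        d[0] = Yc[:, 0].var()
        for j in range(1, p):
            X, y = Yc[:, :j], Yc[:, j]
            # Ridge (L2-penalized) regression of time point j on 1..j-1.
            phi = np.linalg.solve(X.T @ X + lam * np.eye(j), X.T @ y)
            T[j, :j] = -phi
            d[j] = np.var(y - X @ phi)
        Tinv = np.linalg.inv(T)
        return Tinv @ np.diag(d) @ Tinv.T

    rng = np.random.default_rng(0)
    ar1 = 0.5 ** abs(np.subtract.outer(np.arange(5), np.arange(5)))
    Y = rng.multivariate_normal(np.zeros(5), ar1, size=200)
    print(np.round(chol_cov_estimate(Y), 2))  # close to the AR(1) target
    ```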

  1. Quantitative laser diagnostic and modeling study of C2 and CH chemistry in combustion.

    Science.gov (United States)

    Köhler, Markus; Brockhinke, Andreas; Braun-Unkhoff, Marina; Kohse-Höinghaus, Katharina

    2010-04-15

    Quantitative concentration measurements of CH and C2 have been performed in laminar, premixed, flat flames of propene and cyclopentene with varying stoichiometry. A combination of cavity ring-down (CRD) spectroscopy and laser-induced fluorescence (LIF) was used to enable sensitive detection of these species with high spatial resolution. Previously, CH and C2 chemistry had been studied, predominantly in methane flames, to understand potential correlations of their formation and consumption. For flames of larger hydrocarbon fuels, however, quantitative information on these small intermediates is scarce, especially under fuel-rich conditions. Also, the combustion chemistry of C2 in particular has not been studied in detail, and although it has often been observed, its role in potential build-up reactions of higher hydrocarbon species is not well understood. The quantitative measurements performed here are the first to detect both species with good spatial resolution and high sensitivity in the same experiment in flames of C3 and C5 fuels. The experimental profiles were compared with results of combustion modeling to reveal details of the formation and consumption of these important combustion molecules, and the investigation was devoted to assisting the further understanding of the role of C2 and of its potential chemical interdependences with CH and other small radicals.

  2. Validation of molecular crystal structures from powder diffraction data with dispersion-corrected density functional theory (DFT-D)

    DEFF Research Database (Denmark)

    van de Streek, Jacco; Neumann, Marcus A

    2014-01-01

    In 2010 we energy-minimized 225 high-quality single-crystal (SX) structures with dispersion-corrected density functional theory (DFT-D) to establish a quantitative benchmark. For the current paper, 215 organic crystal structures determined from X-ray powder diffraction (XRPD) data and published...

  3. Quantitative model for the generic 3D shape of ICMEs at 1 AU

    Science.gov (United States)

    Démoulin, P.; Janvier, M.; Masías-Meza, J. J.; Dasso, S.

    2016-10-01

    Context. Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs), while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, that is, along a 1D cut. The data therefore only give a partial view of the 3D structures of ICMEs. Aims: By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. Methods: In a first approach we theoretically obtained the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compared their compatibility with observed distributions. In a second approach we used the shock normal and the flux rope axis orientations together with the impact parameter to provide statistical information across the spacecraft trajectory. Results: The study of different 3D shock models shows that the observations are compatible with a shock that is symmetric around the Sun-apex line as well as with an asymmetry up to an aspect ratio of around 3. Moreover, flat or dipped shock surfaces near their apex can only be rare cases. Next, the sheath thickness and the ICME velocity have no global trend along the ICME front. Finally, regrouping all these new results and those of our previous articles, we provide a quantitative ICME generic 3D shape, including the global shape of the shock, the sheath, and the flux rope. Conclusions: The obtained quantitative generic ICME shape will have implications for several aims. For example, it constrains the output of typical ICME numerical simulations. It is also a base for studying the transport of high-energy solar and cosmic particles during an ICME propagation as well as for modeling and forecasting space weather conditions near Earth.

  4. DFT study of the mechanism and stereoselectivity of the 1,3-dipolar ...

    Indian Academy of Sciences (India)

    and methyl acrylate) using the DFT method. An analysis of ... A self-consistent reaction field (SCRF) model based on the polarizable continuum model (PCM) of Tomasi's group has been applied. ... stereoselectivity relative to the gas phase, since the trends of ...

  5. Tumour-cell killing by X-rays and immunity quantitated in a mouse model system

    International Nuclear Information System (INIS)

    Porteous, D.D.; Porteous, K.M.; Hughes, M.J.

    1979-01-01

    As part of an investigation of the interaction of X-rays and immune cytotoxicity in tumour control, an experimental mouse model system has been used in which quantitative anti-tumour immunity was raised in prospective recipients of tumour-cell suspensions exposed to varying doses of X-rays in vitro before injection. Findings reported here indicate that, whilst X-rays kill a proportion of cells, induced immunity deals with a fixed number dependent upon the immune status of the host, and that X-rays and anti-tumour immunity do not act synergistically in tumour-cell killing. The tumour used was the ascites sarcoma BP8. (author)

  6. Impact Assessment of Abiotic Resources in LCA: Quantitative Comparison of Selected Characterization Models

    DEFF Research Database (Denmark)

    Rørbech, Jakob Thaysen; Vadenbo, Carl; Hellweg, Stefanie

    2014-01-01

    Resources have received significant attention in recent years, resulting in the development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247 ... groups, according to method focus and modeling approach, to aid method selection within LCA.

  7. Multi-factor models and signal processing techniques application to quantitative finance

    CERN Document Server

    Darolles, Serges; Jay, Emmanuelle

    2013-01-01

    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices. This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere

  8. A quantitative speciation model for the adsorption of organic pollutants on activated carbon.

    Science.gov (United States)

    Grivé, M; García, D; Domènech, C; Richard, L; Rojo, I; Martínez, X; Rovira, M

    2013-01-01

    Granular activated carbon (GAC) is commonly used as an adsorbent in water treatment plants given its high capacity for retaining organic pollutants in the aqueous phase. The current knowledge on GAC behaviour is essentially empirical, and no quantitative description of the chemical relationships between GAC surface groups and pollutants has been proposed. In this paper, we describe a quantitative model for the adsorption of atrazine onto the GAC surface. The model is based on the results of potentiometric titrations and three types of adsorption experiments, which were carried out in order to determine the nature and distribution of the functional groups on the GAC surface and evaluate the adsorption characteristics of GAC towards atrazine. Potentiometric titrations have indicated the existence of at least two different families of chemical groups on the GAC surface, including phenolic- and benzoic-type surface groups. Adsorption experiments with atrazine have been satisfactorily modelled with the geochemical code PhreeqC, assuming that atrazine is sorbed onto the GAC surface at equilibrium (log Ks = 5.1 ± 0.5). Independent thermodynamic calculations suggest a possible adsorption of atrazine on a benzoic derivative. The present work opens a new approach for improving the adsorption capabilities of GAC towards organic pollutants by modifying its chemical properties.
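
    The kind of mass-action equilibrium underlying such a speciation model can be sketched for a single surface-site family using the fitted log Ks = 5.1; the site density and total atrazine concentration below are assumed values, and a real calculation, as in the study, would use PhreeqC with the full titrated site inventory.

    ```python
    # Equilibrium sorption S + A <=> SA with K = [SA] / ([S][A]).
    from scipy.optimize import brentq

    K = 10 ** 5.1   # fitted equilibrium constant (L/mol)
    S_tot = 1e-4    # total surface sites (mol/L of suspension), assumed
    A_tot = 1e-6    # total atrazine (mol/L), assumed

    def residual(A_free):
        S_free = S_tot / (1 + K * A_free)            # site balance
        return A_free + K * S_free * A_free - A_tot  # atrazine balance

    A_free = brentq(residual, 0.0, A_tot)
    print(f"fraction of atrazine sorbed: {1 - A_free / A_tot:.2%}")
    ```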

  9. Spectral Quantitative Analysis Model with Combining Wavelength Selection and Topology Structure Optimization

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2016-01-01

    Full Text Available Spectroscopy is an efficient and widely used quantitative analysis method. In this paper, a spectral quantitative analysis model combining wavelength selection and topology structure optimization is proposed. In the proposed method, a backpropagation neural network is adopted for building the component prediction model, and the simultaneous optimization of the wavelength selection and the topology structure of the neural network is realized by nonlinear adaptive evolutionary programming (NAEP). The hybrid chromosome in the binary scheme of NAEP has three parts: the first part represents the topology structure of the neural network, the second part represents the selection of wavelengths in the spectral data, and the third part represents the mutation parameters of NAEP. Two real flue gas datasets are used in the experiments. To demonstrate the effectiveness of the method, component prediction models are built with partial least squares on the full spectrum, partial least squares combined with a genetic algorithm, the uninformative variable elimination method, a backpropagation neural network on the full spectrum, a backpropagation neural network combined with a genetic algorithm, and the proposed method. Experimental results verify that the proposed method predicts more accurately and robustly as a practical spectral analysis tool.
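
    A sketch of the three-part chromosome described above follows; the field widths and decoding rules are illustrative assumptions rather than the paper's exact encoding.

    ```python
    # Hybrid binary chromosome: topology bits, wavelength mask, mutation bits.
    import numpy as np

    N_WAVELENGTHS = 256
    TOPOLOGY_BITS = 6   # hidden-layer size encoded as a 6-bit integer
    MUTATION_BITS = 4   # self-adapted mutation rate, 4-bit resolution

    rng = np.random.default_rng(42)
    chrom = rng.integers(0, 2, TOPOLOGY_BITS + N_WAVELENGTHS + MUTATION_BITS)

    def decode(chrom):
        topo = chrom[:TOPOLOGY_BITS]
        mask = chrom[TOPOLOGY_BITS:TOPOLOGY_BITS + N_WAVELENGTHS].astype(bool)
        mut = chrom[-MUTATION_BITS:]
        hidden = 1 + int("".join(map(str, topo)), 2)   # 1..64 hidden neurons
        rate = (1 + int("".join(map(str, mut)), 2)) / 2 ** MUTATION_BITS
        return hidden, mask, rate

    hidden, mask, rate = decode(chrom)
    print(f"{hidden} hidden neurons, {mask.sum()} wavelengths kept, "
          f"mutation rate {rate:.3f}")
    ```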

  10. Model development for quantitative evaluation of proliferation resistance of nuclear fuel cycles

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Won Il; Kim, Ho Dong; Yang, Myung Seung

    2000-07-01

    This study addresses the quantitative evaluation of proliferation resistance, which is an important factor in alternative nuclear fuel cycle systems. In this study, a model was developed to quantitatively evaluate the proliferation resistance of nuclear fuel cycles. The proposed models were then applied to the Korean environment as a sample study to provide better references for the determination of a future nuclear fuel cycle system in Korea. In order to quantify the proliferation resistance of the nuclear fuel cycle, the proliferation resistance index was defined in imitation of an electrical circuit with an electromotive force and various electrical resistance components. The analysis of the proliferation resistance of nuclear fuel cycles has shown that the resistance index as defined herein can be used as an international measure of the relative risk of nuclear proliferation if the motivation index is appropriately defined. It has also shown that the proposed model can include political as well as technical issues relevant to proliferation resistance, and consider all facilities and activities in a specific nuclear fuel cycle (from mining to disposal). In addition, sensitivity analyses on the sample study indicate that the direct disposal option in a country with high nuclear propensity may give rise to a higher risk of nuclear proliferation than the reprocessing option in a country with low nuclear propensity.
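
    The circuit analogy can be sketched directly: a motivation "electromotive force" drives a proliferation "current" against barrier "resistances" in series, so a lower current means a more proliferation-resistant fuel cycle. The barrier names and values below are hypothetical, not the index components defined in the study.

    ```python
    def proliferation_current(motivation, barrier_resistances):
        # Ohm's-law analogue: I = V / sum(R_i) for barriers in series.
        return motivation / sum(barrier_resistances)

    barriers = {"safeguards": 4.0, "radiation field": 2.5,
                "chemical separation difficulty": 3.0,
                "material usability": 5.0}

    # Lower current = higher proliferation resistance of the cycle.
    print(f"index: {proliferation_current(1.0, barriers.values()):.3f}")
    ```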

  11. Spatiotemporal microbiota dynamics from quantitative in vitro and in silico models of the gut

    Science.gov (United States)

    Hwa, Terence

    The human gut harbors a dynamic microbial community whose composition bears great importance for the health of the host. Here, we investigate how colonic physiology impacts bacterial growth behaviors, which ultimately dictate the gut microbiota composition. Combining measurements of bacterial growth physiology with analysis of published data on human physiology into a quantitative modeling framework, we show how hydrodynamic forces in the colon, in concert with other physiological factors, determine the abundances of the major bacterial phyla in the gut. Our model quantitatively explains the observed variation of microbiota composition among healthy adults, and predicts colonic water absorption (manifested as stool consistency) and nutrient intake to be two key factors determining this composition. The model further reveals that both factors, which have been identified in recent correlative studies, exert their effects through the same mechanism: changes in colonic pH that differentially affect the growth of different bacteria. Our findings show that a predictive and mechanistic understanding of microbial ecology in the human gut is possible, and offer the hope for the rational design of intervention strategies to actively control the microbiota. This work is supported by the Bill and Melinda Gates Foundation.

  12. Model development for quantitative evaluation of proliferation resistance of nuclear fuel cycles

    International Nuclear Information System (INIS)

    Ko, Won Il; Kim, Ho Dong; Yang, Myung Seung

    2000-07-01

    This study addresses the quantitative evaluation of proliferation resistance, which is an important factor in alternative nuclear fuel cycle systems. In this study, a model was developed to quantitatively evaluate the proliferation resistance of nuclear fuel cycles. The proposed models were then applied to the Korean environment as a sample study to provide better references for the determination of a future nuclear fuel cycle system in Korea. In order to quantify the proliferation resistance of the nuclear fuel cycle, the proliferation resistance index was defined in imitation of an electrical circuit with an electromotive force and various electrical resistance components. The analysis of the proliferation resistance of nuclear fuel cycles has shown that the resistance index as defined herein can be used as an international measure of the relative risk of nuclear proliferation if the motivation index is appropriately defined. It has also shown that the proposed model can include political as well as technical issues relevant to proliferation resistance, and consider all facilities and activities in a specific nuclear fuel cycle (from mining to disposal). In addition, sensitivity analyses on the sample study indicate that the direct disposal option in a country with high nuclear propensity may give rise to a higher risk of nuclear proliferation than the reprocessing option in a country with low nuclear propensity.

  13. Web Applications Vulnerability Management using a Quantitative Stochastic Risk Modeling Method

    Directory of Open Access Journals (Sweden)

    Sergiu SECHEL

    2017-01-01

    Full Text Available The aim of this research is to propose a quantitative risk modeling method that reduces the guesswork and uncertainty in the vulnerability and risk assessment activities of web-based applications, while giving users the flexibility to assess risk according to their risk appetite and tolerance with a high degree of assurance. The research method is based on work done by the OWASP Foundation on this subject, but their risk rating methodology needed debugging and updates in key areas that are presented in this paper. The modified risk modeling method uses Monte Carlo simulations to model risk characteristics that cannot be determined without guesswork. It was tested in vulnerability assessment activities on real production systems and, in theory, by assigning discrete uniform assumptions to all risk characteristics (risk attributes) and evaluating the results after 1.5 million rounds of Monte Carlo simulations.
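
    A minimal version of such a Monte Carlo run is sketched below, sampling OWASP-style likelihood and impact factors from discrete uniform distributions over the public 0-9 scale; the factor counts and the multiplicative aggregation are simplifying assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N = 1_500_000            # rounds, as in the study
    LIKELIHOOD_FACTORS = 8   # threat-agent + vulnerability factors
    IMPACT_FACTORS = 8       # technical + business impact factors

    # Discrete uniform assumption on every factor (0..9), then average.
    likelihood = rng.integers(0, 10, (N, LIKELIHOOD_FACTORS),
                              dtype=np.int8).mean(axis=1)
    impact = rng.integers(0, 10, (N, IMPACT_FACTORS),
                          dtype=np.int8).mean(axis=1)
    risk = likelihood * impact   # simplified overall severity, 0..81

    print(f"mean risk {risk.mean():.1f}, "
          f"95th percentile {np.percentile(risk, 95):.1f}")
    ```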

  14. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    Science.gov (United States)

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.
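
    For reference, the LOD score at genome location λ compares the maximized mixture likelihood against the maximized single-normal likelihood; in standard notation (assumed here, not quoted from the paper):

    ```latex
    \mathrm{LOD}(\lambda) \;=\; \log_{10}
    \frac{\max_{\theta}\ \prod_{i=1}^{n} \sum_{g} p_{ig}(\lambda)\,
          \phi\!\left(y_i;\ \mu_g,\ \sigma^2\right)}
         {\max_{\mu,\sigma^2}\ \prod_{i=1}^{n}
          \phi\!\left(y_i;\ \mu,\ \sigma^2\right)}
    ```

    where p_ig(λ) is the probability of QTL genotype g for individual i given the flanking markers and φ is the normal density. The spurious peaks arise because the numerator can exceed the denominator for markedly non-normal phenotypes even where the genotype probabilities carry little information.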

  15. A quantitative microbial risk assessment model for Listeria monocytogenes in RTE sandwiches

    DEFF Research Database (Denmark)

    Tirloni, E.; Stella, S.; de Knegt, Leonardo

    2018-01-01

    A Quantitative Microbial Risk Assessment (QMRA) was performed to estimate the expected number of listeriosis cases due to the consumption, on the last day of shelf life, of 20 000 servings of multi-ingredient sandwiches produced by a medium scale food producer in Italy, by different population ... within each serving. Then, two dose-response models were alternatively applied: the first used a fixed r value for each of the three population groups, while the second considered a variable r value (lognormal distribution), taking into account the variability in strain virulence and different host ... subpopulations susceptibility. The stochastic model predicted zero cases for the total population for both substrates when using the fixed r approach, while 3 cases were expected when higher variability (in virulence and susceptibility) was considered in the model; the number of cases increased to 45 ...

  16. Quantitative model for the blood pressure‐lowering interaction of valsartan and amlodipine

    Science.gov (United States)

    Heo, Young‐A; Holford, Nick; Kim, Yukyung; Son, Mijeong

    2016-01-01

    Aims The objective of this study was to develop a population pharmacokinetic (PK) and pharmacodynamic (PD) model to quantitatively describe the antihypertensive effect of combined therapy with amlodipine and valsartan. Methods PK modelling was used with data collected from 48 healthy volunteers receiving a single dose of a combined formulation of 10 mg amlodipine and 160 mg valsartan. Systolic (SBP) and diastolic blood pressure (DBP) were recorded during combined administration. SBP and DBP data for each drug alone were gathered from the literature. PKPD models of each drug and for combined administration were built with NONMEM 7.3. Results A two‐compartment model with zero-order absorption best described the PK data of both drugs. Amlodipine and valsartan monotherapy effects on SBP and DBP were best described by an Imax model with an effect compartment delay. Combined therapy was described using a proportional interaction term as follows: (D1 + D2) + ALPHA × (D1 × D2), where D1 and D2 are the predicted drug effects of amlodipine and valsartan monotherapy, respectively, and ALPHA is the interaction term for combined therapy. Quantitative estimates of ALPHA were −0.171 (95% CI: −0.218, −0.143) for SBP and −0.0312 (95% CI: −0.07739, −0.00283) for DBP. These infra‐additive interaction terms for both SBP and DBP were consistent with literature results for combined administration of drugs in these classes. Conclusion PKPD models for SBP and DBP successfully described the time course of the antihypertensive effects of amlodipine and valsartan. An infra‐additive interaction between amlodipine and valsartan when used in combined administration was confirmed and quantified. PMID:27504853
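
    In equation form, the combined-effect model described above can be written as follows (notation assumed; each monotherapy effect D_k follows a standard Imax form driven by the effect-compartment concentration C_e,k):

    ```latex
    E_{\mathrm{combined}} \;=\; \underbrace{D_1 + D_2}_{\text{additive}}
    \;+\; \alpha\, D_1 D_2,
    \qquad
    D_k \;=\; \frac{I_{\max,k}\, C_{e,k}}{IC_{50,k} + C_{e,k}}
    ```

    so a negative α, as estimated for both SBP and DBP, yields an infra-additive combined effect.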

  17. Quantitative Hydraulic Models Of Early Land Plants Provide Insight Into Middle Paleozoic Terrestrial Paleoenvironmental Conditions

    Science.gov (United States)

    Wilson, J. P.; Fischer, W. W.

    2010-12-01

    Fossil plants provide useful proxies of Earth’s climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves--even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 / MPa * s, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptations for transport safety, placing the plant’s vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon’s tracheids suggests that environmental conditions of reduced relative
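
    A back-of-the-envelope version of the underlying transport physics is the Hagen-Poiseuille estimate of lumen-area-specific conductivity, k = r^2/(8η). The radii below are assumed illustrative values, not measured Asteroxylon dimensions, yet a lumen radius of about 10 μm already lands on the order of the reported 0.015 m^2/(MPa·s).

    ```python
    # Lumen-area-specific conductivity of an idealized cylindrical conduit.
    ETA = 1.0e-9  # viscosity of water in MPa*s (i.e., 1e-3 Pa*s)

    def specific_conductivity(radius_m):
        """Hagen-Poiseuille conductivity per unit lumen area, m^2/(MPa*s)."""
        return radius_m ** 2 / (8 * ETA)

    for r_um in (5, 10, 15):
        k = specific_conductivity(r_um * 1e-6)
        print(f"r = {r_um:2d} um -> k = {k:.4f} m^2/(MPa*s)")
    ```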

  18. Advantages of GPU technology in DFT calculations of intercalated graphene

    Science.gov (United States)

    Pešić, J.; Gajić, R.

    2014-09-01

    Over the past few years, the expansion of general-purpose graphic-processing unit (GPGPU) technology has had a great impact on computational science. GPGPU is the utilization of a graphics-processing unit (GPU) to perform calculations in applications usually handled by the central processing unit (CPU). The use of GPGPUs as a way to increase computational power in the material sciences has significantly decreased computational costs in already highly demanding calculations. The level of acceleration and parallelization depends on the problem itself. Some problems can benefit from GPU acceleration and parallelization, such as the finite-difference time-domain (FDTD) algorithm and density-functional theory (DFT), while others cannot take advantage of these modern technologies. A number of GPU-supported applications have emerged in the past several years (www.nvidia.com/object/gpu-applications.html). Quantum Espresso (QE) is an integrated suite of open source computer codes for electronic-structure calculations and materials modeling at the nano-scale. It is based on DFT, the use of a plane-wave basis and a pseudopotential approach. Since version 5.0, QE has included a plug-in component for standard QE packages that allows exploiting the capabilities of Nvidia GPU graphics cards (www.qe-forge.org/gf/proj). In this study, we have examined the impact of GPU acceleration and parallelization on the numerical performance of DFT calculations. Graphene has been attracting attention worldwide and has already shown some remarkable properties. We have studied intercalated graphene, using the QE package PHonon, which employs GPUs. The term ‘intercalation’ refers to a process whereby foreign adatoms are inserted onto a graphene lattice. In addition, by intercalating different atoms between graphene layers, it is possible to tune their physical properties. Our experiments have shown there are benefits from using GPUs, and we reached an

  19. Advantages of GPU technology in DFT calculations of intercalated graphene

    International Nuclear Information System (INIS)

    Pešić, J; Gajić, R

    2014-01-01

    Over the past few years, the expansion of general-purpose graphic-processing unit (GPGPU) technology has had a great impact on computational science. GPGPU is the utilization of a graphics-processing unit (GPU) to perform calculations in applications usually handled by the central processing unit (CPU). The use of GPGPUs as a way to increase computational power in the material sciences has significantly decreased computational costs in already highly demanding calculations. The level of acceleration and parallelization depends on the problem itself. Some problems can benefit from GPU acceleration and parallelization, such as the finite-difference time-domain (FDTD) algorithm and density-functional theory (DFT), while others cannot take advantage of these modern technologies. A number of GPU-supported applications have emerged in the past several years (www.nvidia.com/object/gpu-applications.html). Quantum Espresso (QE) is an integrated suite of open source computer codes for electronic-structure calculations and materials modeling at the nano-scale. It is based on DFT, the use of a plane-wave basis and a pseudopotential approach. Since version 5.0, QE has included a plug-in component for standard QE packages that allows exploiting the capabilities of Nvidia GPU graphics cards (www.qe-forge.org/gf/proj). In this study, we have examined the impact of GPU acceleration and parallelization on the numerical performance of DFT calculations. Graphene has been attracting attention worldwide and has already shown some remarkable properties. We have studied intercalated graphene, using the QE package PHonon, which employs GPUs. The term ‘intercalation’ refers to a process whereby foreign adatoms are inserted onto a graphene lattice. In addition, by intercalating different atoms between graphene layers, it is possible to tune their physical properties. Our experiments have shown there are benefits from using GPUs, and we reached an

  20. Quantitative Microbial Risk Assessment Tutorial Installation of Software for Watershed Modeling in Support of QMRA - Updated 2017

    Science.gov (United States)

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • QMRA Installation • SDMProjectBuilder (which includes the Microbial ...

  1. First experiences with model based iterative reconstructions influence on quantitative plaque volume and intensity measurements in coronary computed tomography angiography

    DEFF Research Database (Denmark)

    Precht, Helle; Kitslaar, Pieter H.; Broersen, Alexander

    2017-01-01

    Purpose: Investigate the influence of adaptive statistical iterative reconstruction (ASIR) and the model-based IR (Veo) reconstruction algorithm in coronary computed tomography angiography (CCTA) images on quantitative measurements in coronary arteries for plaque volumes and intensities. Methods...

  2. Development of quantitative atomic modeling for tungsten transport study using LHD plasma with tungsten pellet injection

    Science.gov (United States)

    Murakami, I.; Sakaue, H. A.; Suzuki, C.; Kato, D.; Goto, M.; Tamura, N.; Sudo, S.; Morita, S.

    2015-09-01

    Quantitative tungsten study with reliable atomic modeling is important for the successful achievement of ITER and fusion reactors. We have developed tungsten atomic modeling for understanding the tungsten behavior in fusion plasmas. The modeling is applied to the analysis of tungsten spectra observed from plasmas of the large helical device (LHD) with tungsten pellet injection. We found that the extreme ultraviolet (EUV) emission of W24+ to W33+ ions at 1.5-3.5 nm is sensitive to electron temperature and useful for examining the tungsten behavior in edge plasmas. We can reproduce measured EUV spectra at 1.5-3.5 nm with spectra calculated from the tungsten atomic model and obtain charge state distributions of tungsten ions in LHD plasmas at different temperatures around 1 keV. Our model is applied to calculate the unresolved transition array (UTA) seen in 4.5-7 nm tungsten spectra. We analyze the effect of configuration interaction on population kinetics related to the UTA structure in detail and find the importance of two-electron-one-photon transitions between the 4p^5 4d^(n+1) and 4p^6 4d^(n-1) 4f configurations. The radiation power rate of tungsten due to line emissions is also estimated with the model and is consistent with other models within a factor of 2.

  3. Quantitative computational models of molecular self-assembly in systems biology.

    Science.gov (United States)

    Thomas, Marcus; Schwartz, Russell

    2017-05-23

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.

  4. Quantitative Modeling of Acid Wormholing in Carbonates- What Are the Gaps to Bridge

    KAUST Repository

    Qiu, Xiangdong

    2013-01-01

    Carbonate matrix acidization extends a well's effective drainage radius by dissolving rock and forming conductive channels (wormholes) from the wellbore. Wormholing is a dynamic process that involves a balance between the acid injection rate and the reaction rate. Generally, the injection rate is well defined where injection profiles can be controlled, whereas the reaction rate can be difficult to obtain due to its complex dependency on interstitial velocity, fluid composition, rock surface properties, etc. Conventional wormhole propagation models largely ignore the impact of reaction products. When implemented in a job design, significant errors can result in the treatment fluid schedule, rate, and volume. A more accurate method to simulate carbonate matrix acid treatments would accommodate the effect of reaction products on reaction kinetics. It is the purpose of this work to properly account for these effects. This is an important step in achieving quantitative predictability of wormhole penetration during an acidizing treatment. This paper describes the laboratory procedures taken to obtain the reaction-product-impacted kinetics at downhole conditions using a rotating disk apparatus, and how this new set of kinetics data was implemented in a 3D wormholing model to predict wormhole morphology and penetration velocity. The model explains some of the differences in wormhole morphology observed in limestone core flow experiments where injection pressure impacts the mass transfer of hydrogen ions to the rock surface. The model uses a CT-scan-rendered porosity field to capture the finer details of the rock fabric and then simulates the fluid flow through the rock coupled with reactions. Such a validated model can serve as a base to scale up to the near-wellbore reservoir and 3D radial flow geometry, allowing a more quantitative acid treatment design.

  5. Qualitative and quantitative examination of the performance of regional air quality models representing different modeling approaches

    International Nuclear Information System (INIS)

    Bhumralkar, C.M.; Ludwig, F.L.; Shannon, J.D.; McNaughton, D.

    1985-04-01

    The calculations of three different air quality models were compared with the best available observations. The comparisons were made without calibrating the models to improve agreement with the observations. Model performance was poor for short averaging times (less than 24 hours). Some of the poor performance can be traced to errors in the input meteorological fields, but errors exist at all levels. It should be noted that these models were not originally designed for treating short-term episodes. For short-term episodes, much of the variance in the data can arise from small spatial scale features that tend to be averaged out over longer periods. These small spatial scale features cannot be resolved with the coarse grids that are used for the meteorological and emissions inputs. Thus, it is not surprising that the models performed better for the longer averaging times. The models compared were RTM-II, ENAMAP-2 and ACID. (17 refs., 5 figs., 4 tabs.)

  6. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    Energy Technology Data Exchange (ETDEWEB)

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study was comprised of four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.

  7. Quantitative Outline-based Shape Analysis and Classification of Planetary Craterforms using Supervised Learning Models

    Science.gov (United States)

    Slezak, Thomas Joseph; Radebaugh, Jani; Christiansen, Eric

    2017-10-01

    The shapes of craterforms on planetary surfaces provide rich information about their origins and evolution. While morphologic information provides rich visual clues to geologic processes and properties, the ability to quantitatively communicate this information is less easily accomplished. This study examines the morphology of craterforms using the quantitative outline-based shape methods of geometric morphometrics, commonly used in biology and paleontology. We examine and compare landforms on planetary surfaces using shape, a property of morphology that is invariant to translation, rotation, and size. We quantify the shapes of paterae on Io, martian calderas, terrestrial basaltic shield calderas, terrestrial ash-flow calderas, and lunar impact craters using elliptic Fourier analysis (EFA) and the Zahn and Roskies (Z-R) shape function, or tangent angle approach, to produce multivariate shape descriptors. These shape descriptors are subjected to multivariate statistical analysis including canonical variate analysis (CVA), a multiple-comparison variant of discriminant analysis, to investigate the link between craterform shape and classification. Paterae on Io are most similar in shape to terrestrial ash-flow calderas, and the shapes of terrestrial basaltic shield volcanoes are most similar to martian calderas. The shapes of lunar impact craters, including simple, transitional, and complex morphologies, are classified with a 100% rate of success in all models. Multiple CVA models effectively predict and classify different craterforms using shape-based identification and demonstrate significant potential for use in the analysis of planetary surfaces.
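
    A compact implementation of the elliptic Fourier coefficients (the standard Kuhl-Giardina formulation used in such outline analyses) is sketched below on a toy ellipse; the size and rotation normalization that a real craterform study would apply is omitted.

    ```python
    import numpy as np

    def efa_coefficients(xy, n_harmonics=10):
        """Elliptic Fourier coefficients (a, b, c, d) of a closed outline."""
        d = np.diff(np.vstack([xy, xy[:1]]), axis=0)   # close the contour
        dt = np.hypot(d[:, 0], d[:, 1])
        t = np.concatenate([[0.0], np.cumsum(dt)])
        T = t[-1]
        coeffs = np.zeros((n_harmonics, 4))
        for n in range(1, n_harmonics + 1):
            w = 2 * np.pi * n / T
            const = T / (2 * n ** 2 * np.pi ** 2)
            dcos = np.cos(w * t[1:]) - np.cos(w * t[:-1])
            dsin = np.sin(w * t[1:]) - np.sin(w * t[:-1])
            coeffs[n - 1] = const * np.array([
                np.sum(d[:, 0] / dt * dcos), np.sum(d[:, 0] / dt * dsin),
                np.sum(d[:, 1] / dt * dcos), np.sum(d[:, 1] / dt * dsin)])
        return coeffs

    theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    outline = np.column_stack([2 * np.cos(theta), np.sin(theta)])  # 2:1 ellipse
    print(np.round(efa_coefficients(outline, 3), 3))  # harmonic 1 dominates
    ```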

  8. Quantitative evaluation and modeling of two-dimensional neovascular network complexity: the surface fractal dimension

    International Nuclear Information System (INIS)

    Grizzi, Fabio; Russo, Carlo; Colombo, Piergiuseppe; Franceschini, Barbara; Frezza, Eldo E; Cobos, Everardo; Chiriva-Internati, Maurizio

    2005-01-01

    Modeling the complex development and growth of tumor angiogenesis using mathematics and biological data is a burgeoning area of cancer research. Architectural complexity is the main feature of every anatomical system, including organs, tissues, cells and sub-cellular entities. The vascular system is a complex network whose geometrical characteristics cannot be properly defined using the principles of Euclidean geometry, which is only capable of interpreting regular and smooth objects that are almost impossible to find in Nature. However, fractal geometry is a more powerful means of quantifying the spatial complexity of real objects. This paper introduces the surface fractal dimension (Ds) as a numerical index of the two-dimensional (2-D) geometrical complexity of tumor vascular networks, and their behavior during computer-simulated changes in vessel density and distribution. We show that Ds significantly depends on the number of vessels and their pattern of distribution. This demonstrates that the quantitative evaluation of the 2-D geometrical complexity of tumor vascular systems can be useful not only to measure its complex architecture, but also to model its development and growth. Studying the fractal properties of neovascularity induces reflections upon the real significance of the complex form of branched anatomical structures, in an attempt to define more appropriate methods of describing them quantitatively. This knowledge can be used to predict the aggressiveness of malignant tumors and design compounds that can halt the process of angiogenesis and influence tumor growth
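
    The surface fractal dimension of a binary image is typically estimated by box counting, fitting log N(s) against log s over dyadic box sizes; the sketch below applies this to a synthetic random-walk "network", not real tumor vasculature.

    ```python
    import numpy as np

    def box_count(img, size):
        h, w = img.shape
        hh, ww = h - h % size, w - w % size            # trim to multiples
        blocks = img[:hh, :ww].reshape(hh // size, size, ww // size, size)
        return np.count_nonzero(blocks.any(axis=(1, 3)))

    def fractal_dimension(img):
        sizes = [2 ** k for k in range(1, 7)]          # 2..64 pixel boxes
        counts = [box_count(img, s) for s in sizes]
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope                                  # N(s) ~ s^(-D)

    rng = np.random.default_rng(3)
    img = np.zeros((256, 256), dtype=bool)
    x = y = 128
    for _ in range(20_000):                            # random-walk "vessels"
        x = int(np.clip(x + rng.integers(-1, 2), 0, 255))
        y = int(np.clip(y + rng.integers(-1, 2), 0, 255))
        img[x, y] = True
    print(f"estimated D_s ~ {fractal_dimension(img):.2f}")
    ```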

  9. Preclinical Magnetic Resonance Fingerprinting (MRF) at 7 T: Effective Quantitative Imaging for Rodent Disease Models

    Science.gov (United States)

    Gao, Ying; Chen, Yong; Ma, Dan; Jiang, Yun; Herrmann, Kelsey A.; Vincent, Jason A.; Dell, Katherine M.; Drumm, Mitchell L.; Brady-Kalnay, Susann M.; Griswold, Mark A.; Flask, Chris A.; Lu, Lan

    2015-01-01

    High field, preclinical magnetic resonance imaging (MRI) scanners are now commonly used to quantitatively assess disease status and efficacy of novel therapies in a wide variety of rodent models. Unfortunately, conventional MRI methods are highly susceptible to respiratory and cardiac motion artifacts resulting in potentially inaccurate and misleading data. We have developed an initial preclinical, 7.0 T MRI implementation of the highly novel Magnetic Resonance Fingerprinting (MRF) methodology that has been previously described for clinical imaging applications. The MRF technology combines a priori variation in the MRI acquisition parameters with dictionary-based matching of acquired signal evolution profiles to simultaneously generate quantitative maps of T1 and T2 relaxation times and proton density. This preclinical MRF acquisition was constructed from a Fast Imaging with Steady-state Free Precession (FISP) MRI pulse sequence to acquire 600 MRF images with both evolving T1 and T2 weighting in approximately 30 minutes. This initial high field preclinical MRF investigation demonstrated reproducible and differentiated estimates of in vitro phantoms with different relaxation times. In vivo preclinical MRF results in mouse kidneys and brain tumor models demonstrated an inherent resistance to respiratory motion artifacts as well as sensitivity to known pathology. These results suggest that MRF methodology may offer the opportunity for quantification of numerous MRI parameters for a wide variety of preclinical imaging applications. PMID:25639694
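
    The core pattern-matching step of MRF, assigning each measured signal evolution to the closest entry of a precomputed dictionary by normalized inner product, can be sketched as follows; the exponential toy dictionary below merely stands in for a full FISP Bloch simulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_tr, TR = 600, 0.01                       # 600 MRF time points
    t = np.arange(n_tr) * TR

    # Toy dictionary over a (T1, T2) grid; atoms are unit-normalized.
    T1s, T2s = np.meshgrid(np.linspace(0.1, 3.0, 60),
                           np.linspace(0.01, 0.3, 40))
    atoms = (np.exp(-t[None, :] / T2s.ravel()[:, None])
             - np.exp(-t[None, :] / T1s.ravel()[:, None]))
    atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

    # One "voxel" with true (T1, T2) = (1.2 s, 0.08 s) plus noise.
    sig = np.exp(-t / 0.08) - np.exp(-t / 1.2) + 0.02 * rng.normal(size=n_tr)
    best = np.argmax(atoms @ (sig / np.linalg.norm(sig)))
    print(f"matched T1 = {T1s.ravel()[best]:.2f} s, "
          f"T2 = {T2s.ravel()[best]:.3f} s")
    ```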

  10. A Cytomorphic Chip for Quantitative Modeling of Fundamental Bio-Molecular Circuits.

    Science.gov (United States)

    2015-08-01

    We describe a 0.35 μm BiCMOS silicon chip that quantitatively models fundamental molecular circuits via efficient log-domain cytomorphic transistor equivalents. These circuits include those for biochemical binding with automatic representation of non-modular and loading behavior, e.g., in cascade and fan-out topologies; for representing variable Hill-coefficient operation and cooperative binding; for representing inducer, transcription-factor, and DNA binding; for probabilistic gene transcription with analogic representations of log-linear and saturating operation; for gain, degradation, and dynamics of mRNA and protein variables in transcription and translation; and, for faithfully representing biological noise via tunable stochastic transistor circuits. The use of on-chip DACs and ADCs enables multiple chips to interact via incoming and outgoing molecular digital data packets and thus create scalable biochemical reaction networks. The use of off-chip digital processors and on-chip digital memory enables programmable connectivity and parameter storage. We show that published static and dynamic MATLAB models of synthetic biological circuits including repressilators, feed-forward loops, and feedback oscillators are in excellent quantitative agreement with those from transistor circuits on the chip. Computationally intensive stochastic Gillespie simulations of molecular production are also rapidly reproduced by the chip and can be reliably tuned over the range of signal-to-noise ratios observed in biological cells.

  11. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence of the applied exchange–correlation functional on the reaction energies rather than on errors versus the experimental data. As a result, improved energy corrections can now be determined for both gas phase and adsorbed reaction species, particularly interesting within heterogeneous catalysis. We show that for the CO2 reduction reactions, the main source of error is associated with the C=O bonds and not the typically energy-corrected OCO backbone.

  12. Quantitative assessment of changes in landslide risk using a regional scale run-out model

    Science.gov (United States)

    Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone

    2015-04-01

    The risk of landslide hazard continuously changes in time and space and is rarely a static or constant phenomenon in an affected area. However, one of the main challenges of quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires the modeling of the landslide intensity (e.g. flow depth, velocities or impact pressures) affecting the elements at risk. Such a quantitative approach is often lacking in medium to regional scale studies in the scientific literature or is left out altogether. In this research we modelled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the North Eastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional scale model. In order to determine debris flow intensities, we used a linear relationship that was found between back-calibrated physically based Flo-2D simulations (local scale models of five debris flows from 2003) and the probability values of the Flow-R software. This gave us the possibility to assign flow depth to a total of 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature and one curve specifically for our case study area were used to determine the damage for different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) with the building area and number of floors

  13. Quantitative Circulatory Physiology: an integrative mathematical model of human physiology for medical education.

    Science.gov (United States)

    Abram, Sean R; Hodnett, Benjamin L; Summers, Richard L; Coleman, Thomas G; Hester, Robert L

    2007-06-01

    We have developed Quantitative Circulatory Physiology (QCP), a mathematical model of integrative human physiology containing over 4,000 variables of biological interactions. This model provides a teaching environment that mimics clinical problems encountered in the practice of medicine. The model structure is based on documented physiological responses within peer-reviewed literature and serves as a dynamic compendium of physiological knowledge. The model is solved using a desktop, Windows-based program, allowing students to calculate time-dependent solutions and interactively alter over 750 parameters that modify physiological function. The model can be used to understand proposed mechanisms of physiological function and the interactions among physiological variables that may not be otherwise intuitively evident. In addition to open-ended or unstructured simulations, we have developed 30 physiological simulations, including heart failure, anemia, diabetes, and hemorrhage. Additional simulations include 29 patients in which students are challenged to diagnose the pathophysiology based on their understanding of integrative physiology. In summary, QCP allows students to examine, integrate, and understand a host of physiological factors without causing harm to patients. This model is available as a free download for Windows computers at http://physiology.umc.edu/themodelingworkshop.

  14. Plutonium chemistry: a synthesis of experimental data and a quantitative model for plutonium oxide solubility

    International Nuclear Information System (INIS)

    Haschke, J.M.; Oversby, V.M.

    2002-01-01

    The chemistry of plutonium is important for assessing potential behavior of radioactive waste under conditions of geologic disposal. This paper reviews experimental data on dissolution of plutonium oxide solids, describes a hybrid kinetic-equilibrium model for predicting steady-state Pu concentrations, and compares laboratory results with predicted Pu concentrations and oxidation-state distributions. The model is based on oxidation of PuO2 by water to produce PuO2+x, an oxide that can release Pu(V) to solution. Kinetic relationships between formation of PuO2+x, dissolution of Pu(V), disproportionation of Pu(V) to Pu(IV) and Pu(VI), and reduction of Pu(VI) are given and used in model calculations. Data from tests of pyrochemical salt wastes in brines are discussed and interpreted using the conceptual model. Essential data for quantitative modeling at conditions relevant to nuclear waste repositories are identified and laboratory experiments to determine rate constants for use in the model are discussed

  15. A pulsatile flow model for in vitro quantitative evaluation of prosthetic valve regurgitation

    Directory of Open Access Journals (Sweden)

    S. Giuliatti

    2000-03-01

    Full Text Available A pulsatile pressure-flow model was developed for in vitro quantitative color Doppler flow mapping studies of valvular regurgitation. The flow through the system was generated by a piston which was driven by stepper motors controlled by a computer. The piston was connected to acrylic chambers designed to simulate "ventricular" and "atrial" heart chambers. Inside the "ventricular" chamber, a prosthetic heart valve was placed at the inflow connection with the "atrial" chamber while another prosthetic valve was positioned at the outflow connection with flexible tubes, elastic balloons and a reservoir arranged to mimic the peripheral circulation. The flow model was filled with a 0.25% corn starch/water suspension to improve Doppler imaging. A continuous flow pump transferred the liquid from the peripheral reservoir to another one connected to the "atrial" chamber. The dimensions of the flow model were designed to permit adequate imaging by Doppler echocardiography. Acoustic windows allowed placement of transducers distal and perpendicular to the valves, so that the ultrasound beam could be positioned parallel to the valvular flow. Strain-gauge and electromagnetic transducers were used for measurements of pressure and flow in different segments of the system. The flow model was also designed to fit different sizes and types of prosthetic valves. This pulsatile flow model was able to generate pressure and flow in the physiological human range, with independent adjustment of pulse duration and rate as well as of stroke volume. This model mimics flow profiles observed in patients with regurgitant prosthetic valves.

  16. Quantitative models for predicting adsorption of oxytetracycline, ciprofloxacin and sulfamerazine to swine manures with contrasting properties.

    Science.gov (United States)

    Cheng, Dengmiao; Feng, Yao; Liu, Yuanwang; Li, Jinpeng; Xue, Jianming; Li, Zhaojun

    2018-09-01

    Understanding antibiotic adsorption in livestock manures is crucial to assess the fate and risk of antibiotics in the environment. In this study, three quantitative models were developed for the swine manure-water distribution coefficients (log Kd) of oxytetracycline (OTC), ciprofloxacin (CIP) and sulfamerazine (SM1) in swine manures. Physicochemical parameters (n=12) of the swine manure were used as independent variables in partial least-squares (PLS) analysis. The cumulative cross-validated regression coefficient (Q2cum), standard deviation (SD) and external validation coefficient (Q2ext) ranged from 0.761 to 0.868, 0.027 to 0.064, and 0.743 to 0.827 for the three models; as such, the internal and external predictability of the models were strong. The pH, soluble organic carbon (SOC) and nitrogen (SON), and Ca were important explanatory variables for the OTC model; pH, SOC, and SON for the CIP model; and pH, total organic nitrogen (TON), and SOC for the SM1 model. The high VIPs (variable importance in the projections) of pH (1.178-1.396), SOC (0.968-1.034), and SON (0.822 and 0.865) establish these physicochemical parameters as likely dominant in affecting the transport of antibiotics in swine manures. Copyright © 2018 Elsevier B.V. All rights reserved.
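
    A minimal PLS sketch in the spirit of this approach is shown below with synthetic placeholder data (n = 12 predictors, as in the study); with real measurements, VIP scores computed from the fitted model would flag pH, SOC and SON as the dominant variables.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(11)
    X = rng.normal(size=(30, 12))       # 12 manure parameters, 30 samples
    beta = np.zeros(12)
    beta[[0, 1, 2]] = [0.8, 0.5, 0.4]   # pretend pH, SOC, SON dominate
    y = X @ beta + 0.1 * rng.normal(size=30)   # synthetic log Kd

    pls = PLSRegression(n_components=3)
    q2 = cross_val_score(pls, X, y, cv=5, scoring="r2").mean()
    print(f"cross-validated Q2 ~ {q2:.2f}")
    ```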

  17. Modelling and Quantitative Analysis of LTRACK–A Novel Mobility Management Algorithm

    Directory of Open Access Journals (Sweden)

    Benedek Kovács

    2006-01-01

    This paper discusses the improvements and parameter optimization issues of LTRACK, a recently proposed mobility management algorithm. Mathematical modelling of the algorithm and of the behavior of the Mobile Node (MN) is used to optimize the parameters of LTRACK. A numerical method is given to determine the optimal values of the parameters. Markov chains are used to model both the base algorithm and the so-called loop removal effect. An extended qualitative and quantitative analysis is carried out to compare LTRACK to existing handover mechanisms such as MIP, Hierarchical Mobile IP (HMIP), Dynamic Hierarchical Mobility Management Strategy (DHMIP), Telecommunication Enhanced Mobile IP (TeleMIP), Cellular IP (CIP) and HAWAII. LTRACK is sensitive to network topology and MN behavior, so MN movement modelling is also introduced and discussed with different topologies. The techniques presented here can be used to model not only the LTRACK algorithm but other algorithms too. Many discussions and calculations support our mathematical model and show that it is adequate in many cases. The model is valid on various network levels, scales vertically in the ISO-OSI layers and also scales well with the number of network elements.
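    As a minimal sketch of the kind of Markov-chain calculation used in such analyses (the states and transition probabilities here are hypothetical, not LTRACK's actual model), the stationary distribution of a small mobility state chain can be computed as follows:

```python
# Stationary distribution of a toy three-state mobility chain; the states and
# probabilities are hypothetical, purely to illustrate the calculation.
import numpy as np

P = np.array([[0.90, 0.08, 0.02],    # idle -> (idle, moving, handover)
              [0.20, 0.70, 0.10],
              [0.30, 0.30, 0.40]])

# pi solves pi @ P = pi together with sum(pi) = 1
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("stationary distribution:", pi)
```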

  18. Optical Gaps in Pristine and Heavily Doped Silicon Nanocrystals: DFT versus Quantum Monte Carlo Benchmarks.

    Science.gov (United States)

    Derian, R; Tokár, K; Somogyi, B; Gali, Á; Štich, I

    2017-12-12

    We present a time-dependent density functional theory (TDDFT) study of the optical gaps of light-emitting nanomaterials, namely, pristine and heavily B- and P-codoped silicon crystalline nanoparticles. Twenty DFT exchange-correlation functionals, sampled from the best currently available inventory (including hybrids and range-separated hybrids), are benchmarked against ultra-accurate quantum Monte Carlo results on small model Si nanocrystals. Overall, the range-separated hybrids are found to perform best. The quality of the DFT gaps is correlated with the deviation from Koopmans' theorem as a possible quality guide. In addition to providing a generic test of the ability of TDDFT to describe optical properties of silicon crystalline nanoparticles, the results also open up a route to benchmark-quality DFT studies of nanoparticle sizes approaching those studied experimentally.

  19. [Quantitative models between canopy hyperspectrum and its component features at apple tree prosperous fruit stage].

    Science.gov (United States)

    Wang, Ling; Zhao, Geng-xing; Zhu, Xi-cun; Lei, Tong; Dong, Fang

    2010-10-01

    Hyperspectral techniques have become the basis of quantitative remote sensing. The hyperspectrum of an apple tree canopy at the prosperous fruit stage carries the combined information of fruits, leaves, stocks, soil and reflecting films, and is mostly determined by the component features of the canopy at this stage. First, the hyperspectra of 18 sample apple trees with reflecting films were compared with those of 44 trees without reflecting films. The impact of reflecting films on reflectance was obvious, so the sample trees with ground reflecting films were analyzed separately from those without. Secondly, nine indexes of canopy components were built based on classified digital photos of the 44 apple trees without ground films. Thirdly, the correlation between the nine indexes and canopy reflectance, including several kinds of transformed data, was analyzed. The results showed that the correlation between reflectance and the fruit-to-leaf ratio was the best, with a maximum coefficient of 0.815, and the correlation between reflectance and the leaf ratio was slightly better than that between reflectance and fruit density. Models based on correlation analysis, linear regression, BP neural networks and support vector regression were then used to describe the quantitative relationship between hyperspectral reflectance and the fruit-to-leaf ratio, using the software packages DPS and LIBSVM. All four models built on the 611-680 nm characteristic band were feasible for prediction, while the accuracy of the BP neural network and support vector regression models was better than that of one-variable linear regression and multi-variable regression, and the support vector regression model was the most accurate. This study serves as a reliable theoretical reference for apple yield estimation based on remote sensing data.
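    A hedged sketch of the best-performing approach reported above, support vector regression from 611-680 nm reflectance to the fruit-to-leaf ratio, could look as follows; the data and hyperparameters are hypothetical placeholders:

```python
# Hypothetical-data sketch of SVR from band reflectance to fruit/leaf ratio.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0.1, 0.5, size=(44, 70))   # 44 trees x 70 bands (611-680 nm)
y = 2.0 * X[:, 30] + rng.normal(scale=0.05, size=44)  # fake fruit/leaf ratio

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_tr, y_tr)
print("R^2 on held-out trees:", model.score(X_te, y_te))
```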

  20. Use of a plant level logic model for quantitative assessment of systems interactions

    International Nuclear Information System (INIS)

    Chu, B.B.; Rees, D.C.; Kripps, L.P.; Hunt, R.N.; Bradley, M.

    1985-01-01

    The Electric Power Research Institute (EPRI) has sponsored a research program to investigate methods for identifying systems interactions (SIs) and for evaluating their importance. Phase I of the EPRI research project focused on the evaluation of methods for identification of SIs. Major results of the Phase I activities are the documentation of four different methodologies for identification of potential SIs and the development of guidelines for performing an effective plant walkdown in support of an SI analysis. Phase II of the project, currently being performed, is utilizing a plant level logic model of a pressurized water reactor (PWR) to determine the quantitative importance of identified SIs. In Phase II, previously reported events involving interactions between systems were screened and selected on the basis of their relevance to the Baltimore Gas and Electric (BG&E) Calvert Cliffs Nuclear Power Plant design and their perceived potential safety significance. Selected events were then incorporated into the BG&E plant level GO logic model. The model is being exercised to calculate the relative importance of these events. Five previously identified event scenarios, extracted from licensee event reports (LERs), are being evaluated during the course of the study. A key feature of the Phase II approach is the use of a logic model to effectively evaluate the impact of events at the system level and the plant level for the mitigation of transients. Preliminary study results indicate that the developed methodology can be a viable and effective means for determining the quantitative significance of SIs.

  1. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    Science.gov (United States)

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons; this approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits, adjusting for covariates, in a unified analysis. Three types of approximate F-distribution tests, based on the Pillai-Bartlett trace, the Hotelling-Lawley trace, and Wilks's Lambda, are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than univariate F-tests and the optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than univariate F-tests and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models, which in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.
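    The three multivariate statistics named above are available off the shelf; a small sketch using statsmodels' MANOVA on hypothetical trait and genotype data (not the functional linear model itself, which additionally smooths variants across a region) is:

```python
# Multivariate association test reporting Pillai's trace, Hotelling-Lawley
# trace and Wilks' lambda, on hypothetical data (three traits, one variant).
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(2)
n = 200
g = rng.integers(0, 3, size=n)                 # genotype coded 0/1/2
traits = rng.normal(size=(n, 3)) + 0.3 * g[:, None]
df = pd.DataFrame(traits, columns=["y1", "y2", "y3"])
df["g"] = g

fit = MANOVA.from_formula("y1 + y2 + y3 ~ g", data=df)
print(fit.mv_test())   # all three multivariate test statistics
```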

  2. A quantitative quasispecies theory-based model of virus escape mutation under immune selection.

    Science.gov (United States)

    Woo, Hyung-June; Reifman, Jaques

    2012-08-07

    Viral infections involve a complex interplay of the immune response and escape mutation of the virus quasispecies inside a single host. Although fundamental aspects of such a balance of mutation and selection pressure have been established by the quasispecies theory decades ago, its implications have largely remained qualitative. Here, we present a quantitative approach to model the virus evolution under cytotoxic T-lymphocyte immune response. The virus quasispecies dynamics are explicitly represented by mutations in the combined sequence space of a set of epitopes within the viral genome. We stochastically simulated the growth of a viral population originating from a single wild-type founder virus and its recognition and clearance by the immune response, as well as the expansion of its genetic diversity. Applied to the immune escape of a simian immunodeficiency virus epitope, model predictions were quantitatively comparable to the experimental data. Within the model parameter space, we found two qualitatively different regimes of infectious disease pathogenesis, each representing alternative fates of the immune response: It can clear the infection in finite time or eventually be overwhelmed by viral growth and escape mutation. The latter regime exhibits the characteristic disease progression pattern of human immunodeficiency virus, while the former is bounded by maximum mutation rates that can be suppressed by the immune response. Our results demonstrate that, by explicitly representing epitope mutations and thus providing a genotype-phenotype map, the quasispecies theory can form the basis of a detailed sequence-specific model of real-world viral pathogens evolving under immune selection.
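    A toy sketch of the simulated dynamics (far simpler than the authors' epitope-sequence model) can illustrate the two regimes described above, clearance of the wild type versus escape through mutation; all parameters are hypothetical:

```python
# Toy stochastic sketch: a viral population grows, mutates with small
# probability, and is cleared by an immune response that only recognizes
# the wild-type epitope. Parameters are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(3)
mu_esc, R, clearance = 1e-3, 1.5, 0.6   # mutation prob., growth, kill prob.
wild, escaped = 1000, 0                 # virions: wild-type vs escaped epitope

for gen in range(30):
    wild_off = rng.poisson(R * wild)
    esc_off = rng.poisson(R * escaped)
    mutants = rng.binomial(wild_off, mu_esc)   # wild-type -> escape mutants
    wild = rng.binomial(wild_off - mutants, 1.0 - clearance)  # immune kill
    escaped = esc_off + mutants                # escaped variants unrecognized
print("after 30 generations: wild =", wild, "escaped =", escaped)
```

With these placeholder values the recognized wild type declines (effective growth 1.5 x 0.4 < 1) while any seeded escape lineage expands, mirroring the escape regime described in the abstract.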

  3. Need for collection of quantitative distribution data for dosimetry and metabolic modeling

    International Nuclear Information System (INIS)

    Lathrop, K.A.

    1976-01-01

    Problems in radiation dose distribution studies in humans are discussed. Data show that the effective half-times for 7Be and 75Se in the mouse, rat, monkey, dog, and human show no correlation with weight, body surface, or any other readily apparent factor that could be used to equate nonhuman and human data. Another problem sometimes encountered in attempting to extrapolate animal data to humans involves equivalent doses of the radiopharmaceutical. A usual human dose for a radiopharmaceutical is 1 ml, or 0.017 mg/kg. The same solution injected into a mouse in a convenient volume of 0.1 ml results in a dose of 4 ml/kg, or 240 times that received by the human. The effect on whole body retention produced by a dose difference of similar magnitude for selenium in the rat shows the retention is at least twice as great with the smaller amount. With the development of methods for the collection of data throughout the body representing the fractional distribution of radioactivity versus time, not only can more realistic dose estimates be made, but the tools will also be provided for the study of physiological and biochemical interrelationships in the intact subject, from which compartmental models with diagnostic significance may be made. The unique requirement for quantitative biologic data needed for calculation of radiation absorbed doses is the same as the unique scientific contribution that nuclear medicine can make, which is the quantitative in vivo study of physiologic and biochemical processes. The technique involved is not the same as quantitation of a radionuclide image, but is a step beyond.

  4. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    Science.gov (United States)

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each such quantity, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMMs can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
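    As a worked numerical example of the latent-to-observed conversion the paper formalizes, the Poisson/log-link case has a known closed form (one of the special cases mentioned in the abstract); the sketch below applies it with hypothetical variance components, and readers should consult the paper and the QGglmm package for the general expressions:

```python
# Poisson GLMM with log link: latent trait l ~ N(mu, va + vr); observed count
# y | l ~ Poisson(exp(l)). Data-scale quantities follow from lognormal
# moments; the variance components here are hypothetical.
import numpy as np

mu, va, vr = 1.0, 0.3, 0.2       # latent mean, additive and residual variance
s2 = va + vr

mean_obs = np.exp(mu + s2 / 2.0)                               # E[y]
var_p_obs = np.exp(2 * mu + s2) * (np.exp(s2) - 1) + mean_obs  # Var[y]
var_a_obs = mean_obs**2 * va     # additive variance on the observed scale
print("latent-scale h2:  ", va / s2)
print("observed-scale h2:", var_a_obs / var_p_obs)
```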

  5. A generalized quantitative antibody homeostasis model: maintenance of global antibody equilibrium by effector functions.

    Science.gov (United States)

    Prechl, József

    2017-11-01

    The homeostasis of antibodies can be characterized as balanced production, target-binding and receptor-mediated elimination regulated by an interaction network, which controls B-cell development and selection. Recently, we proposed a quantitative model to describe how the concentration and affinity of interacting partners generates a network. Here we argue that this physical, quantitative approach can be extended for the interpretation of effector functions of antibodies. We define global antibody equilibrium as the zone of molar equivalence of free antibody, free antigen and immune complex concentrations and of the dissociation constant of apparent affinity: [Ab] = [Ag] = [AbAg] = KD. This zone corresponds to the biologically relevant KD range of reversible interactions. We show that thermodynamic and kinetic properties of antibody-antigen interactions correlate with immunological functions. The formation of stable, long-lived immune complexes corresponds to a decrease of entropy and is a prerequisite for the generation of higher-order complexes. As the energy of formation of complexes increases, we observe a gradual shift from silent clearance to inflammatory reactions. These rules can also be applied to complement activation-related immune effector processes, linking the physicochemical principles of innate and adaptive humoral responses. The receptors mediating effector functions span a wide range of affinities, allowing the continuous sampling of antibody-bound antigen over the complete range of concentrations. The generation of multivalent, multicomponent complexes triggers effector functions by crosslinking these receptors on effector cells with increasing enzymatic degradation potential. Thus, antibody homeostasis is a thermodynamic system with complex network properties, nested into the host organism by proper immunoregulatory and effector pathways. Maintenance of global antibody equilibrium is achieved by innate qualitative signals modulating a
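    The equilibrium zone defined above can be checked with a short worked example: for Ab + Ag <-> AbAg with dissociation constant KD, mass action plus conservation of the total concentrations gives a quadratic for the complex concentration:

```python
# Worked numerical example of the global equilibrium zone. Mass action
# ([Ab][Ag]/[AbAg] = KD) with conservation of totals yields a quadratic.
import numpy as np

def complex_conc(ab_total, ag_total, kd):
    """Equilibrium [AbAg] from total concentrations (all in mol/L)."""
    s = ab_total + ag_total + kd
    return (s - np.sqrt(s**2 - 4.0 * ab_total * ag_total)) / 2.0

kd = 1e-8
ab_tot = ag_tot = 2 * kd          # free + bound = KD + KD at the zone
c = complex_conc(ab_tot, ag_tot, kd)
print(c, ab_tot - c)              # both equal KD = 1e-8: [Ab]=[Ag]=[AbAg]=KD
```

Choosing totals of 2 KD makes the exact solution land on the equivalence point, a convenient self-check of the formula.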

  6. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    Science.gov (United States)

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell-specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.

  7. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    Directory of Open Access Journals (Sweden)

    Alexander Mitsos

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell-specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.

  8. Quantitative model of super-Arrhenian behavior in glass forming materials

    Science.gov (United States)

    Caruthers, J. M.; Medvedev, G. A.

    2018-05-01

    The key feature of glass forming liquids is the super-Arrhenian temperature dependence of the mobility, where the mobility can increase by ten orders of magnitude or more as the temperature is decreased if crystallization does not intervene. A fundamental description of the super-Arrhenian behavior has been developed; specifically, the logarithm of the relaxation time is a linear function of 1/Ux, where Ux is the independently determined excess molar internal energy and the slope B is a material constant. This one-parameter mobility model quantitatively describes data for 21 glass forming materials, which are all the materials for which there are sufficient experimental data for analysis. The effect of pressure on the logarithm of the mobility is also described using the same Ux(T, p) function determined from the difference between the liquid and crystalline internal energies. It is also shown that B is well correlated with the heat of fusion. The prediction of the B/Ux model is compared to the Adam-Gibbs 1/(T Sx) model, where the B/Ux model is significantly better in unifying the full complement of mobility data. The implications of the B/Ux model for the development of a fundamental description of glass are discussed.

  9. Multivariate characterisation and quantitative structure-property relationship modelling of nitroaromatic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Jönsson, S. [Man-Technology-Environment Research Centre, Department of Natural Sciences, Örebro University, 701 82 Örebro (Sweden)], E-mail: sofie.jonsson@nat.oru.se; Eriksson, L.A. [Department of Natural Sciences and Örebro Life Science Center, Örebro University, 701 82 Örebro (Sweden)]; Bavel, B. van [Man-Technology-Environment Research Centre, Department of Natural Sciences, Örebro University, 701 82 Örebro (Sweden)]

    2008-07-28

    A multivariate model to characterise nitroaromatics and related compounds based on molecular descriptors was calculated. Descriptors were collected from the literature and through empirical, semi-empirical and density functional theory-based calculations. Principal components were used to describe the distribution of the compounds in a multidimensional space. Four components described 76% of the variation in the dataset. PC1 separated the compounds by molecular weight, PC2 separated the different isomers, PC3 arranged the compounds according to different functional groups such as nitrobenzoic acids, nitrobenzenes, nitrotoluenes and nitroesters, and PC4 differentiated the compounds containing chlorine from other compounds. Quantitative structure-property relationship models were calculated using partial least squares (PLS) projection to latent structures to predict gas chromatographic (GC) retention times and the distribution between the water phase and air using solid-phase microextraction (SPME). GC retention time was found to depend on the presence of polar amine groups, electronic descriptors including the highest occupied molecular orbital, dipole moments and the melting point. The model of GC retention time was good, but its precision was not high enough for practical use. An important environmental parameter, the distribution between headspace (air) and the water phase, was measured using SPME. This parameter was mainly dependent on Henry's law constant, vapour pressure, log P, content of hydroxyl groups and the atmospheric OH rate constant. The predictive capacity of the model improved substantially when the model was recalculated using these five descriptors only.

  10. Combining quantitative trait loci analysis with physiological models to predict genotype-specific transpiration rates.

    Science.gov (United States)

    Reuning, Gretchen A; Bauerle, William L; Mullen, Jack L; McKay, John K

    2015-04-01

    Transpiration is controlled by evaporative demand and stomatal conductance (gs), and there can be substantial genetic variation in gs. A key parameter in empirical models of transpiration is minimum stomatal conductance (g0), a trait that can be measured and has a large effect on gs and transpiration. In Arabidopsis thaliana, g0 exhibits both environmental and genetic variation, and quantitative trait loci (QTL) have been mapped. We used this information to create a genetically parameterized empirical model to predict the transpiration of genotypes. For the parental lines, this worked well. However, in a recombinant inbred population, the predictions proved less accurate. When based only upon their genotype at a single g0 QTL, genotypes were less distinct than our model predicted. Follow-up experiments indicated that both genotype-by-environment interaction and polygenic inheritance complicate the incorporation of genetic effects into physiological models. The use of ecophysiological or 'crop' models for predicting the transpiration of novel genetic lines will benefit from incorporating further knowledge of the genetic control and degree of independence of the core traits/parameters underlying gs variation. © 2014 John Wiley & Sons Ltd.

  11. Excited-state properties from ground-state DFT descriptors: A QSPR approach for dyes.

    Science.gov (United States)

    Fayet, Guillaume; Jacquemin, Denis; Wathelet, Valérie; Perpète, Eric A; Rotureau, Patricia; Adamo, Carlo

    2010-02-26

    This work presents a quantitative structure-property relationship (QSPR)-based approach allowing an accurate prediction of the excited-state properties of organic dyes (anthraquinones and azobenzenes) from ground-state molecular descriptors, obtained within the (conceptual) density functional theory (DFT) framework. The ab initio computation of the descriptors was achieved at several levels of theory, so that the influence of the basis set size as well as of the modeling of environmental effects could be statistically quantified. It turns out that, for the entire data set, a statistically robust four-variable multiple linear regression based on PCM-PBE0/6-31G calculations delivers an adjusted R² of 0.93, associated with predictive errors small enough for rapid and efficient dye design. All the selected descriptors are independent of the dye's family, an advantage over previously designed QSPR schemes. On top of that, the obtained accuracy is comparable to that of today's reference methods while exceeding that of hardness-based fittings. QSPR relationships specific to both families of dyes have also been built. This work paves the way towards reliable and computationally affordable color design for organic dyes. Copyright 2009 Elsevier Inc. All rights reserved.
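    A minimal sketch of such a four-descriptor multiple linear regression, with hypothetical stand-ins for the ground-state DFT descriptors (e.g. hardness, electrophilicity), can be written with statsmodels; nothing below reproduces the paper's actual regression:

```python
# Four-descriptor MLR sketch: hypothetical DFT descriptors -> lambda_max (nm).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 60
X = rng.normal(size=(n, 4))                       # 4 conceptual-DFT descriptors
lam_max = 450 + X @ np.array([20., -15., 8., 5.]) + rng.normal(scale=5., size=n)

model = sm.OLS(lam_max, sm.add_constant(X)).fit()
print(model.rsquared_adj)                         # adjusted R^2, as reported
```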

  12. Quantitative utilization of prior biological knowledge in the Bayesian network modeling of gene expression data

    Directory of Open Access Journals (Sweden)

    Gao Shouguo

    2011-08-01

    Background: Bayesian Networks (BN) are a powerful approach to reconstructing genetic regulatory networks from gene expression data. However, expression data by itself suffers from high noise and lack of power. Incorporating prior biological knowledge can improve the performance. As each type of prior knowledge on its own may be incomplete or limited by quality issues, integrating multiple sources of prior knowledge to utilize their consensus is desirable. Results: We introduce a new method to incorporate the quantitative information from multiple sources of prior knowledge. It first uses a Naïve Bayesian classifier to assess the likelihood of functional linkage between gene pairs based on prior knowledge; in this study we included co-citation in PubMed and semantic similarity in Gene Ontology annotation. A candidate network edge reservoir is then created in which the copy number of each edge is proportional to the estimated likelihood of linkage between the two corresponding genes. In network simulation the Markov Chain Monte Carlo sampling algorithm is adopted, which samples from this reservoir at each iteration to generate new candidate networks. We evaluated the new algorithm using both simulated and real gene expression data, including data from a yeast cell cycle and a mouse pancreas development/growth study. Incorporating prior knowledge led to a ~2-fold increase in the number of known transcription regulations recovered, without significant change in the false positive rate. In contrast, without the prior knowledge BN modeling is not always better than random selection, demonstrating the necessity in network modeling of supplementing the gene expression data with additional information. Conclusions: Our new development provides a statistical means to utilize the quantitative information in prior biological knowledge in the BN modeling of gene expression data, which significantly improves the performance.

  13. A Quantitative, Time-Dependent Model of Oxygen Isotopes in the Solar Nebula: Step one

    Science.gov (United States)

    Nuth, J. A.; Paquette, J. A.; Farquhar, A.; Johnson, N. M.

    2011-01-01

    The remarkable discovery that oxygen isotopes in primitive meteorites were fractionated along a line of slope 1, rather than along the typical slope-0.52 terrestrial fractionation line, occurred almost 40 years ago. However, a satisfactory, quantitative explanation for this observation has yet to be found, though many different explanations have been proposed. The first of these explanations proposed that the observed line represented the final product produced by mixing molecular cloud dust with a nucleosynthetic component, rich in O-16, possibly resulting from a nearby supernova explosion. Donald Clayton suggested that Galactic Chemical Evolution would gradually change the oxygen isotopic composition of the interstellar grain population by steadily producing O-16 in supernovae, then producing the heavier isotopes as secondary products in lower mass stars. Thiemens and collaborators proposed a chemical mechanism that relied on the availability of additional active rotational and vibrational states in otherwise-symmetric molecules, such as CO2, O3 or SiO2, containing two different oxygen isotopes, and a second, photochemical process that suggested that differential photochemical dissociation processes could fractionate oxygen. This second line of research has been pursued by several groups, though none of the current models is quantitative.

  14. Quantitative modeling of the reaction/diffusion kinetics of two-chemistry photopolymers

    Science.gov (United States)

    Kowalski, Benjamin Andrew

    Optically driven diffusion in photopolymers is an appealing material platform for a broad range of applications, in which the recorded refractive index patterns serve either as images (e.g. data storage, display holography) or as optical elements (e.g. custom GRIN components, integrated optical devices). A quantitative understanding of the reaction/diffusion kinetics is difficult to obtain directly, but is nevertheless necessary in order to fully exploit the wide array of design freedoms in these materials. A general strategy for characterizing these kinetics is proposed, in which key processes are decoupled and independently measured. This strategy enables prediction of a material's potential refractive index change, solely on the basis of its chemical components. The degree to which a material does not reach this potential reveals the fraction of monomer that has participated in unwanted reactions, reducing spatial resolution and dynamic range. This approach is demonstrated for a model material similar to commercial media, achieving quantitative predictions of index response over three orders of exposure dose (~1 to ~10³ mJ cm⁻²) and three orders of feature size (0.35 to 500 microns). The resulting insights enable guided, rational design of new material formulations with demonstrated performance improvement.

  15. DFT study of zigzag (n, 0) single-walled carbon nanotubes: C-13 NMR chemical shifts

    Czech Academy of Sciences Publication Activity Database

    Kupka, T.; Stachów, M.; Stobinski, L.; Kaminský, Jakub

    2016-01-01

    Vol. 67, Jun (2016), pp. 14-19 ISSN 1093-3263 R&D Projects: GA ČR(CZ) GA14-03564S Institutional support: RVO:61388963 Keywords: zigzag SWCNT * cyclacenes * theoretical modeling * DFT * NMR Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 1.754, year: 2016

  16. DFT calculations on N2O decomposition by binuclear Fe complexes in Fe/ZSM-5

    NARCIS (Netherlands)

    Yakovlev, A.L.; Zhidomirov, G.M.; Santen, van R.A.

    2001-01-01

    N2O decomposition catalyzed by oxidized Fe clusters localized in the micropores of Fe/ZSM-5 has been studied using the DFT approach and a binuclear cluster model of the active site. Three different reaction routes were found, depending on temperature and water pressure. The results show that below

  17. How predictive quantitative modelling of tissue organisation can inform liver disease pathogenesis.

    Science.gov (United States)

    Drasdo, Dirk; Hoehme, Stefan; Hengstler, Jan G

    2014-10-01

    Of the more than 100 liver diseases described, many of those with high incidence rates manifest themselves through histopathological changes, such as hepatitis, alcoholic liver disease, fatty liver disease, fibrosis and, in its later stages, cirrhosis, hepatocellular carcinoma, primary biliary cirrhosis and other disorders. Studies of disease pathogeneses are largely based on integrating -omics data pooled from cells at different locations with spatial information from stained liver structures in animal models. Even though this has led to significant insights, the complexity of interactions as well as the involvement of processes at many different time and length scales constrains the possibility to condense disease processes in illustrations, schemes and tables. The combination of modern imaging modalities with image processing and analysis, and mathematical models, opens up a promising new approach towards a quantitative understanding of pathologies and of disease processes. This strategy is discussed for two examples: ammonia metabolism after drug-induced acute liver damage, and the recovery of liver mass as well as architecture during the subsequent regeneration process. This interdisciplinary approach permits integration of biological mechanisms and models of processes contributing to disease progression at various scales into mathematical models. These can be used to perform in silico simulations to help unravel the relation between architecture and function, as illustrated below for liver regeneration, and to bridge from the in vitro situation and animal models to humans. In the near future novel mechanisms will usually not be directly elucidated by modelling. However, models will falsify hypotheses and guide towards the most informative experimental design. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.

  18. A quantitative model for estimating mean annual soil loss in cultivated land using 137Cs measurements

    International Nuclear Information System (INIS)

    Yang Hao; Zhao Qiguo; Du Mingyuan; Minami, Katsuyuki; Hatta, Tamao

    2000-01-01

    The radioisotope 137Cs has been widely used to determine rates of cultivated soil loss. Many calibration relationships (including both empirical relationships and theoretical models) have been employed to estimate erosion rates from the amount of 137Cs lost from the cultivated soil profile. However, there are important limitations which restrict the reliability of these models, which consider only a uniform distribution of 137Cs in the plough layer and its depth. As a result, erosion rates may be overestimated or underestimated. This article presents a quantitative model for the relation between the amount of 137Cs lost from the cultivated soil profile and the rate of soil erosion. Following a mass balance model, the construction of this model considered the following parameters: the remaining fraction of the surface enrichment layer (FR), the thickness of the surface enrichment layer (Hs), the depth of the plough layer (Hp), the input fraction of the total 137Cs fallout deposition during a given year t (Ft), the radioactive decay of 137Cs (k), and the sampling year (t). The simulation results showed that the erosion rates estimated using this model were very sensitive to changes in the values of the parameters FR, Hs, and Hp. We also observed that the relationship between the rate of soil loss and 137Cs depletion is neither linear nor logarithmic, and is very complex. Although the model is an improvement over existing approaches to deriving calibration relationships for cultivated soil, it requires empirical information on local soil properties and the behavior of 137Cs in the soil profile. There is clearly still a need for more precise information on the latter aspect and, in particular, on the retention of 137Cs fallout in the top few millimeters of the soil profile and on the enrichment and depletion effects associated with soil redistribution (i.e. for determining accurate values of FR and Hs). (author)
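    A much-simplified annual mass-balance sketch in the spirit of this model (the published equations and parameter values are not reproduced here) is given below; it tracks the plough-layer 137Cs inventory under radioactive decay and a constant annual erosion depth:

```python
# Simplified annual mass balance: each year the plough layer loses a depth
# h_erosion of soil, carrying a proportional share of the 137Cs inventory,
# while fallout input and radioactive decay act. The full model additionally
# tracks a thin surface enrichment layer (parameters F_R and H_s above),
# which this sketch omits.
import numpy as np

LAMBDA = np.log(2) / 30.17      # 137Cs decay constant, 1/yr
H_P = 0.20                      # plough layer depth, m

def simulate(inventory0, fallout, h_erosion, years):
    inv = inventory0
    for f_t in fallout[:years]:
        inv = (inv + f_t) * np.exp(-LAMBDA) * (1.0 - h_erosion / H_P)
    return inv

ref = simulate(1000.0, [0.0] * 40, 0.0, 40)       # undisturbed reference, Bq/m^2
eroded = simulate(1000.0, [0.0] * 40, 0.002, 40)  # 2 mm/yr soil loss
print("137Cs depletion relative to reference:", 1 - eroded / ref)
```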

  19. Diffusion-weighted MRI and quantitative biophysical modeling of hippocampal neurite loss in chronic stress.

    Directory of Open Access Journals (Sweden)

    Peter Vestergaard-Poulsen

    Chronic stress has detrimental effects on physiology, learning and memory and is involved in the development of anxiety and depressive disorders. Besides changes in synaptic formation and neurogenesis, chronic stress also induces dendritic remodeling in the hippocampus, amygdala and the prefrontal cortex. Investigations of dendritic remodeling during development and treatment of stress are currently limited by the invasive nature of histological and stereological methods. Here we show that high field diffusion-weighted MRI combined with quantitative biophysical modeling of the hippocampal dendritic loss in 21-day restraint-stressed rats correlates highly with former histological findings. Our study strongly indicates that diffusion-weighted MRI is sensitive to regional dendritic loss and thus a promising candidate for non-invasive studies of dendritic plasticity in chronic stress and stress-related disorders.

  20. Modelling Framework and the Quantitative Analysis of Distributed Energy Resources in Future Distribution Networks

    DEFF Research Database (Denmark)

    Han, Xue; Sandels, Claes; Zhu, Kun

    2013-01-01

    There has been a large body of statements claiming that the large-scale deployment of Distributed Energy Resources (DERs) could eventually reshape the future distribution grid operation in numerous ways. Thus, it is necessary to introduce a framework to measure to what extent the power system …, comprising distributed generation, active demand and electric vehicles. Subsequently, quantitative analysis was made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation results show that in general the DER deployment brings in the possibility to reduce the power losses and voltage drops by compensating power from the local generation and optimizing the local load profiles.

  1. The JBEI quantitative metabolic modeling library (jQMM): a python library for modeling microbial metabolism

    DEFF Research Database (Denmark)

    Birkel, Garrett W.; Ghosh, Amit; Kumar, Vinay S.

    2017-01-01

    … analysis, new methods for the effective use of the ever more readily available and abundant -omics data (i.e. transcriptomics, proteomics and metabolomics) are urgently needed. Results: The jQMM library presented here provides an open-source, Python-based framework for modeling internal metabolic fluxes …; it introduces the capability to use C-13 labeling experimental data to constrain comprehensive genome-scale models through a technique called two-scale C-13 Metabolic Flux Analysis (2S-C-13 MFA). In addition, the library includes a demonstration of a method that uses proteomics data to produce actionable insights to increase biofuel production. Finally, the use of the jQMM library is illustrated through the addition of several Jupyter notebook demonstration files that enhance reproducibility and provide the capability to be adapted to the user's specific needs. Conclusions: jQMM will facilitate the design …

  2. Methods for quantitative measurement of tooth wear using the area and volume of virtual model cusps.

    Science.gov (United States)

    Kim, Soo-Hyun; Park, Young-Seok; Kim, Min-Kyoung; Kim, Sulhee; Lee, Seung-Pyo

    2018-04-01

    Clinicians must examine tooth wear to make a proper diagnosis. However, qualitative methods of measuring tooth wear have many disadvantages. Therefore, this study aimed to develop and evaluate quantitative parameters using the cusp area and volume of virtual dental models. The subjects of this study were the same virtual models that were used in our former study. The same age group classification and new tooth wear index (NTWI) scoring system were also reused. A virtual occlusal plane was generated with the highest cusp points and lowered vertically from 0.2 to 0.8 mm to create offset planes. The area and volume of each cusp was then measured and added together. In addition to the former analysis, the differential features of each cusp were analyzed. The scores of the new parameters differentiated the age and NTWI groups better than those analyzed in the former study. The Spearman ρ coefficients between the total area and the area of each cusp also showed higher scores at the levels of 0.6 mm (0.6A) and 0.8A. The mesiolingual cusp (MLC) showed a statistically significant difference (P < 0.01) from the other cusps in the paired t-test. Additionally, the MLC exhibited the highest percentage of change at 0.6A in some age and NTWI groups. Regarding the age groups, the MLC showed the highest score in groups 1 and 2. For the NTWI groups, the MLC was not significantly different in groups 3 and 4. These results support the proposal that the lingual cusp exhibits rapid wear because it serves as a functional cusp. Although this study has limitations due to its cross-sectional nature, it suggests better quantitative parameters and analytical tools for the characteristics of cusp wear.

  3. Quantitative assessment of biological impact using transcriptomic data and mechanistic network models

    International Nuclear Information System (INIS)

    Thomson, Ty M.; Sewer, Alain; Martin, Florian; Belcastro, Vincenzo; Frushour, Brian P.; Gebel, Stephan; Park, Jennifer; Schlage, Walter K.; Talikka, Marja; Vasilyev, Dmitry M.; Westra, Jurjen W.; Hoeng, Julia; Peitsch, Manuel C.

    2013-01-01

    Exposure to biologically active substances such as therapeutic drugs or environmental toxicants can impact biological systems at various levels, affecting individual molecules, signaling pathways, and overall cellular processes. The ability to derive mechanistic insights from the resulting system responses requires the integration of experimental measures with a priori knowledge about the system and the interacting molecules therein. We developed a novel systems biology-based methodology that leverages mechanistic network models and transcriptomic data to quantitatively assess the biological impact of exposures to active substances. Hierarchically organized network models were first constructed to provide a coherent framework for investigating the impact of exposures at the molecular, pathway and process levels. We then validated our methodology using novel and previously published experiments. For both in vitro systems with simple exposure and in vivo systems with complex exposures, our methodology was able to recapitulate known biological responses matching expected or measured phenotypes. In addition, the quantitative results were in agreement with experimental endpoint data for many of the mechanistic effects that were assessed, providing further objective confirmation of the approach. We conclude that our methodology evaluates the biological impact of exposures in an objective, systematic, and quantifiable manner, enabling the computation of a systems-wide and pan-mechanistic biological impact measure for a given active substance or mixture. Our results suggest that various fields of human disease research, from drug development to consumer product testing and environmental impact analysis, could benefit from using this methodology. - Highlights: • The impact of biologically active substances is quantified at multiple levels. • The systems-level impact integrates the perturbations of individual networks. • The networks capture the relationships between

  4. Validation of Quantitative Structure-Activity Relationship (QSAR) Model for Photosensitizer Activity Prediction

    Directory of Open Access Journals (Sweden)

    Sharifuddin M. Zain

    2011-11-01

    Photodynamic therapy is a relatively new treatment method for cancer which utilizes a combination of oxygen, a photosensitizer and light to generate reactive singlet oxygen that eradicates tumors via direct cell-killing, vasculature damage and engagement of the immune system. Most photosensitizers that are in clinical and pre-clinical assessment, or those already approved for clinical use, are mainly based on cyclic tetrapyrroles. In an attempt to discover new effective photosensitizers, we report the use of the quantitative structure-activity relationship (QSAR) method to develop a model that could correlate the structural features of cyclic tetrapyrrole-based compounds with their photodynamic therapy (PDT) activity. In this study, a set of 36 porphyrin derivatives was used in the model development, with 24 of these compounds in the training set and the remaining 12 compounds in the test set. The development of the QSAR model involved the use of the multiple linear regression analysis (MLRA) method. Based on this method, an r2 value, r2 (CV) value and r2 prediction value of 0.87, 0.71 and 0.70 were obtained. The QSAR model was also employed to predict the experimental compounds in an external test set, comprising 20 porphyrin-based compounds with experimental IC50 values ranging from 0.39 µM to 7.04 µM. The model showed good correlative and predictive ability, with a predictive correlation coefficient (r2 prediction) for the external test set of 0.52. The developed QSAR model was used to discover some compounds in this external test set as new lead photosensitizers.

  5. GMM - a general microstructural model for qualitative and quantitative studies of smectite clays

    International Nuclear Information System (INIS)

    Pusch, R.; Karnland, O.; Hoekmark, H.

    1990-12-01

    A few years ago an attempt was made to accommodate a number of basic ideas on the fabric and interparticle forces that are assumed to be valid in montmorillonite clay in an integrated microstructural model, and this resulted in an SKB report on 'Outlines of models of water and gas flow through smectite clay buffers'. This model gave reasonable agreement between predicted hydraulic conductivity values and actually recorded ones for room temperature and porewater that is poor in electrolytes. The present report describes an improved model that also accounts for effects generated by salt porewater and heating, and that provides a basis both for quantitative determination of transport capacities in a more general way and for analysis and prediction of rheological behaviour in bulk. Investigators in this scientific field understood very early that developing models of clay particle interaction requires a full understanding of the physical state of porewater. In particular, a deep insight into the nature of the interlamellar water, the hydration mechanisms leading to an equilibrium state between the two types of water, and the force fields in matured smectite clay requires highly qualified multi-discipline research, and the senior author has attempted to initiate and coordinate such work over the last 30 years. Despite this effort it has not been possible to reach a unanimous understanding of these matters, but a number of major features have become clearer through the work that we have been able to carry out in the current SKB research programme. Thus, NMR studies and precision measurements of the density of porewater, as well as comprehensive electron microscopy and rheological testing in combination with the application of stochastic mechanics, have led to the hypothetical microstructural model - the GMM - presented in this report. (au)

  6. Disentangling the Complexity of HGF Signaling by Combining Qualitative and Quantitative Modeling.

    Directory of Open Access Journals (Sweden)

    Lorenza A D'Alessandro

    2015-04-01

    Signaling pathways are characterized by crosstalk, feedback and feedforward mechanisms giving rise to highly complex and cell-context specific signaling networks. Dissecting the underlying relations is crucial to predict the impact of targeted perturbations. However, a major challenge in identifying cell-context specific signaling networks is the enormous number of potentially possible interactions. Here, we report a novel hybrid mathematical modeling strategy to systematically unravel hepatocyte growth factor (HGF)-stimulated phosphoinositide-3-kinase (PI3K) and mitogen-activated protein kinase (MAPK) signaling, which critically contribute to liver regeneration. By combining time-resolved quantitative experimental data generated in primary mouse hepatocytes with interaction graph and ordinary differential equation modeling, we identify and experimentally validate a network structure that represents the experimental data best and indicates specific crosstalk mechanisms. Whereas the identified network is robust against single perturbations, combinatorial inhibition strategies are predicted that result in strong reduction of Akt and ERK activation. Thus, by capitalizing on the advantages of the two modeling approaches, we reduce the high combinatorial complexity and identify cell-context specific signaling networks.

  7. Observing Clonal Dynamics across Spatiotemporal Axes: A Prelude to Quantitative Fitness Models for Cancer.

    Science.gov (United States)

    McPherson, Andrew W; Chan, Fong Chun; Shah, Sohrab P

    2018-02-01

    The ability to accurately model evolutionary dynamics in cancer would allow for prediction of progression and response to therapy. As a prelude to quantitative understanding of evolutionary dynamics, researchers must gather observations of in vivo tumor evolution. High-throughput genome sequencing now provides the means to profile the mutational content of evolving tumor clones from patient biopsies. Together with the development of models of tumor evolution, reconstructing evolutionary histories of individual tumors generates hypotheses about the dynamics of evolution that produced the observed clones. In this review, we provide a brief overview of the concepts involved in predicting evolutionary histories, and provide a workflow based on bulk and targeted-genome sequencing. We then describe the application of this workflow to time series data obtained for transformed and progressed follicular lymphomas (FL), and contrast the observed evolutionary dynamics between these two subtypes. We next describe results from a spatial sampling study of high-grade serous (HGS) ovarian cancer, propose mechanisms of disease spread based on the observed clonal mixtures, and provide examples of diversification through subclonal acquisition of driver mutations and convergent evolution. Finally, we state implications of the techniques discussed in this review as a necessary but insufficient step on the path to predictive modelling of disease dynamics. Copyright © 2018 Cold Spring Harbor Laboratory Press; all rights reserved.

  8. Current Challenges in the First Principle Quantitative Modelling of the Lower Hybrid Current Drive in Tokamaks

    Science.gov (United States)

    Peysson, Y.; Bonoli, P. T.; Chen, J.; Garofalo, A.; Hillairet, J.; Li, M.; Qian, J.; Shiraiwa, S.; Decker, J.; Ding, B. J.; Ekedahl, A.; Goniche, M.; Zhai, X.

    2017-10-01

    The Lower Hybrid (LH) wave is widely used in existing tokamaks for tailoring the current density profile or extending pulse duration to steady-state regimes. Its high efficiency makes it particularly attractive for a fusion reactor, and it is being considered for this purpose in the ITER tokamak. Nevertheless, while the basics of the LH wave in tokamak plasmas are well known, quantitative modeling of experimental observations based on first principles remains a highly challenging exercise, despite the considerable numerical efforts achieved so far. In this context, a rigorous methodology must be applied in the simulations to identify the minimum number of physical mechanisms that must be considered to reproduce experimental shot-to-shot observations and also scalings (density, power spectrum). Based on recent simulations carried out for the EAST, Alcator C-Mod and Tore Supra tokamaks, the state of the art in LH modeling is reviewed. The capability of fast electron bremsstrahlung, internal inductance li and LH driven current at zero loop voltage to jointly constrain LH simulations is discussed, as well as the need for further improvements (diagnostics, codes, LH model) for robust interpretative and predictive simulations.

  9. Quantitative structure-activity relationship modeling of the toxicity of organothiophosphate pesticides to Daphnia magna and Cyprinus carpio

    NARCIS (Netherlands)

    Zvinavashe, E.; Du, T.; Griff, T.; Berg, van den J.H.J.; Soffers, A.E.M.F.; Vervoort, J.J.M.; Murk, A.J.; Rietjens, I.

    2009-01-01

    Within the REACH regulatory framework in the EU, quantitative structure-activity relationships (QSAR) models are expected to help reduce the number of animals used for experimental testing. The objective of this study was to develop QSAR models to describe the acute toxicity of organothiophosphate

  10. An Overview of the Adaptive Robust DFT

    Directory of Open Access Journals (Sweden)

    Djurović Igor

    2010-01-01

    This paper overviews the basic principles and applications of the robust DFT (RDFT) approach, which is used for robust processing of frequency-modulated (FM) signals embedded in non-Gaussian heavy-tailed noise. In particular, we concentrate on the spectral analysis and filtering of signals corrupted by impulsive distortions using adaptive and nonadaptive robust estimators. Several adaptive estimators of the location parameter are considered, and it is shown that their application is preferable with respect to non-adaptive counterparts. This fact is demonstrated by an efficiency comparison of adaptive and nonadaptive RDFT methods for different noise environments.
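    A minimal sketch of one simple, nonadaptive member of the RDFT family discussed above replaces the sample mean implicit in the DFT sum with a coordinate-wise median of the demodulated samples; this is an illustration of the idea, not the paper's adaptive estimators:

```python
# Median-based robust DFT: for each bin, demodulate the signal and take the
# median (rather than the mean) of the real and imaginary parts, which
# suppresses impulsive heavy-tailed noise.
import numpy as np

def robust_dft(x):
    N = len(x)
    n = np.arange(N)
    X = np.empty(N, dtype=complex)
    for k in range(N):
        d = x * np.exp(-2j * np.pi * k * n / N)   # demodulated samples
        X[k] = N * (np.median(d.real) + 1j * np.median(d.imag))
    return X

rng = np.random.default_rng(5)
t = np.arange(256)
sig = np.exp(2j * np.pi * 0.1 * t)                # FM-free test tone
noise = rng.standard_cauchy(256) + 1j * rng.standard_cauchy(256)  # impulsive
print(np.argmax(np.abs(robust_dft(sig + 0.5 * noise))))  # near bin 26 (0.1*256)
```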

  11. z-transform DFT filters and FFT's

    DEFF Research Database (Denmark)

    Bruun, G.

    1978-01-01

    The paper shows how discrete Fourier transformation can be implemented as a filter bank in a way which reduces the number of filter coefficients. A particular implementation of such a filter bank is directly related to the normal complex FFT algorithm. The principle developed further leads to types...... of DFT filter banks which utilize a minimum of complex coefficients. These implementations lead to new forms of FFT's, among which is acos/sinFFT for a real signal which only employs real coefficients. The new FFT algorithms use only half as many real multiplications as does the classical FFT....

  12. A quantitative evaluation of multiple biokinetic models using an assembled water phantom: A feasibility study.

    Directory of Open Access Journals (Sweden)

    Da-Ming Yeh

    This study examined the feasibility of quantitatively evaluating multiple biokinetic models and established the validity of the different compartment models using an assembled water phantom. Most commercialized phantoms are made to survey the imaging system, since this is essential to increase diagnostic accuracy for quality assurance. In contrast, few customized phantoms are specifically made to represent multi-compartment biokinetic models, because the complicated calculations needed to solve the biokinetic models and the time-consuming verification of the obtained solutions have greatly impeded progress over the past decade. Nevertheless, in this work, five biokinetic models were separately defined by five groups of simultaneous differential equations to obtain the time-dependent radioactive concentration changes inside the water phantom. The water phantom was assembled from seven acrylic boxes of four different sizes, and the boxes were linked by varying combinations of hoses to represent the multiple biokinetic models from the biomedical perspective. The boxes connected by hoses were then regarded as a closed water loop with only one infusion and one drain. 129.1±24.2 MBq of Tc-99m labeled methylene diphosphonate (MDP) solution was thoroughly infused into the water boxes before gamma scanning; then the water was replaced with de-ionized water to simulate the biological removal rate among the boxes. The water was driven by an automatic infusion pump at 6.7 c.c./min, while the biological half-lives of the four different-sized boxes (64, 144, 252, and 612 c.c.) were 4.8, 10.7, 18.8, and 45.5 min, respectively. The time-dependent concentrations for the boxes in the five models were estimated either by a self-developed program run in MATLAB or by scanning via a gamma camera facility. Agreement and disagreement between the practical scanning and the theoretical prediction for the five models are thoroughly discussed.
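    A hedged two-compartment sketch of such a biokinetic model, borrowing the biological half-lives quoted above but with an otherwise hypothetical structure, can be integrated with SciPy:

```python
# Two-compartment sketch: activity washes from box 1 into box 2 and then out,
# with first-order rates k = ln(2)/T_bio from the half-lives quoted above.
# Physical decay of Tc-99m (6 h half-life) is neglected over this hour.
import numpy as np
from scipy.integrate import solve_ivp

k1 = np.log(2) / 4.8     # 1/min, 64 c.c. box
k2 = np.log(2) / 10.7    # 1/min, 144 c.c. box

def rhs(t, a):
    a1, a2 = a
    return [-k1 * a1, k1 * a1 - k2 * a2]   # transfer box 1 -> box 2 -> drain

sol = solve_ivp(rhs, (0.0, 60.0), [129.1, 0.0], dense_output=True)
print(sol.sol(np.linspace(0.0, 60.0, 7)))  # MBq in each box over one hour
```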

  13. Analysis of Water Conflicts across Natural and Societal Boundaries: Integration of Quantitative Modeling and Qualitative Reasoning

    Science.gov (United States)

    Gao, Y.; Balaram, P.; Islam, S.

    2009-12-01

    … the knowledge generated from these studies cannot be easily generalized or transferred to other basins. Here, we present an approach that integrates quantitative and qualitative methods to study water issues and capture the contextual knowledge of water management, by combining the NSSs framework with an area of artificial intelligence called qualitative reasoning. Using the Apalachicola-Chattahoochee-Flint (ACF) River Basin dispute as an example, we demonstrate how quantitative modeling and qualitative reasoning can be integrated to examine the impact of over-abstraction of water from the river on the ecosystem and the role of governance in shaping the evolution of the ACF water dispute.

  14. Establishment of quantitative severity evaluation model for spinal cord injury by metabolomic fingerprinting.

    Directory of Open Access Journals (Sweden)

    Jin Peng

    Full Text Available Spinal cord injury (SCI) is a devastating event with limited hope for recovery and represents an enormous public health issue. It is crucial to understand the disturbances in the metabolic network after SCI to identify injury mechanisms and opportunities for treatment intervention. Through plasma 1H-nuclear magnetic resonance (NMR) screening, we identified 15 metabolites that made up an "Eigen-metabolome" capable of distinguishing rats with severe SCI from healthy control rats. Forty enzymes regulated these 15 metabolites in the metabolic network. We also found that 16 metabolites regulated by 130 enzymes in the metabolic network impacted neurobehavioral recovery. Using the Eigen-metabolome, we established a linear discrimination model to cluster rats with severe SCI, rats with mild SCI, and control rats into separate groups and to identify the interactive relationships between metabolic biomarkers in the global metabolic network. We identified 10 clusters in the global metabolic network and defined them as distinct metabolic disturbance domains of SCI; these included metabolic pathways such as retinal, glycerophospholipid, and arachidonic acid metabolism, the NAD-NADPH conversion process, tyrosine metabolism, and cadaverine and putrescine metabolism. In summary, we present a novel interdisciplinary method that integrates 1H-NMR metabolomics and global metabolic network analysis to visualize complex metabolic disturbances after severe SCI. Our findings may also provide a new quantitative injury-severity evaluation model for clinical use.
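
    A linear discrimination model of the kind described can be prototyped in a few lines; this sketch uses synthetic stand-in data rather than the study's 15-metabolite Eigen-metabolome:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_metab = 15                          # size of the "Eigen-metabolome"
# Synthetic stand-in: 20 rats per group with group-dependent shifts
# in mean metabolite levels (control, mild SCI, severe SCI).
X = np.vstack([rng.normal(loc=shift, size=(20, n_metab))
               for shift in (0.0, 0.5, 1.0)])
y = np.repeat(["control", "mild", "severe"], 20)

lda = LinearDiscriminantAnalysis(n_components=2)
scores = lda.fit_transform(X, y)      # project onto 2 discriminant axes
print(f"in-sample accuracy: {lda.score(X, y):.2f}")
```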

  15. Currency risk and prices of oil and petroleum products: a simulation with a quantitative model

    International Nuclear Information System (INIS)

    Aniasi, L.; Ottavi, D.; Rubino, E.; Saracino, A.

    1992-01-01

    This paper analyzes the relationship between the exchange rates of the US Dollar against the four major European currencies and the prices of oil and its main products in those countries. In fact, the sensitivity of prices to exchange-rate movements is of fundamental importance for the refining and distribution industries of importing countries. The analysis shows that neither in free-market conditions, such as those in Great Britain, France and Germany, nor in regulated markets, i.e. the Italian one, do the variations of petroleum product prices fully absorb the variations of the exchange rates. In order to assess the above relationship, we first tested the order of co-integration of the time series of exchange rates of EMS currencies with those of international prices of oil and its derivative products; we then used a transfer-function model to reproduce the quantitative relationships between those variables. Using these results, we reproduced domestic price functions with partial-adjustment mechanisms. Finally, we used the above model to run a simulation of the deviation from the steady-state pattern caused by exogenous exchange-rate shocks. 21 refs., 5 figs., 3 tabs

  16. Quantitative modelling of amyloidogenic processing and its influence by SORLA in Alzheimer's disease.

    Science.gov (United States)

    Schmidt, Vanessa; Baum, Katharina; Lao, Angelyn; Rateitschak, Katja; Schmitz, Yvonne; Teichmann, Anke; Wiesner, Burkhard; Petersen, Claus Munck; Nykjaer, Anders; Wolf, Jana; Wolkenhauer, Olaf; Willnow, Thomas E

    2012-01-04

    The extent of proteolytic processing of the amyloid precursor protein (APP) into neurotoxic amyloid-β (Aβ) peptides is central to the pathology of Alzheimer's disease (AD). Accordingly, modifiers that increase Aβ production rates are risk factors in the sporadic form of AD. In a novel systems biology approach, we combined quantitative biochemical studies with mathematical modelling to establish a kinetic model of amyloidogenic processing and to evaluate the influence of SORLA/SORL1, an inhibitor of APP processing and an important genetic risk factor. Contrary to previous hypotheses, our studies demonstrate that secretases represent allosteric enzymes that require cooperativity by APP oligomerization for efficient processing. Cooperativity enables swift adaptive changes in secretase activity with even small alterations in APP concentration. We also show that SORLA prevents APP oligomerization both in cultured cells and in the brain in vivo, eliminating the preferred form of the substrate and causing secretases to switch to a less efficient non-allosteric mode of action. These data represent the first mathematical description of the contribution of genetic risk factors to AD, substantiating the relevance of subtle changes in SORLA levels for amyloidogenic processing as proposed for patients carrying SORL1 risk alleles.
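
    Cooperative (allosteric) processing of the kind described is commonly captured with a Hill-type rate law, where the Hill exponent controls how steeply the rate responds to substrate concentration. The sketch below is a generic illustration with hypothetical parameters, not the authors' fitted model:

```python
import numpy as np

def processing_rate(app, vmax=1.0, k_half=1.0, n_hill=2.0):
    """Hill-type rate law: with n_hill > 1 (cooperative processing),
    the rate responds steeply to small changes in APP concentration."""
    return vmax * app**n_hill / (k_half**n_hill + app**n_hill)

app = np.linspace(0.0, 3.0, 7)
coop = processing_rate(app, n_hill=2.0)       # allosteric mode
non_coop = processing_rate(app, n_hill=1.0)   # non-allosteric mode (SORLA-shifted)
print(np.round(coop, 3))
print(np.round(non_coop, 3))
```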

  17. Automatic and quantitative measurement of collagen gel contraction using model-guided segmentation

    Science.gov (United States)

    Chen, Hsin-Chen; Yang, Tai-Hua; Thoreson, Andrew R.; Zhao, Chunfeng; Amadio, Peter C.; Sun, Yung-Nien; Su, Fong-Chin; An, Kai-Nan

    2013-08-01

    Quantitative measurement of collagen gel contraction plays a critical role in the field of tissue engineering because it provides spatial-temporal assessment (e.g., changes of gel area and diameter during the contraction process) reflecting cell behavior and tissue material properties. So far, the assessment of collagen gels has relied on manual segmentation, which is time-consuming and suffers from serious intra- and inter-observer variability. In this study, we propose an automatic method combining various image processing techniques to resolve these problems. The proposed method first detects the maximal feasible contraction range from circular references (e.g., the culture dish) and avoids the interference of irrelevant objects in the given image. Then, a three-step color conversion strategy is applied to normalize and enhance the contrast between the gel and background. We subsequently introduce a deformable circular model which utilizes regional intensity contrast and a circular shape constraint to locate the gel boundary. An adaptive weighting scheme is employed to coordinate the model behavior, so that the proposed system can overcome variations of gel boundary appearance at different contraction stages. Two measurements of collagen gels (i.e., area and diameter) can readily be obtained from the segmentation results. Experimental results, including 120 gel images for accuracy validation, showed high agreement between the proposed method and manual segmentation, with an average Dice similarity coefficient larger than 0.95. The results also demonstrated obvious improvement in gel contours obtained by the proposed method over two popular, generic segmentation methods.
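
    The agreement metric quoted above, the Dice similarity coefficient, is straightforward to compute from two binary masks; a minimal sketch with a toy example:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy example: automatic vs. manual gel masks differing by one pixel.
auto = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
manual = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
print(round(dice_coefficient(auto, manual), 3))   # 0.857
```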

  18. Generating quantitative models describing the sequence specificity of biological processes with the stabilized matrix method

    Directory of Open Access Journals (Sweden)

    Sette Alessandro

    2005-05-01

    Full Text Available Background: Many processes in molecular biology involve the recognition of short sequences of nucleic or amino acids, such as the binding of immunogenic peptides to major histocompatibility complex (MHC) molecules. From experimental data, a model of the sequence specificity of these processes can be constructed, such as a sequence motif, a scoring matrix or an artificial neural network. The purpose of these models is two-fold. First, they can provide a summary of experimental results, allowing for a deeper understanding of the mechanisms involved in sequence recognition. Second, such models can be used to predict the experimental outcome for yet-untested sequences. In the past we reported the development of a method to generate such models called the Stabilized Matrix Method (SMM). This method has been successfully applied to predicting peptide binding to MHC molecules, peptide transport by the transporter associated with antigen presentation (TAP) and proteasomal cleavage of protein sequences. Results: Herein we report the implementation of the SMM algorithm as a publicly available software package. Specific features determining the type of problems the method is most appropriate for are discussed. Advantageous features of the package are: (1) the output generated is easy to interpret, (2) input and output are both quantitative, (3) specific computational strategies to handle experimental noise are built in, (4) the algorithm is designed to effectively handle bounded experimental data, (5) experimental data from randomized peptide libraries and conventional peptides can easily be combined, and (6) it is possible to incorporate pair interactions between positions of a sequence. Conclusion: Making the SMM method publicly available enables bioinformaticians and experimental biologists to easily access it, to compare its performance to other prediction methods, and to extend it to other applications.
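
    The core of a stabilized-matrix fit is a regularized linear regression from one-hot-encoded peptide sequences to quantitative measurements; the sketch below is a generic ridge-penalized version on synthetic data, not the published SMM implementation:

```python
import numpy as np

rng = np.random.default_rng(6)
L, n_pep, n_aa = 9, 400, 20            # 9-mer peptides, 20 amino acids

peptides = rng.integers(0, n_aa, size=(n_pep, L))
X = np.zeros((n_pep, L * n_aa))
X[np.arange(n_pep)[:, None], np.arange(L) * n_aa + peptides] = 1.0  # one-hot

w_true = rng.normal(0.0, 0.5, L * n_aa)
y = X @ w_true + rng.normal(0.0, 0.3, n_pep)   # noisy quantitative data

lam = 1.0                               # stabilization (ridge) strength
w = np.linalg.solve(X.T @ X + lam * np.eye(L * n_aa), X.T @ y)
matrix = w.reshape(L, n_aa)             # position-specific scoring matrix
print("matrix shape:", matrix.shape,
      "fit/observed correlation:", round(np.corrcoef(X @ w, y)[0, 1], 3))
```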

  19. A probabilistic quantitative risk assessment model for the long-term work zone crashes.

    Science.gov (United States)

    Meng, Qiang; Weng, Jinxian; Qu, Xiaobo

    2010-11-01

    Work zones, especially long-term work zones, increase traffic conflicts and cause safety problems. Proper casualty risk assessment for a work zone is important for both traffic safety engineers and travelers. This paper develops a novel probabilistic quantitative risk assessment (QRA) model to evaluate the casualty risk, combining the frequency and consequence of all accident scenarios triggered by long-term work zone crashes. The casualty risk is measured by individual risk and societal risk. The individual risk can be interpreted as the frequency of a driver/passenger being killed or injured, while the societal risk describes the relation between frequency and the number of casualties. The proposed probabilistic QRA model consists of the estimation of work zone crash frequency, an event tree, and consequence estimation models. There are seven intermediate events in the event tree: age (A), crash unit (CU), vehicle type (VT), alcohol (AL), light condition (LC), crash type (CT) and severity (S). Since the estimated probability of an intermediate event may have large uncertainty, that uncertainty is characterized by a random variable. The consequence estimation model takes into account the combined effects of speed and emergency medical service response time (ERT) on the consequences of work zone crashes. Finally, a numerical example based on the Southeast Michigan work zone crash data is carried out. The numerical results show that there would be a 62% decrease in individual fatality risk and a 44% reduction in individual injury risk if the mean travel speed were slowed down by 20%. In addition, there would be a 5% reduction in individual fatality risk and a 0.05% reduction in individual injury risk if ERT were reduced by 20%. In other words, slowing down speed is more effective than reducing ERT in casualty risk mitigation. 2010 Elsevier Ltd. All rights reserved.
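
    The skeleton of such a model consists of branch probabilities propagated through an event tree, with uncertainty expressed as distributions over those probabilities. All numbers below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000                        # Monte Carlo samples

# Hypothetical branch probabilities; a Beta distribution expresses the
# uncertainty attached to an intermediate-event estimate.
p_night = rng.beta(30, 70, N)      # light condition: share of night crashes
p_fatal_day = 0.01                 # severity given daytime (illustrative)
p_fatal_night = 0.03               # severity given night-time (illustrative)

crash_freq = 12.0                  # assumed work zone crashes per year
# Fatality frequency = sum over tree paths of P(path) * consequence.
fatal_per_crash = p_night * p_fatal_night + (1.0 - p_night) * p_fatal_day
individual_risk = crash_freq * fatal_per_crash
print(f"mean individual fatality risk: {individual_risk.mean():.4f} /year")
print(f"95th percentile:               {np.percentile(individual_risk, 95):.4f} /year")
```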

  20. Quantitative Phosphoproteomics Reveals Wee1 Kinase as a Therapeutic Target in a Model of Proneural Glioblastoma.

    Science.gov (United States)

    Lescarbeau, Rebecca S; Lei, Liang; Bakken, Katrina K; Sims, Peter A; Sarkaria, Jann N; Canoll, Peter; White, Forest M

    2016-06-01

    Glioblastoma (GBM) is the most common malignant primary brain cancer. With a median survival of about a year, new approaches to treating this disease are necessary. To identify signaling molecules regulating GBM progression in a genetically engineered murine model of proneural GBM, we quantified phosphotyrosine-mediated signaling using mass spectrometry. Oncogenic signals, including phosphorylated ERK MAPK, PI3K, and PDGFR, were found to be increased in the murine tumors relative to brain. Phosphorylation of CDK1 pY15, associated with the G2 arrest checkpoint, was identified as the most differentially phosphorylated site, with a 14-fold increase in phosphorylation in the tumors. To assess the role of this checkpoint as a potential therapeutic target, syngeneic primary cell lines derived from these tumors were treated with MK-1775, an inhibitor of Wee1, the kinase responsible for CDK1 Y15 phosphorylation. MK-1775 treatment led to mitotic catastrophe, as defined by increased DNA damage and cell death by apoptosis. To assess the extensibility of targeting Wee1/CDK1 in GBM, patient-derived xenograft (PDX) cell lines were also treated with MK-1775. Although the response was more heterogeneous, on-target Wee1 inhibition led to decreased CDK1 Y15 phosphorylation and increased DNA damage and apoptosis in each line. These results were also validated in vivo, where single-agent MK-1775 demonstrated an antitumor effect on a flank PDX tumor model, increasing mouse survival by 1.74-fold. This study highlights the ability of unbiased quantitative phosphoproteomics to reveal therapeutic targets in tumor models, and the potential for Wee1 inhibition as a treatment approach in preclinical models of GBM. Mol Cancer Ther 15(6), 1332-43. ©2016 American Association for Cancer Research (AACR).

  1. Validation of molecular crystal structures from powder diffraction data with dispersion-corrected density functional theory (DFT-D).

    Science.gov (United States)

    van de Streek, Jacco; Neumann, Marcus A

    2014-12-01

    In 2010 we energy-minimized 225 high-quality single-crystal (SX) structures with dispersion-corrected density functional theory (DFT-D) to establish a quantitative benchmark. For the current paper, 215 organic crystal structures determined from X-ray powder diffraction (XRPD) data and published in an IUCr journal were energy-minimized with DFT-D and compared to the SX benchmark. The atomic coordinates of XRPD structures, which are on average slightly less accurate, do lead to systematically higher root mean square Cartesian displacement (RMSCD) values upon energy minimization than those of SX structures, but the RMSCD value is still a good indicator for the detection of structures that deserve a closer look. The upper RMSCD limit for a correct structure must be increased from 0.25 Å for SX structures to 0.35 Å for XRPD structures; the grey area must be extended from 0.30 to 0.40 Å. Based on the energy minimizations, three structures are re-refined to give more precise atomic coordinates. For six structures our calculations provide the missing positions for the H atoms; for five structures they provide corrected positions for some H atoms. Seven crystal structures showed a minor error for a non-H atom. For five structures the energy minimizations suggest a higher space-group symmetry. For the 225 SX structures, the only deviations observed upon energy minimization were three minor H-atom-related issues. Preferred orientation is the most important cause of problems; a preferred-orientation correction is the only correction where the experimental data are modified to fit the model. We conclude that molecular crystal structures determined from powder diffraction data that are published in IUCr journals are of high quality, with less than 4% containing an error in a non-H atom.

  2. Validation of molecular crystal structures from powder diffraction data with dispersion-corrected density functional theory (DFT-D)

    International Nuclear Information System (INIS)

    Streek, Jacco van de; Neumann, Marcus A.

    2014-01-01

    The accuracy of 215 experimental organic crystal structures from powder diffraction data is validated against a dispersion-corrected density functional theory method. In 2010 we energy-minimized 225 high-quality single-crystal (SX) structures with dispersion-corrected density functional theory (DFT-D) to establish a quantitative benchmark. For the current paper, 215 organic crystal structures determined from X-ray powder diffraction (XRPD) data and published in an IUCr journal were energy-minimized with DFT-D and compared to the SX benchmark. The atomic coordinates of XRPD structures, which are on average slightly less accurate, do lead to systematically higher root mean square Cartesian displacement (RMSCD) values upon energy minimization than those of SX structures, but the RMSCD value is still a good indicator for the detection of structures that deserve a closer look. The upper RMSCD limit for a correct structure must be increased from 0.25 Å for SX structures to 0.35 Å for XRPD structures; the grey area must be extended from 0.30 to 0.40 Å. Based on the energy minimizations, three structures are re-refined to give more precise atomic coordinates. For six structures our calculations provide the missing positions for the H atoms; for five structures they provide corrected positions for some H atoms. Seven crystal structures showed a minor error for a non-H atom. For five structures the energy minimizations suggest a higher space-group symmetry. For the 225 SX structures, the only deviations observed upon energy minimization were three minor H-atom-related issues. Preferred orientation is the most important cause of problems; a preferred-orientation correction is the only correction where the experimental data are modified to fit the model. We conclude that molecular crystal structures determined from powder diffraction data that are published in IUCr journals are of high quality, with less than 4% containing an error in a non-H atom.

  3. First-principles kinetic Monte Carlo simulations of ammonia oxidation at RuO2(110): Selectivity vs. semi-local DFT

    Energy Technology Data Exchange (ETDEWEB)

    Mangold, Claudia [Fritz-Haber-Institut der Max-Planck-Gesellschaft, Berlin (Germany); Reuter, Karsten [Fritz-Haber-Institut der Max-Planck-Gesellschaft, Berlin (Germany); Technische Universitaet, Muenchen (Germany)

    2011-07-01

    Reaching a detailed mechanistic understanding of high selectivity in surface catalytic processes is one of the central goals in present-day catalysis research. The Surface Science approach to this problem focuses on the investigation of well-defined model systems that reduce the complexity but still capture the relevant aspects. In this respect, the almost 100% selectivity reported in detailed experiments for the oxidation of NH3 to NO at RuO2(110) presents an ideal benchmark for a quantitative theoretical analysis. To this end we perform detailed kinetic Monte Carlo (kMC) simulations based on kinetic parameters derived from density-functional theory (DFT). The obtained turnover frequency for molecular nitrogen is in rather good agreement with the experimental data. However, even with an extended set of elementary processes we are not able to reproduce the experimental findings for the production of NO, and therewith the selectivity. The central quantities that decisively determine the latter are the binding energy of NO and the N diffusion barrier. Suspecting the approximate energetics obtained with the employed semi-local DFT functional as the reason for the discrepancy, we recalculate the kinetic parameters with different functionals and discuss the resulting effects in the kMC simulations.

  4. DEVELOPMENT OF MODEL FOR QUANTITATIVE EVALUATION OF DYNAMICALLY STABLE FORMS OF RIVER CHANNELS

    Directory of Open Access Journals (Sweden)

    O. V. Zenkin

    2017-01-01

    … systems. The determination of regularities in the development of bed forms and of quantitative relations between their parameters is based on modeling the "right" forms of the riverbed. The research has resulted in the establishment and testing of a simulation-modeling methodology which allows one to identify dynamically stable forms of the riverbed.

  5. Adaptive DFT-Based Interferometer Fringe Tracking

    Science.gov (United States)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2005-12-01

    An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) Observatory at Mount Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding-window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on offline data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse. One example of such an application might be the field of thin-film measurement by ellipsometry, using a broadband light source and a Fourier-transform spectrometer to detect the resulting fringe patterns.
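
    The sliding-window DFT at the heart of this tracker can be updated recursively at O(1) cost per sample; the sketch below demonstrates the recursion on a synthetic constant-frequency fringe, with all signal parameters chosen for illustration:

```python
import numpy as np

def sliding_dft_bin(x, window, k):
    """Recursively updated DFT bin k over a sliding window: each new
    sample costs O(1) instead of recomputing the whole window DFT."""
    omega = 2.0 * np.pi * k / window
    m = np.arange(window)
    bin_val = np.dot(x[:window], np.exp(-1j * omega * m))
    out = [bin_val]
    for n in range(window, len(x)):
        # Drop the oldest sample, add the newest, rotate the phase.
        bin_val = (bin_val - x[n - window] + x[n]) * np.exp(1j * omega)
        out.append(bin_val)
    return np.array(out)

# Synthetic interferogram: carrier at bin k = 8 of a 64-sample window,
# with a fixed 0.7 rad offset standing in for an optical path difference.
n = np.arange(256)
scan = np.cos(2.0 * np.pi * 8 * n / 64 + 0.7)
bins = sliding_dft_bin(scan, window=64, k=8)
# The bin phase advances by omega per slide; demodulating that known
# ramp leaves the OPD-induced phase offset.
ramp = np.exp(-1j * (2.0 * np.pi * 8 / 64) * np.arange(len(bins)))
print(np.round(np.angle(bins * ramp)[:4], 3))   # ~0.7 rad throughout
```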

  6. Adaptive DFT-Based Interferometer Fringe Tracking

    Directory of Open Access Journals (Sweden)

    Wesley A. Traub

    2005-09-01

    Full Text Available An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) Observatory at Mount Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding-window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on offline data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse. One example of such an application might be the field of thin-film measurement by ellipsometry, using a broadband light source and a Fourier-transform spectrometer to detect the resulting fringe patterns.

  7. Quantitative structure activity relationship model for predicting the depletion percentage of skin allergic chemical substances of glutathione

    International Nuclear Information System (INIS)

    Si Hongzong; Wang Tao; Zhang Kejun; Duan Yunbo; Yuan Shuping; Fu Aiping; Hu Zhide

    2007-01-01

    A quantitative model was developed with gene expression programming (GEP) to predict the depletion percentage of glutathione (DPG) caused by skin-sensitizing compounds. Each compound was represented by several calculated structural descriptors covering constitutional, topological, geometrical, electrostatic and quantum-chemical features. The GEP method produced a nonlinear, five-descriptor quantitative model with a mean error and a correlation coefficient of 10.52 and 0.94 for the training set, and 22.80 and 0.85 for the test set, respectively. The GEP predictions are shown to be in good agreement with the experimental values, and better than those of the heuristic method.

  8. Improved quantitative 90Y bremsstrahlung SPECT/CT reconstruction with Monte Carlo scatter modeling.

    Science.gov (United States)

    Dewaraja, Yuni K; Chun, Se Young; Srinivasa, Ravi N; Kaza, Ravi K; Cuneo, Kyle C; Majdalany, Bill S; Novelli, Paula M; Ljungberg, Michael; Fessler, Jeffrey A

    2017-12-01

    In 90Y microsphere radioembolization (RE), accurate post-therapy imaging-based dosimetry is important for establishing absorbed dose versus outcome relationships and for developing future treatment planning strategies. Additionally, accurately assessing microsphere distributions is important because of concerns about unexpected activity deposition outside the liver. Quantitative 90Y imaging by either SPECT or PET is challenging. In 90Y SPECT, model-based methods are necessary for scatter correction because energy-window-based methods are not feasible with the continuous bremsstrahlung energy spectrum. The objective of this work was to implement and evaluate a scatter estimation method for accurate 90Y bremsstrahlung SPECT/CT imaging. Since a fully Monte Carlo (MC) approach to 90Y SPECT reconstruction is computationally very demanding, in the present study the scatter estimate generated by a MC simulator was combined with an analytical projector in the 3D OS-EM reconstruction model. A single window (105 to 195 keV) was used for both the acquisition and the projector modeling. A liver/lung torso phantom with intrahepatic lesions and low-uptake extrahepatic objects was imaged to evaluate SPECT/CT reconstruction without and with scatter correction. Clinical application was demonstrated by applying the reconstruction approach to five patients treated with RE to determine lesion and normal-liver activity concentrations using a (liver) relative calibration. The scatter estimate converged after just two updates, greatly reducing computational requirements. In the phantom study, compared with reconstruction without scatter correction, MC scatter modeling substantially improved activity recovery in intrahepatic lesions (from > 55% to > 86%), normal liver (from 113% to 104%), and lungs (from 227% to 104%), with only a small degradation in noise (13% vs. 17%). Similarly, with scatter modeling, contrast improved substantially both visually and …
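
    The reconstruction update described, an OS-EM/MLEM iteration with the Monte Carlo scatter estimate added to the forward projection, has a compact form. The toy system matrix and scatter level below are placeholders for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_bins = 16, 24
A = rng.uniform(0.0, 1.0, (n_bins, n_vox))    # toy system (projection) matrix
x_true = rng.uniform(1.0, 5.0, n_vox)
scatter = 0.3 * np.ones(n_bins)               # stand-in for the MC scatter estimate
y = rng.poisson(A @ x_true + scatter)         # measured projection counts

x = np.ones(n_vox)
sens = A.sum(axis=0)                          # sensitivity image
for _ in range(200):
    # MLEM update with scatter included in the forward model:
    #   x <- x / sens * A^T ( y / (A x + scatter) )
    x *= (A.T @ (y / (A @ x + scatter))) / sens
print(np.round(x / x_true, 2))                # recovery ratios (~1 up to noise)
```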

  9. Electrocatalytic aerobic epoxidation of alkenes: Experimental and DFT investigation

    International Nuclear Information System (INIS)

    Magdesieva, Tatiana V.; Borisova, Nataliya E.; Dolganov, Alexander V.; Ustynyuk, Yuri A.

    2012-01-01

    A new method for electrocatalytic aerobic epoxidation of alkenes catalyzed by binuclear Cu(II) complexes with azomethine ligands based on 2,6-diformyl-4-tert-butylphenol is described. In acetonitrile–water (5%), at the potential of the Cu(II)/Cu(I) redox couple (–0.8 V vs. Ag/AgCl/KCl) and at room temperature, the epoxide is obtained in an average yield of around 50%. Contrary to the majority of known epoxidations, no strong oxidants are involved and no free hydrogen peroxide is formed in the reaction, thus making it ecologically friendly. DFT quantum-chemical modeling of the reaction mechanism revealed that a copper hydroperoxo complex, rather than hydrogen peroxide or a copper oxo complex, oxidizes the alkene. The process is very selective, since neither products of hydroxylation of the benzene ring in styrene nor of allylic oxidation of cyclohexene were detected.

  10. Interdiffusion of the aluminum magnesium system. Quantitative analysis and numerical model; Interdiffusion des Aluminium-Magnesium-Systems. Quantitative Analyse und numerische Modellierung

    Energy Technology Data Exchange (ETDEWEB)

    Seperant, Florian

    2012-03-21

    Aluminum coatings are a promising approach to protect magnesium alloys against corrosion, thereby making them accessible to a variety of technical applications. Thermal treatment enhances the adhesion of the aluminum coating on magnesium by interdiffusion. For a deeper understanding of the diffusion process at the interface, a quantitative description of the Al-Mg system is necessary. On the basis of diffusion experiments with infinite reservoirs of aluminum and magnesium, the interdiffusion coefficients of the intermetallic phases of the Al-Mg system are calculated with the Sauer-Freise method for the first time. To resolve contradictions in the literature concerning the intrinsic diffusion coefficients, the possibility of a bifurcation of the Kirkendall plane is considered. Furthermore, a physico-chemical description of interdiffusion is provided to interpret the observed phase transitions. The developed numerical model is based on a temporally varying discretization of the space coordinate. It exhibits excellent quantitative agreement with the experimentally measured concentration profiles, which confirms the validity of the obtained diffusion coefficients. Moreover, the Kirkendall shift in the Al-Mg system is simulated for the first time. Systems with thin aluminum coatings on magnesium also exhibit a good correlation between simulated and experimental concentration profiles; thus, the diffusion coefficients are also valid for Al-coated systems. Hence, it is possible to derive parameters for a thermal treatment by simulation, resulting in an optimized modification of the magnesium surface for technical applications.
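
    A numerical interdiffusion model of this kind ultimately rests on discretizing Fick's laws; the sketch below is a minimal explicit finite-difference version of a 1-D diffusion couple with a hypothetical concentration-dependent interdiffusion coefficient, not the thesis' adaptive-grid scheme:

```python
import numpy as np

nx, dx, dt = 200, 1e-6, 2e-4          # grid points, spacing (m), time step (s)
steps = 5_000                         # total simulated time: 1 s
c = np.where(np.arange(nx) < nx // 2, 1.0, 0.0)   # Al-rich | Mg-rich couple

def D_of_c(c):
    """Hypothetical concentration-dependent interdiffusion coefficient."""
    return 1e-9 * (0.5 + c)           # m^2/s, illustrative only

for _ in range(steps):
    Dm = D_of_c(0.5 * (c[1:] + c[:-1]))      # D evaluated at cell interfaces
    flux = -Dm * np.diff(c) / dx             # Fick's first law
    c[1:-1] -= dt / dx * np.diff(flux)       # conservative update (Fick's second law)

print(np.round(c[::25], 3))           # smoothed concentration profile
```

    (The explicit scheme is stable here because D*dt/dx^2 stays below 0.5.)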

  11. A quantitative exposure model simulating human norovirus transmission during preparation of deli sandwiches.

    Science.gov (United States)

    Stals, Ambroos; Jacxsens, Liesbeth; Baert, Leen; Van Coillie, Els; Uyttendaele, Mieke

    2015-03-02

    Human noroviruses (HuNoVs) are a major cause of foodborne gastroenteritis worldwide. They are often transmitted via infected and shedding food handlers manipulating foods such as deli sandwiches. The present study aimed to simulate HuNoV transmission during the preparation of deli sandwiches in a sandwich bar. A quantitative exposure model was developed by combining the GoldSim® and @Risk® software packages. Input data were collected from the scientific literature and from a two-week observational study performed at two sandwich bars. The model included three food handlers working during a three-hour shift on a shared working surface where deli sandwiches are prepared. The model consisted of three components. The first component simulated the preparation of the deli sandwiches and contained the HuNoV reservoirs, locations within the model allowing the accumulation of HuNoV, and the working of intervention measures. The second component covered the contamination sources, namely (1) initially HuNoV-contaminated lettuce used on the sandwiches and (2) HuNoV originating from a shedding food handler. The third component included four possible intervention measures to reduce HuNoV transmission: hand and surface disinfection during preparation of the sandwiches, hand gloving, and hand washing after a restroom visit. A single HuNoV-shedding food handler could cause mean levels of 43±18, 81±37 and 18±7 HuNoV particles on the deli sandwiches, hands and working surfaces, respectively. Introduction of contaminated lettuce as the only source of HuNoV resulted in the presence of 6.4±0.8 and 4.3±0.4 HuNoV particles on the food and hand reservoirs. The inclusion of hand and surface disinfection or hand gloving as a single intervention measure was not effective in the model, as only marginal reductions of HuNoV levels were noticeable in the different reservoirs. High compliance with hand washing after a restroom visit did reduce HuNoV presence substantially on all reservoirs.

  12. Model developments for quantitative estimates of the benefits of the signals on nuclear power plant availability and economics

    International Nuclear Information System (INIS)

    Seong, Poong Hyun

    1993-01-01

    A novel framework for quantitative estimates of the benefits of signals on nuclear power plant availability and economics has been developed in this work. The models developed in this work quantify how the perfect signals affect the human operator's success in restoring the power plant to the desired state when it enters undesirable transients. Also, the models quantify the economic benefits of these perfect signals. The models have been applied to the condensate feedwater system of the nuclear power plant for demonstration. (Author)

  13. Multiple-Strain Approach and Probabilistic Modeling of Consumer Habits in Quantitative Microbial Risk Assessment: A Quantitative Assessment of Exposure to Staphylococcal Enterotoxin A in Raw Milk.

    Science.gov (United States)

    Crotta, Matteo; Rizzi, Rita; Varisco, Giorgio; Daminelli, Paolo; Cunico, Elena Cosciani; Luini, Mario; Graber, Hans Ulrich; Paterlini, Franco; Guitian, Javier

    2016-03-01

    Quantitative microbial risk assessment (QMRA) models are extensively applied to inform management of a broad range of food safety risks. Inevitably, QMRA modeling involves an element of simplification of the biological process of interest. Two features that are frequently simplified or disregarded are the pathogenicity of multiple strains of a single pathogen and consumer behavior at the household level. In this study, we developed a QMRA model with a multiple-strain approach and a consumer phase module (CPM) based on uncertainty distributions fitted from field data. We modeled exposure to staphylococcal enterotoxin A in raw milk in Lombardy; a specific enterotoxin production module was thus included. The model is adaptable and could be used to assess the risk related to other pathogens in raw milk as well as other staphylococcal enterotoxins. The multiple-strain approach, implemented as a multinomial process, allowed the inclusion of variability and uncertainty with regard to pathogenicity at the bacterial level. Data from 301 questionnaires submitted to raw milk consumers were used to obtain uncertainty distributions for the CPM. The distributions were modeled to be easily updatable with further data or evidence. The sources of uncertainty due to the multiple-strain approach and the CPM were identified, and their impact on the output was assessed by comparing specific scenarios to the baseline. When the distributions reflecting the uncertainty in consumer behavior were fixed at the 95th percentile, the risk of exposure increased up to 160 times. This reflects the importance of taking into consideration the diversity of consumers' habits at the household level and the impact that a lack of knowledge about variables in the CPM can have on the final QMRA estimates. The multiple-strain approach lends itself to use in other food matrices besides raw milk and allows the model to better capture the complexity of the real world and to be capable of geographical …

  14. Environmental determinants of tropical forest and savanna distribution: A quantitative model evaluation and its implication

    Science.gov (United States)

    Zeng, Zhenzhong; Chen, Anping; Piao, Shilong; Rabin, Sam; Shen, Zehao

    2014-07-01

    The distributions of tropical ecosystems are rapidly being altered by climate change and anthropogenic activities. One possible trend—the loss of tropical forests and replacement by savannas—could result in significant shifts in ecosystem services and biodiversity loss. However, the influence and the relative importance of environmental factors in regulating the distribution of tropical forest and savanna biomes are still poorly understood, which makes it difficult to predict future tropical forest and savanna distributions in the context of climate change. Here we use boosted regression trees to quantitatively evaluate the importance of environmental predictors—mainly climatic, edaphic, and fire factors—for the tropical forest-savanna distribution at a mesoscale across the tropics (between 15°N and 35°S). Our results demonstrate that climate alone can explain most of the distribution of tropical forest and savanna at the scale considered; dry season average precipitation is the single most important determinant across tropical Asia-Australia, Africa, and South America. Given the strong tendency of increased seasonality and decreased dry season precipitation predicted by global climate models, we estimate that about 28% of what is now tropical forest would likely be lost to savanna by the late 21st century under the future scenario considered. This study highlights the importance of climate seasonality and interannual variability in predicting the distribution of tropical forest and savanna, supporting climate as the primary driver of savanna biogeography.
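
    Boosted regression trees of the kind used here can be prototyped with a generic gradient-boosting implementation; the data below are synthetic stand-ins for the climatic, edaphic, and fire predictors, so only the workflow, not the ecology, carries over:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
n = 2000
# Synthetic stand-ins for the environmental predictors.
dry_precip = rng.uniform(0.0, 150.0, n)    # dry-season precipitation (mm/month)
fire_freq = rng.uniform(0.0, 1.0, n)       # fire-frequency index
sand_frac = rng.uniform(0.0, 1.0, n)       # edaphic predictor
# Assumed truth: forest is likelier where dry seasons are wet and fires rare.
p_forest = 1.0 / (1.0 + np.exp(-(0.08 * dry_precip - 4.0 * fire_freq)))
biome = rng.random(n) < p_forest           # True = forest, False = savanna

X = np.column_stack([dry_precip, fire_freq, sand_frac])
brt = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X, biome)
for name, imp in zip(["dry_precip", "fire_freq", "sand_frac"],
                     brt.feature_importances_):
    print(f"{name:10s} relative importance: {imp:.2f}")
```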

  15. Model-independent quantitative measurement of nanomechanical oscillator vibrations using electron-microscope linescans

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Huan; Fenton, J. C.; Chiatti, O. [London Centre for Nanotechnology, University College London, 17–19 Gordon Street, London WC1H 0AH (United Kingdom); Warburton, P. A. [London Centre for Nanotechnology, University College London, 17–19 Gordon Street, London WC1H 0AH (United Kingdom); Department of Electronic and Electrical Engineering, University College London, Torrington Place, London WC1E 7JE (United Kingdom)

    2013-07-15

    Nanoscale mechanical resonators are highly sensitive devices and, therefore, for application as highly sensitive mass balances, they are potentially superior to micromachined cantilevers. The absolute measurement of nanoscale displacements of such resonators remains a challenge, however, since the optical signal reflected from a cantilever whose dimensions are sub-wavelength is at best very weak. We describe a technique for quantitative analysis and fitting of scanning-electron microscope (SEM) linescans across a cantilever resonator, involving deconvolution from the vibrating resonator profile using the stationary resonator profile. This enables determination of the absolute amplitude of nanomechanical cantilever oscillations even when the oscillation amplitude is much smaller than the cantilever width. This technique is independent of any model of secondary-electron emission from the resonator and is, therefore, applicable to resonators with arbitrary geometry and material inhomogeneity. We demonstrate the technique using focussed-ion-beam–deposited tungsten cantilevers of radius ∼60–170 nm inside a field-emission SEM, with excitation of the cantilever by a piezoelectric actuator allowing measurement of the full frequency response. Oscillation amplitudes approaching the size of the primary electron-beam can be resolved. We further show that the optimum electron-beam scan speed is determined by a compromise between deflection of the cantilever at low scan speeds and limited spatial resolution at high scan speeds. Our technique will be an important tool for use in precise characterization of nanomechanical resonator devices.
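
    The model-independent fitting idea, predicting the time-averaged (vibrating) linescan from the stationary one by blurring it with the position distribution of a harmonic oscillator, can be sketched as follows; the profile shape, pixel size, and amplitude grid are all illustrative assumptions:

```python
import numpy as np

def oscillation_kernel(amp, dx, n=201):
    """Position density of a harmonic oscillator of amplitude amp:
    the time-averaged blur applied to the stationary linescan."""
    x = (np.arange(n) - n // 2) * dx
    p = np.zeros(n)
    inside = np.abs(x) < amp
    p[inside] = 1.0 / (np.pi * np.sqrt(amp**2 - x[inside]**2))
    return p / p.sum()

dx = 2.0                                   # nm per pixel (assumed)
x = (np.arange(400) - 200) * dx
stationary = np.exp(-(x / 80.0)**2)        # toy stationary cantilever profile
vibrating = np.convolve(stationary, oscillation_kernel(50.0, dx), "same")

# Recover the amplitude by matching blurred trial profiles to the data.
trials = np.arange(10.0, 100.0, 2.0)
errs = [np.sum((np.convolve(stationary, oscillation_kernel(a, dx), "same")
                - vibrating)**2) for a in trials]
print("estimated amplitude:", trials[int(np.argmin(errs))], "nm")   # 50.0 nm
```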

  16. Quantitative analysis of aqueous phase composition of model dentin adhesives experiencing phase separation

    Science.gov (United States)

    Ye, Qiang; Park, Jonggu; Parthasarathy, Ranganathan; Pamatmat, Francis; Misra, Anil; Laurence, Jennifer S.; Marangos, Orestes; Spencer, Paulette

    2013-01-01

    There have been reports of the sensitivity of our current dentin adhesives to excess moisture, for example, water-blisters in adhesives placed on over-wet surfaces, and phase separation with concomitant limited infiltration of the critical dimethacrylate component into the demineralized dentin matrix. To determine quantitatively the hydrophobic/hydrophilic components in the aqueous phase when exposed to over-wet environments, model adhesives were mixed with 16, 33, and 50 wt % water to yield well-separated phases. Based upon high-performance liquid chromatography coupled with photodiode array detection, it was found that the amounts of hydrophobic BisGMA and hydrophobic initiators are less than 0.1 wt % in the aqueous phase. The amount of these compounds decreased with an increase in the initial water content. The major components of the aqueous phase were hydroxyethyl methacrylate (HEMA) and water, and the HEMA content ranged from 18.3 to 14.7 wt %. Different BisGMA homologues and the relative content of these homologues in the aqueous phase have been identified; however, the amount of crosslinkable BisGMA was minimal and, thus, could not help in the formation of a crosslinked polymer network in the aqueous phase. Without the protection afforded by a strong crosslinked network, the poorly photoreactive compounds of this aqueous phase could be leached easily. These results suggest that adhesive formulations should be designed to include hydrophilic multimethacrylate monomers and water compatible initiators. PMID:22331596

  17. Microsegregation in multicomponent alloy analysed by quantitative phase-field model

    International Nuclear Information System (INIS)

    Ohno, M; Takaki, T; Shibuta, Y

    2015-01-01

    Microsegregation behaviour in a ternary alloy system has been analysed by means of quantitative phase-field (Q-PF) simulations, with particular attention directed at the influence of the tie-line shift stemming from different liquid diffusivities of the solute elements. The Q-PF model developed for non-isothermal solidification in multicomponent alloys with non-zero solid diffusivities was applied to the analysis of microsegregation in a ternary alloy consisting of fast- and slow-diffusing solute elements. The accuracy of the Q-PF simulation was first verified by performing a convergence test of the segregation ratio with respect to the interface thickness. From one-dimensional analysis, it was found that the microsegregation of the slow-diffusing element is reduced due to the tie-line shift. In two-dimensional simulations, refinement of the microstructure, viz. a decrease of the secondary arm spacing, occurs at low cooling rates due to the formation of a diffusion layer of the slow-diffusing element. This yields reductions in the degree of microsegregation for both the fast- and slow-diffusing elements. Importantly, over a wide range of cooling rates, the degree of microsegregation of the slow-diffusing element is always lower than that of the fast-diffusing element, which is entirely ascribable to the influence of the tie-line shift. (paper)

  18. Assessing the toxic effects of ethylene glycol ethers using Quantitative Structure Toxicity Relationship models

    International Nuclear Information System (INIS)

    Ruiz, Patricia; Mumtaz, Moiz; Gombar, Vijay

    2011-01-01

    Experimental determination of toxicity profiles consumes a great deal of time, money, and other resources. Consequently, businesses, societies, and regulators strive for reliable alternatives such as Quantitative Structure Toxicity Relationship (QSTR) models to fill gaps in toxicity profiles of compounds of concern to human health. The use of glycol ethers and their health effects have recently attracted the attention of international organizations such as the World Health Organization (WHO). The board members of Concise International Chemical Assessment Documents (CICAD) recently identified inadequate testing as well as gaps in toxicity profiles of ethylene glycol mono-n-alkyl ethers (EGEs). The CICAD board requested the ATSDR Computational Toxicology and Methods Development Laboratory to conduct QSTR assessments of certain specific toxicity endpoints for these chemicals. In order to evaluate the potential health effects of EGEs, CICAD proposed a critical QSTR analysis of the mutagenicity, carcinogenicity, and developmental effects of EGEs and other selected chemicals. We report here results of the application of QSTRs to assess rodent carcinogenicity, mutagenicity, and developmental toxicity of four EGEs: 2-methoxyethanol, 2-ethoxyethanol, 2-propoxyethanol, and 2-butoxyethanol and their metabolites. Neither mutagenicity nor carcinogenicity is indicated for the parent compounds, but these compounds are predicted to be developmental toxicants. The predicted toxicity effects were subjected to reverse QSTR (rQSTR) analysis to identify structural attributes that may be the main drivers of the developmental toxicity potential of these compounds.

  19. Quantitative assessment of bone defect healing by multidetector CT in a pig model

    International Nuclear Information System (INIS)

    Riegger, Carolin; Kroepil, Patric; Lanzman, Rotem S.; Miese, Falk R.; Antoch, Gerald; Scherer, Axel; Jungbluth, Pascal; Hakimi, Mohssen; Wild, Michael; Hakimi, Ahmad R.

    2012-01-01

    To evaluate multidetector CT volumetry in the assessment of bone defect healing in comparison to histopathological findings in an animal model. In 16 mini-pigs, a circumscribed tibial bone defect was created. Multidetector CT (MDCT) of the tibia was performed on a 64-row scanner 42 days after the operation. The extent of bone healing was estimated quantitatively by MDCT volumetry using a commercially available software programme (syngo Volume, Siemens, Germany). The volume of the entire defect (including all pixels from -100 to 3,000 HU), the nonconsolidated areas (-100 to 500 HU), and areas of osseous consolidation (500 to 3,000 HU) were assessed and the extent of consolidation was calculated. Histomorphometry served as the reference standard. The extent of osseous consolidation in MDCT volumetry ranged from 19 to 92% (mean 65.4 ± 18.5%). There was a significant correlation between histologically visible newly formed bone and the extent of osseous consolidation on MDCT volumetry (r = 0.82, P < 0.0001). A significant negative correlation was detected between osseous consolidation on MDCT and histological areas of persisting defect (r = -0.9, P < 0.0001). MDCT volumetry is a promising tool for noninvasive monitoring of bone healing, showing excellent correlation with histomorphometry. (orig.)
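
    The volumetric readout described reduces to classifying voxels by Hounsfield-unit (HU) windows and forming a ratio; a minimal sketch on a toy volume, with voxel spacing assumed:

```python
import numpy as np

rng = np.random.default_rng(4)
hu = rng.normal(400.0, 350.0, size=(40, 40, 40))   # toy HU volume of the defect

defect = (hu >= -100) & (hu <= 3000)               # entire defect window
consolidated = (hu >= 500) & (hu <= 3000)          # osseous consolidation window

voxel_vol = 0.5 ** 3                               # mm^3 per voxel (assumed spacing)
extent = consolidated.sum() / defect.sum()
print(f"defect volume:         {defect.sum() * voxel_vol:.0f} mm^3")
print(f"osseous consolidation: {100.0 * extent:.1f} %")
```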

  20. Quantitative assessment of bone defect healing by multidetector CT in a pig model

    Energy Technology Data Exchange (ETDEWEB)

    Riegger, Carolin; Kroepil, Patric; Lanzman, Rotem S.; Miese, Falk R.; Antoch, Gerald; Scherer, Axel [University Duesseldorf, Medical Faculty, Department of Diagnostic and Interventional Radiology, Duesseldorf (Germany); Jungbluth, Pascal; Hakimi, Mohssen; Wild, Michael [University Duesseldorf, Medical Faculty, Department of Traumatology and Hand Surgery, Duesseldorf (Germany); Hakimi, Ahmad R. [Universtity Duesseldorf, Medical Faculty, Department of Oral Surgery, Duesseldorf (Germany)

    2012-05-15

    To evaluate multidetector CT volumetry in the assessment of bone defect healing in comparison to histopathological findings in an animal model. In 16 mini-pigs, a circumscribed tibial bone defect was created. Multidetector CT (MDCT) of the tibia was performed on a 64-row scanner 42 days after the operation. The extent of bone healing was estimated quantitatively by MDCT volumetry using a commercially available software programme (syngo Volume, Siemens, Germany). The volume of the entire defect (including all pixels from -100 to 3,000 HU), the nonconsolidated areas (-100 to 500 HU), and areas of osseous consolidation (500 to 3,000 HU) were assessed and the extent of consolidation was calculated. Histomorphometry served as the reference standard. The extent of osseous consolidation in MDCT volumetry ranged from 19 to 92% (mean 65.4 ± 18.5%). There was a significant correlation between histologically visible newly formed bone and the extent of osseous consolidation on MDCT volumetry (r = 0.82, P < 0.0001). A significant negative correlation was detected between osseous consolidation on MDCT and histological areas of persisting defect (r = -0.9, P < 0.0001). MDCT volumetry is a promising tool for noninvasive monitoring of bone healing, showing excellent correlation with histomorphometry. (orig.)

  1. A two-locus model of spatially varying stabilizing or directional selection on a quantitative trait.

    Science.gov (United States)

    Geroldinger, Ludwig; Bürger, Reinhard

    2014-06-01

    The consequences of spatially varying, stabilizing or directional selection on a quantitative trait in a subdivided population are studied. A deterministic two-locus two-deme model is employed to explore the effects of migration, the degree of divergent selection, and the genetic architecture, i.e., the recombination rate and the ratio of locus effects, on the maintenance of genetic variation. The possible equilibrium configurations are determined as functions of the migration rate; they depend crucially on the strength of divergent selection and the genetic architecture. The maximum migration rates below which a stable fully polymorphic equilibrium or a stable single-locus polymorphism can exist are investigated. Under stabilizing selection, but with different optima in the demes, strong recombination may facilitate the maintenance of polymorphism. Usually, however, and in particular with directional selection in opposite directions, the critical migration rates are maximized by a concentrated genetic architecture, i.e., by a major locus and a tightly linked minor one. Thus, complementing previous work on the evolution of genetic architectures in subdivided populations subject to diversifying selection, it is shown that concentrated architectures may aid the maintenance of polymorphism, and conditions are obtained for when this is the case. Finally, the dependence of the phenotypic variance, linkage disequilibrium, and various measures of local adaptation and differentiation on the parameters is elaborated. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  2. A quantitative risk assessment model to evaluate effective border control measures for rabies prevention

    Science.gov (United States)

    Weng, Hsin-Yi; Wu, Pei-I; Yang, Ping-Cheng; Tsai, Yi-Lun; Chang, Chao-Chin

    2009-01-01

    Border control is the primary method to prevent rabies emergence. This study developed a quantitative risk model incorporating stochastic processes to evaluate whether border control measures could efficiently prevent rabies introduction through the importation of cats and dogs, using Taiwan as an example. Both legal importation and illegal smuggling were investigated. The impacts of a reduced quarantine and/or waiting period on the risk of rabies introduction were also evaluated. The results showed that Taiwan's current animal importation policy could effectively prevent rabies introduction through legal importation of cats and dogs. The median risk of a rabid animal penetrating current border control measures and entering Taiwan was 5.33 × 10−8 (95th percentile: 3.20 × 10−7). However, illegal smuggling may expose Taiwan to a substantial risk of rabies emergence. Reduction of the quarantine and/or waiting period would affect the risk differently, depending on the applied assumptions, such as increased vaccination coverage, enforced customs checking, and/or changes in the number of legal importations. Although the changes in the estimated risk under the assumed alternatives were not substantial, except for completely abolishing quarantine, the consequences of rabies introduction may yet be considered significant in a rabies-free area. Therefore, a comprehensive benefit-cost analysis needs to be conducted before recommending these alternative measures. PMID:19822125

  3. A DFT+nonhomogeneous DMFT approach for finite systems

    International Nuclear Information System (INIS)

    Kabir, Alamgir; Turkowski, Volodymyr; Rahman, Talat S

    2015-01-01

    For reliable and efficient inclusion of electron–electron correlation effects in nanosystems we formulate a combined density functional theory/nonhomogeneous dynamical mean-field theory (DFT+DMFT) approach which employs an approximate iterated perturbation theory impurity solver. We further apply the method to examine the size-dependent magnetic properties of iron nanoparticles containing 11–100 atoms. We show that for the majority of clusters the DFT+DMFT solution is in very good agreement with experimental data, much better compared to the DFT and DFT+U results. In particular, it reproduces the oscillations in magnetic moment with size as observed experimentally. We thus demonstrate that the DFT+DMFT approach can be used for accurate and realistic description of nanosystems containing about hundred atoms. (paper)

  4. Analysis of genetic effects of nuclear-cytoplasmic interaction on quantitative traits: genetic model for diploid plants.

    Science.gov (United States)

    Han, Lide; Yang, Jian; Zhu, Jun

    2007-06-01

    A genetic model was proposed for simultaneously analyzing genetic effects of nuclear, cytoplasm, and nuclear-cytoplasmic interaction (NCI) as well as their genotype by environment (GE) interaction for quantitative traits of diploid plants. In the model, the NCI effects were further partitioned into additive and dominance nuclear-cytoplasmic interaction components. Mixed linear model approaches were used for statistical analysis. On the basis of diallel cross designs, Monte Carlo simulations showed that the genetic model was robust for estimating variance components under several situations without specific effects. Random genetic effects were predicted by an adjusted unbiased prediction (AUP) method. Data on four quantitative traits (boll number, lint percentage, fiber length, and micronaire) in Upland cotton (Gossypium hirsutum L.) were analyzed as a worked example to show the effectiveness of the model.

  5. Spectroscopic and DFT Study of RhIII Chloro Complex Transformation in Alkaline Solutions.

    Science.gov (United States)

    Vasilchenko, Danila B; Berdyugin, Semen N; Korenev, Sergey V; O'Kennedy, Sean; Gerber, Wilhelmus J

    2017-09-05

    The hydrolysis of [RhCl6]3− in NaOH-water solutions was studied by spectrophotometric methods. The reaction proceeds via successive substitution of chloride with hydroxide to quantitatively form [Rh(OH)6]3−. Ligand substitution kinetics was studied in an aqueous 0.434-1.085 M NaOH matrix in the temperature range 5.5-15.3 °C. Transformation of [RhCl6]3− into [RhCl5(OH)]3− was found to be the rate-determining step, with activation parameters of ΔH‡ = 105 ± 4 kJ mol−1 and ΔS‡ = 59 ± 10 J K−1 mol−1. The coordinated hydroxo ligand(s) induce rapid ligand substitution to form [Rh(OH)6]3−. By simulating ligand substitution as a dissociative mechanism using density functional theory (DFT), we can now explain the relatively fast and slow kinetics of chloride substitution in basic and acidic matrices, respectively. Moreover, the DFT-calculated activation energies corroborate the experimental finding that the kinetic stereochemical sequence of [RhCl6]3− hydrolysis in an acidic solution proceeds as [RhCl6]3− → [RhCl5(H2O)]2− → cis-[RhCl4(H2O)2]−. However, DFT calculations predict that in a basic solution the trans route of substitution, [RhCl6]3− → [RhCl5(OH)]3− → trans-[RhCl4(OH)2]3−, is kinetically favored.
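
    Activation parameters like these translate into rate constants through the Eyring equation; the sketch below reproduces that arithmetic with the abstract's values, treating the rate-determining step as an effective first-order process:

```python
import numpy as np

k_B, h, R = 1.380649e-23, 6.62607015e-34, 8.314462618   # SI constants

def eyring_rate(dH, dS, T):
    """Eyring equation: k = (k_B*T/h) * exp(dS/R) * exp(-dH/(R*T))."""
    return (k_B * T / h) * np.exp(dS / R) * np.exp(-dH / (R * T))

dH = 105e3        # activation enthalpy from the abstract, J/mol
dS = 59.0         # activation entropy from the abstract, J/(K mol)
for T_C in (5.5, 15.3):                    # the studied temperature range
    k = eyring_rate(dH, dS, T_C + 273.15)
    print(f"T = {T_C:4.1f} C  ->  k = {k:.2e} (effective first-order units)")
```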

  6. Quantitative Validation of the Integrated Medical Model (IMM) for ISS Missions

    Science.gov (United States)

    Young, Millennia; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Goodenow, D. A.; Myers, J. G.

    2016-01-01

    Lifetime Surveillance of Astronaut Health (LSAH) provided observed medical event data on 33 ISS and 111 STS person-missions for use in further improving and validating the Integrated Medical Model (IMM). Using only the crew characteristics from these observed missions, the newest development version, IMM v4.0, will simulate these missions to predict medical events and outcomes. Comparing IMM predictions to the actual observed medical event counts will provide external validation and identify areas of possible improvement. To improve the power to detect differences in this validation study, the totals over each program (ISS and STS) will serve as the main quantitative comparison objectives, specifically the following parameters: total medical events (TME), probability of loss of crew life (LOCL), and probability of evacuation (EVAC). Scatter plots of observed versus median predicted TMEs (with error bars reflecting the simulation intervals) will graphically display comparisons, while linear regression will serve as the statistical test of agreement. Two scatter plots will be analyzed: (1) one where each point reflects a mission, and (2) one where each point reflects a condition-specific total number of occurrences. The coefficient of determination (R2) resulting from a linear regression with no intercept bias (intercept fixed at zero) will serve as an overall metric of agreement between the IMM and the real-world system (RWS). In an effort to identify as many discrepancies as possible for further inspection, the α-level for all statistical tests comparing IMM predictions to observed data will be set to 0.1. This less stringent criterion, along with the multiple testing being conducted, should detect all perceived differences, including many false-positive signals resulting from random variation. The results of these analyses will reveal areas of the model requiring adjustment to improve overall IMM output, which will thereby provide better decision support for …
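
    The agreement metric described, the R² of a regression with the intercept fixed at zero, is easy to compute directly; the observed and predicted counts below are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(5)
observed = rng.poisson(10.0, size=30).astype(float)       # e.g. medical event counts
predicted = observed * 1.05 + rng.normal(0.0, 1.0, 30)    # model output, slope ~1

# Least-squares slope with the intercept fixed at zero: observed ~ b * predicted.
b = np.dot(predicted, observed) / np.dot(predicted, predicted)
residuals = observed - b * predicted
# With no intercept, R^2 uses the uncentred total sum of squares.
r2 = 1.0 - np.sum(residuals**2) / np.sum(observed**2)
print(f"slope (no intercept): {b:.3f},  R^2: {r2:.3f}")
```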

  7. MODIS volcanic ash retrievals vs FALL3D transport model: a quantitative comparison

    Science.gov (United States)

    Corradini, S.; Merucci, L.; Folch, A.

    2010-12-01

    Satellite retrievals and transport models represent the key tools to monitor the evolution of volcanic clouds. Because of the harmful effects of fine ash particles on aircraft, real-time tracking and forecasting of volcanic clouds is crucial for aviation safety. Besides the safety concerns, the economic consequences of airport disruptions must also be taken into account. The airport closures due to the recent Icelandic Eyjafjöll eruption caused millions of passengers to be stranded not only in Europe, but across the world. IATA (the International Air Transport Association) estimates that the worldwide airline industry lost a total of about 2.5 billion euros during the disruption. Both safety and economic issues require reliable and robust ash cloud retrievals and trajectory forecasting. Intercomparison between remote sensing and modeling is required to assure precise and reliable volcanic ash products. In this work we perform a quantitative comparison between Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of volcanic ash cloud mass and Aerosol Optical Depth (AOD) with the FALL3D ash dispersal model. MODIS, aboard the NASA-Terra and NASA-Aqua polar satellites, is a multispectral instrument with 36 spectral bands operating in the VIS-TIR spectral range and a spatial resolution varying between 250 and 1000 m at nadir. The MODIS channels centered around 11 and 12 micron have been used for the ash retrievals through the Brightness Temperature Difference algorithm and MODTRAN simulations. FALL3D is a 3-D time-dependent Eulerian model for the transport and deposition of volcanic particles that outputs, among other variables, cloud column mass and AOD. Three MODIS images of Mt. Etna collected on October 28, 29 and 30 during the 2002 eruption have been considered as test cases. The results show a generally good agreement between the retrieved and the modeled volcanic clouds in the first 300 km from the vents. Even if the
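
    For orientation, the split-window Brightness Temperature Difference (BTD) test mentioned above can be sketched in a few lines; the threshold value here is illustrative, not the one used in the study:

```python
# Hedged sketch of the split-window Brightness Temperature Difference (BTD)
# ash test: ash clouds tend to give BT(11um) - BT(12um) < 0, the reverse of
# meteorological clouds. The threshold below is illustrative only.
import numpy as np

def flag_ash(bt11, bt12, threshold=-0.5):
    """Boolean ash mask from 11 and 12 micron brightness temperatures (K)."""
    btd = bt11 - bt12
    return btd < threshold

bt11 = np.array([[265.0, 270.2], [281.5, 268.1]])
bt12 = np.array([[267.1, 269.8], [280.9, 270.0]])
print(flag_ash(bt11, bt12))  # True where the negative BTD suggests ash
```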

  8. A Quantitative Human Spacecraft Design Evaluation Model for Assessing Crew Accommodation and Utilization

    Science.gov (United States)

    Fanchiang, Christine

    Crew performance, including both accommodation and utilization factors, is an integral part of every human spaceflight mission from commercial space tourism, to the demanding journey to Mars and beyond. Spacecraft were historically built by engineers and technologists trying to adapt the vehicle into cutting edge rocketry with the assumption that the astronauts could be trained and would adapt to the design. By and large, that is still the current state of the art. It is recognized, however, that poor human-machine design integration can lead to catastrophic and deadly mishaps. The premise of this work relies on the idea that if an accurate predictive model exists to forecast crew performance issues as a result of spacecraft design and operations, it can help designers and managers make better decisions throughout the design process, and ensure that the crewmembers are well-integrated with the system from the very start. The result should be a high-quality, user-friendly spacecraft that optimizes the utilization of the crew while keeping them alive, healthy, and happy during the course of the mission. Therefore, the goal of this work was to develop an integrative framework to quantitatively evaluate a spacecraft design from the crew performance perspective. The approach presented here is done at a very fundamental level starting with identifying and defining basic terminology, and then builds up important axioms of human spaceflight that lay the foundation for how such a framework can be developed. With the framework established, a methodology for characterizing the outcome using a mathematical model was developed by pulling from existing metrics and data collected on human performance in space. Representative test scenarios were run to show what information could be garnered and how it could be applied as a useful, understandable metric for future spacecraft design. While the model is the primary tangible product from this research, the more interesting outcome of

  9. Quantitative acid-base physiology using the Stewart model. Does it improve our understanding of what is really wrong?

    NARCIS (Netherlands)

    Derksen, R.; Scheffer, G.J.; Hoeven, J.G. van der

    2006-01-01

    Traditional theories of acid-base balance are based on the Henderson-Hasselbalch equation to calculate proton concentration. The recent revival of quantitative acid-base physiology using the Stewart model has increased our understanding of complicated acid-base disorders, but has also led to several

  10. Physically based dynamic run-out modelling for quantitative debris flow risk assessment: a case study in Tresenda, northern Italy

    Czech Academy of Sciences Publication Activity Database

    Quan Luna, B.; Blahůt, Jan; Camera, C.; Van Westen, C.; Apuani, T.; Jetten, V.; Sterlacchini, S.

    2014-01-01

    Vol. 72, No. 3 (2014), pp. 645-661. ISSN 1866-6280 Institutional support: RVO:67985891 Keywords: debris flow * FLO-2D * run-out * quantitative hazard and risk assessment * vulnerability * numerical modelling Subject RIV: DB - Geology ; Mineralogy Impact factor: 1.765, year: 2014

  11. A Quantitative Study of Faculty Perceptions and Attitudes on Asynchronous Virtual Teamwork Using the Technology Acceptance Model

    Science.gov (United States)

    Wolusky, G. Anthony

    2016-01-01

    This quantitative study used a web-based questionnaire to assess the attitudes and perceptions of online and hybrid faculty towards student-centered asynchronous virtual teamwork (AVT) using the technology acceptance model (TAM) of Davis (1989). AVT is online student participation in a team approach to problem-solving culminating in a written…

  12. Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships (MCBIOS)

    Science.gov (United States)

    Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships Jie Liu1,2, Richard Judson1, Matthew T. Martin1, Huixiao Hong3, Imran Shah1 1National Center for Computational Toxicology (NCCT), US EPA, RTP, NC...

  13. Quantitative predictions from competition theory with incomplete information on model parameters tested against experiments across diverse taxa

    OpenAIRE

    Fort, Hugo

    2017-01-01

    We derive an analytical approximation for making quantitative predictions for ecological communities as a function of the mean intensity of the inter-specific competition and the species richness. This method, using only a fraction of the model parameters (carrying capacities and competition coefficients), is able to accurately predict empirical measurements covering a wide variety of taxa (algae, plants, protozoa).
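
    Fort's actual approximation is not reproduced here, but the flavor of such mean-field predictions can be illustrated with the textbook equilibrium of a symmetric Lotka-Volterra competition community, where abundance depends only on the carrying capacity K, the mean competition intensity a, and the species richness S:

```python
# Minimal sketch: equilibrium abundance in a symmetric Lotka-Volterra
# competition community with S species, common carrying capacity K and a
# uniform inter-specific competition coefficient a (0 <= a < 1). This is a
# textbook mean-field result, shown only to illustrate how mean competition
# intensity and species richness jointly set abundances; it is not the
# paper's approximation.
def equilibrium_abundance(K, a, S):
    """N* = K / (1 + a*(S-1)) for each of the S species."""
    return K / (1.0 + a * (S - 1))

for S in (2, 5, 10):
    print(f"S = {S}: N* = {equilibrium_abundance(K=100.0, a=0.3, S=S):.1f}")
```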

  14. Using ISOS consensus test protocols for development of quantitative life test models in ageing of organic solar cells

    DEFF Research Database (Denmark)

    Kettle, J.; Stoichkov, V.; Kumar, D.

    2017-01-01

    As Organic Photovoltaic (OPV) development matures, the demand grows for rapid characterisation of degradation and the application of Quantitative Accelerated Life Test (QALT) models to predict and improve reliability. To date, most accelerated testing on OPVs has been conducted using ISOS consensus...
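
    As a hedged illustration of the QALT machinery involved (the paper's own models and parameters are not shown in this record), the standard Arrhenius acceleration factor relates test time at an elevated stress temperature to equivalent time at the use temperature:

```python
# Hedged sketch of a standard QALT calculation: an Arrhenius acceleration
# factor between a stress temperature and a use temperature. The activation
# energy below is illustrative, not an OPV measurement from the paper.
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_use_k, t_stress_k):
    """AF = exp((Ea/kB) * (1/T_use - 1/T_stress))."""
    return math.exp((ea_ev / K_B) * (1.0 / t_use_k - 1.0 / t_stress_k))

af = acceleration_factor(ea_ev=0.6, t_use_k=298.15, t_stress_k=358.15)
print(f"AF ~ {af:.1f}: 1 h at 85 degC ~ {af:.0f} h at 25 degC (illustrative)")
```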

  15. Quantitative trait loci affecting phenotypic variation in the vacuolated lens mouse mutant, a multigenic mouse model of neural tube defects

    NARCIS (Netherlands)

    Korstanje, Ron; Desai, Jigar; Lazar, Gloria; King, Benjamin; Rollins, Jarod; Spurr, Melissa; Joseph, Jamie; Kadambi, Sindhuja; Li, Yang; Cherry, Allison; Matteson, Paul G.; Paigen, Beverly; Millonig, James H.

    Korstanje R, Desai J, Lazar G, King B, Rollins J, Spurr M, Joseph J, Kadambi S, Li Y, Cherry A, Matteson PG, Paigen B, Millonig JH. Quantitative trait loci affecting phenotypic variation in the vacuolated lens mouse mutant, a multigenic mouse model of neural tube defects. Physiol Genomics 35:

  16. Model development for quantitative evaluation of nuclear fuel cycle alternatives and its application

    International Nuclear Information System (INIS)

    Ko, Won Il

    2000-02-01

    This study addresses the quantitative evaluation of the proliferation resistance and the economics, which are important factors of an alternative nuclear fuel cycle system. In this study, a model was developed to quantitatively evaluate the proliferation resistance of nuclear fuel cycles, and a fuel cycle cost analysis model was suggested to incorporate various uncertainties in the fuel cycle cost calculation. The proposed models were then applied to the Korean environment as a sample study to provide better references for the determination of the future nuclear fuel cycle system in Korea. In order to quantify the proliferation resistance of the nuclear fuel cycle, the proliferation resistance index was defined in imitation of an electrical circuit with an electromotive force and various electrical resistance components. In this model, the proliferation resistance was described as the relative size of the barrier that must be overcome in order to acquire nuclear weapons. A larger barrier therefore means that the risk of failure is greater, the expenditure of resources is larger, and the time scale for implementation is longer. The electromotive force was expressed as the political motivation of the potential proliferators, such as an unauthorized party or a national group, to acquire nuclear weapons. The electrical current was then defined as the proliferation resistance index. Two electrical circuit models are used in the evaluation of the proliferation resistance: the series and the parallel circuits. In the series circuit model of the proliferation resistance, a potential proliferator has to overcome all resistance barriers to achieve the manufacturing of nuclear weapons. This reflects the fact that the IAEA (International Atomic Energy Agency) safeguards philosophy relies on the defense-in-depth principle against nuclear proliferation at a specific facility. The parallel circuit model was also used to imitate the risk of proliferation for
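
    The circuit analogy lends itself to a compact numerical illustration. The sketch below uses made-up barrier values, not the study's, to show how the series and parallel models yield a proliferation resistance index as a "current" I = V/R:

```python
# Hedged sketch of the electrical-circuit analogy described above:
# proliferation barriers combine like resistances, and the proliferation
# resistance index plays the role of the current I = V / R_total.
# All numbers are illustrative, not the study's barrier values.
def series_resistance(barriers):
    # Series: every barrier must be overcome (defense-in-depth at one facility)
    return sum(barriers)

def parallel_resistance(barriers):
    # Parallel: alternative acquisition paths, each a complete route on its own
    return 1.0 / sum(1.0 / r for r in barriers)

motivation = 10.0            # "electromotive force": proliferator motivation
barriers = [4.0, 2.5, 6.0]   # illustrative barrier magnitudes

print("series index:  ", motivation / series_resistance(barriers))
print("parallel index:", motivation / parallel_resistance(barriers))
```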

  17. Quantitative modeling of clinical, cellular, and extracellular matrix variables suggest prognostic indicators in cancer: a model in neuroblastoma.

    Science.gov (United States)

    Tadeo, Irene; Piqueras, Marta; Montaner, David; Villamón, Eva; Berbegall, Ana P; Cañete, Adela; Navarro, Samuel; Noguera, Rosa

    2014-02-01

    Risk classification and treatment stratification for cancer patients are restricted by our incomplete picture of the complex and unknown interactions between the patient's organism and tumor tissues (transformed cells supported by tumor stroma). Moreover, all clinical factors and laboratory studies used to indicate treatment effectiveness and outcomes are by their nature a simplification of the biological system of cancer, and cannot yet incorporate all possible prognostic indicators. A multiparametric analysis on 184 tumor cylinders was performed. To highlight the benefit of integrating digitized medical imaging into this field, we present the results of computational studies carried out on quantitative measurements, taken from stromal and cancer cells and various extracellular matrix fibers interpenetrated by glycosaminoglycans, and eight current approaches to risk stratification systems in patients with primary and nonprimary neuroblastoma. New tumor tissue indicators from both fields, the cellular and the extracellular elements, emerge as reliable prognostic markers for risk stratification and could be used as molecular targets of specific therapies. The key to dealing with personalized therapy lies in the mathematical modeling. The use of bioinformatics in patient-tumor-microenvironment data management allows a predictive model in neuroblastoma.

  18. Application of non-quantitative modelling in the analysis of a network warfare environment

    CSIR Research Space (South Africa)

    Veerasamy, N

    2008-07-01

    Full Text Available based on the use of secular associations, chronological origins, linked concepts, categorizations and context specifications. This paper proposes the use of non-quantitative methods through a morphological analysis to better explore and define...

  19. Quantitative Modeling of Membrane Transport and Anisogamy by Small Groups Within a Large-Enrollment Organismal Biology Course

    Directory of Open Access Journals (Sweden)

    Eric S. Haag

    2016-12-01

    Full Text Available Quantitative modeling is not a standard part of undergraduate biology education, yet is routine in the physical sciences. Because of the obvious biophysical aspects, classes in anatomy and physiology offer an opportunity to introduce modeling approaches to the introductory curriculum. Here, we describe two in-class exercises for small groups working within a large-enrollment introductory course in organismal biology. Both build and derive biological insights from quantitative models, implemented using spreadsheets. One exercise models the evolution of anisogamy (i.e., small sperm and large eggs from an initial state of isogamy. Groups of four students work on Excel spreadsheets (from one to four laptops per group. The other exercise uses an online simulator to generate data related to membrane transport of a solute, and a cloud-based spreadsheet to analyze them. We provide tips for implementing these exercises gleaned from two years of experience.
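
    A minimal sketch of the membrane-transport model such an exercise builds, assuming simple passive diffusion and illustrative parameter values, mirrors what successive spreadsheet rows compute:

```python
# Hedged sketch of a spreadsheet-style membrane-transport model: passive
# diffusion of a solute across a membrane, dC_in/dt = (P*A/V)*(C_out - C_in),
# stepped forward in time exactly as successive spreadsheet rows would do.
# All parameter values are illustrative, not taken from the exercise.
P = 1e-6   # membrane permeability, cm/s
A = 5e-6   # membrane area, cm^2
V = 1e-9   # cell volume, cm^3
c_out, c_in = 10.0, 0.0  # concentrations, mM
dt = 0.1   # time step, s

for step in range(10):
    c_in += dt * (P * A / V) * (c_out - c_in)  # forward-Euler update
    print(f"t = {dt * (step + 1):.1f} s, C_in = {c_in:.4f} mM")
```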

  20. A bibliography of terrain modeling (geomorphometry), the quantitative representation of topography: supplement 4.0

    Science.gov (United States)

    Pike, Richard J.

    2002-01-01

    Terrain modeling, the practice of ground-surface quantification, is an amalgam of Earth science, mathematics, engineering, and computer science. The discipline is known variously as geomorphometry (or simply morphometry), terrain analysis, and quantitative geomorphology. It continues to grow through myriad applications to hydrology, geohazards mapping, tectonics, sea-floor and planetary exploration, and other fields. Dating nominally to the co-founders of academic geography, Alexander von Humboldt (1808, 1817) and Carl Ritter (1826, 1828), the field was revolutionized late in the 20th Century by the computer manipulation of spatial arrays of terrain heights, or digital elevation models (DEMs), which can quantify and portray ground-surface form over large areas (Maune, 2001). Morphometric procedures are implemented routinely by commercial geographic information systems (GIS) as well as specialized software (Harvey and Eash, 1996; Köthe and others, 1996; ESRI, 1997; Drzewiecki et al., 1999; Dikau and Saurer, 1999; Djokic and Maidment, 2000; Wilson and Gallant, 2000; Breuer, 2001; Guth, 2001; Eastman, 2002). The new Earth Surface edition of the Journal of Geophysical Research, specializing in surficial processes, is the latest of many publication venues for terrain modeling. This is the fourth update of a bibliography and introduction to terrain modeling (Pike, 1993, 1995, 1996, 1999) designed to collect the diverse, scattered literature on surface measurement as a resource for the research community. The use of DEMs in science and technology continues to accelerate and diversify (Pike, 2000a). New work appears so frequently that a sampling must suffice to represent the vast literature. This report adds 1636 entries to the 4374 in the four earlier publications. Forty-eight additional entries correct dead Internet links and other errors found in the prior listings. Chronicling the history of terrain modeling, many entries in this report predate the 1999 supplement

  1. Statistical Modeling Approach to Quantitative Analysis of Interobserver Variability in Breast Contouring

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jinzhong, E-mail: jyang4@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Woodward, Wendy A.; Reed, Valerie K.; Strom, Eric A.; Perkins, George H.; Tereffe, Welela; Buchholz, Thomas A. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Zhang, Lifei; Balter, Peter; Court, Laurence E. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Li, X. Allen [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin (United States); Dong, Lei [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Scripps Proton Therapy Center, San Diego, California (United States)

    2014-05-01

    Purpose: To develop a new approach for interobserver variability analysis. Methods and Materials: Eight radiation oncologists specializing in breast cancer radiation therapy delineated a patient's left breast “from scratch” and from a template that was generated using deformable image registration. Three of the radiation oncologists had previously received training with the Radiation Therapy Oncology Group (RTOG) consensus contouring atlas for breast cancer. The simultaneous truth and performance level estimation algorithm was applied to the 8 contours delineated “from scratch” to produce a group consensus contour. Individual Jaccard scores were fitted to a beta distribution model. We also applied this analysis to 2 additional patients, who were contoured by 9 breast radiation oncologists from 8 institutions. Results: The beta distribution model had a mean of 86.2%, standard deviation (SD) of ±5.9%, a skewness of −0.7, and excess kurtosis of 0.55, exemplifying broad interobserver variability. The 3 RTOG-trained physicians had higher agreement scores than average, indicating that their contours were close to the group consensus contour. One physician had high sensitivity but lower specificity than the others, which implies that this physician tended to contour a structure larger than those of the others. Two other physicians had low sensitivity but specificity similar to the others, which implies that they tended to contour a structure smaller than the others. With this information, they could adjust their contouring practice to be more consistent with others if desired. When contouring from the template, the beta distribution model had a mean of 92.3%, SD of ±3.4%, skewness of −0.79, and excess kurtosis of 0.83, which indicated a much better consistency among individual contours. Similar results were obtained for the analysis of the 2 additional patients. Conclusions: The proposed statistical approach was able to measure interobserver variability quantitatively
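
    The central fitting step is easy to reproduce with SciPy. The sketch below fits a beta distribution to hypothetical Jaccard scores (the study's data are not reproduced here):

```python
# Hedged sketch of the core statistical step described above: fitting a beta
# distribution to per-observer Jaccard agreement scores. Scores are
# hypothetical, not the paper's data.
import numpy as np
from scipy import stats

jaccard = np.array([0.91, 0.88, 0.79, 0.85, 0.90, 0.82, 0.87, 0.84])

# Fix location/scale to the [0, 1] support so only the two shape
# parameters are estimated.
a, b, loc, scale = stats.beta.fit(jaccard, floc=0.0, fscale=1.0)

mean = stats.beta.mean(a, b)
sd = stats.beta.std(a, b)
skew = stats.beta.stats(a, b, moments="s")
print(f"alpha={a:.2f}, beta={b:.2f}, mean={mean:.3f}, SD={sd:.3f}, skew={skew:.2f}")
```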

  2. 3D vs 2D laparoscopic systems: Development of a performance quantitative validation model.

    Science.gov (United States)

    Ghedi, Andrea; Donarini, Erica; Lamera, Roberta; Sgroi, Giovanni; Turati, Luca; Ercole, Cesare

    2015-01-01

    The new technology ensures 3D laparoscopic vision by adding depth to the traditional two dimensions. This realistic vision gives the surgeon the feeling of operating in real space. The Hospital of Treviglio-Caravaggio is not a university or scientific institution; when a new 3D laparoscopic technology was acquired in 2014, this led to an evaluation of its appropriateness in terms of patient outcome and safety. The project aims at developing a quantitative validation model that ensures low cost and a reliable measure of the performance of 3D technology versus the 2D mode. In addition, it aims at demonstrating how new technologies, such as open source hardware and software and 3D printing, can help research with no significant cost increase. For these reasons, in order to define criteria of appropriateness in the use of 3D technologies, it was decided to perform a study to technically validate, in terms of effectiveness, efficiency and safety, the use of 3D laparoscopic vision versus the traditional 2D mode. 30 surgeons were enrolled to perform an exercise using laparoscopic forceps inside a trainer. The exercise consisted of having surgeons of different levels of seniority, grouped by type of specialization (e.g. surgery, urology, gynecology), perform videolaparoscopy with the two technologies (2D and 3D) on an anthropometric phantom. The target assigned to the surgeons was to pass “needle and thread” without touching the metal part, in the shortest time possible. Each ring selected for the exercise had a coefficient of difficulty determined by its depth, diameter, and angle relative to the positioning and the point of view. The analysis of the data collected from the above exercise mathematically confirmed that the 3D technique ensures a shorter learning curve for novices and greater accuracy in the performance of the task with respect to 2D.

  3. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    Energy Technology Data Exchange (ETDEWEB)

    Rettmann, Maryam E., E-mail: rettmann.maryam@mayo.edu; Holmes, David R.; Camp, Jon J.; Cameron, Bruce M.; Robb, Richard A. [Biomedical Imaging Resource, Mayo Clinic College of Medicine, Rochester, Minnesota 55905 (United States); Kwartowitz, David M. [Department of Bioengineering, Clemson University, Clemson, South Carolina 29634 (United States); Gunawan, Mia [Department of Biochemistry and Molecular and Cellular Biology, Georgetown University, Washington D.C. 20057 (United States); Johnson, Susan B.; Packer, Douglas L. [Division of Cardiovascular Diseases, Mayo Clinic, Rochester, Minnesota 55905 (United States); Dalegrave, Charles [Clinical Cardiac Electrophysiology, Cardiology Division Hospital Sao Paulo, Federal University of Sao Paulo, 04024-002 Brazil (Brazil); Kolasa, Mark W. [David Grant Medical Center, Fairfield, California 94535 (United States)

    2014-02-15

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved
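
    The Monte Carlo approach described can be sketched compactly: perturb the fiducials with Gaussian noise, solve the rigid registration (here with the Kabsch algorithm), and accumulate the target registration error. Geometry and noise levels below are illustrative, not the study's values:

```python
# Hedged sketch of the Monte Carlo idea described above: perturb landmark
# fiducials with Gaussian noise, compute a rigid (Kabsch) registration, and
# measure target registration error (TRE) at a held-out target point.
import numpy as np

rng = np.random.default_rng(0)

def kabsch(src, dst):
    """Rigid transform (R, t) minimizing ||R @ src + t - dst||."""
    sc, dc = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

fiducials = np.array([[10.0, 0, 0], [0, 10.0, 0], [0, 0, 10.0], [5.0, 5.0, 0]])
target = np.array([3.0, 4.0, 5.0])  # held-out point where TRE is evaluated

tre = []
for _ in range(1000):
    noisy = fiducials + rng.normal(scale=1.0, size=fiducials.shape)  # 1 mm SD
    R, t = kabsch(noisy, fiducials)
    tre.append(np.linalg.norm(R @ target + t - target))
print(f"mean TRE = {np.mean(tre):.2f} mm over 1000 trials")
```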

  4. A study on the quantitative model of human response time using the amount and the similarity of information

    International Nuclear Information System (INIS)

    Lee, Sung Jin

    2006-02-01

    The mental capacity to retain or recall information, or memory, is related to human performance during processing of information. Although a large number of studies have been carried out on human performance, little is known about the similarity effect. The purpose of this study was to propose and validate a quantitative and predictive model of human response time in the user interface, based on the basic concepts of information amount, similarity and degree of practice. Human performance is difficult to explain by similarity or information amount alone. There were two difficulties: constructing a quantitative model of human response time, and validating the proposed model by experimental work. A quantitative model based on Hick's law, the law of practice and similarity theory was developed. The model was validated under various experimental conditions by measuring the participants' response time in the environment of a computer-based display. Human performance improved with the degree of similarity and practice in the user interface. We also found an age-related effect: human performance degraded with increasing age. The proposed model may be useful for training operators who will handle such interfaces and for predicting human performance when the system design changes.
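
    The two classical ingredients named above can be combined in a few lines. The additive combination below is only an illustration; the paper's actual functional form, which also includes the similarity term, is not reproduced in this record:

```python
# Hedged sketch combining two classical laws the model builds on: the
# Hick-Hyman law for choice reaction time and the power law of practice.
# Coefficients and the additive combination are illustrative assumptions.
import math

def hick_hyman_rt(n_alternatives, a=0.2, b=0.15):
    """RT = a + b * log2(n + 1), in seconds."""
    return a + b * math.log2(n_alternatives + 1)

def practice_time(trial, t1=2.0, r=0.4):
    """Power law of practice: T(N) = T1 * N**(-r)."""
    return t1 * trial ** (-r)

for n, trial in [(3, 1), (3, 10), (7, 1), (7, 10)]:
    rt = hick_hyman_rt(n) + practice_time(trial)
    print(f"{n} alternatives, trial {trial}: RT ~ {rt:.2f} s")
```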

  5. Benchmarking DFT and TD-DFT Functionals for the Ground and Excited States of Hydrogen-Rich Peptide Radicals.

    Science.gov (United States)

    Riffet, Vanessa; Jacquemin, Denis; Cauët, Emilie; Frison, Gilles

    2014-08-12

    We assess the pros and cons of a large panel of DFT exchange-correlation functionals for the prediction of the electronic structure of hydrogen-rich peptide radicals formed after electron attachment on a protonated peptide. Indeed, despite its importance in the understanding of the chemical changes associated with the reduction step, the question of the attachment site of an electron and, more generally, of the reduced species formed in the gas phase through electron-induced dissociation (ExD) processes in mass spectrometry is still a matter of debate. For hydrogen-rich peptide radicals in which several positive groups and low-lying π* orbitals can capture the incoming electron in ExD, inclusion of full Hartree-Fock exchange at long-range interelectronic distance is a prerequisite for an accurate description of the electronic states, thereby excluding several popular exchange-correlation functionals, e.g., B3LYP, M06-2X, or CAM-B3LYP. However, we show that this condition is not sufficient by comparing the results obtained with asymptotically correct range-separated hybrids (M11, LC-BLYP, LC-BPW91, ωB97, ωB97X, and ωB97X-D) and with reference CASSCF-MRCI and EOM-CCSD calculations. The attenuation parameter ω significantly tunes the spin density distribution and the vertical energies of the excited states. The investigated model structures, ranging from methylammonium to hexapeptide, allow us to obtain a description of the nature and energy of the electronic states, depending on (i) the presence of hydrogen bond(s) around the cationic site(s), (ii) the presence of π* molecular orbitals (MOs), and (iii) the selected DFT approach. It turns out that, in the present framework, LC-BLYP and ωB97 yield the most accurate results.
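
    For readers who want to run a comparable calculation on a small model radical, the sketch below shows a TD-DFT excitation computation with a range-separated hybrid in PySCF; the molecule (an ammonium radical as a minimal hydrogen-rich stand-in), basis set, and settings are placeholders, not the paper's setup:

```python
# Hedged sketch: vertical excitation energies of a small model radical with a
# range-separated hybrid in PySCF. Geometry, basis set and functional are
# illustrative stand-ins for the hydrogen-rich peptide radicals in the study.
from pyscf import gto, dft, tdscf

mol = gto.M(
    atom="""N 0 0 0; H 0 0 1.01; H 0.95 0 -0.34;
            H -0.48 0.82 -0.34; H -0.48 -0.82 -0.34""",
    basis="def2-svp",
    charge=0,
    spin=1,  # one unpaired electron: NH4 radical, a hydrogen-rich stand-in
)

mf = dft.UKS(mol)
mf.xc = "wb97"  # an asymptotically correct range-separated hybrid
mf.kernel()

td = tdscf.TDDFT(mf)
td.nstates = 5
td.kernel()
td.analyze()  # prints excitation energies and oscillator strengths
```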

  6. DFT studies of fluid-minerals interactions at the molecular level: examples and perspectives; Etudes DFT des interactions fluides-mineraux a l'echelle moleculaire: exemples et perspectives

    Energy Technology Data Exchange (ETDEWEB)

    Toulhoat, H.; Digne, M.; Arrouvel, C.; Raybaud, P. [Institut Francais du Petrole (IFP), 92 - Rueil-Malmaison (France)

    2005-03-15

    The scope of applications of first-principle theoretical chemistry methods has been vastly expanded over the past years due to the combination of improved methods and algorithms for solving the poly-electronic Schroedinger equation with the exponential growth of computer power available at constant cost (the so-called 'Moore's law'). In particular, atomistic studies of solid-fluid interfaces are now routinely producing new qualitative and quantitative insights into adsorption, surface speciation as a function of the prevailing chemical potentials, and reactivity of surface species. This approach is currently widely exploited in the fields of heterogeneous catalysis and surface physics, and so far to a lesser extent for geochemical purposes, although the situation is rapidly evolving. Many fundamental issues of fluid-mineral interaction phenomena can indeed be addressed ab initio with atomistic 3D periodic models of fluid-solid interfaces involving up to 200-300 un-equivalent atoms. We illustrate this approach with recent IFP results, some of which are of primary interest with respect to the manufacture of catalyst supports, but which also show some relevance for inorganic geochemical issues in the context of the sequestration of acid gases in subsurface porous rocks: - reactive wetting of boehmite AlOOH and morphology prediction; - acid-basic surface properties of a transition alumina; - hydroxylation and sulfhydrylation of anatase TiO{sub 2} surfaces. Through these examples, the performances of DFT and a variety of up-to-date modeling techniques and strategies are discussed. (authors)

  7. Quantitative measurements and modeling of cargo–motor interactions during fast transport in the living axon

    International Nuclear Information System (INIS)

    Seamster, Pamela E; Loewenberg, Michael; Pascal, Jennifer; Chauviere, Arnaud; Gonzales, Aaron; Cristini, Vittorio; Bearer, Elaine L

    2012-01-01

    The kinesins have long been known to drive microtubule-based transport of sub-cellular components, yet the mechanisms of their attachment to cargo remain a mystery. Several different cargo-receptors have been proposed based on their in vitro binding affinities to kinesin-1. Only two of these—phosphatidyl inositol, a negatively charged lipid, and the carboxyl terminus of the amyloid precursor protein (APP-C), a trans-membrane protein—have been reported to mediate motility in living systems. A major question is how these many different cargo, receptors and motors interact to produce the complex choreography of vesicular transport within living cells. Here we describe an experimental assay that identifies cargo–motor receptors by their ability to recruit active motors and drive transport of exogenous cargo towards the synapse in living axons. Cargo is engineered by derivatizing the surface of polystyrene fluorescent nanospheres (100 nm diameter) with charged residues or with synthetic peptides derived from candidate motor receptor proteins, all designed to display a terminal COOH group. After injection into the squid giant axon, particle movements are imaged by laser-scanning confocal time-lapse microscopy. In this report we compare the motility of negatively charged beads with APP-C beads in the presence of glycine-conjugated non-motile beads using new strategies to measure bead movements. The ensuing quantitative analysis of time-lapse digital sequences reveals detailed information about bead movements: instantaneous and maximum velocities, run lengths, pause frequencies and pause durations. These measurements provide parameters for a mathematical model that predicts the spatiotemporal evolution of distribution of the two different types of bead cargo in the axon. The results reveal that negatively charged beads differ from APP-C beads in velocity and dispersion, and predict that at long time points APP-C will achieve greater progress towards the presynaptic

  8. The Comparison of Distributed P2P Trust Models Based on Quantitative Parameters in the File Downloading Scenarios

    Directory of Open Access Journals (Sweden)

    Jingpei Wang

    2016-01-01

    Full Text Available Various P2P trust models have been proposed recently; it is necessary to develop an effective method to evaluate these trust models to resolve the issues of commonality (guiding newly generated trust models in theory) and individuality (assisting a decision maker in choosing an optimal trust model to implement in a specific context). A new method for analyzing and comparing P2P trust models based on hierarchical parameter quantization in file downloading scenarios is proposed in this paper. Several parameters are extracted from the functional attributes and quality features of the trust relationship, as well as from the requirements of the specific network context and the evaluators. Several distributed P2P trust models are analyzed quantitatively, with the extracted parameters modeled into a hierarchical model. The fuzzy inference method is applied to the hierarchical model of parameters to fuse the evaluated values of the candidate trust models, and then the relatively optimal one is selected based on the sorted overall quantitative values. Finally, analyses and simulation are performed. The results show that the proposed method is reasonable and effective compared with the previous algorithms.

  9. Quantitative Finance

    Science.gov (United States)

    James, Jessica

    2017-01-01

    Quantitative finance is a field that has risen to prominence over the last few decades. It encompasses the complex models and calculations that value financial contracts, particularly those which reference events in the future, and apply probabilities to these events. While adding greatly to the flexibility of the market available to corporations and investors, it has also been blamed for worsening the impact of financial crises. But what exactly does quantitative finance encompass, and where did these ideas and models originate? We show that the mathematics behind finance and behind games of chance have tracked each other closely over the centuries and that many well-known physicists and mathematicians have contributed to the field.

  10. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    Science.gov (United States)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients receiving or not receiving bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that with all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association with either PFS or OS in the group of patients not receiving maintenance bevacizumab. This study therefore demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
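
    The survival-analysis arm of such a study can be sketched with the lifelines package; the data frame and the feature name below are synthetic placeholders, not the paper's variables:

```python
# Hedged sketch of a Cox proportional hazards model relating a quantitative
# adiposity feature to progression-free survival. Data are synthetic and
# 'sfa_vfa_ratio' is a hypothetical feature name, not the paper's variable.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "pfs_months": [6.0, 14.5, 9.0, 22.0, 11.0, 30.0],
    "progressed": [1, 1, 1, 0, 1, 0],        # 1 = event observed, 0 = censored
    "sfa_vfa_ratio": [1.8, 0.9, 1.5, 0.7, 1.2, 0.6],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()  # hazard ratios with confidence intervals
```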

  11. Study on quantitative risk assessment model of the third party damage for natural gas pipelines based on fuzzy comprehensive assessment

    International Nuclear Information System (INIS)

    Qiu, Zeyang; Liang, Wei; Lin, Yang; Zhang, Meng; Wang, Xue

    2017-01-01

    As an important part of the national energy supply system, transmission pipelines for natural gas can cause serious environmental pollution and loss of life and property in the case of an accident. Third party damage is one of the most significant causes of natural gas pipeline system accidents, and it is very important to establish an effective quantitative risk assessment model of third party damage to reduce the number of gas pipeline operation accidents. Because third party damage accidents have characteristics such as diversity, complexity and uncertainty, this paper establishes a quantitative risk assessment model of third party damage based on the Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE). Firstly, the risk sources of third party damage are identified exactly; then the weights of the factors are determined via an improved AHP; finally, the importance of each factor is calculated with the fuzzy comprehensive evaluation model. The results show that the quantitative risk assessment model is suitable for third party damage to natural gas pipelines, and improvement measures can be put forward to avoid accidents based on the importance of each factor. (paper)
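
    The AHP-plus-FCE pipeline described above reduces to two small linear-algebra steps, sketched here with made-up numbers (the paper's comparison matrices and membership grades are not reproduced):

```python
# Hedged sketch of an AHP + fuzzy comprehensive evaluation (FCE) pipeline:
# AHP weights come from the principal eigenvector of a pairwise-comparison
# matrix, and FCE fuses them with a fuzzy membership matrix over risk grades.
import numpy as np

# Pairwise comparisons of 3 illustrative risk factors (Saaty 1-9 scale)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()  # normalized AHP weights

# Membership of each factor in risk grades (low, medium, high), e.g. from experts
R = np.array([[0.1, 0.3, 0.6],
              [0.4, 0.4, 0.2],
              [0.6, 0.3, 0.1]])

B = w @ R  # fuzzy comprehensive evaluation vector over the grades
print("weights:", np.round(w, 3), " grade scores:", np.round(B, 3))
```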

  12. A quantitative modeling of the contributions of localized surface plasmon resonance and interband transitions to absorbance of gold nanoparticles

    International Nuclear Information System (INIS)

    Zhu, S.; Chen, T. P.; Liu, Y. C.; Liu, Y.; Fung, S.

    2012-01-01

    A quantitative modeling of the contributions of localized surface plasmon resonance (LSPR) and interband transitions to the absorbance of gold nanoparticles has been achieved based on the Lorentz–Drude dispersion function and the Maxwell-Garnett effective medium approximation. The contributions are well modeled with three Lorentz oscillators. The influence of the structural properties of the gold nanoparticles on the LSPR and interband transitions has been examined. In addition, the dielectric function of the gold nanoparticles has been extracted from the modeling of the absorbance, and it is found to be consistent with the result yielded by the spectroscopic ellipsometric analysis.
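
    The two ingredients named above, Lorentz oscillators and Maxwell-Garnett mixing, can be written down directly. All oscillator parameters and the fill fraction in this sketch are illustrative, not the fitted values from the paper:

```python
# Hedged sketch of a Lorentz-oscillator dielectric function and the
# Maxwell-Garnett effective-medium mixing rule. All parameters below are
# illustrative placeholders, not the paper's fitted values.
import numpy as np

def lorentz_eps(omega, eps_inf, oscillators):
    """eps(w) = eps_inf + sum_j f_j*wp^2 / (w0j^2 - w^2 - i*g_j*w)."""
    eps = np.full_like(omega, eps_inf, dtype=complex)
    for f, wp, w0, g in oscillators:
        eps += f * wp**2 / (w0**2 - omega**2 - 1j * g * omega)
    return eps

def maxwell_garnett(eps_i, eps_m, fill):
    """Effective permittivity of inclusions (eps_i) in a matrix (eps_m)."""
    beta = (eps_i - eps_m) / (eps_i + 2 * eps_m)
    return eps_m * (1 + 2 * fill * beta) / (1 - fill * beta)

omega = np.linspace(1.0, 4.0, 400)  # photon energy, eV
oscillators = [(1.0, 8.9, 0.0, 0.07),   # Drude-like free-electron term
               (0.3, 8.9, 2.7, 0.4),    # interband transition 1
               (0.2, 8.9, 3.2, 0.6)]    # interband transition 2
eps_au = lorentz_eps(omega, eps_inf=1.5, oscillators=oscillators)
eps_eff = maxwell_garnett(eps_au, eps_m=2.25, fill=0.05)
absorb = omega * np.imag(eps_eff)  # absorbance proxy ~ w * Im(eps_eff)
print("peak near", omega[np.argmax(absorb)], "eV")
```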

  13. Quantitative global sensitivity analysis of a biologically based dose-response pregnancy model for the thyroid endocrine system.

    Science.gov (United States)

    Lumen, Annie; McNally, Kevin; George, Nysia; Fisher, Jeffrey W; Loizou, George D

    2015-01-01

    A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to quantitatively evaluate thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels was identified, in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local sensitivity analysis.
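
    The Morris screening step can be illustrated with the SALib package on a toy stand-in function (the actual biologically based dose-response model is far more complex and not reproduced here):

```python
# Hedged sketch of Morris screening with the SALib package. The parameter
# names, ranges, and the stand-in output function are all illustrative, not
# the BBDR thyroid model.
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze import morris

problem = {
    "num_vars": 3,
    "names": ["iodide_intake", "thyroid_clearance", "hormone_halflife"],
    "bounds": [[50, 250], [0.1, 1.0], [1.0, 10.0]],  # illustrative ranges
}

X = morris_sample(problem, N=100, num_levels=4)
Y = X[:, 0] / (X[:, 1] * X[:, 2])  # toy stand-in for the model output
Si = morris.analyze(problem, X, Y, num_levels=4)
for name, mu_star, sigma in zip(Si["names"], Si["mu_star"], Si["sigma"]):
    print(f"{name}: mu* = {mu_star:.2f}, sigma = {sigma:.2f}")
```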

  14. Quantitative global sensitivity analysis of a biologically based dose-response pregnancy model for the thyroid endocrine system

    Directory of Open Access Journals (Sweden)

    Annie eLumen

    2015-05-01

    Full Text Available A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to quantitatively evaluate thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels was identified, in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local

  15. Economic analysis of light brown apple moth using GIS and quantitative modeling

    Science.gov (United States)

    Glenn Fowler; Lynn Garrett; Alison Neeley; Roger Magarey; Dan Borchert; Brian. Spears

    2011-01-01

    We conducted an economic analysis of the light brown apple moth (LBAM) (Epiphyas postvittana (Walker)), whose presence in California has resulted in a regulatory program. Our objective was to quantitatively characterize the economic costs to apple, grape, orange, and pear crops that would result from LBAM's introduction into the continental...

  16. Modeling optical behavior of birefringent biological tissues for evaluation of quantitative polarized light microscopy

    NARCIS (Netherlands)

    Turnhout, van M.C.; Kranenbarg, S.; Leeuwen, van J.L.

    2009-01-01

    Quantitative polarized light microscopy (qPLM) is a popular tool for the investigation of birefringent architectures in biological tissues. Collagen, the most abundant protein in mammals, is such a birefringent material. Interpretation of results of qPLM in terms of collagen network architecture and

  17. Toward quantitative prediction of charge mobility in organic semiconductors: tunneling enabled hopping model.

    Science.gov (United States)

    Geng, Hua; Peng, Qian; Wang, Linjun; Li, Haijiao; Liao, Yi; Ma, Zhiying; Shuai, Zhigang

    2012-07-10

    A tunneling-enabled hopping mechanism is proposed, providing a practical tool to quantitatively assess charge mobility in organic semiconductors. The paradoxical phenomena in TIPS-pentacene are well explained, in that the optical probe indicates localized charges while transport measurements show band-like behavior. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
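
    As a hedged baseline for this class of models (the authors' specific tunneling-enabled rate expression is not given in this record), a generic Marcus-type hopping rate combined with the Einstein relation yields an order-of-magnitude mobility estimate:

```python
# Hedged sketch of a generic Marcus-type hopping rate and the Einstein
# relation for mobility, a common baseline that tunneling-enabled hopping
# models refine. Transfer integral, reorganization energy and hop distance
# below are illustrative, not values from the paper.
import math

KB_EV = 8.617333262e-5  # Boltzmann constant, eV/K
HBAR = 6.582119569e-16  # reduced Planck constant, eV*s

def marcus_rate(V, lam, T):
    """k = (2*pi/hbar) * V^2 * (4*pi*lam*kB*T)^(-1/2) * exp(-lam/(4*kB*T))."""
    kt = KB_EV * T
    return (2 * math.pi / HBAR) * V**2 \
        * (1.0 / math.sqrt(4 * math.pi * lam * kt)) \
        * math.exp(-lam / (4 * kt))

T = 300.0
k = marcus_rate(V=0.05, lam=0.2, T=T)  # eV inputs -> rate in 1/s
a = 4.0e-8                             # hop distance, cm
D = k * a**2 / 2                       # 1D diffusion coefficient, cm^2/s
mu = D / (KB_EV * T)                   # Einstein relation, cm^2/(V*s)
print(f"k = {k:.2e} s^-1, mu ~ {mu:.2f} cm^2/Vs")
```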

  18. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Science.gov (United States)

    2011-05-18

    ... NUCLEAR REGULATORY COMMISSION [NRC-2011-0109] NUREG/CR-XXXX, Development of Quantitative Software...: The Nuclear Regulatory Commission has issued for public comment a document entitled: NUREG/CR-XXXX...-XXXX is available electronically under ADAMS Accession Number ML111020087. Federal Rulemaking Web Site...

  19. A Quantitative Comparative Study of Blended and Traditional Models in the Secondary Advanced Placement Statistics Classroom

    Science.gov (United States)

    Owens, Susan T.

    2017-01-01

    Technology is becoming an integral tool in the classroom and can make a positive impact on how the students learn. This quantitative comparative research study examined gender-based differences among secondary Advanced Placement (AP) Statistic students comparing Educational Testing Service (ETS) College Board AP Statistic examination scores…

  20. NMR spectroscopic study and DFT calculations of vibrational analyses

    African Journals Online (AJOL)

    Preferred Customer

    Plant, Drug and Scientific Research Centre, Anadolu University, 26470, ... Density functional theory (DFT) calculations provide excellent agreement with ... a simple correlation between 1J(CH) and the hybridization of the carbon atom involved; ...

  1. Redox Potentials of Ligands and Complexes – a DFT Approach

    African Journals Online (AJOL)

    NICO

    Electron affinity (EA) of an atom or molecule is the associated energy change that occurs .... As a consequence of the foregoing evidence we resolved to embark on a ... Density functional theory (DFT) calculations were performed using the ...

  2. Quantitative evaluation of the risk induced by dominant geomorphological processes on different land uses, based on GIS spatial analysis models

    Science.gov (United States)

    Ştefan, Bilaşco; Sanda, Roşca; Ioan, Fodorean; Iuliu, Vescan; Sorin, Filip; Dănuţ, Petrea

    2017-12-01

    Maramureş Land is mostly characterized by agricultural and forestry land use due to its specific configuration of topography and its specific pedoclimatic conditions. Taking into consideration the trend of the last century from the perspective of land management, a decrease in the surface of agricultural lands to the advantage of built-up and grass lands, as well as an accelerated decrease in the forest cover due to uncontrolled and irrational forest exploitation, has become obvious. The field analysis performed on the territory of Maramureş Land has highlighted a high frequency of two geomorphologic processes — landslides and soil erosion — which have a major negative impact on land use due to their rate of occurrence. The main aim of the present study is the GIS modeling of the two geomorphologic processes, determining a state of vulnerability (using the USLE model for soil erosion, and a quantitative model based on the morphometric characteristics of the territory derived from HG 447/2003), and their integration into a complex model of cumulated vulnerability identification. The modeling of the risk exposure was performed using a quantitative approach based on models and equations of spatial analysis, which were developed with modeled raster data structures and primary vector data, through a matrix highlighting the correspondence between vulnerability and land use classes. The quantitative analysis of the risk was performed by taking into consideration the exposure classes as modeled databases and the land price as a primary alphanumeric database, using spatial analysis techniques for each class by means of the attribute table. The spatial results highlight the territories at high risk from geomorphologic processes with a high degree of occurrence, and represent a useful tool in the process of spatial planning.
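
    The USLE component of the vulnerability model is a simple multiplicative formula, sketched below with illustrative factor values:

```python
# Hedged sketch of the USLE soil-loss equation used in the vulnerability
# model above: A = R * K * LS * C * P. Factor values are illustrative only.
def usle_soil_loss(R, K, LS, C, P):
    """Mean annual soil loss A (t/ha/yr) from the standard USLE factors:
    R rainfall erosivity, K soil erodibility, LS slope length/steepness,
    C cover management, P support practice."""
    return R * K * LS * C * P

A = usle_soil_loss(R=480.0, K=0.3, LS=1.8, C=0.2, P=1.0)
print(f"A = {A:.1f} t/ha/yr")
```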

  3. Quantitative evaluation of the risk induced by dominant geomorphological processes on different land uses, based on GIS spatial analysis models

    Science.gov (United States)

    Ştefan, Bilaşco; Sanda, Roşca; Ioan, Fodorean; Iuliu, Vescan; Sorin, Filip; Dănuţ, Petrea

    2018-06-01

    Maramureş Land is mostly characterized by agricultural and forestry land use due to its specific configuration of topography and its specific pedoclimatic conditions. Taking into consideration the trend of the last century from the perspective of land management, a decrease in the surface of agricultural lands to the advantage of built-up and grass lands, as well as an accelerated decrease in the forest cover due to uncontrolled and irrational forest exploitation, has become obvious. The field analysis performed on the territory of Maramureş Land has highlighted a high frequency of two geomorphologic processes — landslides and soil erosion — which have a major negative impact on land use due to their rate of occurrence. The main aim of the present study is the GIS modeling of the two geomorphologic processes, determining a state of vulnerability (using the USLE model for soil erosion, and a quantitative model based on the morphometric characteristics of the territory derived from HG 447/2003), and their integration into a complex model of cumulated vulnerability identification. The modeling of the risk exposure was performed using a quantitative approach based on models and equations of spatial analysis, which were developed with modeled raster data structures and primary vector data, through a matrix highlighting the correspondence between vulnerability and land use classes. The quantitative analysis of the risk was performed by taking into consideration the exposure classes as modeled databases and the land price as a primary alphanumeric database, using spatial analysis techniques for each class by means of the attribute table. The spatial results highlight the territories at high risk from geomorphologic processes with a high degree of occurrence, and represent a useful tool in the process of spatial planning.

  4. Electronic spectroscopy of HRe(CO)5: a CASSCF/CASPT2 and TD-DFT study

    Science.gov (United States)

    Bossert, J.; Ben Amor, N.; Strich, A.; Daniel, C.

    2001-07-01

    The low-lying excited states of HRe(CO)5 have been calculated at the CASSCF/CASPT2 and TD-DFT levels of theory using relativistic effective core potentials (ECP) or ab initio model potentials (AIMP). The theoretical absorption spectrum is compared to the experimental one. Despite the similarity between the experimental absorption spectra of HMn(CO)5 and HRe(CO)5 in the UV/visible energy domain, it is shown that the assignment differs significantly between the two molecules. The low-lying excited states of HRe(CO)5 correspond to 5d→π*(CO) excitations, whereas the spectrum of HMn(CO)5 consists mainly of 3d→3d and 3d→σ*(Mn-H) excitations. While the CASPT2 and TD-DFT results are quite comparable for the lowest excited states, the assignment of the upper part of the spectrum is more problematic with the TD-DFT method.

  5. Qualitative and quantitative combined nonlinear dynamics model and its application in analysis of price, supply–demand ratio and selling rate

    International Nuclear Information System (INIS)

    Zhu, Dingju

    2016-01-01

    The qualitative and quantitative combined nonlinear dynamics model proposed in this paper fills a methodological gap: it allows the qualitative model and the quantitative model to complement each other, each type using its strengths to compensate for the other's deficiencies. The combined model overcomes the weakness that a qualitative model cannot be applied and verified in a quantitative manner, as well as the high costs and long time of repeatedly constructing and verifying a quantitative model, making it more practical and efficient, which is of great significance for nonlinear dynamics. The combined modeling and model analysis method raised in this paper is not restricted to nonlinear dynamics; it can also be adopted and drawn on in the modeling and model analysis of other fields. Additionally, the analytical method proposed here satisfactorily resolves the problems with the existing nonlinear dynamics model analysis of price systems. The three-dimensional dynamics model of price, supply–demand ratio and selling rate established in this paper estimates the best commodity prices from the model results, thereby providing a theoretical basis for the government's macro-control of prices. Meanwhile, this model also offers theoretical guidance on how to enhance people's purchasing power and consumption levels through price regulation, and hence improve people's living standards.

  6. Redesign of the DFT/MRCI Hamiltonian

    Energy Technology Data Exchange (ETDEWEB)

    Lyskov, Igor; Kleinschmidt, Martin; Marian, Christel M., E-mail: Christel.Marian@hhu.de [Institute of Theoretical and Computational Chemistry, Heinrich-Heine-University Düsseldorf, Universitätsstraße 1, 40225 Düsseldorf (Germany)

    2016-01-21

    The combined density functional theory and multireference configuration interaction (DFT/MRCI) method of Grimme and Waletzke [J. Chem. Phys. 111, 5645 (1999)] is a well-established semi-empirical quantum chemical method for efficiently computing excited-state properties of organic molecules. As it turns out, the method fails to treat bi-chromophores owing to the strong dependence of the parameters on the excitation class. In this work, we present an alternative form of correcting the matrix elements of a MRCI Hamiltonian which is built from a Kohn-Sham set of orbitals. It is based on the idea of constructing individual energy shifts for each of the state functions of a configuration. The new parameterization is spin-invariant and incorporates less empiricism compared to the original formulation. By utilizing damping techniques together with an algorithm of selecting important configurations for treating static electron correlation, the high computational efficiency has been preserved. The robustness of the original and redesigned Hamiltonians has been tested on experimentally known vertical excitation energies of organic molecules yielding similar statistics for the two parameterizations. Besides that, our new formulation is free from artificially low-lying doubly excited states, producing qualitatively correct and consistent results for excimers. The way of modifying matrix elements of the MRCI Hamiltonian presented here shall be considered as default choice when investigating photophysical processes of bi-chromophoric systems such as singlet fission or triplet-triplet upconversion.

  7. [Influence of Spectral Pre-Processing on PLS Quantitative Model of Detecting Cu in Navel Orange by LIBS].

    Science.gov (United States)

    Li, Wen-bing; Yao, Lin-tao; Liu, Mu-hua; Huang, Lin; Yao, Ming-yin; Chen, Tian-bing; He, Xiu-wen; Yang, Ping; Hu, Hui-qin; Nie, Jiang-hui

    2015-05-01

    Cu in navel orange was detected rapidly by laser-induced breakdown spectroscopy (LIBS) combined with partial least squares (PLS) quantitative analysis, and the effect of different spectral data pretreatment methods on the detection accuracy of the model was explored. Spectral data for the 52 Gannan navel orange samples were pretreated by different data smoothing windows, mean centering, and standard normal variate transformation. The 319-338 nm wavelength section containing characteristic spectral lines of Cu was then selected to build PLS models, and the main evaluation indexes of the models, namely the regression coefficient (r), the root mean square error of cross validation (RMSECV) and the root mean square error of prediction (RMSEP), were compared and analyzed. The three indicators of the PLS model after 13-point smoothing and mean centering reached 0.9928, 3.43 and 3.4, respectively, and the average relative error of the prediction model was only 5.55%; in short, the calibration and prediction quality of this model was the best. The results show that selecting an appropriate data pre-processing method can effectively improve the prediction accuracy of PLS quantitative models for fruits and vegetables detected by LIBS, providing a new method for fast and accurate detection of fruits and vegetables by LIBS.
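
    The winning pre-processing chain (13-point smoothing plus mean centering) followed by PLS is easy to prototype with SciPy and scikit-learn; the spectra and concentrations below are synthetic stand-ins for the LIBS measurements:

```python
# Hedged sketch of the pre-processing + PLS pipeline described above:
# 13-point smoothing, mean centering, then a PLS regression with
# cross-validation. Spectra and concentrations are synthetic stand-ins.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(52, 200))                       # 52 samples x 200 channels
y = X[:, 50] * 3.0 + rng.normal(scale=0.1, size=52)  # toy Cu concentration

X = savgol_filter(X, window_length=13, polyorder=2, axis=1)  # 13-point smoothing
X = X - X.mean(axis=0)                               # mean centering

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmsecv:.3f}")
```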

  8. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least squares between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy data or error data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model directly dealing with noisy data but not trying to smooth the noise in the image. Also, due to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
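
    For orientation, the "iterative fitting" baseline that this record contrasts with machine learning is, in its simplest form, nonlinear least squares on a compartment model. The sketch below fits a hypothetical one-tissue compartment model to synthetic data; all parameter values and the input function are invented, not taken from the paper:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical one-tissue compartment model: C_T(t) = K1 * (Cp (x) exp(-k2*t))
    t = np.linspace(0, 60, 121)             # minutes
    dt = t[1] - t[0]
    Cp = 10 * t * np.exp(-t / 2.0)          # synthetic plasma input function

    def tissue_curve(t, K1, k2):
        irf = np.exp(-k2 * t)               # impulse response of the tissue compartment
        return K1 * np.convolve(Cp, irf)[: len(t)] * dt

    rng = np.random.default_rng(0)
    measured = tissue_curve(t, 0.3, 0.1) + rng.normal(0, 0.2, t.size)  # noisy "PET" data

    # Iterative least-squares fit: the IF baseline the ML approach is compared against.
    (K1_hat, k2_hat), _ = curve_fit(tissue_curve, t, measured, p0=[0.1, 0.05],
                                    bounds=(0, [2.0, 1.0]))
    print(f"K1 = {K1_hat:.3f}, k2 = {k2_hat:.3f}")
    ```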

  9. Novel Uses of In Vitro Data to Develop Quantitative Biological Activity Relationship Models for in Vivo Carcinogenicity Prediction.

    Science.gov (United States)

    Pradeep, Prachi; Povinelli, Richard J; Merrill, Stephen J; Bozdag, Serdar; Sem, Daniel S

    2015-04-01

The availability of large in vitro datasets enables better insight into the mode of action of chemicals and better identification of potential mechanism(s) of toxicity. Several studies have shown that not all in vitro assays contribute equally as predictors of in vivo carcinogenicity for the development of hybrid Quantitative Structure Activity Relationship (QSAR) models. We propose two novel approaches for the use of mechanistically relevant in vitro assay data in the identification of relevant biological descriptors and the development of Quantitative Biological Activity Relationship (QBAR) models for carcinogenicity prediction. We demonstrate that in vitro assay data can be used to develop QBAR models for in vivo carcinogenicity prediction via two case studies corroborated with firm scientific rationale. The case studies demonstrate the similarities between QBAR and QSAR modeling in: (i) the selection of relevant descriptors to be used in the machine learning algorithm, and (ii) the development of a computational model that maps chemical or biological descriptors to a toxic endpoint. The results of both case studies show: (i) improved accuracy and sensitivity, which are especially desirable under regulatory requirements, and (ii) overall adherence with the OECD/REACH guidelines. Such mechanism-based models can be used along with QSAR models for the prediction of mechanistically complex toxic endpoints. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Defect evolution in cosmology and condensed matter quantitative analysis with the velocity-dependent one-scale model

    CERN Document Server

    Martins, C J A P

    2016-01-01

    This book sheds new light on topological defects in widely differing systems, using the Velocity-Dependent One-Scale Model to better understand their evolution. Topological defects – cosmic strings, monopoles, domain walls or others - necessarily form at cosmological (and condensed matter) phase transitions. If they are stable and long-lived they will be fossil relics of higher-energy physics. Understanding their behaviour and consequences is a key part of any serious attempt to understand the universe, and this requires modelling their evolution. The velocity-dependent one-scale model is the only fully quantitative model of defect network evolution, and the canonical model in the field. This book provides a review of the model, explaining its physical content and describing its broad range of applicability.
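
    For orientation, the VOS equations for a cosmic-string network, in their commonly quoted form (standard result, not copied from this record), read:

    ```latex
    % Standard VOS evolution equations for a cosmic-string network: L is the
    % correlation length, v the RMS velocity, H the Hubble rate, k(v) the
    % momentum parameter and \tilde{c} the loop-chopping efficiency.
    \frac{dL}{dt} = (1 + v^2)\,H L + \frac{\tilde{c}}{2}\,v,
    \qquad
    \frac{dv}{dt} = (1 - v^2)\left(\frac{k(v)}{L} - 2 H v\right)
    ```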

  12. A conceptual DFT approach towards analysing toxicity

    Indian Academy of Sciences (India)

    Unknown

Effects of population analysis schemes in the calculation of ... Introduction. Quantitative structure–activity relationships (QSARs) ...

  13. Excited States and Photodebromination of Selected Polybrominated Diphenyl Ethers: Computational and Quantitative Structure—Property Relationship Studies

    Directory of Open Access Journals (Sweden)

    Jin Luo

    2015-01-01

Full Text Available This paper presents a density functional theory (DFT)/time-dependent DFT (TD-DFT) study on the lowest-lying singlet and triplet excited states of 20 selected polybrominated diphenyl ether (PBDE) congeners, with the solvation effect included in the calculations using the polarized continuum model (PCM). The results showed that for most of the brominated diphenyl ether (BDE) congeners, the lowest singlet excited state was initiated by electron transfer from the HOMO to the LUMO, involving a π–σ* excitation. In the triplet excited states, the structure of the BDE congeners differed notably from that of the ground states, with one of the C–Br bonds bending off the aromatic plane. In addition, the partial least squares regression (PLSR), principal component analysis-multiple linear regression (PCA-MLR), and back-propagation artificial neural network (BP-ANN) approaches were employed for a quantitative structure-property relationship (QSPR) study. Based on previously reported kinetic data for debromination by ultraviolet (UV) light and sunlight, the obtained QSPR models gave a reasonable evaluation of the photodebromination reactivity even when the BDE congeners had the same degree of bromination, albeit with different patterns of bromination.

  14. Physiological role of Kv1.3 channel in T lymphocyte cell investigated quantitatively by kinetic modeling.

    Directory of Open Access Journals (Sweden)

    Panpan Hou

Full Text Available Kv1.3 is a delayed-rectifier channel abundant in human T lymphocytes. Chronic inflammatory and autoimmune disorders lead to the over-expression of Kv1.3 in T cells. To quantitatively study the regulatory mechanism and physiological function of Kv1.3 in T cells, a precise kinetic model of Kv1.3 is necessary. In this study, we first established a kinetic model capable of precisely replicating all the kinetic features of Kv1.3 channels, and then constructed a T-cell model composed of ion channels, including the Ca2+-release activated calcium (CRAC) channel, the intermediate-conductance K+ (IK) channel, the TASK channel and the Kv1.3 channel, to quantitatively simulate the changes in membrane potential and in local Ca2+ signaling messengers during the activation of T cells. Based on experimental data from current-clamp recordings, we successfully demonstrated that Kv1.3 dominates the membrane potential of T cells and thereby controls the Ca2+ influx via the CRAC channel. Our results revealed that deficient expression of the Kv1.3 channel leads to a weaker Ca2+ signal and hence less efficient secretion. This was the first successful attempt to simulate membrane potential in non-excitable cells, laying a solid basis for quantitatively studying the regulatory mechanisms and physiological roles of channels in non-excitable cells.
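
    The record does not include the model equations; below is a minimal sketch of the membrane-potential bookkeeping such a non-excitable-cell model rests on. The conductances and reversal potentials are invented, and the fixed conductances are a drastic simplification of the paper's gated Kv1.3/CRAC/IK/TASK kinetics:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Minimal membrane-potential model for a non-excitable cell (illustrative
    # parameters only). The membrane relaxes to a conductance-weighted average
    # of the reversal potentials, dominated here by the K+ (Kv1.3-like) term.
    C_m   = 2e-12        # membrane capacitance (F), order of magnitude for a T cell
    g     = {"Kv13": 2e-9, "TASK": 0.2e-9, "CRAC": 0.05e-9}   # conductances (S), assumed
    E_rev = {"Kv13": -80e-3, "TASK": -80e-3, "CRAC": +50e-3}  # reversal potentials (V)

    def dVdt(t, y):
        V = y[0]
        I = sum(g[c] * (V - E_rev[c]) for c in g)  # total ionic current (A)
        return [-I / C_m]

    sol = solve_ivp(dVdt, (0.0, 0.5), [-60e-3], max_step=1e-3)
    print(f"resting potential ~ {sol.y[0, -1] * 1e3:.1f} mV")
    ```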

  15. Novel mathematic models for quantitative transitivity of quality-markers in extraction process of the Buyanghuanwu decoction.

    Science.gov (United States)

    Zhang, Yu-Tian; Xiao, Mei-Feng; Deng, Kai-Wen; Yang, Yan-Tao; Zhou, Yi-Qun; Zhou, Jin; He, Fu-Yuan; Liu, Wen-Long

    2018-06-01

Nowadays, in designing efficient extraction systems for Chinese herbal medicines, scientists face a great challenge in quality management; hence the transitivity of Q-markers in the quantitative analysis of traditional Chinese medicine (TCM) was recently proposed by Prof. Liu. In order to improve the quality of extraction from raw medicinal materials for clinical preparations, a series of integrated mathematical models for the transitivity of Q-markers in the quantitative analysis of TCM was established. Buyanghuanwu decoction (BYHWD), a common TCM prescription used to prevent and treat ischemic heart and brain diseases, was selected as the extraction experimental subject to study the quantitative transitivity of TCM. Based on Fick's law and the Noyes-Whitney equation, novel kinetic models were established for the extraction of active components. The kinetic equations of the extraction models were fitted, and the inherent parameters of the material pieces and the Q-marker quantitative transfer coefficients were then calculated; these served as indexes to evaluate the transitivity of Q-markers in the quantitative analysis of the BYHWD extraction process. HPLC was applied to screen and analyze the potential Q-markers in the extraction process. Kinetic parameters were fitted and calculated with the Statistical Program for Social Sciences (SPSS) 20.0 software. The transfer efficiency was described and evaluated via the potential Q-marker transfer trajectory, using the transitivity availability AUC, the extraction ratio P, and the decomposition ratio D, respectively; Q-markers were identified on the basis of AUC, P and D. Astragaloside IV, laetrile, paeoniflorin, and ferulic acid were studied as potential Q-markers from BYHWD. The relevant technological parameters were represented by the mathematical models, which could adequately illustrate the inherent properties of the raw materials.
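
    The record names Fick's law and the Noyes-Whitney equation as the starting point; in their textbook form the dissolution kinetics read as follows (symbols as usually defined, not quoted from the paper, whose transitivity models are more elaborate):

    ```latex
    % Noyes-Whitney dissolution kinetics: D = diffusion coefficient, A = surface
    % area, h = diffusion-layer thickness, V = solvent volume, C_s = saturation
    % concentration. The first-order form integrates to a saturating exponential.
    \frac{dC}{dt} = \frac{DA}{Vh}\,(C_s - C) = k\,(C_s - C)
    \quad\Longrightarrow\quad
    C(t) = C_s\left(1 - e^{-kt}\right)
    ```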

  16. A Quantitative bgl Operon Model for E. coli Requires BglF Conformational Change for Sugar Transport

    Science.gov (United States)

    Chopra, Paras; Bender, Andreas

The bgl operon is responsible for the metabolism of β-glucoside sugars such as salicin or arbutin in E. coli. Its regulatory system involves both positive and negative feedback mechanisms, and it can be assumed to be more complex than that of the more closely studied lac and trp operons. We have developed a quantitative model for the regulation of the bgl operon which is subjected to in silico experiments investigating its behavior under different hypothetical conditions. Upon administration of 5 mM salicin as an inducer, our model shows 80-fold induction, which compares well with the 60-fold induction measured experimentally. Under practical conditions 5-10 mM inducer is employed, in line with the minimum inducer concentration of 1 mM required by our model. The necessity of a BglF conformational change for sugar transport has been hypothesized previously, and in line with those hypotheses our model shows only minor induction if the conformational change is not allowed. Overall, this first quantitative model for the bgl operon gives reasonable predictions that are close to experimental results (where measured). It will be further refined as the values of its parameters are determined experimentally. The model was developed in the Systems Biology Markup Language (SBML) and is available from the authors and from the BioModels repository [www.ebi.ac.uk/biomodels].

  17. Laser-induced Breakdown spectroscopy quantitative analysis method via adaptive analytical line selection and relevance vector machine regression model

    International Nuclear Information System (INIS)

    Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong

    2015-01-01

A new LIBS quantitative analysis method based on adaptive analytical-line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward to overcome the drawback of high dependence on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of the spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of confidence intervals of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples have been carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieves better prediction accuracy and better modeling robustness than methods based on partial least squares regression, artificial neural networks and the standard support vector machine. - Highlights: • Both training and testing samples are considered for analytical line selection. • The analytical lines are auto-selected based on the built-in characteristics of spectral lines. • The new method can achieve better prediction accuracy and modeling robustness. • Model predictions are given with confidence intervals of a probabilistic distribution
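
    Scikit-learn has no RVM implementation; as a hedged stand-in, its ARDRegression (a closely related sparse Bayesian regressor that also returns predictive uncertainty) can illustrate the calibration step. All line intensities and concentrations below are invented:

    ```python
    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(1)
    # Synthetic stand-in data: intensities of 6 candidate analytical lines for
    # 23 "standard samples" vs. known Cr concentrations (all values invented).
    X = rng.lognormal(mean=2.0, sigma=0.3, size=(23, 6))
    true_w = np.array([0.8, 0.5, 0.0, 0.0, 0.3, 0.0])   # only some lines informative
    y = X @ true_w + rng.normal(0, 0.1, 23)

    model = ARDRegression()                 # sparse Bayesian regression, RVM-like
    model.fit(X[:18], y[:18])
    mean, std = model.predict(X[18:], return_std=True)  # predictive uncertainty
    for m, s in zip(mean, std):
        print(f"predicted Cr: {m:.2f} +/- {1.96 * s:.2f} (95% interval)")
    ```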

  18. Multivariate regression models for the simultaneous quantitative analysis of calcium and magnesium carbonates and magnesium oxide through drifts data

    Directory of Open Access Journals (Sweden)

    Marder Luciano

    2006-01-01

Full Text Available In the present work, multivariate regression models were developed for the quantitative analysis of ternary systems using Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFTS) to determine the concentration by weight of calcium carbonate, magnesium carbonate and magnesium oxide. Nineteen standard samples, previously defined in a ternary diagram by mixture design, were prepared and their mid-infrared diffuse reflectance spectra recorded. The partial least squares (PLS) regression method was applied to the model. The spectra set was preprocessed by either mean centering and variance scaling (model 2) or mean centering only (model 1). The results, based on the prediction performance for the external validation set expressed by the RMSEP (root mean square error of prediction), demonstrated that it is possible to develop good models to simultaneously determine the calcium carbonate, magnesium carbonate and magnesium oxide content in powdered samples, which can be used in the study of the thermal decomposition of dolomite rocks.

  19. Experimental and DFT study of the degradation of 4-chlorophenol on hierarchical micro-/nanostructured oxide films

    Czech Academy of Sciences Publication Activity Database

    Guerin, V. M.; Žouželka, Radek; Bíbová-Lipšová, Hana; Jirkovský, Jaromír; Rathouský, Jiří; Pauporté, T.

    2015-01-01

    Roč. 168, JUN 01 (2015), s. 132-140 ISSN 0926-3373 R&D Projects: GA MK(CZ) DF11P01OVV012 Keywords : 4-Chlorophenol degradation * DFT modeling * ZnO hierarchical nanostructures Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 8.328, year: 2015

  20. Quantitative Analysis of Variability and Uncertainty in Environmental Data and Models. Volume 1. Theory and Methodology Based Upon Bootstrap Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Frey, H. Christopher [North Carolina State University, Raleigh, NC (United States); Rhodes, David S. [North Carolina State University, Raleigh, NC (United States)

    1999-04-30

    This is Volume 1 of a two-volume set of reports describing work conducted at North Carolina State University sponsored by Grant Number DE-FG05-95ER30250 by the U.S. Department of Energy. The title of the project is “Quantitative Analysis of Variability and Uncertainty in Acid Rain Assessments.” The work conducted under sponsorship of this grant pertains primarily to two main topics: (1) development of new methods for quantitative analysis of variability and uncertainty applicable to any type of model; and (2) analysis of variability and uncertainty in the performance, emissions, and cost of electric power plant combustion-based NOx control technologies. These two main topics are reported separately in Volumes 1 and 2.

  1. Quantitative T2 mapping evaluation for articular cartilage lesions in a rabbit model of anterior cruciate ligament transection osteoarthritis.

    Science.gov (United States)

    Wei, Zheng-mao; Du, Xiang-ke; Huo, Tian-long; Li, Xu-bin; Quan, Guang-nan; Li, Tian-ran; Cheng, Jin; Zhang, Wei-tao

    2012-03-01

Quantitative T2 mapping has been a widely used method for the evaluation of pathological cartilage properties, and a histological assessment system for osteoarthritis in the rabbit has been published recently. The aim of the study was to investigate the effectiveness of quantitative T2 mapping evaluation of articular cartilage lesions in a rabbit model of anterior cruciate ligament transection (ACLT) osteoarthritis. Twenty New Zealand White (NZW) rabbits were divided equally into an ACLT surgical group and a sham-operated group. The anterior cruciate ligaments of the rabbits in the ACLT group were transected, while the joints were closed intact in the sham-operated group. Magnetic resonance (MR) examinations were performed on a 3.0 T MR unit at week 0, week 6, and week 12. T2 values were computed on a GE ADW4.3 workstation. All rabbits were killed at week 13, and the left knees were stained with haematoxylin and eosin. Semiquantitative histological grading was obtained according to the osteoarthritis cartilage histopathology assessment system. Computerized image analysis was performed to quantitate the immunostained type II collagen. The average MR T2 value of whole left-knee cartilage in the ACLT surgical group ((29.05±12.01) ms) was significantly higher than that in the sham-operated group ((24.52±7.97) ms) (P=0.024) at week 6. The average T2 value increased to (32.18±12.79) ms in the ACLT group at week 12, but remained near the baseline level ((27.66±8.08) ms) in the sham-operated group (P=0.03). The cartilage lesion level of the left knee in the ACLT group was significantly increased at week 6 (P=0.005) and at week 12. T2 values correlated positively with the histological grading scores, but inversely with the optical density (OD) of type II collagen. This study demonstrated the reliability and practicability of quantitative T2 mapping for cartilage injury in the rabbit ACLT osteoarthritis model.

  2. Discrimination of Semi-Quantitative Models by Experiment Selection: Method Application in Population Biology

    NARCIS (Netherlands)

    Vatcheva, Ivayla; Bernard, Olivier; de Jong, Hidde; Gouze, Jean-Luc; Mars, Nicolaas; Nebel, B.

    2001-01-01

    Modeling an experimental system often results in a number of alternative models that are justified equally well by the experimental data. In order to discriminate between these models, additional experiments are needed. We present a method for the discrimination of models in the form of

  3. A quantitative analysis of faulty EPCs in the SAP reference model

    NARCIS (Netherlands)

    Mendling, J.; Moser, M.; Neumann, G.; Verbeek, H.M.W.; Dongen, van B.F.; Aalst, van der W.M.P.

    2006-01-01

The SAP reference model contains more than 600 non-trivial process models expressed in terms of Event-driven Process Chains (EPCs). We have automatically translated these EPCs into YAWL models and analyzed these models using WofYAWL, a verification tool based on Petri nets. We discovered that at least

  4. Is HAM/3 (hydrogenic atoms in molecules, version 3 a semiempirical version of dft (density functional theory for ionization processes?

    Directory of Open Access Journals (Sweden)

    Takahata Yuji

    2004-01-01

Full Text Available We calculated valence-electron vertical ionization potentials (VIPs) of nine small molecules, plus uracil and C2F4, by several different methods: the semiempirical HAM/3 and AM1 methods, different nonempirical DFT models such as uDI(B88-P86)/cc-pVTZ and -epsilon(SAOP)/TZP, and ab initio Hartree-Fock (HF)/cc-pVTZ. HAM/3 reproduced numerical values closer to those calculated by the nonempirical DFT models than to those obtained by the HF method. Core-electron binding energies (CEBEs) of aniline, nitrobenzene and p-nitroaniline were also calculated by HAM/3 and nonempirical DFT using the deltaE method. A nonempirical DFT model, designated the deltaE(KS)(PW86-PW91)/TZP model, yielded accurate CEBEs (average absolute deviation of 0.14 eV) with high efficiency. Although the absolute magnitude of the HAM/3 CEBEs is in error by as much as 3 eV, the error in the chemical shifts (deltaCEBE) is much smaller, at 0.55 eV. While the CEBE results do not lead to any definite answer to the question in the title, the trends in the valence-electron VIPs indicate that HAM/3 does not approximate DFT with accurate exchange-correlation potentials, but seems to simulate approximate functionals such as B88-P86.

  5. Quantitative Assessment of Thermodynamic Constraints on the Solution Space of Genome-Scale Metabolic Models

    Science.gov (United States)

    Hamilton, Joshua J.; Dwivedi, Vivek; Reed, Jennifer L.

    2013-01-01

    Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. PMID:23870272
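
    The abstract does not reproduce the constraints; in the TMFA formulation this work applies (after Henry et al., 2007), thermodynamic feasibility is typically encoded with mixed-integer constraints of roughly the following form (standard formulation, not quoted from the paper):

    ```latex
    % s_ij = stoichiometric coefficient of metabolite i in reaction j,
    % x_i = metabolite activity, z_j = binary indicator that reaction j carries
    % flux, K = large "big-M" constant, eps > 0 a small feasibility margin:
    \Delta_r G'_j = \Delta_r G'^{\circ}_j + RT \sum_i s_{ij} \ln x_i,
    \qquad
    0 \le v_j \le z_j\, v_{\max},
    \qquad
    \Delta_r G'_j \le K\,(1 - z_j) - \varepsilon
    ```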

  7. Critical Assessment of TD-DFT for Excited States of Open-Shell Systems: I. Doublet-Doublet Transitions.

    Science.gov (United States)

    Li, Zhendong; Liu, Wenjian

    2016-01-12

A benchmark set of 11 small radicals is set up to assess the performance of time-dependent density functional theory (TD-DFT) for the excited states of open-shell systems. Both the unrestricted (U-TD-DFT) and spin-adapted (X-TD-DFT) formulations of TD-DFT are considered. For comparison, the well-established EOM-CCSD (equation-of-motion coupled-cluster with singles and doubles) is also used. In total, 111 low-lying singly excited doublet states are accessed by all three approaches. Taking the MRCISD+Q (multireference configuration interaction with singles and doubles plus the Davidson correction) results as the benchmark, it is found that both U-TD-DFT and EOM-CCSD perform well for those states dominated by singlet-coupled single excitations (SCSE) from closed-shell to open-shell, open-shell to vacant-shell, or closed-shell to vacant-shell orbitals. However, for those states dominated by triplet-coupled single excitations (TCSE) from closed-shell to vacant-shell orbitals, both U-TD-DFT and EOM-CCSD fail badly due to severe spin contamination. In contrast, X-TD-DFT provides balanced descriptions of both SCSE and TCSE. As far as the functional dependence is concerned, when the Hartree-Fock ground state does not suffer from the instability problem, both global hybrid (GH) and range-separated hybrid (RSH) functionals perform grossly better than pure density functionals, especially for Rydberg and charge-transfer excitations. However, if the Hartree-Fock ground state is unstable or nearly unstable, GH and RSH functionals tend to severely underestimate the excitation energies. The SAOP (statistical averaging of model orbital potentials) performs more uniformly than the other density functionals, although it generally overestimates the excitation energies of valence excitations. Not surprisingly, both EOM-CCSD and adiabatic TD-DFT are incapable of describing excited states with substantial double-excitation character.

  8. Software Process Validation: Quantitatively Measuring the Correspondence of a Process to a Model

    National Research Council Canada - National Science Library

    Cook, Jonathan E; Wolf, Alexander L

    1997-01-01

    .... When process models and process executions diverge, something significant is happening. The authors have developed techniques for uncovering and measuring the discrepancies between models and executions, which they call process validation...

  9. Muon contact hyperfine field in metals: A DFT calculation

    Science.gov (United States)

    Onuorah, Ifeanyi John; Bonfà, Pietro; De Renzi, Roberto

    2018-05-01

In positive muon spin rotation and relaxation spectroscopy it is becoming customary to take advantage of density functional theory (DFT) based computational methods to aid the experimental data analysis. DFT-aided muon site determination is especially useful for measurements performed in magnetic materials, where large contact hyperfine interactions may arise. Here we present a systematic analysis of the accuracy of the ab initio estimation of the muon's contact hyperfine field in elemental transition metals, performing state-of-the-art spin-polarized plane-wave DFT calculations with the projector augmented wave (PAW) pseudopotential approach, which allows one to include the core-state effects due to spin ordering. We further validate this method on not-so-simple, noncentrosymmetric metallic compounds, presently of topical interest for their spiral magnetic structures giving rise to skyrmion phases, such as MnSi and MnGe. The calculated hyperfine fields agree with the experimental values in all cases, provided the spontaneous spin magnetization of the metal is well reproduced within the approach. To overcome the known limits of the conventional mean-field approximation of DFT for itinerant magnets, we adopt the so-called reduced Stoner theory [L. Ortenzi et al., Phys. Rev. B 86, 064437 (2012), 10.1103/PhysRevB.86.064437]. We establish the accuracy of the estimated muon contact field in metallic compounds with DFT, and our results show improved agreement with experiments compared to those of earlier publications.
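
    For background, the quantity being computed is the Fermi contact field at the muon site; in its commonly quoted form (standard textbook result, not copied from this paper):

    ```latex
    % rho_up/rho_down are the electron spin densities at the muon site r_mu;
    % g ~ 2 and S = 1/2 are absorbed into the Bohr-magneton prefactor.
    B_{\mathrm{c}} = \frac{2\mu_0}{3}\,\mu_B
    \left[\rho_{\uparrow}(\mathbf{r}_\mu) - \rho_{\downarrow}(\mathbf{r}_\mu)\right]
    ```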

  10. Real Patient and its Virtual Twin: Application of Quantitative Systems Toxicology Modelling in the Cardiac Safety Assessment of Citalopram.

    Science.gov (United States)

    Patel, Nikunjkumar; Wiśniowska, Barbara; Jamei, Masoud; Polak, Sebastian

    2017-11-27

A quantitative systems toxicology (QST) model for citalopram was established to simulate, in silico, a 'virtual twin' of a real patient and predict the occurrence of cardiotoxic events previously reported in patients under various clinical conditions. The QST model considers the effects of citalopram and its most notable electrophysiologically active primary (desmethylcitalopram) and secondary (didesmethylcitalopram) metabolites on cardiac electrophysiology. The in vitro cardiac ion channel current inhibition data were coupled with a biophysically detailed model of human cardiac electrophysiology to investigate the impact, on the prediction of clinically observed QT prolongation, of (i) the inhibition of multiple ion currents (IKr, IKs, ICaL); (ii) the inclusion of metabolites in the QST model; and (iii) the use of unbound or total plasma drug concentration as the operating concentration. Including multiple ion channel current inhibition and the metabolites in the simulation, with the unbound plasma citalopram concentration, gave the lowest prediction error. The predictive performance of the model was verified with three additional therapeutic and supra-therapeutic drug exposure clinical cases. The results indicate that considering hERG ion channel inhibition by the parent drug alone is potentially misleading, and that active metabolite data and the influence of other ion channel currents should be included to improve the prediction of potential cardiac toxicity. Mechanistic modelling can help bridge the gaps in the quantitative translation from preclinical cardiac safety assessment to clinical toxicology. Moreover, this study shows that QST models, in combination with appropriate drug and systems parameters, can pave the way towards personalised safety assessment.
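
    The ion-current inhibition step in such models is usually a Hill-type pore block. A minimal sketch follows, with invented IC50 values and concentrations (the paper's actual in vitro parameters are not given in this record):

    ```python
    import numpy as np

    def fraction_blocked(conc, ic50, hill=1.0):
        """Fractional block of an ion current by a drug (standard Hill pore-block model)."""
        return 1.0 / (1.0 + (ic50 / conc) ** hill)

    # Illustrative only: hypothetical IC50s (uM) for a drug on three cardiac currents.
    ic50 = {"IKr": 3.0, "IKs": 20.0, "ICaL": 10.0}
    for c_unbound in (0.1, 0.5, 1.0):       # unbound plasma concentrations (uM), assumed
        blocks = {ch: fraction_blocked(c_unbound, v) for ch, v in ic50.items()}
        print(c_unbound, {ch: f"{b:.1%}" for ch, b in blocks.items()})
    ```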

  11. Quantitative Models of Imperfect Deception in Network Security using Signaling Games with Evidence

    OpenAIRE

    Pawlick, Jeffrey; Zhu, Quanyan

    2017-01-01

    Deception plays a critical role in many interactions in communication and network security. Game-theoretic models called "cheap talk signaling games" capture the dynamic and information asymmetric nature of deceptive interactions. But signaling games inherently model undetectable deception. In this paper, we investigate a model of signaling games in which the receiver can detect deception with some probability. This model nests traditional signaling games and complete information Stackelberg ...

  12. Experiment selection for the discrimination of semi-quantitative models of dynamical systems

    NARCIS (Netherlands)

    Vatcheva, [No Value; de Jong, H; Bernard, O; Mars, NJI

    Modeling an experimental system often results in a number of alternative models that are all justified by the available experimental data. To discriminate among these models, additional experiments are needed. Existing methods for the selection of discriminatory experiments in statistics and in

  13. Estimating marginal properties of quantitative real-time PCR data using nonlinear mixed models

    DEFF Research Database (Denmark)

    Gerhard, Daniel; Bremer, Melanie; Ritz, Christian

    2014-01-01

    A unified modeling framework based on a set of nonlinear mixed models is proposed for flexible modeling of gene expression in real-time PCR experiments. Focus is on estimating the marginal or population-based derived parameters: cycle thresholds and ΔΔc(t), but retaining the conditional mixed mod...

  14. Development and validation of open-source software for DNA mixture interpretation based on a quantitative continuous model.

    Science.gov (United States)

    Manabe, Sho; Morimoto, Chie; Hamano, Yuya; Fujimoto, Shuntaro; Tamaki, Keiji

    2017-01-01

In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and the evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed new open-source software, "Kongoh", for interpreting DNA mixtures based on a quantitative continuous model. The model uses the quantitative information of peak heights in the DNA profile and considers the effects of artifacts and allelic drop-out. Using this software, the likelihoods of 1-4 persons' contributions are calculated, and the optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI's contribution for true contributors and non-contributors, using 2-4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI's contribution even for small amounts of, or degraded, DNA. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than other software based on a quantitative continuous model. Therefore, Kongoh is useful for accurately interpreting DNA evidence such as mixtures and small amounts of or degraded DNA samples.
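
    The likelihood ratio such software reports has the standard form below (generic, not specific to Kongoh's internals):

    ```latex
    % E is the observed mixture profile, H_p/H_d the prosecution/defence
    % hypotheses, and the sums marginalize over the genotype sets S consistent
    % with each hypothesis:
    LR = \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}
       = \frac{\sum_{S \in \mathcal{S}_p} \Pr(E \mid S)\,\Pr(S)}
              {\sum_{S \in \mathcal{S}_d} \Pr(E \mid S)\,\Pr(S)}
    ```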

  16. Quantitative characterization of initiation and propagation in stress corrosion cracking. An approach of a phenomenological model

    International Nuclear Information System (INIS)

    Raquet, O.

    1994-01-01

A purely phenomenological study of stress corrosion cracking was performed using the couple Z2CN 18.10 (304L) austenitic stainless steel/boiling aqueous MgCl2 solution. The exploitation of the morphological information available after constant elongation rate tests (crack shapes and size distribution) led to the proposal of an analytical expression for the crack initiation and growth rates. This representation made it possible to quantitatively characterize the influence of the applied strain rate, as well as the effect of corrosion inhibitors, on the crack initiation and propagation phases. It can be used in the search for stress corrosion cracking mechanisms as a 'riddle' for determining the rate-controlling steps; as a matter of fact, no mechanistic hypothesis was used in its development. (author)

  17. Quantitative anatomical basis for a model of micromechanical frequency tuning in the Tokay gecko, Gekko gecko.

    Science.gov (United States)

    Köppl, C; Authier, S

    1995-01-01

    The basilar papilla of the Tokay gecko was studied with standard light- and scanning electron microscopy methods. Several parameters thought to be of particular importance for the mechanical response properties of the system were quantitatively measured, separately for the three different hair-cell areas that are typical for this lizard family. In the basal third, papillar structure was very uniform. The apical two-thirds are subdivided into two hair-cell areas running parallel to each other along the papilla and covered by very different types of tectorial material. Both of those areas showed prominent gradients in hair-cell bundle morphology, i.e., in the height of the stereovillar bundles and the number of stereovilli per bundle, as well as in hair cell density and the size of their respective tectorial covering. Based on the direction of the observed anatomical gradients, a 'reverse' tonotopic organization is suggested, with the highest frequencies represented at the apical end.

  18. Modelling the Kampungkota: A quantitative approach in defining Indonesian informal settlements

    Science.gov (United States)

    Anindito, D. B.; Maula, F. K.; Akbar, R.

    2018-02-01

Bandung City is home to 2.5 million inhabitants, some of whom live in slums and squatter settlements. However, these terms do not adequately describe the Indonesian settlement type known as the kampungkota. Several studies have suggested variables constituting the kampungkota in qualitative terms; this study seeks to define the kampungkota quantitatively, using the characteristics of slums and squatter settlements. The samples for this study are 151 villages (kelurahan) in Bandung City. Ordinary Least Squares, Geographically Weighted Regression, and Spatial Cluster and Outlier Analysis are employed. The results suggest that the kampungkota may have distinct defining variables depending on its location. As a kampungkota may be smaller than the administrative area of a kelurahan, it can also develop beyond the jurisdiction of a kelurahan, as indicated by the clustering pattern of kampungkota.

  19. Crater Lakes on Mars: Development of Quantitative Thermal and Geomorphic Models

    Science.gov (United States)

    Barnhart, C. J.; Tulaczyk, S.; Asphaug, E.; Kraal, E. R.; Moore, J.

    2005-01-01

Impact craters on Mars have served as catchments for channel-eroding surface fluids, and hundreds of examples of candidate paleolakes have been documented [1,2]. Because these features show similarity to terrestrial shorelines, wave action has been hypothesized as the geomorphic agent responsible for their generation [3]. Recent efforts have examined the potential for shoreline formation by wind-driven waves, in order to turn an important but controversial idea into a quantitative, falsifiable hypothesis. These studies have concluded that significant wave-action shorelines are unlikely to have formed commonly within craters on Mars, barring Earth-like weather for approx. 1000 years [4,5,6].

  20. Quantitative Metrics and Risk Assessment: The Three Tenets Model of Cybersecurity

    Directory of Open Access Journals (Sweden)

    Jeff Hughes

    2013-08-01

Full Text Available Progress in operational cybersecurity has been difficult to demonstrate. In spite of the considerable research and development investments made for more than 30 years, many government, industrial, financial, and consumer information systems continue to be successfully attacked and exploited on a routine basis. One of the main reasons progress has been so meagre is that most technical cybersecurity solutions proposed to date have been point solutions that fail to address operational tradeoffs, implementation costs, and consequent adversary adaptations across the full spectrum of vulnerabilities. Furthermore, sound prescriptive security principles previously established, such as the Orange Book, have been difficult to apply given current system complexity and acquisition approaches. To address these issues, the authors have developed threat-based descriptive methodologies to more completely identify system vulnerabilities, to quantify the effectiveness of possible protections against those vulnerabilities, and to evaluate the operational consequences and tradeoffs of possible protections. This article begins with a discussion of the tradeoffs among seemingly different system security properties such as confidentiality, integrity, and availability. We develop a quantitative framework for understanding these tradeoffs and the issues that arise when those security properties are all in play within an organization. Once security goals and candidate protections are identified, risk/benefit assessments can be performed using a novel multidisciplinary approach called “QuERIES.” The article ends with a threat-driven quantitative methodology, called “The Three Tenets”, for identifying vulnerabilities and countermeasures in networked cyber-physical systems. The goal of this article is to offer operational guidance, based on the techniques presented here, for informed decision making about cyber-physical system security.

  1. A simple model to quantitatively account for periodic outbreaks of the measles in the Dutch Bible Belt

    Science.gov (United States)

    Bier, Martin; Brak, Bastiaan

    2015-04-01

In the Netherlands there has been nationwide vaccination against the measles since 1976. However, in small, clustered communities of orthodox Protestants there is widespread refusal of the vaccine. After 1976, three large outbreaks with about 3000 reported cases of the measles have occurred among these orthodox Protestants. The outbreaks appear to occur about every twelve years. We show how a simple Kermack-McKendrick-like model can quantitatively account for the periodic outbreaks. Approximate analytic formulae connecting the period, size, and duration of an outbreak are derived. With an enhanced model we take the latency period into account. We also expand the model to follow how different age groups are affected. Like other researchers using other methods, we conclude that large-scale underreporting of the disease must occur.
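
    A minimal Kermack-McKendrick-style sketch (invented parameters, deaths neglected; the paper's fitted model is more refined) already produces the recurrent outbreaks described here, because births slowly replenish the susceptible pool:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # SIR model with births feeding the susceptible pool; illustrative parameters
    # for an unvaccinated sub-population, not the paper's fit. Deaths are neglected.
    beta, gamma = 400.0, 52.0      # per year: transmission rate, recovery (~1 week)
    mu, N = 0.013, 250_000         # birth rate per year and community size, assumed

    def sir(t, y):
        S, I, R = y
        dS = mu * N - beta * S * I / N
        dI = beta * S * I / N - gamma * I
        dR = gamma * I
        return [dS, dI, dR]

    sol = solve_ivp(sir, (0, 40), [0.05 * N, 10, 0.95 * N], max_step=0.01)
    peaks = (np.diff(np.sign(np.diff(sol.y[1]))) < 0).sum()
    print(f"outbreak peaks in 40 years: {peaks}")  # recurrent epidemics
    ```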

  2. Two-part zero-inflated negative binomial regression model for quantitative trait loci mapping with count trait.

    Science.gov (United States)

    Moghimbeigi, Abbas

    2015-05-07

Poisson regression models provide a standard framework for quantitative trait locus (QTL) mapping of count traits. In practice, however, count traits are often over-dispersed relative to the Poisson distribution. In these situations, zero-inflated Poisson (ZIP), zero-inflated generalized Poisson (ZIGP) and zero-inflated negative binomial (ZINB) regression may be useful for QTL mapping of count traits. Genetic variables added to the negative binomial part of the model may also affect the extra-zero data. In this study, to overcome these challenges, I apply a two-part ZINB model. The EM algorithm, with the Newton-Raphson method in the M-step, is used for estimating the parameters. An application of the two-part ZINB model to QTL mapping is considered, to detect associations between gallstone formation and marker genotypes. Copyright © 2015 Elsevier Ltd. All rights reserved.
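
    For reference, the ZINB distribution at the core of such models has the standard form below (pi = zero-inflation probability, mu = negative binomial mean, k = dispersion parameter; generic, not copied from the paper):

    ```latex
    P(Y = 0) = \pi + (1 - \pi)\left(\frac{k}{k + \mu}\right)^{\!k},
    \qquad
    P(Y = y) = (1 - \pi)\,\frac{\Gamma(y + k)}{y!\,\Gamma(k)}
    \left(\frac{k}{k + \mu}\right)^{\!k}\left(\frac{\mu}{k + \mu}\right)^{\!y},
    \quad y = 1, 2, \dots
    ```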

  3. Modeling number of bacteria per food unit in comparison to bacterial concentration in quantitative risk assessment: impact on risk estimates.

    Science.gov (United States)

    Pouillot, Régis; Chen, Yuhuan; Hoelzer, Karin

    2015-02-01

    When developing quantitative risk assessment models, a fundamental consideration for risk assessors is to decide whether to evaluate changes in bacterial levels in terms of concentrations or in terms of bacterial numbers. Although modeling bacteria in terms of integer numbers may be regarded as a more intuitive and rigorous choice, modeling bacterial concentrations is more popular as it is generally less mathematically complex. We tested three different modeling approaches in a simulation study. The first approach considered bacterial concentrations; the second considered the number of bacteria in contaminated units, and the third considered the expected number of bacteria in contaminated units. Simulation results indicate that modeling concentrations tends to overestimate risk compared to modeling the number of bacteria. A sensitivity analysis using a regression tree suggests that processes which include drastic scenarios consisting of combinations of large bacterial inactivation followed by large bacterial growth frequently lead to a >10-fold overestimation of the average risk when modeling concentrations as opposed to bacterial numbers. Alternatively, the approach of modeling the expected number of bacteria in positive units generates results similar to the second method and is easier to use, thus potentially representing a promising compromise. Published by Elsevier Ltd.
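
    A small Monte Carlo sketch of the comparison this record describes (all numbers invented): through a drastic inactivation-then-growth scenario, tracking concentrations spreads the dose over every serving, while tracking integer survivors concentrates it in a few; the mean doses match, but a nonlinear dose-response then yields roughly a tenfold difference in mean risk, in the direction the paper reports:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n, serving = 100_000, 25.0            # Monte Carlo iterations; serving size (g)
    log_reduction, log_growth = 6.0, 5.0  # drastic inactivation then regrowth (log10)
    c0 = 10 ** rng.normal(0.0, 1.0, n)    # initial concentration (CFU/g), lognormal

    # Approach 1: track concentrations (continuous; allows "fractional" survivors).
    dose_conc = c0 * 10 ** (-log_reduction + log_growth) * serving

    # Approach 2: track integer numbers of bacteria per serving.
    n0 = rng.poisson(c0 * serving)                       # initial count per serving
    survivors = rng.binomial(n0, 10 ** -log_reduction)   # inactivation kills whole cells
    dose_count = survivors * 10 ** log_growth            # only real survivors regrow

    r = 1e-4                              # single-hit dose-response parameter, assumed
    risk_conc = 1 - np.exp(-r * dose_conc)
    risk_count = 1 - np.exp(-r * dose_count)
    print(f"mean dose  (conc / count): {dose_conc.mean():.1f} / {dose_count.mean():.1f}")
    print(f"mean risk  (conc / count): {risk_conc.mean():.2e} / {risk_count.mean():.2e}")
    ```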

  4. Development of quantitative structure activity relationship (QSAR) model for disinfection byproduct (DBP) research: A review of methods and resources

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Baiyang, E-mail: poplar_chen@hotmail.com [Harbin Institute of Technology Shenzhen Graduate School, Shenzhen Key Laboratory of Water Resource Utilization and Environmental Pollution Control, Shenzhen 518055 (China); Zhang, Tian [Harbin Institute of Technology Shenzhen Graduate School, Shenzhen Key Laboratory of Water Resource Utilization and Environmental Pollution Control, Shenzhen 518055 (China); Bond, Tom [Department of Civil and Environmental Engineering, Imperial College, London SW7 2AZ (United Kingdom); Gan, Yiqun [Harbin Institute of Technology Shenzhen Graduate School, Shenzhen Key Laboratory of Water Resource Utilization and Environmental Pollution Control, Shenzhen 518055 (China)

    2015-12-15

    Quantitative structure–activity relationship (QSAR) models are tools for linking chemical activities with molecular structures and compositions. Due to the concern about the proliferating number of disinfection byproducts (DBPs) in water and the associated financial and technical burden, researchers have recently begun to develop QSAR models to investigate the toxicity, formation, property, and removal of DBPs. However, there are no standard procedures or best practices regarding how to develop QSAR models, which potentially limit their wide acceptance. In order to facilitate more frequent use of QSAR models in future DBP research, this article reviews the processes required for QSAR model development, summarizes recent trends in QSAR-DBP studies, and shares some important resources for QSAR development (e.g., free databases and QSAR programs). The paper follows the four steps of QSAR model development, i.e., data collection, descriptor filtration, algorithm selection, and model validation; and finishes by highlighting several research needs. Because QSAR models may have an important role in progressing our understanding of DBP issues, it is hoped that this paper will encourage their future use for this application.

  6. Using the ACT-R architecture to specify 39 quantitative process models of decision making

    Directory of Open Access Journals (Sweden)

    Julian N. Marewski

    2011-08-01

    Full Text Available Hypotheses about decision processes are often formulated qualitatively and remain silent about the interplay of decision, memorial, and other cognitive processes. At the same time, existing decision models are specified at varying levels of detail, making it difficult to compare them. We provide a methodological primer on how detailed cognitive architectures such as ACT-R allow remedying these problems. To make our point, we address a controversy, namely, whether noncompensatory or compensatory processes better describe how people make decisions from the accessibility of memories. We specify 39 models of accessibility-based decision processes in ACT-R, including the noncompensatory recognition heuristic and various other popular noncompensatory and compensatory decision models. Additionally, to illustrate how such models can be tested, we conduct a model comparison, fitting the models to one experiment and letting them generalize to another. Behavioral data are best accounted for by race models. These race models embody the noncompensatory recognition heuristic and compensatory models as a race between competing processes, dissolving the dichotomy between existing decision models.

  7. Substituted group and side chain effects for the porphyrin and zinc(II)–porphyrin derivatives: A DFT and TD-DFT study

    International Nuclear Information System (INIS)

    Tai, Chin-Kuen; Chuang, Wen-Hua; Wang, Bo-Cheng

    2013-01-01

The DFT/B3LYP/LANL2DZ and TD-DFT calculations have been performed to generate the optimized structures and the electronic and photo-physical properties of the porphyrin and zinc(II)–porphyrin (metalloporphyrin) derivatives. The substituted-group and side-chain effects for these derivatives are discussed in this study. According to the calculation results, the side chain moiety extends the π-delocalization length from the porphyrin core to the side chain moiety. A substituted group with a stronger electron-donating ability increases the energy level of the highest occupied molecular orbital (E_HOMO). A side chain moiety with a lower resonance energy decreases E_HOMO, the energy level of the lowest unoccupied molecular orbital (E_LUMO), and the energy gap (E_g) between the HOMO and LUMO in the porphyrin and zinc(II)–porphyrin derivatives. The natural bonding orbital (NBO) analysis determines the possible electron transfer mechanism from the electron-donating group to the electron-withdrawing group (the side chain moiety) in these porphyrin derivatives. The projected density of states (PDOS) analysis shows that the electron-donating group affects the electron density distribution in both the HOMO and the LUMO, while the side chain moiety influences the electron density distribution in the LUMO. The photo-physical properties (absorption wavelengths and the related oscillator strengths, f) of the porphyrin and zinc(II)–porphyrin derivatives in a dichloromethane environment have been simulated using the TD-DFT method within the Polarizable Continuum Model (PCM). The presence of both the substituted group and the side chain moiety in these derivatives results in a red shift and a broadening of the range of the Q/Soret absorption bands as compared to porphin. -- Highlights: • Side chain moiety extends the π-delocalization for the porphyrins. • Substituted group increases the energy of highest occupied molecular orbital. • Side chain moiety influences the Q/Soret band of

  8. A Framework for Quantitative Modeling of Neural Circuits Involved in Sleep-to-Wake Transition

    Directory of Open Access Journals (Sweden)

    Siamak eSorooshyari

    2015-02-01

    Full Text Available Identifying the neuronal circuits and dynamics of sleep-to-wake transition is essential to understanding brain regulation of behavioral states, including sleep-wake cycles, arousal, and hyperarousal. Recent work by different laboratories has used optogenetics to determine the role of individual neuromodulators in state transitions. The optogenetically-driven data does not yet provide a multi-dimensional schematic of the mechanisms underlying changes in vigilance states. This work presents a modeling framework to interpret, assist, and drive research on the sleep-regulatory network. We identify feedback, redundancy, and gating hierarchy as three fundamental aspects of this model. The presented model is expected to expand as additional data on the contribution of each transmitter to a vigilance state becomes available. Incorporation of conductance-based models of neuronal ensembles into this model and existing models of cortical excitability will provide more comprehensive insight into sleep dynamics as well as sleep and arousal-related disorders.

  9. Quantitative, steady-state properties of Catania's computational model of the operant reserve.

    Science.gov (United States)

    Berg, John P; McDowell, J J

    2011-05-01

    Catania (2005) found that a computational model of the operant reserve (Skinner, 1938) produced realistic behavior in initial, exploratory analyses. Although Catania's operant reserve computational model demonstrated potential to simulate varied behavioral phenomena, the model was not systematically tested. The current project replicated and extended the Catania model, clarified its capabilities through systematic testing, and determined the extent to which it produces behavior corresponding to matching theory. Significant departures from both classic and modern matching theory were found in behavior generated by the model across all conditions. The results suggest that a simple, dynamic operant model of the reflex reserve does not simulate realistic steady state behavior. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. FAST TRACK COMMUNICATION A DFT + DMFT approach for nanosystems

    Science.gov (United States)

    Turkowski, Volodymyr; Kabir, Alamgir; Nayyar, Neha; Rahman, Talat S.

    2010-11-01

    We propose a combined density functional theory plus dynamical mean-field theory (DFT + DMFT) approach for the reliable inclusion of electron-electron correlation effects in nanosystems. Compared with the widely used DFT + U approach, this method has several advantages, the most important of which is that it takes dynamical correlation effects into account. The formalism is illustrated through calculations of the magnetic properties of a set of small iron clusters (number of atoms 2 ≤ N ≤ 5). It is shown that the inclusion of dynamical effects leads to a reduction in the cluster magnetization (as compared to results from DFT + U) and that, even for such small clusters, the magnetization values agree well with experimental estimates. These results justify confidence in the ability of the method to accurately describe the magnetic properties of clusters of interest to nanoscience.

  11. Conformational, vibrational, NMR and DFT studies of N-methylacetanilide.

    Science.gov (United States)

    Arjunan, V; Santhanam, R; Rani, T; Rosi, H; Mohan, S

    2013-03-01

    Detailed conformational, vibrational, NMR and DFT studies of N-methylacetanilide have been carried out. In the DFT calculations, the B3LYP method was used with the 6-31G(**), 6-311++G(**) and cc-pVTZ basis sets. The vibrational frequencies were calculated, yielding IR and Raman frequencies together with intensities and Raman depolarisation ratios. The dipole moment derivatives were computed analytically. Owing to the complexity of the molecule, the potential energy distributions of the vibrational modes of the compound were also calculated. The isoelectronic molecular electrostatic potential surface (MEP) and the electron density surface were examined. (1)H and (13)C NMR isotropic chemical shifts were calculated, and the assignments made are compared with the experimental values. The energies of the important MOs of the compound were also determined by the TD-DFT method. Copyright © 2012 Elsevier B.V. All rights reserved.
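
    A hedged sketch of the corresponding harmonic frequency workflow, again in PySCF with a placeholder diatomic standing in for N-methylacetanilide; the B3LYP functional echoes the record, but nothing else is taken from the study.

    ```python
    # Harmonic frequency analysis sketch (illustrative molecule, not the record's).
    from pyscf import gto, dft
    from pyscf.hessian import thermo

    mol = gto.M(atom="H 0 0 0; F 0 0 0.92", basis="6-31g**")
    mf = dft.RKS(mol)
    mf.xc = "b3lyp"
    mf.kernel()

    hess = mf.Hessian().kernel()                  # analytic nuclear Hessian
    freq = thermo.harmonic_analysis(mol, hess)    # mass-weighted normal-mode analysis
    print(freq["freq_wavenumber"])                # harmonic frequencies in cm^-1
    ```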

  12. A suite of models to support the quantitative assessment of spread in pest risk analysis.

    Science.gov (United States)

    Robinet, Christelle; Kehlenbeck, Hella; Kriticos, Darren J; Baker, Richard H A; Battisti, Andrea; Brunel, Sarah; Dupin, Maxime; Eyre, Dominic; Faccoli, Massimo; Ilieva, Zhenya; Kenis, Marc; Knight, Jon; Reynaud, Philippe; Yart, Annie; van der Werf, Wopke

    2012-01-01

    Pest Risk Analyses (PRAs) are conducted worldwide to decide whether and how exotic plant pests should be regulated to prevent invasion. There is an increasing demand for science-based risk mapping in PRA. Spread plays a key role in determining the potential distribution of pests, but there is no suitable spread modelling tool available for pest risk analysts. Existing models are species specific, biologically and technically complex, and data hungry. Here we present a set of four simple and generic spread models that can be parameterised with limited data. Simulations with these models generate maps of the potential expansion of an invasive species at continental scale. The models have one to three biological parameters. They differ in whether they treat spatial processes implicitly or explicitly, and in whether they consider pest density or pest presence/absence only. The four models represent four complementary perspectives on the process of invasion and, because they have different initial conditions, they can be considered as alternative scenarios. All models take into account habitat distribution and climate. We present an application of each of the four models to the western corn rootworm, Diabrotica virgifera virgifera, using historic data on its spread in Europe. Further tests as proof of concept were conducted with a broad range of taxa (insects, nematodes, plants, and plant pathogens). Pest risk analysts, the intended model users, found the model outputs to be generally credible and useful. The estimation of parameters from data requires insights into population dynamics theory, and this requires guidance. If used appropriately, these generic spread models provide a transparent and objective tool for evaluating the potential spread of pests in PRAs. Further work is needed to validate models, build familiarity in the user community and create a database of species parameters to help realize their potential in PRA practice.
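
    To make the class of models concrete, here is an illustrative toy version of such a generic spread model (logistic growth plus local dispersal on a habitat grid); the grid, rates and dispersal rule are invented for the example and are not the paper's parameterisation.

    ```python
    # Toy presence/density spread model: logistic growth + nearest-neighbour
    # dispersal, masked by habitat suitability (periodic edges, for brevity).
    import numpy as np

    def spread_step(density, suitability, r=0.5, d=0.2):
        """One yearly step: logistic growth, then a fraction d of the population
        disperses equally to the four neighbouring cells."""
        grown = density + r * density * (1.0 - density)
        stay = (1.0 - d) * grown
        moved = d * 0.25 * (np.roll(grown, 1, 0) + np.roll(grown, -1, 0)
                            + np.roll(grown, 1, 1) + np.roll(grown, -1, 1))
        return np.clip((stay + moved) * suitability, 0.0, 1.0)  # unsuitable cells block spread

    suitability = np.ones((100, 100))     # 1 = suitable habitat/climate, 0 = unsuitable
    density = np.zeros((100, 100))
    density[50, 50] = 0.1                 # point of introduction
    for year in range(30):
        density = spread_step(density, suitability)
    print("occupied cells:", int((density > 0.01).sum()))
    ```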

  13. A quantitative approach to developing more mechanistic gas exchange models for field grown potato

    DEFF Research Database (Denmark)

    Ahmadi, Seyed Hamid; Andersen, Mathias Neumann; Poulsen, Rolf Thostrup

    2009-01-01

    In this study we introduce new gas exchange models developed under the natural conditions of field-grown potato. The new models could explain about 85% of the variation in stomatal conductance, much more than well-known gas exchange models such as the Ball-Berry model [Ball...... of chemical and hydraulic signalling on stomatal conductance as exp(-β[ABA])exp(-δ|ψ|), in which [ABA] and |ψ| are the xylem ABA concentration and the absolute value of leaf or stem water potential. In this study we found that stem water potential could be a very reliable indicator of how plant water status affects...
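
    The quoted signalling term can be turned into a small worked function; the coefficient values below are arbitrary placeholders, not the fitted values from the study.

    ```python
    # Sketch of the combined chemical/hydraulic signalling term quoted above:
    # g_s proportional to exp(-beta*[ABA]) * exp(-delta*|psi|).
    import math

    def stomatal_conductance(gs_max, aba, psi, beta=0.01, delta=1.5):
        """gs_max: unstressed conductance; aba: xylem ABA concentration;
        psi: leaf or stem water potential (MPa, negative under stress)."""
        return gs_max * math.exp(-beta * aba) * math.exp(-delta * abs(psi))

    print(stomatal_conductance(gs_max=0.4, aba=50.0, psi=-0.8))
    ```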

  14. A method of evaluating quantitative magnetospheric field models by an angular parameter alpha

    Science.gov (United States)

    Sugiura, M.; Poros, D. J.

    1979-01-01

    The paper introduces an angular parameter, termed alpha, which represents the angular difference between the observed, or model, field and the internal model field. The study discusses why this parameter is chosen and demonstrates its usefulness by applying it to both observations and models. In certain areas alpha is more sensitive than delta-B (the difference between the magnitude of the observed magnetic field and that of the earth's internal field calculated from a spherical harmonic expansion) in expressing magnetospheric field distortions. It is recommended to use both alpha and delta-B in comparing models with observations.
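
    A minimal sketch of how alpha can be computed, alongside delta-B for contrast; the field vectors are hypothetical values, not data from the paper.

    ```python
    # alpha: angle between the observed (or external-model) field vector and the
    # internal-model field vector at the same point; delta-B compares magnitudes only.
    import numpy as np

    def alpha_deg(b_observed, b_internal):
        cosang = np.dot(b_observed, b_internal) / (
            np.linalg.norm(b_observed) * np.linalg.norm(b_internal))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    b_obs = np.array([210.0, -35.0, 18000.0])   # nT, hypothetical observation
    b_int = np.array([190.0, -20.0, 18150.0])   # nT, internal (spherical-harmonic) field
    print(f"alpha = {alpha_deg(b_obs, b_int):.2f} deg")
    print("delta-B =", np.linalg.norm(b_obs) - np.linalg.norm(b_int), "nT")
    ```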

  15. A Hybrid Fuzzy Inference System Based on Dispersion Model for Quantitative Environmental Health Impact Assessment of Urban Transportation Planning

    Directory of Open Access Journals (Sweden)

    Behnam Tashayo

    2017-01-01

    Full Text Available Characterizing the spatial variation of traffic-related air pollution has long been a challenge in the quantitative environmental health impact assessment of urban transportation planning. Advanced approaches are required for modeling the complex relationships among traffic, air pollution, and adverse health outcomes while considering uncertainties in the available data. A new hybrid fuzzy model is developed and implemented through a hierarchical fuzzy inference system (HFIS). This model is integrated with a dispersion model in order to model the effect of the transportation system on the PM2.5 concentration. An improved health metric based on an HFIS is developed as well to model the impact of traffic-related PM2.5 on health. Two solutions are applied to improve the performance of both models: the topologies of the HFISs are selected according to the problem, and the variables used, membership functions, and rule set are determined through learning in a simultaneous manner. The capabilities of the proposed approach are examined by assessing the impacts of three traffic scenarios involved in air pollution in the city of Isfahan, Iran, and by comparing the model accuracy with the results of available models from the literature. The advantages here are modeling the spatial variation of PM2.5 with high resolution, appropriate processing requirements, and consideration of the interaction between emissions and meteorological processes. These models are capable of using the available qualitative and uncertain data. They are of appropriate accuracy and can provide a better understanding of the phenomena, in addition to assessing the impact of each parameter, for the planners.

  16. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    Science.gov (United States)

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-06

    In this work, the performance of five nature-inspired optimization algorithms, namely genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for the development of quantitative structure-retention relationship (QSRR) models for 83 peptides originating from eight model proteins. A matrix with 423 descriptors was used as input, and QSRR models based on the selected descriptors were built using partial least squares (PLS), with the root mean square error of prediction (RMSEP) as the fitness function for descriptor selection. Three performance criteria (prediction accuracy, computational cost, and the number of selected descriptors) were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, and that GA is superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external test set of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All the sources of the model's error were identified, allowing for further application of the developed methodology in proteomics.
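
    The following schematic re-creation (on synthetic data) shows GA-based descriptor selection with a PLS model and RMSEP fitness; the population size, rates and data are illustrative, not those of the study.

    ```python
    # GA descriptor selection for a PLS model, fitness = RMSEP on a hold-out set.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(83, 423))      # 83 peptides x 423 descriptors (synthetic)
    y = X[:, :9] @ rng.normal(size=9) + rng.normal(scale=0.5, size=83)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    def rmsep(mask):
        """Fitness: prediction error of a PLS model built on the selected descriptors."""
        if mask.sum() < 2:
            return np.inf
        pls = PLSRegression(n_components=2).fit(X_tr[:, mask], y_tr)
        return float(np.sqrt(np.mean((pls.predict(X_te[:, mask]).ravel() - y_te) ** 2)))

    pop = rng.random((30, X.shape[1])) < 0.02           # sparse random descriptor subsets
    for generation in range(25):
        pop = pop[np.argsort([rmsep(ind) for ind in pop])]  # rank by fitness (elitist)
        for i in range(15, 30):                             # refill via crossover + mutation
            a, b = pop[rng.integers(0, 15, size=2)]
            cut = rng.integers(X.shape[1])
            child = np.concatenate([a[:cut], b[cut:]])
            pop[i] = child ^ (rng.random(X.shape[1]) < 0.002)

    best = pop[0]
    print("selected descriptors:", int(best.sum()), "| RMSEP:", round(rmsep(best), 3))
    ```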

  17. Quantitative biokinetic analysis of radioactively labelled, inhaled Titanium dioxide Nanoparticles in a rat model

    Energy Technology Data Exchange (ETDEWEB)

    Kreyling, Wolfgang G.; Wenk, Alexander; Semmler-Behnke, Manuela [Helmholtz Zentrum Muenchen, Deutsches Forschungszentrum fuer Gesundheit und Umwelt GmbH (Germany). Inst. fuer Lungenbiologie und Erkrankungen, Netzwerk Nanopartikel und Gesundheit

    2010-09-15

    The aim of this project was to determine the biokinetics of TiO2 nanoparticles (NP) in the whole body of healthy adult rats after NP administration to the respiratory tract - either via inhalation or instillation. We developed our own methodology to freshly synthesize and aerosolize TiO2-NP in our lab for use in inhalation studies. These NP underwent a detailed physical and chemical characterization, providing pure polycrystalline anatase TiO2-NP of about 20 nm (geometric standard deviation 1.6) and a specific surface area of 270 m^2/g. In addition, we developed techniques for sufficiently stable radioactive 48V labelling of the TiO2-NP. The kinetics of solubility of 48V was thoroughly determined. The methodology of quantitative biokinetics allows for a quantitative balance of the retained and excreted NP against the administered NP dose and provides a much more precise determination of NP fractions and concentrations in organs and tissues of interest than biokinetics studies based on spot sampling. Small fractions of TiO2-NP translocate across the air-blood barrier and accumulate in secondary target organs, soft tissue and skeleton. The amount of translocated TiO2-NP is approximately 2% of the TiO2-NP deposited in the lungs. A prominent fraction of these translocated TiO2-NP was found in the remainder. Smaller amounts of TiO2-NP accumulate in secondary organs following particular kinetics. TiO2-NP translocation was grossly accomplished within the first 2-4 hours after inhalation, followed by retention in all organs and tissues studied without any detectable clearance of these biopersistent TiO2-NP within 28 days. Therefore, our data suggest that crossing of the air-blood barrier of the lungs and subsequent accumulation in secondary organs and tissues depends on the NP material and its physico-chemical properties. Furthermore, we extrapolate that during repeated or chronic

  18. Quantitative biokinetic analysis of radioactively labelled, inhaled Titanium dioxide Nanoparticles in a rat model

    International Nuclear Information System (INIS)

    Kreyling, Wolfgang G.; Wenk, Alexander; Semmler-Behnke, Manuela

    2010-01-01

    The aim of this project was to determine the biokinetics of TiO2 nanoparticles (NP) in the whole body of healthy adult rats after NP administration to the respiratory tract - either via inhalation or instillation. We developed our own methodology to freshly synthesize and aerosolize TiO2-NP in our lab for use in inhalation studies. These NP underwent a detailed physical and chemical characterization, providing pure polycrystalline anatase TiO2-NP of about 20 nm (geometric standard deviation 1.6) and a specific surface area of 270 m^2/g. In addition, we developed techniques for sufficiently stable radioactive 48V labelling of the TiO2-NP. The kinetics of solubility of 48V was thoroughly determined. The methodology of quantitative biokinetics allows for a quantitative balance of the retained and excreted NP against the administered NP dose and provides a much more precise determination of NP fractions and concentrations in organs and tissues of interest than biokinetics studies based on spot sampling. Small fractions of TiO2-NP translocate across the air-blood barrier and accumulate in secondary target organs, soft tissue and skeleton. The amount of translocated TiO2-NP is approximately 2% of the TiO2-NP deposited in the lungs. A prominent fraction of these translocated TiO2-NP was found in the remainder. Smaller amounts of TiO2-NP accumulate in secondary organs following particular kinetics. TiO2-NP translocation was grossly accomplished within the first 2-4 hours after inhalation, followed by retention in all organs and tissues studied without any detectable clearance of these biopersistent TiO2-NP within 28 days. Therefore, our data suggest that crossing of the air-blood barrier of the lungs and subsequent accumulation in secondary organs and tissues depends on the NP material and its physico-chemical properties. Furthermore, we extrapolate that during repeated or chronic exposure to insoluble NP the translocated fraction of NP will

  19. Quantitative detection and biological propagation of scrapie seeding activity in vitro facilitate use of prions as model pathogens for disinfection.

    Directory of Open Access Journals (Sweden)

    Sandra Pritzkow

    Full Text Available Prions are pathogens with an unusually high tolerance to inactivation and constitute a complex challenge to the re-processing of surgical instruments. At the same time, however, they provide an informative paradigm which has been exploited successfully for the development of novel broad-range disinfectants simultaneously active also against bacteria, viruses and fungi. Here we report on the development of a methodological platform that further facilitates the use of scrapie prions as model pathogens for disinfection. We used specifically adapted serial protein misfolding cyclic amplification (PMCA) for the quantitative detection, on steel wires providing model carriers for decontamination, of 263K scrapie seeding activity converting normal protease-sensitive into abnormal protease-resistant prion protein. Reference steel wires carrying defined amounts of scrapie infectivity were used for assay calibration, while scrapie-contaminated test steel wires were subjected to fifteen different procedures for disinfection that yielded scrapie titre reductions of ≤10^1-fold to ≥10^5.5-fold. As confirmed by titration in hamsters, the residual scrapie infectivity on test wires could be reliably deduced from our quantitative seeding activity assay for all examined disinfection procedures. Furthermore, we found that scrapie seeding activity present in 263K hamster brain homogenate or multiplied by PMCA of scrapie-contaminated steel wires both triggered accumulation of protease-resistant prion protein and was further propagated in a novel cell assay for 263K scrapie prions, i.e., cerebral glial cell cultures from hamsters. The findings from our PMCA and glial cell culture assays revealed scrapie seeding activity as a biochemically and biologically replicative principle in vitro, with the former being quantitatively linked to prion infectivity detected on steel wires in vivo. When combined, our in vitro assays provide an alternative to titrations of biological

  20. Absolute quantitation of myocardial blood flow with 201Tl and dynamic SPECT in canine: optimisation and validation of kinetic modelling

    International Nuclear Information System (INIS)

    Iida, Hidehiro; Kim, Kyeong-Min; Nakazawa, Mayumi; Sohlberg, Antti; Zeniya, Tsutomu; Hayashi, Takuya; Watabe, Hiroshi; Eberl, Stefan; Tamura, Yoshikazu; Ono, Yukihiko

    2008-01-01

    201Tl has been extensively used for myocardial perfusion and viability assessment. Unlike 99mTc-labelled agents, such as 99mTc-sestamibi and 99mTc-tetrofosmin, the regional concentration of 201Tl varies with time. This study was intended to validate a kinetic modelling approach for the in vivo quantitative estimation of regional myocardial blood flow (MBF) and the volume of distribution of 201Tl using dynamic SPECT. Dynamic SPECT was carried out on 20 normal canines after intravenous administration of 201Tl using a commercial SPECT system. Seven animals were studied at rest, nine during adenosine infusion, and four after beta-blocker administration. Quantitative images were reconstructed with a previously validated technique employing OS-EM with attenuation correction and transmission-dependent convolution subtraction scatter correction. Measured regional time-activity curves in myocardial segments were fitted to two- and three-compartment models. Regional MBF was defined as the influx rate constant (K1) with corrections for the partial volume effect, haematocrit and the limited first-pass extraction fraction, and was compared with that determined from radio-labelled microsphere experiments. Regional time-activity curves responded well to pharmacological stress. Quantitative MBF values were higher with adenosine and decreased after beta-blocker administration compared to the resting condition. MBF values obtained with SPECT (MBF_SPECT) correlated well with those obtained from the radio-labelled microspheres (MBF_MS): MBF_SPECT = -0.067 + 1.042 x MBF_MS. These results demonstrate the feasibility of quantitative estimation of regional MBF with 201Tl and dynamic SPECT. (orig.)
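
    The influx-rate idea can be illustrated with a one-tissue compartment fit; the sketch below uses a synthetic input function and omits the partial-volume, haematocrit and extraction-fraction corrections applied in the study.

    ```python
    # One-tissue compartment model Ct' = K1*Ca - k2*Ct, fitted to a noisy
    # tissue time-activity curve; K1 is read as a flow-like influx constant.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0, 30, 121)                 # minutes
    ca = t * np.exp(-t / 2.0)                   # synthetic arterial input Ca(t)

    def tissue_curve(t, K1, k2):
        dt = t[1] - t[0]
        impulse = K1 * np.exp(-k2 * t)          # impulse response of the model
        return np.convolve(ca, impulse)[: t.size] * dt

    true = tissue_curve(t, K1=0.8, k2=0.15)
    noisy = true + np.random.default_rng(1).normal(scale=0.02, size=t.size)

    (K1, k2), _ = curve_fit(tissue_curve, t, noisy, p0=(0.5, 0.1))
    print(f"fitted K1 = {K1:.3f} /min, k2 = {k2:.3f} /min")
    ```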

  1. Quantitative modeling assesses the contribution of bond strengthening, rebinding and force sharing to the avidity of biomolecule interactions.

    Directory of Open Access Journals (Sweden)

    Valentina Lo Schiavo

    Full Text Available Cell adhesion is mediated by numerous membrane receptors. It is desirable to derive the outcome of a cell-surface encounter from the molecular properties of the interacting receptors and ligands. However, conventional parameters such as affinity or kinetic constants are often insufficient to account for receptor efficiency. Avidity is a qualitative concept frequently used to describe biomolecule interactions: it includes incompletely defined properties such as the capacity to form multivalent attachments. The aim of this study is to produce a working description of monovalent attachments formed by a model system, then to measure and interpret the behavior of divalent attachments under force. We investigated attachments between antibody-coated microspheres and surfaces coated with sparse monomeric or dimeric ligands. When bonds were subjected to a pulling force, they exhibited both a force-dependent dissociation consistent with Bell's empirical formula and a force- and time-dependent strengthening well described by a single parameter. Divalent attachments were stronger and less dependent on forces than monovalent ones. The proportion of divalent attachments resisting a force of 30 piconewtons for at least 5 s was 3.7-fold higher than that of monovalent attachments. Quantitative modeling showed that this required rebinding, i.e. additional bond formation between surfaces linked by divalent receptors forming only one bond. Further, experimental data were compatible with, but did not require, stress sharing between bonds within divalent attachments. Thus many ligand-receptor interactions do not behave as single-step reactions in the millisecond-to-second timescale. Rather, they exhibit progressive stabilization. This explains the high efficiency of multimerized or clustered receptors even when bonds are only subjected to moderate forces. Our approach provides a quantitative way of relating binding avidity to measurable parameters including bond maturation, rebinding and force sharing.
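
    The force-dependent dissociation referred to here is conventionally written with Bell's empirical formula; in generic notation (our addition, not the paper's symbols):

    $$ k_{\mathrm{off}}(F) = k_{\mathrm{off}}^{0}\,\exp\!\left(\frac{F\,x_{\beta}}{k_{B}T}\right) $$

    where k_off^0 is the zero-force dissociation rate, F the pulling force, x_beta the distance to the transition state along the pulling direction, and k_B T the thermal energy; the survival probability of a single bond held at constant force then decays as exp(-k_off(F) t).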

  2. Quantitative Modeling Assesses the Contribution of Bond Strengthening, Rebinding and Force Sharing to the Avidity of Biomolecule Interactions

    Science.gov (United States)

    Lo Schiavo, Valentina; Robert, Philippe; Limozin, Laurent; Bongrand, Pierre

    2012-01-01

    Cell adhesion is mediated by numerous membrane receptors. It is desirable to derive the outcome of a cell-surface encounter from the molecular properties of the interacting receptors and ligands. However, conventional parameters such as affinity or kinetic constants are often insufficient to account for receptor efficiency. Avidity is a qualitative concept frequently used to describe biomolecule interactions: it includes incompletely defined properties such as the capacity to form multivalent attachments. The aim of this study is to produce a working description of monovalent attachments formed by a model system, then to measure and interpret the behavior of divalent attachments under force. We investigated attachments between antibody-coated microspheres and surfaces coated with sparse monomeric or dimeric ligands. When bonds were subjected to a pulling force, they exhibited both a force-dependent dissociation consistent with Bell's empirical formula and a force- and time-dependent strengthening well described by a single parameter. Divalent attachments were stronger and less dependent on forces than monovalent ones. The proportion of divalent attachments resisting a force of 30 piconewtons for at least 5 s was 3.7-fold higher than that of monovalent attachments. Quantitative modeling showed that this required rebinding, i.e. additional bond formation between surfaces linked by divalent receptors forming only one bond. Further, experimental data were compatible with, but did not require, stress sharing between bonds within divalent attachments. Thus many ligand-receptor interactions do not behave as single-step reactions in the millisecond-to-second timescale. Rather, they exhibit progressive stabilization. This explains the high efficiency of multimerized or clustered receptors even when bonds are only subjected to moderate forces. Our approach provides a quantitative way of relating binding avidity to measurable parameters including bond maturation, rebinding and force sharing.

  3. Multiple methods for multiple futures: Integrating qualitative scenario planning and quantitative simulation modeling for natural resource decision making

    Science.gov (United States)

    Symstad, Amy J.; Fisichelli, Nicholas A.; Miller, Brian W.; Rowland, Erika; Schuurman, Gregor W.

    2017-01-01

    Scenario planning helps managers incorporate climate change into their natural resource decision making through a structured “what-if” process of identifying key uncertainties and potential impacts and responses. Although qualitative scenarios, in which ecosystem responses to climate change are derived via expert opinion, often suffice for managers to begin addressing climate change in their planning, this approach may face limits in resolving the responses of complex systems to altered climate conditions. In addition, this approach may fall short of the scientific credibility managers often require to take actions that differ from current practice. Quantitative simulation modeling of ecosystem response to climate conditions and management actions can provide this credibility, but its utility is limited unless the modeling addresses the most impactful and management-relevant uncertainties and incorporates realistic management actions. We use a case study to compare and contrast management implications derived from qualitative scenario narratives and from scenarios supported by quantitative simulations. We then describe an analytical framework that refines the case study’s integrated approach in order to improve applicability of results to management decisions. The case study illustrates the value of an integrated approach for identifying counterintuitive system dynamics, refining understanding of complex relationships, clarifying the magnitude and timing of changes, identifying and checking the validity of assumptions about resource responses to climate, and refining management directions. Our proposed analytical framework retains qualitative scenario planning as a core element because its participatory approach builds understanding for both managers and scientists, lays the groundwork to focus quantitative simulations on key system dynamics, and clarifies the challenges that subsequent decision making must address.

  4. Modeling grain boundaries in polycrystals using cohesive elements: Qualitative and quantitative analysis

    Energy Technology Data Exchange (ETDEWEB)

    El Shawish, Samir, E-mail: Samir.ElShawish@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Cizelj, Leon [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Simonovski, Igor [European Commission, DG-JRC, Institute for Energy and Transport, P.O. Box 2, NL-1755 ZG Petten (Netherlands)

    2013-08-15

    Highlights: ► We estimate the performance of cohesive elements for modeling grain boundaries. ► We compare the computed stresses in the ABAQUS finite element solver. ► Tests are performed on analytical and realistic models of polycrystals. ► The most severe issue is found within the plastic grain response. ► Other identified issues are related to topological constraints in the modeling space. -- Abstract: We propose and demonstrate several tests to estimate the performance of the cohesive elements in ABAQUS for modeling grain boundaries in complex spatial structures such as polycrystalline aggregates. The performance of the cohesive elements is checked by comparing the computed stresses with the theoretically predicted values for a homogeneous material under uniaxial tensile loading. Statistical analyses are performed under different loading conditions for two elasto-plastic models of the grains: isotropic elasticity with isotropic hardening plasticity, and anisotropic elasticity with crystal plasticity. Tests are conducted on an analytical finite element model generated from a Voronoi tessellation as well as on a realistic finite element model of a stainless steel wire. The results of the analyses highlight several issues related to the computation of normal and shear stresses. The most severe issue is found within the plastic grain response, where the computed normal stresses on particularly oriented cohesive elements are significantly underestimated. Other issues are related to topological constraints in the modeling space and result in increased scatter of the computed stresses.

  5. Quantitative groundwater modelling for a sustainable water resource exploitation in a Mediterranean alluvial aquifer

    Science.gov (United States)

    Laïssaoui, Mounir; Mesbah, Mohamed; Madani, Khodir; Kiniouar, Hocine

    2018-05-01

    To analyze the water budget under human influences in the Isser wadi alluvial aquifer in the northeast of Algeria, we built a mathematical model which can be used for better management of groundwater exploitation. A modular three-dimensional finite-difference groundwater flow model (MODFLOW) was used. The modelling system is largely based on physical laws and employs the finite-difference numerical method to simulate water movement and fluxes in a horizontally discretized field. After calibration in steady state, the model reproduced the initial heads with good precision. It enabled us to quantify the terms of the aquifer water balance and to obtain a distribution of hydraulic conductivity zones. The model also highlighted the relevant role of the Isser wadi, which constitutes a drain of great importance for the aquifer, ensuring almost all outflows on its own. The scenarios explored in transient simulations showed that an increase in pumping would only further lower the groundwater levels and disrupt the natural balance of the aquifer. However, it is clear that this situation depends primarily on the position of the pumping wells in the plain as well as on the extracted volumes of water. As its promising results show, this physically based, distributed-parameter model is a valuable contribution to the ever-advancing technology of hydrological modelling and water resources assessment.
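
    For illustration only, a toy finite-difference analogue of such a model: steady-state heads on a homogeneous block, with a row of drain cells standing in for the wadi. All numbers are invented; the actual study used MODFLOW.

    ```python
    # Jacobi iteration of the 5-point Laplace stencil for steady-state heads,
    # with fixed-head lateral boundaries and a head-capping drain row.
    import numpy as np

    h = np.full((60, 60), 10.0)           # initial head guess (m)
    h[:, 0], h[:, -1] = 12.0, 11.0        # fixed-head lateral boundaries
    wadi_row = 30

    for _ in range(5000):
        h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                                + h[1:-1, :-2] + h[1:-1, 2:])
        h[wadi_row, 1:-1] = np.minimum(h[wadi_row, 1:-1], 8.0)  # drain caps head at stage

    print("head range (m):", h.min(), h.max())
    ```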

  6. Statistical modelling approach to derive quantitative nanowastes classification index; estimation of nanomaterials exposure

    CSIR Research Space (South Africa)

    Ntaka, L

    2013-08-01

    Full Text Available In this work, statistical inference approaches, specifically non-parametric bootstrapping and a linear model, were applied. Data used to develop the model were sourced from the literature: 104 data points with information on aggregation, natural organic matter...
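
    A small sketch of the non-parametric bootstrap component described, using simulated stand-ins for the literature data points:

    ```python
    # Bootstrap confidence interval for a linear-model slope (synthetic data).
    import numpy as np

    rng = np.random.default_rng(42)
    x = rng.uniform(0, 10, 104)                       # e.g. natural organic matter level
    y = 1.8 * x + rng.normal(scale=2.0, size=104)     # e.g. aggregation response

    slopes = []
    for _ in range(2000):                             # bootstrap replicates
        idx = rng.integers(0, x.size, x.size)         # resample with replacement
        slopes.append(np.polyfit(x[idx], y[idx], 1)[0])

    lo, hi = np.percentile(slopes, [2.5, 97.5])
    print(f"slope 95% bootstrap CI: [{lo:.2f}, {hi:.2f}]")
    ```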

  7. Quantitative Modeling of Human Performance in Information Systems. Technical Research Note 232.

    Science.gov (United States)

    Baker, James D.

    1974-01-01

    A general information system model was developed which focuses on man and considers the computer only as a tool. The ultimate objective is to produce a simulator which will yield measures of system performance under different mixes of equipment, personnel, and procedures. The model is structured around three basic dimensions: (1) data flow and…

  8. THE QUANTITATIVE MODEL OF THE FINALIZATIONS IN MEN’S COMPETITIVE HANDBALL AND THEIR EFFICIENCY

    Directory of Open Access Journals (Sweden)

    Eftene Alexandru

    2009-10-01

    Full Text Available In these epistemic steps, we approach a competitive performance behavior model built from a quantitative analysis of data collected from the official International Handball Federation protocols on the performance of the first four teams of the World Men's Handball Championship - Croatia 2009, during the semifinals and finals. This model is a part of the integrative (global) model of the handball game, which will be gradually investigated during the following research. I have started the construction of this model from the premise that the finalization represents the essence of the game. The components of our model, in prioritized order: shot at the goal from 9 m - 15 p; shot at the goal from 6 m - 12 p; shot at the goal from 7 m - 12 p; fast break shot at the goal - 11.5 p; wing shot at the goal - 8.5 p; penetration shot at the goal - 7 p;

  9. The participative method of subject definition as used in the quantitative modelling of hospital laundry services.

    Science.gov (United States)

    Hammer, K A; Janes, F R

    1995-01-01

    The objectives for developing the participative method of subject definition were to gain all the relevant information to a high level of fidelity in the earliest stages of the work and so be able to build a realistic model at reduced labour cost. In order to better integrate the two activities--information acquisition and mathematical modelling--a procedure was devised using the methods of interactive management to facilitate teamwork. This procedure provided the techniques to create suitable working relationships between the two groups, the informants and the modellers, so as to maximize their free and accurate intercommunication, both during the initial definition of the linen service and during the monitoring of the accuracy and reality of the draft models. The objectives of this project were met in that the final model was quickly validated and approved, at a low labour cost.

  10. TU-G-303-00: Radiomics: Advances in the Use of Quantitative Imaging Used for Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2015-06-15

    ‘Radiomics’ refers to studies that extract a large amount of quantitative information from medical imaging studies as a basis for characterizing a specific aspect of patient health. Radiomics models can be built to address a wide range of outcome predictions, clinical decisions, basic cancer biology, etc. For example, radiomics models can be built to predict the aggressiveness of an imaged cancer, cancer gene expression characteristics (radiogenomics), radiation therapy treatment response, etc. Technically, radiomics brings together quantitative imaging, computer vision/image processing, and machine learning. In this symposium, speakers will discuss approaches to radiomics investigations, including: longitudinal radiomics, radiomics combined with other biomarkers (‘pan-omics’), radiomics for various imaging modalities (CT, MRI, and PET), and the use of registered multi-modality imaging datasets as a basis for radiomics. There are many challenges to the eventual use of radiomics-derived methods in clinical practice, including: standardization and robustness of selected metrics, accruing the data required, building and validating the resulting models, registering longitudinal data that often involve significant patient changes, reliable automated cancer segmentation tools, etc. Despite the hurdles, results achieved so far indicate the tremendous potential of this general approach to quantifying and using data from medical images. Specific applications of radiomics to be presented in this symposium will include: the longitudinal analysis of patients with low-grade gliomas; automatic detection and assessment of patients with metastatic bone lesions; image-based monitoring of patients with growing lymph nodes; predicting radiotherapy outcomes using multi-modality radiomics; and studies relating radiomics with genomics in lung cancer and glioblastoma. Learning Objectives: Understanding the basic image features that are often used in radiomic models. Understanding

  11. Quantitative structure-activation barrier relationship modeling for Diels-Alder ligations utilizing quantum chemical structural descriptors.

    Science.gov (United States)

    Nandi, Sisir; Monesi, Alessandro; Drgan, Viktor; Merzel, Franci; Novič, Marjana

    2013-10-30

    In the present study, we show the correlation of quantum chemical structural descriptors with the activation barriers of Diels-Alder ligations. A set of 72 non-catalysed Diels-Alder reactions was subjected to quantitative structure-activation barrier relationship (QSABR) modelling under the framework of theoretical quantum chemical descriptors calculated solely from the structures of the diene and dienophile reactants. Experimental activation barrier data were obtained from the literature. Descriptors were computed at the Hartree-Fock level with the 6-31G(d) basis set as implemented in the Gaussian 09 software. Variable selection and model development were carried out by stepwise multiple linear regression. The predictive performance of the QSABR model was assessed using the training and test set concept and by calculating leave-one-out cross-validated Q2 and predictive R2 values. The QSABR model can explain and predict 86.5% and 80% of the variance, respectively, in the activation energy barrier data. Alternatively, a neural network model based on back-propagation of errors was developed to assess the nonlinearity of the sought correlations between theoretical descriptors and experimental reaction barriers. Reasonable predictability for the activation barriers of the test set reactions was obtained, which enabled exploration and interpretation of the significant variables responsible for the Diels-Alder interaction between dienes and dienophiles. Thus, studies in the direction of QSABR modelling that provide efficient and fast prediction of activation barriers of Diels-Alder reactions turn out to be a meaningful alternative to transition state theory based computation.
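
    The validation statistics named here (fitted R2 and leave-one-out Q2) can be reproduced schematically on synthetic data:

    ```python
    # Multiple linear regression with leave-one-out cross-validated Q2;
    # synthetic data stand in for the 72 reactions and their descriptors.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(7)
    X = rng.normal(size=(72, 5))                      # 5 selected descriptors
    y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + rng.normal(scale=1.0, size=72)

    model = LinearRegression().fit(X, y)
    r2 = model.score(X, y)                            # fitted R2

    y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"R2 = {r2:.3f}, LOO Q2 = {q2:.3f}")
    ```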

  12. TU-G-303-00: Radiomics: Advances in the Use of Quantitative Imaging Used for Predictive Modeling

    International Nuclear Information System (INIS)

    2015-01-01

    ‘Radiomics’ refers to studies that extract a large amount of quantitative information from medical imaging studies as a basis for characterizing a specific aspect of patient health. Radiomics models can be built to address a wide range of outcome predictions, clinical decisions, basic cancer biology, etc. For example, radiomics models can be built to predict the aggressiveness of an imaged cancer, cancer gene expression characteristics (radiogenomics), radiation therapy treatment response, etc. Technically, radiomics brings together quantitative imaging, computer vision/image processing, and machine learning. In this symposium, speakers will discuss approaches to radiomics investigations, including: longitudinal radiomics, radiomics combined with other biomarkers (‘pan-omics’), radiomics for various imaging modalities (CT, MRI, and PET), and the use of registered multi-modality imaging datasets as a basis for radiomics. There are many challenges to the eventual use of radiomics-derived methods in clinical practice, including: standardization and robustness of selected metrics, accruing the data required, building and validating the resulting models, registering longitudinal data that often involve significant patient changes, reliable automated cancer segmentation tools, etc. Despite the hurdles, results achieved so far indicate the tremendous potential of this general approach to quantifying and using data from medical images. Specific applications of radiomics to be presented in this symposium will include: the longitudinal analysis of patients with low-grade gliomas; automatic detection and assessment of patients with metastatic bone lesions; image-based monitoring of patients with growing lymph nodes; predicting radiotherapy outcomes using multi-modality radiomics; and studies relating radiomics with genomics in lung cancer and glioblastoma. Learning Objectives: Understanding the basic image features that are often used in radiomic models. Understanding

  13. Quantitative use of Palaeo-Proxy Data in Global Circulation Models

    Science.gov (United States)

    Collins, M.

    2003-04-01

    It is arguably one of the ultimate aims of palaeo-modelling science to somehow "get the palaeo-proxy data into the model", i.e. to constrain the model's climate to the trajectory of the real climate recorded in the palaeo data. The traditional way of interfacing data with models is data assimilation. This presents a number of problems in the palaeo context, as the data are more often representative of seasonal, annual or decadal climate, while models have time steps of the order of minutes; hence the model increments are likely to be vanishingly small. Also, variational data assimilation schemes would require the adjoint of the coupled ocean-atmosphere model and the adjoint of the functions which translate model variables such as temperature and precipitation into the palaeo-proxies, both of which are hard to determine because of the high degree of non-linearity in the system and the wide range of space and time scales. An alternative is to add forward models of the proxies to the model and run "many years" of simulation until an analog state is found which matches the palaeo data for each season, year, decade etc. Clearly "many years" might range from a few thousand years to almost infinity, depending on the number of degrees of freedom in the climate system and on the error characteristics of the palaeo data. The length of simulation required is probably beyond the supercomputer capacity of a single institution, and hence an alternative is to use the idle capacity of home and business personal computers - the climateprediction.net project.

  14. Implementation of DFT application on ternary optical computer

    Science.gov (United States)

    Junjie, Peng; Youyi, Fu; Xiaofeng, Zhang; Shuai, Kong; Xinyu, Wei

    2018-03-01

    Owing to its characteristic huge number of data bits and low energy consumption, optical computing may be used in applications such as the DFT, which need a large amount of computation and can be implemented in parallel. Accordingly, fully parallel as well as partially parallel DFT implementation methods are presented. Extensive experiments were carried out based on the resources of a ternary optical computer (TOC). Experimental results show that the proposed schemes are correct and feasible. They provide a foundation for further exploration of applications on the TOC that need a large amount of computation and can be processed in parallel.
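
    The decomposition idea is easy to show on an ordinary CPU: each DFT frequency bin is an independent job, so the bins can be computed in parallel. The sketch below is illustrative only and has nothing TOC-specific.

    ```python
    # Discrete Fourier transform split into independent per-frequency jobs.
    import numpy as np
    from multiprocessing import Pool

    x = np.arange(64, dtype=float)

    def dft_bin(k, x=x):
        n = np.arange(x.size)
        return complex(np.sum(x * np.exp(-2j * np.pi * k * n / x.size)))

    if __name__ == "__main__":
        with Pool(4) as pool:                    # each frequency bin is independent
            X = np.array(pool.map(dft_bin, range(x.size)))
        print(np.allclose(X, np.fft.fft(x)))     # sanity check against FFT
    ```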

  15. Quantitative cardiac phosphoproteomics profiling during ischemia-reperfusion in an immature swine model

    Energy Technology Data Exchange (ETDEWEB)

    Ledee, Dolena R.; Kang, Min A.; Kajimoto, Masaki; Purvine, Samuel O.; Brewer, Heather M.; Pasa Tolic, Ljiljana; Portman, Michael A.

    2017-07-01

    Ischemia-reperfusion (I/R) results in altered metabolic and molecular responses, and phosphorylation is one of the most noted regulatory mechanisms mediating signaling during physiological stress. To expand our knowledge of the potential phosphoproteomic changes in the myocardium during I/R, we used Isobaric Tags for Relative and Absolute Quantitation-based analyses in left ventricular samples obtained from porcine hearts under control or I/R conditions. The data are available via ProteomeXchange with identifier PXD006066. We identified 1,896 phosphopeptides within the left ventricular control and I/R porcine samples. Significant differential phosphorylation between the control and I/R groups was discovered in 111 phosphopeptides from 86 proteins. Analysis of the phosphopeptides using Motif-x identified five motifs: (..R..S..), (..SP..), (..S.S..), (..S…S..), and (..S.T..). Semiquantitative immunoblots confirmed the site locations and directional changes in phosphorylation for phospholamban and pyruvate dehydrogenase E1, two proteins known to be altered by I/R and identified by this study. Novel phosphorylation sites associated with I/R were also identified. Functional characterization of the phosphopeptides identified by our methodology could expand our understanding of the signaling mechanisms involved in I/R damage in the heart, as well as identify new areas at which to target therapeutic strategies.

  16. Improving quantitative structure-activity relationship models using Artificial Neural Networks trained with dropout.

    Science.gov (United States)

    Mendenhall, Jeffrey; Meiler, Jens

    2016-02-01

    Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure-Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery pose unique challenges for ML techniques, such as heavily biased dataset composition and a large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conducted a benchmark on nine large QSAR datasets. Use of dropout improved both the enrichment false-positive rate and the log-scaled area under the receiver-operating characteristic curve (logAUC) by 22-46% over conventional ANN implementations. Optimal dropout rates were found to be a function of the signal-to-noise ratio of the descriptor set and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperformed conventional ANNs as well as optimized fingerprint similarity search methods.
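
    A generic sketch of a dropout network of this kind in PyTorch (our illustration, not the paper's architecture); descriptor count and dropout rate are placeholders.

    ```python
    # Feed-forward QSAR-style regressor with dropout between hidden layers.
    import torch
    import torch.nn as nn

    class QSARNet(nn.Module):
        def __init__(self, n_desc=1024, p_drop=0.25):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_desc, 128), nn.ReLU(),
                nn.Dropout(p_drop),              # randomly zeroes units during training
                nn.Linear(128, 32), nn.ReLU(),
                nn.Dropout(p_drop),
                nn.Linear(32, 1),
            )

        def forward(self, x):
            return self.net(x)

    model = QSARNet()
    x = torch.randn(8, 1024)                     # 8 compounds x 1024 descriptors
    model.train()
    print(model(x).shape)                        # dropout active during training
    model.eval()
    print(model(x).shape)                        # dropout disabled at inference
    ```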

  17. Transcriptome discovery in non-model wild fish species for the development of quantitative transcript abundance assays

    Science.gov (United States)

    Hahn, Cassidy M.; Iwanowicz, Luke R.; Cornman, Robert S.; Mazik, Patricia M.; Blazer, Vicki S.

    2016-01-01

    Environmental studies increasingly identify the presence of both contaminants of emerging concern (CECs) and legacy contaminants in aquatic environments; however, the biological effects of these compounds on resident fishes remain largely unknown. High throughput methodologies were employed to establish partial transcriptomes for three wild-caught, non-model fish species; smallmouth bass (Micropterus dolomieu), white sucker (Catostomus commersonii) and brown bullhead (Ameiurus nebulosus). Sequences from these transcriptome databases were utilized in the development of a custom nCounter CodeSet that allowed for direct multiplexed measurement of 50 transcript abundance endpoints in liver tissue. Sequence information was also utilized in the development of quantitative real-time PCR (qPCR) primers. Cross-species hybridization allowed the smallmouth bass nCounter CodeSet to be used for quantitative transcript abundance analysis of an additional non-model species, largemouth bass (Micropterus salmoides). We validated the nCounter analysis data system with qPCR for a subset of genes and confirmed concordant results. Changes in transcript abundance biomarkers between sexes and seasons were evaluated to provide baseline data on transcript modulation for each species of interest.

  18. Design and construction of a quantitative model for the management of technology transfer at the Mexican elementary school system

    Directory of Open Access Journals (Sweden)

    Mariel Alfaro Ponce

    2018-01-01

    Full Text Available Nowadays, schools in Mexico have financial autonomy to invest in infrastructure, although they must adjust their spending to national education projects. This represents a challenge, since it is complex to predict the effectiveness that an ICT (Information and Communication Technology) project will have in certain areas of the country that do not even have the necessary infrastructure to start it up. To address this problem, it is important to provide schools with a System for Technological Management (STM) that allows them to identify, select, acquire, adopt and assimilate technologies. In this paper, the implementation of a quantitative model applied to an STM is presented. The quantitative model employs parameters of schools regarding basic infrastructure, such as essential services, computer devices, and connectivity, among others. The results of the proposed system are presented: of the 5 possible points for a correct transfer, only 3.07 are obtained; the highest component score, close to 0.88, is for the availability of electric energy, while the lowest, 0.36 and 0.39 respectively, are for internet connectivity and availability, which can strongly condition the success of the program.

  19. Transcriptome discovery in non-model wild fish species for the development of quantitative transcript abundance assays.

    Science.gov (United States)

    Hahn, Cassidy M; Iwanowicz, Luke R; Cornman, Robert S; Mazik, Patricia M; Blazer, Vicki S

    2016-12-01

    Environmental studies increasingly identify the presence of both contaminants of emerging concern (CECs) and legacy contaminants in aquatic environments; however, the biological effects of these compounds on resident fishes remain largely unknown. High throughput methodologies were employed to establish partial transcriptomes for three wild-caught, non-model fish species; smallmouth bass (Micropterus dolomieu), white sucker (Catostomus commersonii) and brown bullhead (Ameiurus nebulosus). Sequences from these transcriptome databases were utilized in the development of a custom nCounter CodeSet that allowed for direct multiplexed measurement of 50 transcript abundance endpoints in liver tissue. Sequence information was also utilized in the development of quantitative real-time PCR (qPCR) primers. Cross-species hybridization allowed the smallmouth bass nCounter CodeSet to be used for quantitative transcript abundance analysis of an additional non-model species, largemouth bass (Micropterus salmoides). We validated the nCounter analysis data system with qPCR for a subset of genes and confirmed concordant results. Changes in transcript abundance biomarkers between sexes and seasons were evaluated to provide baseline data on transcript modulation for each species of interest. Published by Elsevier Inc.

  20. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions.

    Science.gov (United States)

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely
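
    The first rung of such a bottom-up scheme can be sketched as follows: unknown transition rates of a small state-based model are estimated by simulated annealing against (here, synthetic) time-course data. scipy's dual_annealing stands in for the global optimizer; the model and rates are illustrative, not the published SBM.

    ```python
    # Rate estimation for a toy pathogen-elimination model by simulated annealing.
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import dual_annealing

    t = np.linspace(0, 4, 40)

    def killed_fraction(t, k_contact, k_kill):
        def rhs(y, t):
            alive, engaged = y
            return [-k_contact * alive, k_contact * alive - k_kill * engaged]
        alive, engaged = odeint(rhs, [1.0, 0.0], t).T
        return 1.0 - alive - engaged              # fraction of pathogens killed

    data = killed_fraction(t, 1.2, 0.8) + np.random.default_rng(3).normal(0, 0.01, t.size)

    res = dual_annealing(lambda p: np.sum((killed_fraction(t, *p) - data) ** 2),
                         bounds=[(0.01, 5.0), (0.01, 5.0)], maxiter=100, seed=3)
    print("estimated rates:", res.x)
    ```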

  1. Quantitative Analysis of Memristance Defined Exponential Model for Multi-bits Titanium Dioxide Memristor Memory Cell

    Directory of Open Access Journals (Sweden)

    DAOUD, A. A. D.

    2016-05-01

    Full Text Available The ability to store multiple bits in a single memristor-based memory cell is a key feature of high-capacity memory packages. Studying multi-bit memristor circuits requires high accuracy in modelling the memristance change. A memristor model based on a novel definition of memristance is proposed. A design of a single-memristor memory cell using the proposed model for the platinum-electrode titanium dioxide memristor is illustrated. A specific voltage pulse is used, with its parameters (amplitude or pulse width) varied to store different numbers of states in a single memristor. New state-variation parameters associated with the utilized model are provided, and their effects on the write and read processes of memristive multi-states are analysed. PSPICE simulations are also carried out, and they show good agreement with the data obtained from the analysis.
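
    Since the record does not spell out its exponential memristance definition, the sketch below swaps in the classic linear ion-drift memristor model to illustrate the pulse-programming idea: repeated identical write pulses move the device through distinct resistance levels. Device parameters are invented.

    ```python
    # Multi-level programming of a linear ion-drift memristor (illustrative only;
    # not the record's exponential memristance model).
    import numpy as np

    R_on, R_off, D, mu = 100.0, 16e3, 10e-9, 1e-14   # ohms, ohms, m, m^2/(V*s)

    def apply_pulse(w, v, width, steps=1000):
        """Integrate dw/dt = mu*R_on*i/D during a rectangular voltage pulse."""
        dt = width / steps
        for _ in range(steps):
            R = R_on * (w / D) + R_off * (1 - w / D)  # series mix of doped/undoped
            i = v / R
            w = np.clip(w + mu * R_on * i / D * dt, 0.0, D)
        return w

    w = 0.1 * D                                       # initial state
    for n in range(4):                                # four identical write pulses
        w = apply_pulse(w, v=1.2, width=1e-3)
        R = R_on * (w / D) + R_off * (1 - w / D)
        print(f"state {n + 1}: R = {R / 1e3:.2f} kOhm")  # distinct resistance levels
    ```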

  2. Quantitative intracellular flux modeling and applications in biotherapeutic development and production using CHO cell cultures.

    Science.gov (United States)

    Huang, Zhuangrong; Lee, Dong-Yup; Yoon, Seongkyu

    2017-12-01

    Chinese hamster ovary (CHO) cells have been widely used for producing many recombinant therapeutic proteins. Constraint-based modeling, such as flux balance analysis (FBA) and metabolic flux analysis (MFA), has been developing rapidly for the quantification of intracellular metabolic flux distribution at a systematic level. Such methods would produce detailed maps of flows through metabolic networks, which contribute significantly to better understanding of metabolism in cells. Although these approaches have been extensively established in microbial systems, their application to mammalian cells is sparse. This review brings together the recent development of constraint-based models and their applications in CHO cells. The further development of constraint-based modeling approaches driven by multi-omics datasets is discussed, and a framework of potential modeling application in cell culture engineering is proposed. Improved cell culture system understanding will enable robust developments in cell line and bioprocess engineering thus accelerating consistent process quality control in biopharmaceutical manufacturing. © 2017 Wiley Periodicals, Inc.
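
    The core of constraint-based modeling is a linear program over the stoichiometric matrix; a toy flux balance analysis (our example, not a CHO genome-scale model) looks like this:

    ```python
    # Toy FBA: maximize biomass flux subject to steady-state mass balance S*v = 0
    # and flux bounds, solved as a linear program.
    import numpy as np
    from scipy.optimize import linprog

    # metabolites x reactions: uptake -> A, A -> B, B -> biomass
    S = np.array([[ 1, -1,  0],     # metabolite A
                  [ 0,  1, -1]])    # metabolite B
    bounds = [(0, 10), (0, 8), (0, None)]      # uptake limit 10, enzyme cap 8

    res = linprog(c=[0, 0, -1],                # maximize v_biomass = minimize -v_biomass
                  A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal fluxes:", res.x)            # -> [8, 8, 8]
    ```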

  3. Hierarchy of models: From qualitative to quantitative analysis of circadian rhythms in cyanobacteria

    Science.gov (United States)

    Chaves, M.; Preto, M.

    2013-06-01

    A hierarchy of models, ranging from high to lower levels of abstraction, is proposed to construct "minimal" but predictive and explanatory models of biological systems. Three hierarchical levels are considered: Boolean networks, piecewise affine differential (PWA) equations, and a class of continuous ordinary differential equation models derived from the PWA model. This hierarchy provides different levels of approximation of the biological system and, crucially, allows the use of theoretical tools to analyze and understand the mechanisms of the system more exactly. The KaiABC oscillator, which is at the core of the cyanobacterial circadian rhythm, is analyzed as a case study, showing how several fundamental properties (order of oscillations, synchronization when mixing oscillating samples, structural robustness, and entrainment by external cues) can be obtained from basic mechanisms.
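
    The top of such a hierarchy can be made concrete with a minimal synchronous Boolean network; the three-node negative-feedback loop below is an abstraction in the spirit of, but not identical to, the KaiABC system.

    ```python
    # Synchronous Boolean network for a three-node negative-feedback loop:
    # A' = not C, B' = A, C' = B. The trajectory cycles through 6 states.
    def step(a, b, c):
        return (not c, a, b)

    state = (True, False, False)
    seen = []
    for _ in range(9):
        seen.append(state)
        state = step(*state)
    for s in seen:
        print("".join("1" if x else "0" for x in s))   # sustained oscillation
    ```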

  4. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab; Meseguer, José

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability

  5. Novel calibration model maintenance strategy for solving the signal instability in quantitative liquid chromatography-mass spectrometry.

    Science.gov (United States)

    Du, Hai-Li; Chen, Zeng-Ping; Song, Mi; Chen, Yao; Yu, Ru-Qin

    2014-04-18

    In this contribution, a multiplicative effects model with a parameter accounting for variations in overall sensitivity over time is proposed to reduce the effects of signal instability on the quantitative results of LC-MS/MS. This method allows the use of calibration models constructed from large standard sets without having to repeat their measurement, even when variations occur in sensitivity and baseline signal intensity. The performance of the proposed method was tested on two proof-of-concept model systems: the determination of a target peptide in two sets of peptide digest mixtures, and the quantification of melamine and metronidazole in two sets of milk powder samples. Experimental results confirmed that the multiplicative effects model could provide quite satisfactory concentration predictions for both systems, with average relative predictive error values far lower than the corresponding values of the various models investigated in this paper. Considering its capability to solve the problem of signal instability across samples and over time in LC-MS/MS assays, and its simplicity of implementation, the multiplicative effects model is expected to be developed and extended in many application areas, such as the quantification of specific proteins in cells and human plasma and other complex systems. Copyright © 2014 Elsevier B.V. All rights reserved.
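
    Conceptually, the correction works by estimating a batch-specific overall-sensitivity factor from a reference standard and rescaling responses onto the original calibration scale; the numbers in this sketch are invented for illustration.

    ```python
    # Multiplicative sensitivity correction: reuse an old calibration curve after drift.
    import numpy as np

    conc_cal = np.array([1.0, 2.0, 5.0, 10.0])
    resp_cal = 200.0 * conc_cal                       # calibration built on day 1

    ref_conc = 5.0
    ref_resp_today = 850.0                            # same standard, drifted instrument
    sensitivity = ref_resp_today / (200.0 * ref_conc) # batch-specific multiplicative factor

    sample_resp_today = np.array([420.0, 1530.0])
    corrected = sample_resp_today / sensitivity       # map back onto day-1 response scale
    slope = np.polyfit(conc_cal, resp_cal, 1)[0]
    print("predicted concentrations:", corrected / slope)
    ```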

  6. Precise Quantitative Analysis of Probabilistic Business Process Model and Notation Workflows

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2013-01-01

    We present a framework for modeling and analysis of real-world business workflows. We present a formalized core subset of the business process modeling and notation (BPMN) and then proceed to extend this language with probabilistic nondeterministic branching and general-purpose reward annotations… the entire BPMN language, allow for more complex annotations and ultimately to automatically synthesize workflows by composing predefined subprocesses, in order to achieve a configuration that is optimal for parameters of interest…

  7. Comparing models for quantitative risk assessment: an application to the European Registry of foreign body injuries in children.

    Science.gov (United States)

    Berchialla, Paola; Scarinzi, Cecilia; Snidero, Silvia; Gregori, Dario

    2016-08-01

    Risk assessment is the systematic study of decisions subject to uncertain consequences. Increasing interest has focused on modeling techniques such as Bayesian networks, owing to their capability of (1) combining, within a probabilistic framework, different types of evidence, including both expert judgments and objective data; (2) overturning previous beliefs in the light of new information; and (3) making predictions even with incomplete data. In this work, we compared Bayesian networks with other classical quantitative risk assessment techniques such as neural networks, classification trees, random forests and logistic regression models. Hybrid approaches, combining both classification trees and Bayesian networks, were also considered. Among the Bayesian networks, a clear distinction is made between the purely data-driven approach and the combination of expert knowledge with objective data. The aim of this paper is to evaluate which of these models can best be applied, in the framework of quantitative risk assessment, to assess the safety of children who are exposed to the risk of inhalation/insertion/aspiration of consumer products. Preventing injuries in children is of paramount importance, in particular where product design is involved: quantifying the risk associated with product characteristics can be of great use in informing product safety design regulation. Data from the European Registry of Foreign Bodies Injuries formed the starting evidence for risk assessment. Results showed that Bayesian networks combine ease of interpretability with accurate prediction, even if simpler models such as logistic regression still performed well. © The Author(s) 2013.
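
    A toy illustration of the kind of discrete Bayesian-network reasoning being compared here, hand-rolled for self-containment; the structure, variables and all probabilities are hypothetical, not the registry's fitted values.

```python
import numpy as np

# P(shape): 0 = round, 1 = irregular
p_shape = np.array([0.6, 0.4])
# P(size | shape): rows = shape, cols = small / large
p_size_given_shape = np.array([[0.7, 0.3],
                               [0.4, 0.6]])
# P(injury | shape, size), indexed [shape, size]
p_injury = np.array([[0.30, 0.05],
                     [0.15, 0.02]])

# marginal risk: sum the joint P(shape, size) weighted by the risk table
joint = p_shape[:, None] * p_size_given_shape
print("P(injury) =", float((joint * p_injury).sum()))

# conditional risk for an observed round, small object
print("P(injury | round, small) =", p_injury[0, 0])
```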

  8. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    Science.gov (United States)

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
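
    A sketch of the routing-and-blending logic just described, using scikit-learn's PLS as a stand-in for the mission pipeline and random placeholders for LIBS spectra; the 0-50/30-70/60-100 wt.% ranges follow the SiO2 example above.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 50))        # stand-in spectra
y = rng.uniform(0, 100, size=400)     # stand-in SiO2 wt.%

def fit(lo, hi):
    m = (y >= lo) & (y <= hi)
    return PLSRegression(n_components=5).fit(X[m], y[m]), lo, hi

full, _, _ = fit(0, 100)
submodels = [fit(0, 50), fit(30, 70), fit(60, 100)]

def predict(x):
    x = x.reshape(1, -1)
    ref = np.clip(full.predict(x).item(), 0, 100)   # full model routes
    hits = [(m, lo, hi) for m, lo, hi in submodels if lo <= ref <= hi]
    if len(hits) == 1:
        return hits[0][0].predict(x).item()
    # in an overlap region, blend the two submodels linearly
    (m1, _, hi1), (m2, lo2, _) = sorted(hits, key=lambda h: h[1])
    w = (ref - lo2) / (hi1 - lo2)     # 0 at overlap start, 1 at its end
    return (1 - w) * m1.predict(x).item() + w * m2.predict(x).item()

print(predict(X[0]))
```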

  9. Quantitative evaluation of muscle synergy models: a single-trial task decoding approach.

    Science.gov (United States)

    Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano

    2013-01-01

    Muscle synergies, i.e., invariant coordinated activations of groups of muscles, have been proposed as building blocks that the central nervous system (CNS) uses to construct the patterns of muscle activity utilized for executing movements. Several efficient dimensionality reduction algorithms that extract putative synergies from electromyographic (EMG) signals have been developed. Typically, the quality of synergy decompositions is assessed by computing the Variance Accounted For (VAF). Yet, little is known about the extent to which the combination of those synergies encodes task-discriminating variations of muscle activity in individual trials. To address this question, here we conceive and develop a novel computational framework to evaluate muscle synergy decompositions in task space. Unlike previous methods considering the total variance of muscle patterns (VAF-based metrics), our approach focuses on the variance discriminating execution of different tasks. The procedure is based on single-trial task decoding from muscle synergy activation features. The decoding-based metric quantitatively evaluates the mapping between synergy recruitment and task identification and automatically determines the minimal number of synergies that captures all the task-discriminating variability in the synergy activations. In this paper, we first validate the method on plausibly simulated EMG datasets. We then show that it can be applied to different types of muscle synergy decomposition and illustrate its applicability to real data by using it for the analysis of EMG recordings during an arm pointing task. We find that time-varying and synchronous synergies with similar numbers of parameters are equally efficient in task decoding, suggesting that in this experimental paradigm they are equally valid representations of muscle synergies. Overall, these findings stress the effectiveness of the decoding metric in systematically assessing muscle synergy decompositions in task space.
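
    A hedged sketch of the two-stage pipeline with scikit-learn: synchronous synergy extraction by non-negative matrix factorization, then single-trial task decoding from the synergy activations. Data and dimensions are synthetic stand-ins, and the decoder here (cross-validated LDA accuracy) only approximates the paper's information-theoretic metric.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_muscles, n_synergies = 120, 12, 4
tasks = rng.integers(0, 3, size=n_trials)            # 3 pointing targets
W_true = rng.random((n_muscles, n_synergies))        # ground-truth synergies
H_true = rng.random((n_synergies, n_trials)) + tasks # task-modulated gains
emg = W_true @ H_true + 0.05 * rng.random((n_muscles, n_trials))

# NMF: emg ~ W (muscle synergies) @ H (per-trial activations)
model = NMF(n_components=n_synergies, init="nndsvda", max_iter=1000)
W = model.fit_transform(emg)
H = model.components_

# decode task identity from single-trial activations; CV accuracy is the metric
acc = cross_val_score(LinearDiscriminantAnalysis(), H.T, tasks, cv=5)
print("decoding accuracy: %.2f" % acc.mean())
```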

  10. Strategic municipal solid waste management: A quantitative model for Italian regions

    International Nuclear Information System (INIS)

    Cucchiella, Federica; D’Adamo, Idiano; Gastaldi, Massimo

    2014-01-01

    for future research in assessing quantitatively the effectiveness of waste management

  11. Fechner’s law in metacognition: a quantitative model of visual working memory confidence

    Science.gov (United States)

    van den Berg, Ronald; Yoo, Aspen H.; Ma, Wei Ji

    2016-01-01

    Although visual working memory (VWM) has been studied extensively, it is unknown how people form confidence judgments about their memories. Peirce (1878) speculated that Fechner’s law – which states that sensation is proportional to the logarithm of stimulus intensity – might apply to confidence reports. Based on this idea, we hypothesize that humans map the precision of their VWM contents to a confidence rating through Fechner’s law. We incorporate this hypothesis into the best available model of VWM encoding and fit it to data from a delayed-estimation experiment. The model provides an excellent account of human confidence rating distributions as well as the relation between performance and confidence. Moreover, the best-fitting mapping in a model with a highly flexible mapping closely resembles the logarithmic mapping, suggesting that no alternative mapping exists that accounts better for the data than Fechner’s law. We propose a neural implementation of the model and find that this model also fits the behavioral data well. Furthermore, we find that jointly fitting memory errors and confidence ratings boosts the power to distinguish previously proposed VWM encoding models by a factor of 5.99 compared to fitting only memory errors. Finally, we show that Fechner’s law also accounts for metacognitive judgments in a word recognition memory task, which is a first indication that it may be a general law in metacognition. Our work presents the first model to jointly account for errors and confidence ratings in VWM and could lay the groundwork for understanding the computational mechanisms of metacognition. PMID:28221087
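
    A minimal simulation of the hypothesized mapping, assuming gamma-distributed trial-wise memory precision J (a common VWM assumption) and an affine-in-log(J) confidence rule; the constants and the 4-point rating scale are illustrative, not the fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.gamma(shape=2.0, scale=5.0, size=10000)     # trial-wise precision
a, b = 1.2, 0.5
# Fechner-style mapping: confidence is affine in log precision
conf = np.clip(np.round(a * np.log(J) + b), 1, 4)   # 4-point rating scale

# higher precision implies smaller errors and higher confidence, jointly
err = rng.normal(0, 1 / np.sqrt(J))
for r in range(1, 5):
    m = conf == r
    print(f"rating {r}: {m.mean():.2f} of trials, "
          f"mean |error| = {np.abs(err[m]).mean():.3f}")
```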

  12. Quantitative physical models of volcanic phenomena for hazards assessment of critical infrastructures

    Science.gov (United States)

    Costa, Antonio

    2016-04-01

    Volcanic hazards may have destructive effects on the economy, transport, and natural environments at both local and regional scales. Hazardous phenomena include pyroclastic density currents, tephra fall, gas emissions, lava flows, debris flows and avalanches, and lahars. Volcanic hazards assessment is based on available information to characterize potential volcanic sources in the region of interest and to determine whether specific volcanic phenomena might reach a given site. It focuses on estimating the distances that volcanic phenomena could travel from potential sources and their intensity at the considered site. Epistemic and aleatory uncertainties strongly affect the resulting hazards assessment. Within the context of critical infrastructures, volcanic eruptions are rare natural events that can create severe hazards. In addition to being rare, many past volcanic eruptions are poorly preserved in the geologic record. The models used for describing the impact of volcanic phenomena span a range of complexities, from simplified physics-based conceptual models to highly coupled thermo-fluid-dynamical approaches. Modelling approaches represent a hierarchy of complexity, which reflects increasing requirements for well-characterized data in order to produce a broader range of output information. In selecting models for the hazard analysis of a specific phenomenon, the questions that the models must answer should be carefully considered. Independently of the model, the final hazards assessment strongly depends on input derived from detailed volcanological investigations, such as mapping and stratigraphic correlations. For each phenomenon, an overview of currently available approaches for the evaluation of future hazards will be presented, with the aim of providing a foundation for future work in developing an international consensus on volcanic hazards assessment methods.

  13. Benchmarking the Sandbox: Quantitative Comparisons of Numerical and Analogue Models of Brittle Wedge Dynamics (Invited)

    Science.gov (United States)

    Buiter, S.; Schreurs, G.; Geomod2008 Team

    2010-12-01

    When numerical and analogue models are used to investigate the evolution of deformation processes in crust and lithosphere, they face specific challenges related to, among others, large contrasts in material properties, the heterogeneous character of continental lithosphere, the presence of a free surface, the occurrence of large deformations including viscous flow and offset on shear zones, and the observation that several deformation mechanisms may be active simultaneously. These pose specific demands on numerical software and laboratory models. By combining the two techniques, we can utilize the strengths of each individual method and test the model-independence of our results. We can perhaps even consider our findings to be more robust if we find similar or identical results irrespective of the modeling method that was used. To assess the role of modeling method and to quantify the variability among models with identical setups, we have performed a direct comparison of the results of 11 numerical codes and 15 analogue experiments. We present three experiments that describe shortening of brittle wedges and that resemble setups frequently used especially by analogue modelers. Our first experiment translates a non-accreting wedge with a stable surface slope. In agreement with critical wedge theory, all models maintain their surface slope and do not show internal deformation. This experiment serves as a reference that allows for testing against analytical solutions for taper angle, root-mean-square velocity and gravitational rate of work. The next two experiments investigate an unstable wedge, which deforms by inward translation of a mobile wall. The models accommodate shortening by formation of forward and backward shear zones. We compare surface slope, rate of dissipation of energy, root-mean-square velocity, and the location, dip angle and spacing of shear zones. All models show similar cross-sectional evolutions that demonstrate reproducibility to first order. However

  14. DFT and TD-DFT computation of charge transfer complex between o-phenylenediamine and 3,5-dinitrosalicylic acid

    International Nuclear Information System (INIS)

    Afroz, Ziya; Zulkarnain; Ahmad, Afaq; Alam, Mohammad Jane; Faizan, Mohd; Ahmad, Shabbir

    2016-01-01

    DFT and TD-DFT studies of o-phenylenediamine (PDA), 3,5-dinitrosalicylic acid (DNSA) and their charge transfer complex have been carried out at the B3LYP/6-311G(d,p) level of theory. Molecular geometry and various other molecular properties, such as natural atomic charges, ionization potential, electron affinity and band gap, together with natural bond orbital (NBO) and frontier molecular orbital analyses, are presented at the same level of theory. Frontier molecular orbital and natural bond orbital analyses show charge delocalization from PDA to DNSA.

  15. Assessing parameter importance of the Common Land Model based on qualitative and quantitative sensitivity analysis

    Directory of Open Access Journals (Sweden)

    J. Li

    2013-08-01

    Proper specification of model parameters is critical to the performance of land surface models (LSMs). Due to high dimensionality and parameter interaction, estimating the parameters of an LSM is a challenging task. Sensitivity analysis (SA) is a tool that can screen out the most influential parameters on model outputs. In this study, we conducted parameter screening for six output fluxes of the Common Land Model: sensible heat, latent heat, upward longwave radiation, net radiation, soil temperature and soil moisture. A total of 40 adjustable parameters were considered. Five qualitative SA methods, including local, sum-of-trees, multivariate adaptive regression splines, delta test and Morris methods, were compared. The proper sampling design and the sample size necessary to effectively screen out the sensitive parameters were examined. We found that there are 2–8 sensitive parameters, depending on the output type, and that about 400 samples are adequate to reliably identify the most sensitive parameters. We also employed a revised Sobol' sensitivity method to quantify the importance of all parameters. The total effects of the parameters were used to assess the contribution of each parameter to the total variances of the model outputs. The results confirmed that global SA methods can generally identify the most sensitive parameters effectively, while local SA methods result in type I errors (i.e., sensitive parameters labeled as insensitive) or type II errors (i.e., insensitive parameters labeled as sensitive). Finally, we evaluated and confirmed the screening results for their consistency with the physical interpretation of the model parameters.
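
    As an illustration of qualitative screening, here is a Morris elementary-effects sketch using the SALib package (an assumption: the study used its own implementations); the 3-parameter test function merely stands in for a CoLM output flux.

```python
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

# hypothetical parameter names and bounds, standing in for CoLM's 40
problem = {
    "num_vars": 3,
    "names": ["porosity", "roughness", "albedo"],
    "bounds": [[0.3, 0.6], [0.01, 0.1], [0.1, 0.3]],
}

X = morris_sample.sample(problem, N=100, num_levels=4)
# stand-in for a model output flux (e.g. latent heat)
Y = X[:, 0] ** 2 + 10 * X[:, 1] + 0.1 * np.random.default_rng(0).random(X.shape[0])

res = morris_analyze.analyze(problem, X, Y, num_levels=4)
for name, mu, sigma in zip(problem["names"], res["mu_star"], res["sigma"]):
    print(f"{name}: mu* = {mu:.3f}, sigma = {sigma:.3f}")
```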

  16. Integrating expert opinion with modelling for quantitative multi-hazard risk assessment in the Eastern Italian Alps

    Science.gov (United States)

    Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.

    2016-11-01

    Extreme rainfall events are the main triggers of hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of the lack of data related to past events and causal factors, and the interactions between different types of hazards. This lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley, prone to debris flows and floods, in the north-eastern Italian Alps. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and numbers of people, physical vulnerability assessment, generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability, resulting in minimum, average and maximum risk curves. To validate the approach, a back analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show good performance when compared with the historical damage reports.

  17. Quantitative genetics of Taura syndrome resistance in Pacific white shrimp (Penaeus vannamei): A cure model approach

    DEFF Research Database (Denmark)

    Ødegård, Jørgen; Gitterle, Thomas; Madsen, Per

    2011-01-01

    …cure survival model using Gibbs sampling, treating susceptibility and endurance as separate genetic traits. Results: Overall mortality at the end of the test was 28%, while 38% of the population was considered susceptible to the disease. The estimated underlying heritability was high for susceptibility (0.…). However, genetic evaluation of susceptibility based on the cure model showed clear associations with standard genetic evaluations that ignore the cure fraction for these data. Using the current testing design, genetic variation in observed survival time and absolute survival at the end of the test were most…

  18. Quantitative Model of Price Diffusion and Market Friction Based on Trading as a Mechanistic Random Process

    Science.gov (United States)

    Daniels, Marcus G.; Farmer, J. Doyne; Gillemot, László; Iori, Giulia; Smith, Eric

    2003-03-01

    We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
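
    A minimal zero-intelligence sketch in the spirit of this model: limit orders, market orders and cancellations arrive as random events on a price grid, and a positive spread emerges from order flow alone. The event probabilities and price grid are illustrative, not the paper's fitted rates.

```python
import random

random.seed(0)
book = {"bid": [], "ask": []}          # resting limit-order prices
ticks = range(90, 111)

def best():
    return max(book["bid"], default=None), min(book["ask"], default=None)

for _ in range(20000):
    ev = random.choices(["limit", "market", "cancel"], [0.5, 0.25, 0.25])[0]
    side = random.choice(["bid", "ask"])
    bid, ask = best()
    if ev == "limit":
        p = random.choice(ticks)
        # admit only prices that do not cross the opposite best quote
        if side == "bid" and (ask is None or p < ask):
            book["bid"].append(p)
        elif side == "ask" and (bid is None or p > bid):
            book["ask"].append(p)
    elif ev == "market" and book[side]:
        # a market order consumes the best resting quote on that side
        book[side].remove(max(book[side]) if side == "bid" else min(book[side]))
    elif ev == "cancel" and book[side]:
        book[side].remove(random.choice(book[side]))

bid, ask = best()
if bid is not None and ask is not None:
    print("best bid/ask:", bid, ask, "spread:", ask - bid)
```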

  19. A novel quantitative model of cell cycle progression based on cyclin-dependent kinases activity and population balances.

    Science.gov (United States)

    Pisu, Massimo; Concas, Alessandro; Cao, Giacomo

    2015-04-01

    Cell cycle regulates proliferative cell capacity under normal or pathologic conditions, and in general it governs all in vivo/in vitro cell growth and proliferation processes. Mathematical simulation by means of reliable and predictive models represents an important tool to interpret experimental results, to facilitate the definition of the optimal operating conditions for in vitro cultivation, or to predict the effect of a specific drug in normal/pathologic mammalian cells. Along these lines, a novel model of cell cycle progression is proposed in this work. Specifically, it is based on a population balance (PB) approach that allows one to quantitatively describe cell cycle progression through the different phases experienced by each cell of the entire population during its own life. The transition between two consecutive cell cycle phases is simulated by taking advantage of the biochemical kinetic model developed by Gérard and Goldbeter (2009), which involves cyclin-dependent kinases (CDKs) whose regulation is achieved through a variety of mechanisms including association with cyclins and protein inhibitors, phosphorylation-dephosphorylation, and cyclin synthesis or degradation. This biochemical model properly describes the entire cell cycle of mammalian cells while maintaining a sufficient level of detail to identify checkpoints for transitions and to estimate the phase durations required by the PB model. Specific examples are discussed to illustrate the ability of the proposed model to simulate the effect of drugs in in vitro trials of interest in oncology, regenerative medicine and tissue engineering. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Arms race between selfishness and policing: two-trait quantitative genetic model for caste fate conflict in eusocial Hymenoptera.

    Science.gov (United States)

    Dobata, Shigeto

    2012-12-01

    Policing against selfishness is now regarded as the main force maintaining cooperation, by reducing costly conflict in complex social systems. Although policing has been studied extensively in social insect colonies, its coevolution against selfishness has not been fully captured by previous theories. In this study, I developed a two-trait quantitative genetic model of the conflict between selfish immature females (usually larvae) and policing workers in eusocial Hymenoptera over the immatures' propensity to develop into new queens. This model allows for the analysis of coevolution between genomes expressed in immatures and workers that collectively determine the immatures' queen caste fate. The main prediction of the model is that a higher level of polyandry leads to a smaller fraction of queens produced among new females through caste fate policing. The other main prediction is that, as a result of an arms race, caste fate policing by workers coevolves with exaggerated selfishness of the immatures, which achieve their maximum potential to develop into queens. Moreover, the model can incorporate genetic correlations between traits, which have been largely unexplored in social evolution theory. This study highlights the importance of understanding social traits as influenced by the coevolution of conflicting genomes. © 2012 The Author. Evolution © 2012 The Society for the Study of Evolution.

  1. Detection and quantitation of circulating tumor cell dynamics by bioluminescence imaging in an orthotopic mammary carcinoma model.

    Directory of Open Access Journals (Sweden)

    Laura Sarah Sasportas

    Circulating tumor cells (CTCs) have been detected in the bloodstream of both early-stage and advanced cancer patients. However, very little is known about the dynamics of CTCs during cancer progression and the clinical relevance of longitudinal CTC enumeration. To address this, we developed a simple bioluminescence imaging assay to detect CTCs in mouse models of metastasis. In a 4T1 orthotopic metastatic mammary carcinoma mouse model, we demonstrated that this quantitative method offers sensitivity down to 2 CTCs in 0.1–1 mL blood samples and high specificity for CTCs originating from the primary tumor, independently of their epithelial status. In this model, we simultaneously monitored blood CTC dynamics, primary tumor growth, and lung metastasis progression over the course of 24 days. Early in tumor development, we observed low numbers of CTCs in blood samples (10–15 cells/100 µL) and demonstrated that CTC dynamics correlate with viable primary tumor growth. To our knowledge, these data represent the first reported use of bioluminescence imaging to detect CTCs and quantify their dynamics in any cancer mouse model. This new assay opens the door to the study of CTC dynamics in a variety of animal models. These studies may inform clinical decisions on the appropriate timing of blood sampling and the value of longitudinal CTC enumeration in cancer patients.

  2. Using the ACT-R architecture to specify 39 quantitative process models of decision making

    NARCIS (Netherlands)

    Marewski, Julian N.; Mehlhorn, Katja

    Hypotheses about decision processes are often formulated qualitatively and remain silent about the interplay of decision, memorial, and other cognitive processes. At the same time, existing decision models are specified at varying levels of detail, making it difficult to compare them. We provide a

  3. Quantitative analysis of large amounts of journalistic texts using topic modelling

    NARCIS (Netherlands)

    Jacobi, C.; van Atteveldt, W.H.; Welbers, K.

    2016-01-01

    The huge collections of news content which have become available through digital technologies both enable and warrant scientific inquiry, challenging journalism scholars to analyse unprecedented amounts of texts. We propose Latent Dirichlet Allocation (LDA) topic modelling as a tool to face this
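
    A minimal LDA sketch with scikit-learn on four toy documents (the authors work with large news corpora and their own tooling; this snippet is only illustrative of the technique).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "parliament passed the budget after a long debate",
    "the striker scored twice in the cup final",
    "the government announced new budget cuts",
    "fans celebrated the team winning the final",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(docs)                 # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-4:][::-1]          # four highest-weight terms
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```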

  4. Quantitative Analysis of Nanoparticle Transport through in Vitro Blood-Brain Barrier Models

    NARCIS (Netherlands)

    Åberg, Christoffer

    2016-01-01

    Nanoparticle transport through the blood-brain barrier has received much attention of late, both from the point of view of nano-enabled drug delivery, as well as due to concerns about unintended exposure of nanomaterials to humans and other organisms. In vitro models play a lead role in efforts to

  5. On the computations analyzing natural optic flow : Quantitative model analysis of the blowfly motion vision pathway

    NARCIS (Netherlands)

    Lindemann, J.P.; Kern, R.; Hateren, J.H. van; Ritter, H.; Egelhaaf, M.

    2005-01-01

    For many animals, including humans, the optic flow generated on the eyes during locomotion is an important source of information about self-motion and the structure of the environment. The blowfly has been used frequently as a model system for experimental analysis of optic flow processing at the

  6. Quantitative analysis of multiple biokinetic models using a dynamic water phantom: A feasibility study

    Science.gov (United States)

    Chiang, Fu-Tsai; Li, Pei-Jung; Chung, Shih-Ping; Pan, Lung-Fa; Pan, Lung-Kwang

    2016-01-01

    This study analyzed multiple biokinetic models using a dynamic water phantom. The phantom was custom-made from acrylic materials to model metabolic mechanisms in the human body. It had 4 spherical chambers of different sizes, connected by 8 ditches to form a complex and adjustable water loop. Infusion and drain poles connected the chambers to auxiliary silicone hoses. The radioactive compound solution (Tc-99m-MDP labeled) formed a sealed and static water loop inside the phantom. As clean feed water was infused to replace the original solution, the system mimicked metabolic mechanisms for data acquisition. Five cases with different water loop settings were tested and analyzed, with case settings changed by controlling valve poles located in the ditches. The phantom could also be changed from model A to model B by altering its vertical configuration. The phantom was surveyed with a clinical gamma camera to determine the time-dependent intensity of every chamber. The recorded counts per pixel in each chamber were analyzed and normalized for comparison with theoretical estimates from a MATLAB program. Every preset case was represented by uniquely defined, time-dependent, simultaneous differential equations, and a corresponding MATLAB program optimized the solutions by comparing theoretical calculations with practical measurements. A dimensionless agreement (AT) index was recommended to evaluate the comparison in each case. ATs varied from 5.6 to 48.7 over the 5 cases, indicating that this work constitutes an acceptable feasibility study. PMID:27286096
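
    The simultaneous differential equations referred to here are compartmental balance equations; a minimal sketch for two chambers in series flushed by clean feed water is shown below, with hypothetical volumes and flow rate rather than the phantom's actual settings.

```python
import numpy as np
from scipy.integrate import solve_ivp

Q = 5.0               # feed flow rate, mL/min (hypothetical)
V1, V2 = 50.0, 80.0   # chamber volumes, mL (hypothetical)

def rhs(t, c):
    c1, c2 = c
    # chamber 1 receives clean water (activity 0); chamber 2 receives c1
    return [-Q / V1 * c1, Q / V2 * (c1 - c2)]

sol = solve_ivp(rhs, (0, 60), [1.0, 1.0], t_eval=np.linspace(0, 60, 7))
for t, c1, c2 in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t = {t:4.0f} min: c1 = {c1:.3f}, c2 = {c2:.3f}")
```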

  7. A Quantitative Research Study on the Implementation of the Response-to-Intervention Model

    Science.gov (United States)

    Mahoney, Jamie

    2011-01-01

    Response to Intervention (RTI) emerged as a new service delivery model designed to meet the learning needs of all students prior to diagnosis and placement in the special education setting. The problem was that few research studies had compared general education teachers with intensive professional development and those without…

  8. Quantitative modelling to estimate the transfer of pharmaceuticals through the food production system

    NARCIS (Netherlands)

    Chitescu, C.L.; Nicolau, A.I.; Romkens, P.F.A.M.; Fels-Klerx, van der H.J.

    2014-01-01

    Use of pharmaceuticals in animal production may cause an indirect route of contamination of food products of animal origin. This study aimed to assess, through mathematical modelling, the transfer of pharmaceuticals from contaminated soil, through plant uptake, into the dairy food production chain.

  9. Complementary three-dimensional quantitative structure-activity relationship modeling of binding affinity and functional potency

    DEFF Research Database (Denmark)

    Tosco, Paolo; Ahring, Philip K; Dyhring, Tino

    2009-01-01

    Complementary 3D-QSAR modeling of binding affinity and functional potency is proposed as a tool to pinpoint the molecular features of the ligands, and the corresponding amino acids in the receptor, responsible for high affinity binding vs those driving agonist behavior and receptor activation. Th...

  10. Quantitative coating thickness determination using a coefficient-independent hyperspectral scattering model

    NARCIS (Netherlands)

    Dingemans, LM; Papadakis, V.; Liu, P.; Adam, A.J.L.; Groves, R.M.

    2017-01-01

    Background
    Hyperspectral imaging is a technique that enables the mapping of spectral signatures across a surface. It is most commonly used for surface chemical mapping in fields as diverse as satellite remote sensing, biomedical imaging and heritage science. Existing models, such as the

  11. Numerical modelling for quantitative environmental risk assessment for the disposal of drill cuttings and mud

    Science.gov (United States)

    Wahab, Mohd Amirul Faiz Abdul; Shaufi Sokiman, Mohamad; Parsberg Jakobsen, Kim

    2017-10-01

    To investigate the fate of drilling waste and its impacts on the surrounding environment, numerical models were generated using the environmental modelling software MIKE by DHI. These models were used to study the transport of suspended drill waste plumes in the water column and their deposition on the seabed in the South China Sea (SCS). A random disposal site with a model area of 50 km × 25 km was selected near the Madalene Shoal in the SCS, and the ambient currents as well as other meteorological conditions were simulated in detail at the proposed location. This paper focuses on a sensitivity study of how different drill waste particle characteristics affect the marine receiving environment. The drilling scenarios were obtained and adapted from an oil producer well offshore Sabah (Case 1) and from an actual exploration drilling case at the Pumbaa location (PL 469) in the Norwegian Sea (Case 2). The two cases were compared to study the effect of different drilling particle characteristics and their behavior in the marine receiving environment after discharge. Using the hydrodynamic and sediment transport models simulated in MIKE by DHI, the variation of currents and the behavior of the drill waste particles can be analyzed and evaluated in terms of zones of impact of varying degree.

  12. Semi-Local DFT Functionals with Exact-Exchange-Like Features: Beyond the AK13

    Science.gov (United States)

    Armiento, Rickard

    The Armiento-Kümmel functional from 2013 (AK13) is a non-empirical semi-local exchange functional of generalized gradient approximation (GGA) form in Kohn-Sham (KS) density functional theory (DFT). Recent works have established that AK13 gives improved electronic-structure exchange features over other semi-local methods, with a qualitatively improved orbital description and band structure. For example, the Kohn-Sham band gap is greatly extended, as it is for exact exchange. This talk outlines recent efforts towards new exchange-correlation functionals based on, and extending, the AK13 design ideas. The aim is to improve quantitative accuracy and the description of energetics, and to address other issues found with the original formulation. Swedish e-Science Research Centre (SeRC).

  13. Quantitative modelling of the closure of meso-scale parallel currents in the nightside ionosphere

    Directory of Open Access Journals (Sweden)

    A. Marchaudon

    2004-01-01

    On 12 January 2000, during a northward IMF period, two successive conjunctions occurred between the CUTLASS SuperDARN radar pair and the two satellites Ørsted and FAST. This situation is used to describe and model the electrodynamics of a nightside meso-scale arc associated with a convection shear. Three field-aligned current sheets are observed: one upward, and two downward on either side. Based on the measurements of the parallel currents and either the conductance or the electric field profile, a model of the ionospheric current closure is developed along each satellite orbit. This model is one-dimensional in a first attempt, and a two-dimensional model is tested for the Ørsted case. These models allow one to quantify the balance between electric field gradients and ionospheric conductance gradients in the closure of the field-aligned currents. The radar and satellite data are also combined with images from Polar-UVI, allowing for a description of the time evolution of the arc between the two satellite passes. The arc is very dynamic, in spite of quiet solar wind conditions. Periodic enhancements of the convection and of the electron precipitation associated with the arc are observed, probably caused by quasi-periodic injections of particles due to reconnection in the magnetotail. Also, a northward shift and a reorganisation of the precipitation pattern are observed, together with a southward shift of the convection shear. Key words: Ionosphere (auroral ionosphere; electric fields and currents; particle precipitation) – Magnetospheric physics (magnetosphere-ionosphere interactions)

  14. Quantitative Thermochronology

    Science.gov (United States)

    Braun, Jean; van der Beek, Peter; Batt, Geoffrey

    2006-05-01

    Thermochronology, the study of the thermal history of rocks, enables us to quantify the nature and timing of tectonic processes. Quantitative Thermochronology is a robust review of isotopic ages, and presents a range of numerical modeling techniques to allow the physical implications of isotopic age data to be explored. The authors provide analytical, semi-analytical, and numerical solutions to the heat transfer equation in a range of tectonic settings and under varying boundary conditions. They then illustrate their modeling approach with a large number of case studies. The benefits of different thermochronological techniques are also described. Computer programs on an accompanying website at www.cambridge.org/9780521830577 are introduced through the text and provide a means of solving the heat transport equation in the deforming Earth to predict the ages of rocks and compare them directly with geological and geochronological data. Several short tutorials, with hints and solutions, are also included. Numerous case studies help geologists to interpret age data and relate them to Earth processes; essential background material aids in understanding and using thermochronological data; the book provides a thorough treatise on numerical modeling of heat transport in the Earth's crust, and is supported by a website hosting relevant computer programs and colour slides of figures from the book for use in teaching.

  15. Quantitative Raman characterization of cross-linked collagen thin films as a model system for diagnosing early osteoarthritis

    Science.gov (United States)

    Wang, Chao; Durney, Krista M.; Fomovsky, Gregory; Ateshian, Gerard A.; Vukelic, Sinisa

    2016-03-01

    The onset of osteoarthritis (OA) in articular cartilage is characterized by degradation of the extracellular matrix (ECM). Specifically, breakage of cross-links between collagen fibrils in the articular cartilage leads to loss of structural integrity of the bulk tissue. As there are no broadly accepted, non-invasive, label-free tools for diagnosing OA at its early stage, Raman spectroscopy is proposed in this work as a novel, non-destructive diagnostic tool. In this study, collagen thin films were employed as a simplified model system of the cartilage collagen extracellular matrix. Cross-link formation was controlled via exposure to glutaraldehyde (GA), by varying exposure time and concentration levels, and Raman spectral information was collected to quantitatively characterize the cross-link assignments imparted to the collagen thin films during treatment. A novel, quantitative method was developed to analyze the Raman signal obtained from the collagen thin films. Segments of the Raman signal were decomposed and modeled as sums of individual bands, providing an optimization function for subsequent curve fitting against experimental findings. Relative changes in the concentration of the GA-induced pyridinium cross-links were extracted from the model as a function of the exposure to GA. Spatially resolved characterization enabled construction of spectral maps of the collagen thin films, which provided detailed information about the variation of cross-link formation at various locations on the specimen. Results showed that the Raman spectral data correlate with glutaraldehyde treatment and therefore may be used as a proxy by which to measure loss of collagen cross-links in vivo. This study proposes a promising means of identifying the onset of OA and may enable early intervention treatments that serve to slow or prevent osteoarthritis progression.
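
    A minimal sketch of the band-decomposition step: model a spectral segment as a sum of Gaussian bands and fit by least squares, taking the fitted amplitudes as relative concentration proxies. The two band positions here are illustrative, not the actual pyridinium assignments.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_bands(x, a1, mu1, w1, a2, mu2, w2):
    g = lambda a, mu, w: a * np.exp(-((x - mu) ** 2) / (2 * w ** 2))
    return g(a1, mu1, w1) + g(a2, mu2, w2)

# synthetic spectrum segment (wavenumber axis in cm^-1)
x = np.linspace(1550, 1700, 300)
rng = np.random.default_rng(0)
y = two_bands(x, 1.0, 1610, 8, 0.4, 1660, 12) + 0.02 * rng.normal(size=x.size)

p0 = [1, 1605, 10, 0.5, 1655, 10]              # initial guesses
popt, _ = curve_fit(two_bands, x, y, p0=p0)
print("fitted amplitudes:", popt[0], popt[3])  # relative concentration proxies
```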

  16. The DFT-DVM theoretical study of the differences of quadrupole splitting and the iron electronic structure for the rough heme models for α- and β-subunits in deoxyhemoglobin and for deoxymyoglobin

    Energy Technology Data Exchange (ETDEWEB)

    Yuryeva, E. I. [Institute of Solid State Chemistry of the Ural Branch of the Russian Academy of Sciences (Russian Federation); Oshtrakh, M. I., E-mail: oshtrakh@mail.utnet.ru [Ural State Technical University-UPI, Faculty of Physical Techniques and Devices for Quality Control (Russian Federation)

    2008-01-15

    Quantum chemical calculations of the iron electronic structure and ⁵⁷Fe quadrupole splitting were made by density functional theory and the Xα discrete variation method for rough heme models for the α- and β-subunits in deoxyhemoglobin and for deoxymyoglobin, accounting for the stereochemical differences of the active sites in the native proteins. The calculations revealed differences in the temperature dependence of the quadrupole splitting for the three models, indicating the sensitivity of the quadrupole splitting and the Fe(II) electronic structure to small variations of the iron stereochemistry.

  17. CHANNEL ESTIMATION FOR ZT DFT-s-OFDM

    DEFF Research Database (Denmark)

    2018-01-01

    A signal modulated according to zero-tail discrete Fourier transform spread orthogonal frequency division multiplexing (ZT DFT-s-OFDM) is received over a channel. The signal is down-sampled into a first sequence comprising N samples, N corresponding to the number of used subcarriers. The first Nh...

  18. DFT computations of the lattice constant, stable atomic structure and ...

    African Journals Online (AJOL)

    This paper presents the most stable atomic structure and lattice constant of fullerene (C60). The FHI-aims DFT code was used to predict the stable structure and the computed lattice constant of C60. These were compared with known experimental structures and lattice constants of C60. The results obtained showed that ...

  19. [Quantitative estimation of vegetation cover and management factor in USLE and RUSLE models by using remote sensing data: a review].

    Science.gov (United States)

    Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie

    2012-06-01

    Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at the regional scale. Rational estimation of the vegetation cover and management factor, one of the most important parameters in USLE or RUSLE, is particularly important for the accurate prediction of soil erosion. Traditional estimation based on field survey and measurement is time-consuming, laborious and costly, and cannot rapidly extract the vegetation cover and management factor at the macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for estimating the vegetation cover and management factor over broad geographic areas. This paper summarizes research findings on the quantitative estimation of the vegetation cover and management factor using remote sensing data, and analyzes the advantages and disadvantages of the various methods, aiming to provide a reference for further research and quantitative estimation of the vegetation cover and management factor at large scales.
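
    One widely used estimate from this literature is the exponential NDVI scaling of the C factor attributed to van der Knijff et al., C = exp(-α·NDVI/(β-NDVI)); the sketch below uses the commonly quoted α = 2 and β = 1, which should be treated as assumptions requiring local calibration.

```python
import numpy as np

def c_factor(ndvi, alpha=2.0, beta=1.0):
    # alpha and beta are the commonly quoted defaults, not calibrated values
    ndvi = np.clip(ndvi, 0.0, 0.99)   # keep the denominator positive
    return np.exp(-alpha * ndvi / (beta - ndvi))

for ndvi in (0.1, 0.3, 0.5, 0.7):
    print(f"NDVI = {ndvi:.1f} -> C = {c_factor(ndvi):.3f}")
```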

  20. Inhibition of bacterial conjugation by phage M13 and its protein g3p: quantitative analysis and model.

    Directory of Open Access Journals (Sweden)

    Abraham Lin

    Conjugation is the main mode of horizontal gene transfer that spreads antibiotic resistance among bacteria. Strategies for inhibiting conjugation may be useful for preserving the effectiveness of antibiotics and preventing the emergence of bacterial strains with multiple resistances. Filamentous bacteriophages were first observed to inhibit conjugation several decades ago. Here we investigate the mechanism of inhibition and find that the primary effect on conjugation is occlusion of the conjugative pilus by phage particles. This interaction is mediated primarily by the phage coat protein g3p, and exogenous addition of the soluble fragment of g3p inhibited conjugation at low nanomolar concentrations. Our data are quantitatively consistent with a simple model in which association between the pili and phage particles or g3p prevents transmission of an F plasmid encoding tetracycline resistance. We also observe a decrease in the donor ability of infected cells, which is quantitatively consistent with a reduction in pili elaboration. Since many antibiotic-resistance factors confer susceptibility to phage infection through expression of conjugative pili (the receptor for filamentous phage), these results suggest that phage may be a source of soluble proteins that slow the spread of antibiotic resistance genes.

  1. AI/OR computational model for integrating qualitative and quantitative design methods

    Science.gov (United States)

    Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor

    1990-01-01

    A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.

  2. MetabR: an R script for linear model analysis of quantitative metabolomic data

    Directory of Open Access Journals (Sweden)

    Ernest Ben

    2012-10-01

    Background: Metabolomics is an emerging high-throughput approach to systems biology, but data analysis tools are lacking compared to other systems-level disciplines such as transcriptomics and proteomics. Metabolomic data analysis requires a normalization step to remove systematic effects of confounding variables on metabolite measurements. Current tools may not correctly normalize every metabolite when the relationships between each metabolite quantity and fixed-effect confounding variables are different, or for the effects of random-effect confounding variables. Linear mixed models, an established methodology in the microarray literature, offer a standardized and flexible approach for removing the effects of fixed- and random-effect confounding variables from metabolomic data.
    Findings: Here we present a simple menu-driven program, "MetabR", designed to aid researchers with no programming background in the statistical analysis of metabolomic data. Written in the open-source statistical programming language R, MetabR implements linear mixed models to normalize metabolomic data and analysis of variance (ANOVA) to test treatment differences. MetabR exports normalized data, checks statistical model assumptions, identifies differentially abundant metabolites, and produces output files to help with data interpretation. Example data are provided to illustrate normalization for common confounding variables and to demonstrate the utility of the MetabR program.
    Conclusions: We developed MetabR as a simple and user-friendly tool for implementing linear mixed model-based normalization and statistical analysis of targeted metabolomic data, which helps to fill the lack of available data analysis tools in this field. The program, user guide, example data, and any future news or updates related to the program may be found at http://metabr.r-forge.r-project.org/.
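
    MetabR itself is an R program; as a language-consistent sketch of its core step, the snippet below fits a linear mixed model (fixed batch effect, random subject intercept, treatment term) to one hypothetical metabolite with statsmodels. Column names and data are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40
df = pd.DataFrame({
    "treatment": np.repeat(["control", "treated"], n // 2),
    "batch": np.tile(["b1", "b2"], n // 2),
    "subject": np.repeat([f"s{i}" for i in range(10)], 4),
})
# simulated metabolite abundance with a treatment and a batch effect
df["abundance"] = (
    1.0 + 0.5 * (df["treatment"] == "treated")
    + 0.3 * (df["batch"] == "b2") + rng.normal(0, 0.2, n)
)

# random intercept per subject; batch and treatment as fixed effects
fit = smf.mixedlm("abundance ~ treatment + batch", df,
                  groups=df["subject"]).fit()
print(fit.summary())
```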

  3. Placebo Response is Driven by UCS Revaluation: Evidence, Neurophysiological Consequences and a Quantitative Model

    OpenAIRE

    Luca Puviani; Sidita Rama

    2016-01-01

    Despite growing scientific interest in the placebo effect and increasing understanding of neurobiological mechanisms, theoretical modeling of the placebo response remains poorly developed. The most extensively accepted theories are expectation and conditioning, involving both conscious and unconscious information processing. However, it is not completely understood how these mechanisms can shape the placebo response. We focus here on neural processes which can account for key properties of th...

  4. Quantitative modeling of flavonoid glycosides isolated from Paliurus spina-christi Mill.

    OpenAIRE

    Medić-Šarić, Marica; Maleš, Željan; Šarić, Slavko; Brantner, Adelheid

    1996-01-01

    Several QSPR models for predicting the properties of flavonoid glycosides isolated from Paliurus spina-christi Mill., and of some related flavonoids, are described and evaluated. Log P values for all of them were calculated according to the method of Rekker. All investigated flavonoids showed pronounced hydrophobicity. A significant correlation between the partition coefficient, log P, and the van der Waals volume, Vw (calculated according to the method described by Moriguchi et al.), was obtained. T...

  5. Longitudinal Multiplexed Measurement of Quantitative Proteomic Signatures in Mouse Lymphoma Models Using Magneto-Nanosensors.

    Science.gov (United States)

    Lee, Jung-Rok; Appelmann, Iris; Miething, Cornelius; Shultz, Tyler O; Ruderman, Daniel; Kim, Dokyoon; Mallick, Parag; Lowe, Scott W; Wang, Shan X

    2018-01-01

    Cancer proteomics is the manifestation of relevant biological processes in cancer development. Thus, it reflects the activities of tumor cells, host-tumor interactions, and systemic responses to cancer therapy. To understand the causal effects of tumorigenesis or therapeutic intervention, longitudinal studies are greatly needed. However, conventional mouse experiments rarely accommodate the frequent collection of serum samples in volumes large enough for multiple protein assays on a single animal. Here, we present a technique based on magneto-nanosensors for longitudinally monitoring the protein profiles of individual mice in lymphoma models, using small sample volumes for multiplex assays. Methods: Drug-sensitive and -resistant cancer cell lines were used to develop mouse models that render different outcomes upon drug treatment. Two groups of mice were inoculated with each cell line and treated with either cyclophosphamide or vehicle solution. Serum samples taken longitudinally from each mouse in the groups were measured with 6-plex magneto-nanosensor cytokine assays. To find the origin of IL-6, experiments were performed using IL-6 knockout mice. Results: The differences in serum IL-6 and GCSF levels between the drug-treated and untreated groups were revealed by the magneto-nanosensor measurements on individual mice. Using the multiplex assays and mouse models, we found that IL-6 is secreted by the host in the presence of tumor cells upon drug treatment. Conclusion: The multiplex magneto-nanosensor assays enable longitudinal proteomic studies in mouse tumor models to understand tumor development and therapy mechanisms more precisely within a single biological object.

  6. Quantitative modeling of the third harmonic emission spectrum of plasmonic nanoantennas.

    Science.gov (United States)

    Hentschel, Mario; Utikal, Tobias; Giessen, Harald; Lippitz, Markus

    2012-07-11

    Plasmonic dimer nanoantennas are characterized by a strong enhancement of the optical field, leading to large nonlinear effects. The third harmonic emission spectrum thus depends strongly on the antenna shape and size as well as on its gap size. Despite the complex shape of the nanostructure, we find that for a large range of different geometries the nonlinear spectral properties are fully determined by the linear response of the antenna. We find excellent agreement between the measured spectra and predictions from a simple nonlinear oscillator model. We extract the oscillator parameters from the linear spectrum and use the amplitude of the nonlinear perturbation only as a scaling parameter for the third harmonic spectra. Deviations from the model occur only for gap sizes below 20 nm, indicating that only at these small distances does the antenna hot spot contribute noticeably to the third harmonic generation. Because of its simplicity and intuitiveness, our model allows for the rational design of efficient plasmonic nonlinear light sources and is thus crucial for the design of future plasmonic devices that give substantial enhancement of nonlinear processes such as higher harmonic generation and difference frequency mixing for plasmonically enhanced terahertz generation.
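
    A minimal sketch of the model's central prediction: once a Lorentzian response L(ω) is fit to the linear spectrum, the third-harmonic amplitude follows as |L(3ω)·L(ω)³| up to a single overall scale factor. The resonance parameters below are illustrative, not values fitted to any antenna.

```python
import numpy as np

w0, gamma = 2.35, 0.12        # resonance and damping, eV (illustrative)

def L(w):
    # linear (Lorentzian) response of the plasmonic mode
    return 1.0 / (w0**2 - w**2 - 1j * gamma * w)

w = np.linspace(0.6, 1.0, 5)              # fundamental photon energies, eV
th = np.abs(L(3 * w) * L(w) ** 3) ** 2    # third-harmonic intensity
th /= th.max()                            # single free parameter: overall scale
for wi, ti in zip(w, th):
    print(f"pump {wi:.2f} eV -> TH {ti:.3f}")
```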

  7. Influences on decision-making for undergoing plastic surgery: a mental models and quantitative assessment.

    Science.gov (United States)

    Darisi, Tanya; Thorne, Sarah; Iacobelli, Carolyn

    2005-09-01

    Research was conducted to gain insight into potential clients' decisions to undergo plastic surgery, their perception of benefits and risks, their judgment of outcomes, and their selection of a plastic surgeon. Semistructured, open-ended interviews were conducted with 60 people who expressed interest in plastic surgery. Qualitative analysis revealed their "mental models" regarding influences on their decision to undergo plastic surgery and their choice of a surgeon. Interview results were used to design a Web-based survey in which 644 individuals considering plastic surgery responded. The desire for change was the most direct motivator to undergo plastic surgery. Improvements to physical well-being were related to emotional and social benefits. When prompted about risks, participants mentioned physical, emotional, and social risks. Surgeon selection was a critical influence on decisions to undergo plastic surgery. Participants gave considerable weight to personal consultation and believed that finding the "right" plastic surgeon would minimize potential risks. Findings from the Web-based survey were similar to the mental models interviews in terms of benefit ratings but differed in risk ratings and surgeon selection criteria. The mental models interviews revealed that interview participants were thoughtful about their decision to undergo plastic surgery and focused on finding the right plastic surgeon.

  8. Quantitative model of transport-aperture coordination during reach-to-grasp movements.

    Science.gov (United States)

    Rand, Miya K; Shimansky, Y P; Hossain, Abul B M I; Stelmach, George E

    2008-06-01

    It has been found in our previous studies that the initiation of aperture closure during reach-to-grasp movements occurs when the hand distance to target crosses a threshold that is a function of peak aperture amplitude, hand velocity, and hand acceleration. Thus, a stable relationship between those four movement parameters is observed at the moment of aperture closure initiation. Based on the concept of optimal control of movements (Naslin 1969) and its application for reach-to-grasp movement regulation (Hoff and Arbib 1993), it was hypothesized that the mathematical equation expressing that relationship can be generalized to describe coordination between hand transport and finger aperture during the entire reach-to-grasp movement by adding aperture velocity and acceleration to the above four movement parameters. The present study examines whether this hypothesis is supported by the data obtained in experiments in which young adults performed reach-to-grasp movements in eight combinations of two reach-amplitude conditions and four movement-speed conditions. It was found that linear approximation of the mathematical model described the relationship among the six movement parameters for the entire aperture-closure phase with very high precision for each condition, thus supporting the hypothesis for that phase. Testing whether one mathematical model could approximate the data across all the experimental conditions revealed that it was possible to achieve the same high level of data-fitting precision only by including in the model two additional, condition-encoding parameters and using a nonlinear, artificial neural network-based approximator with two hidden layers comprising three and two neurons, respectively. This result indicates that transport-aperture coordination, as a specific relationship between the parameters of hand transport and finger aperture, significantly depends on the condition-encoding variables. The data from the aperture-opening phase also fit a
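
    A hedged sketch of the approximator described above, assuming a standard feed-forward regressor with the stated (3, 2) hidden architecture; the eight inputs and the target are synthetic placeholders, not the study's kinematic recordings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))   # 6 kinematic + 2 condition-encoding inputs
y = np.tanh(X[:, 0]) + 0.5 * X[:, 1] * X[:, 7]   # stand-in relationship

# two hidden layers with 3 and 2 units, as in the paper's approximator
net = MLPRegressor(hidden_layer_sizes=(3, 2), activation="tanh",
                   max_iter=5000, random_state=0).fit(X, y)
print("fit R^2:", net.score(X, y))
```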

  9. Quantitative modelling of the degradation processes of cement grout. Project CEMMOD

    Energy Technology Data Exchange (ETDEWEB)

    Grandia, Fidel; Galindez, Juan-Manuel; Arcos, David; Molinero, Jorge (Amphos21 Consulting S.L., Barcelona (Spain))

    2010-05-15

    Grout cement is planned to be used in the sealing of water-conducting fractures in the deep geological storage of spent nuclear fuel waste. The integrity of such cementitious materials should be ensured on a time frame of decades to a hundred years at minimum. However, their durability must be quantified, since grout degradation may jeopardize the stability of other components in the repository due to the potential release of hyperalkaline plumes. Model prediction of cement alteration has been challenging in recent years, mainly due to the difficulty of reproducing the progressive change in composition of the calcium-silicate-hydrate (CSH) compounds as alteration proceeds. In general, the data obtained from laboratory experiments show a rather similar dependence between the pH of the pore water and the Ca-Si ratio of the CSH phases: the Ca-Si ratio decreases as the CSH is progressively replaced by Si-enriched phases. An elegant and reasonable approach is the use of solid solution models, even keeping in mind that CSH phases are not crystalline solids but gels. An additional obstacle is the uncertainty in the initial composition of the grout to be considered in the calculations, because only the recipe of the low-pH clinker is commonly provided by the manufacturer. The hydration process leads to the formation of new phases and, importantly, creates porosity. A number of solid solution models have been reported in the literature. Most of them assume a strongly non-ideal binary solid solution series to account for the observed changes in the Ca-Si ratios of CSH. However, it is very difficult to reproduce the degradation of the CSH over the whole compositional range (commonly Ca/Si = 0.5-2.5) by considering only two end-members and fixed non-ideality parameters. Models with multiple non-ideal end-members, with interaction parameters as a function of the solid composition, can solve the problem, but these cannot be managed in the existing codes of reactive

  10. Modeling the Dispersibility of Single Walled Carbon Nanotubes in Organic Solvents by Quantitative Structure-Activity Relationship Approach

    Science.gov (United States)

    Yilmaz, Hayriye; Rasulev, Bakhtiyor; Leszczynski, Jerzy

    2015-01-01

    Knowledge of the physico-chemical properties of carbon nanotubes, including their behavior in organic solvents, is very important for designing, manufacturing, and utilizing their counterparts with improved properties. In the present study, a quantitative structure-activity/property relationship (QSAR/QSPR) approach was applied to predict the dispersibility of single-walled carbon nanotubes (SWNTs) in various organic solvents. A number of additive descriptors and quantum-chemical descriptors were calculated and utilized to build QSAR models. The best predictability is shown by a 4-variable model. The model showed statistically good results (R2training = 0.797, Q2 = 0.665, R2test = 0.807), with high internal and external correlation coefficients. The presence of the X0Av descriptor and its negative term suggests that small-sized solvents disperse SWNTs better. The mass-weighted descriptor ATS6m also indicates that heavier (yet small) solvents are most probably better solvents for SWNTs. The presence of the Dipole Z descriptor indicates that higher polarizability of the solvent molecule increases the solubility. The developed model and the contributing descriptors can help to understand the mechanism of the dispersion process and to predict organic solvents that improve the dispersibility of SWNTs. PMID:28347035
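
    For readers unfamiliar with this type of model, a four-descriptor linear QSPR with a leave-one-out Q2 can be set up along the following lines; the descriptor values and coefficients here are invented placeholders, not the published model.

```python
# A minimal sketch (hypothetical data) of a four-descriptor linear QSPR with
# internal validation: ordinary least squares plus leave-one-out Q^2.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
# Columns stand in for descriptors such as X0Av, ATS6m, Dipole Z, etc.
X = rng.normal(size=(30, 4))
y = X @ np.array([-1.2, 0.8, 0.6, 0.3]) + rng.normal(scale=0.2, size=30)

model = LinearRegression().fit(X, y)
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print("R2(train) =", round(model.score(X, y), 3))
print("Q2(LOO)   =", round(r2_score(y, y_loo), 3))
```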

  11. Towards the Development of Global Nano-Quantitative Structure–Property Relationship Models: Zeta Potentials of Metal Oxide Nanoparticles

    Directory of Open Access Journals (Sweden)

    Andrey A. Toropov

    2018-04-01

    Full Text Available Zeta potential indirectly reflects the charge on the surface of nanoparticles in solution and can be used to represent the stability of a colloidal solution. Because the synthesis, testing, and evaluation of new nanomaterials are expensive and time-consuming, it would be helpful to estimate an approximate range of properties for untested nanomaterials using computational modeling. We collected the largest dataset of zeta potential measurements of bare metal oxide nanoparticles in water (87 data points). The dataset was used to develop quantitative structure-property relationship (QSPR) models. Essential features of the nanoparticles were represented using a modified simplified molecular input line entry system (SMILES). The SMILES strings reflected the size-dependent behavior of the zeta potentials, as the quasi-SMILES modification considered here included information about both the chemical composition and the size of the nanoparticles. Three mathematical models were generated using the Monte Carlo method, and their statistical quality was evaluated (R2 for the training set varied from 0.71 to 0.87; for the validation set, from 0.67 to 0.82; root mean square errors for both training and validation sets ranged from 11.3 to 17.2 mV). The developed models were analyzed and linked to aggregation effects in aqueous solutions.
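
    The quasi-SMILES/Monte Carlo scheme can be illustrated with a toy version: tokens encoding composition and a coded particle size receive correlation weights, and a random search adjusts the weights so that the summed descriptor correlates with zeta potential. The compositions, size codes, and values below are invented, and the published implementation is far more elaborate.

```python
# Toy quasi-SMILES model: each record is a token string (composition + coded
# size); a Monte Carlo search tunes per-token correlation weights so that the
# summed descriptor regresses well onto zeta potential.
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical records: (composition token, coded size token) -> zeta (mV)
samples = [("ZnO", "size<20nm"), ("ZnO", "size>50nm"), ("TiO2", "size<20nm"),
           ("TiO2", "size>50nm"), ("CuO", "size<20nm"), ("CuO", "size>50nm")]
zeta = np.array([28.0, 22.0, -15.0, -20.0, 30.0, 25.0])

tokens = sorted({t for s in samples for t in s})
weights = {t: rng.normal() for t in tokens}       # correlation weights to tune

def r2(w):
    """R^2 of zeta regressed on the summed-token descriptor."""
    d = np.array([sum(w[t] for t in s) for s in samples])
    if np.ptp(d) < 1e-9:                          # degenerate descriptor
        return -np.inf
    c1, c0 = np.polyfit(d, zeta, 1)
    pred = c0 + c1 * d
    return 1.0 - np.sum((zeta - pred) ** 2) / np.sum((zeta - zeta.mean()) ** 2)

best = r2(weights)
for _ in range(5000):                             # crude Monte Carlo search
    t = tokens[rng.integers(len(tokens))]
    old = weights[t]
    weights[t] += rng.normal(scale=0.5)
    new = r2(weights)
    if new > best:
        best = new                                # keep the improving move
    else:
        weights[t] = old                          # revert
print("best training R^2:", round(best, 3))
```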

  13. Quantitative study of Portland cement hydration by X-Ray diffraction/Rietveld analysis and geochemical modeling

    Science.gov (United States)

    Coutelot, F.; Seaman, J. C.; Simner, S.

    2017-12-01

    In this study the hydration of Portland cements containing blast-furnace slag and type V fly ash was investigated during curing using X-ray diffraction, with geochemical modeling used to calculate the total volume of hydrates. The goal was to evaluate the relationship between the starting component levels and the hydrate assemblages that develop during the curing process. Blast-furnace slag levels of 60, 45, and 30 wt.% were studied in blends containing fly ash and Portland cement. Geochemical modeling described the dissolution of the clinker and quantitatively predicted the amounts of hydrates. In all cases the experiments showed the presence of C-S-H, portlandite, and ettringite. The quantities of ettringite, portlandite, and the amorphous phases determined by XRD agreed well with the calculated amounts of these phases after different curing times. These findings show that changes in the bulk composition of hydrating cements can be described by geochemical models. Such a comparison between experimental and modeled data helps to understand in more detail the active processes occurring during cement hydration.
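
    The abstract does not detail how the amorphous content was quantified; if an internal standard was used, as is common in Rietveld-based studies, the correction takes the form sketched below (the spike level and refined fraction are invented for illustration).

```python
# Internal-standard correction for Rietveld quantification of the amorphous
# (e.g., C-S-H) fraction: Rietveld fractions are reported on a
# crystalline-only basis, so a known spike lets us back out the rest.
def amorphous_fraction(w_std_known, w_std_rietveld):
    """Amorphous weight fraction of the original (unspiked) sample.

    w_std_known:    true weight fraction of the internal standard added
    w_std_rietveld: standard fraction refined by Rietveld (crystalline-only basis)
    """
    a_spiked = 1.0 - w_std_known / w_std_rietveld   # amorphous share of spiked sample
    return a_spiked / (1.0 - w_std_known)           # rescale to the unspiked sample

# Example: a 10 wt.% corundum spike refined at 14 wt.% implies ~31.7 % amorphous.
print(round(100 * amorphous_fraction(0.10, 0.14), 1), "% amorphous")
```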

  14. Quantitative Myocardial Perfusion with Dynamic Contrast-Enhanced Imaging in MRI and CT: Theoretical Models and Current Implementation

    Directory of Open Access Journals (Sweden)

    G. J. Pelgrim

    2016-01-01

    Full Text Available Technological advances in magnetic resonance imaging (MRI) and computed tomography (CT), including higher spatial and temporal resolution, have made the prospect of performing absolute myocardial perfusion quantification possible, previously only achievable with positron emission tomography (PET). This could facilitate integration of myocardial perfusion biomarkers into the current workup for coronary artery disease (CAD), as MRI and CT systems are more widely available than PET scanners. Cardiac PET scanning remains expensive and is restricted by the requirement of a nearby cyclotron. Clinical evidence is needed to demonstrate that MRI and CT offer accuracy for myocardial perfusion quantification similar to that of PET. However, lack of standardization of acquisition protocols and tracer kinetic model selection complicates comparison between different studies and modalities. The aim of this overview is to provide insight into the different tracer kinetic models for quantitative myocardial perfusion analysis and to address typical implementation issues in MRI and CT. We compare different models based on their theoretical derivations and present the respective consequences for MRI and CT acquisition parameters, highlighting the interplay between tracer kinetic modeling and acquisition settings.
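
    As a concrete example of the tracer kinetic models being compared, a one-compartment (Kety/Tofts-type) model expresses tissue concentration as the arterial input function convolved with an exponential impulse response, Ct(t) = Ktrans * integral of Ca(tau) * exp(-kep*(t - tau)) dtau. The sketch below simulates this with invented parameters and a synthetic input function; it is not tied to any specific study in the overview.

```python
# One-compartment tracer kinetic model simulated by discrete convolution of a
# synthetic arterial input function (AIF) with an exponential impulse response.
import numpy as np

dt = 0.5                                   # s, sampling interval
t = np.arange(0, 120, dt)
ca = (t / 10.0) ** 3 * np.exp(-t / 10.0)   # hypothetical gamma-variate AIF (a.u.)

ktrans, kep = 0.02, 0.01                   # 1/s, illustrative kinetic parameters
irf = ktrans * np.exp(-kep * t)            # impulse response of the compartment
ct = np.convolve(ca, irf)[: t.size] * dt   # discrete convolution integral

print("peak tissue enhancement (a.u.):", round(ct.max(), 3))
```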

  15. Hazard Response Modeling Uncertainty (A Quantitative Method). Volume 2. Evaluation of Commonly Used Hazardous Gas Dispersion Models