Smith, M R; Nichols, S T; Constable, R T; Henkelman, R M
1991-05-01
The resolution of magnetic resonance images reconstructed using the discrete Fourier transform (DFT) algorithm is limited by the effective window generated by the finite data length. The transient error reconstruction approach (TERA) is an alternative reconstruction method based on autoregressive moving average (ARMA) modeling techniques. Quantitative measurements comparing the truncation artifacts present during DFT and TERA image reconstruction show that the modeling method substantially reduces these artifacts on "full" (256 × 256), "truncated" (256 × 192), and "severely truncated" (256 × 128) data sets without introducing the global amplitude distortion found in other modeling techniques. Two global measures for determining the success of modeling are suggested. Problem areas for one-dimensional modeling are examined, and reasons for considering two-dimensional modeling are discussed. Analyses of both medical and phantom data reconstructions are presented.
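The windowing effect described above can be illustrated with a short numerical sketch. This is illustrative only: it shows the DFT truncation (Gibbs ringing) artifact the abstract refers to, not the TERA/ARMA reconstruction itself, and the 1-D "object" is invented for demonstration.

```python
import numpy as np

# Truncating k-space before an inverse DFT is equivalent to multiplying by a
# rectangular window, which convolves the image with a sinc and causes ringing.
n = 256                       # full k-space samples
x = np.zeros(n)
x[96:160] = 1.0               # simple 1-D "object": a flat bright region
k_full = np.fft.fft(x)        # fully sampled k-space

k_trunc = k_full.copy()       # "severely truncated" acquisition: keep 128 of 256
k_trunc[64:192] = 0.0         # zero the high spatial frequencies

img_full = np.fft.ifft(k_full).real
img_trunc = np.fft.ifft(k_trunc).real

# Ringing appears as oscillation near the edges of the bright region.
ringing = np.max(np.abs(img_trunc - img_full))
print(f"max truncation artifact amplitude: {ringing:.3f}")
```

The fully sampled reconstruction recovers the object exactly, while the truncated one shows the characteristic overshoot near edges that model-based methods such as TERA aim to suppress.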
Fulton, John L; Bylaska, Eric J; Bogatko, Stuart; Balasubramanian, Mahalingam; Cauët, Emilie; Schenter, Gregory K; Weare, John H
2012-09-20
First-principles dynamics simulations (DFT, PBE96, and PBE0) and electron scattering calculations (FEFF9) provide near-quantitative agreement with new and existing XAFS measurements for a series of transition-metal ions interacting with their hydration shells via complex mechanisms (high spin, covalency, charge transfer, etc.). This analysis requires neither empirically fitted interparticle interaction potentials nor assumed structural models of hydration, yet it provides a consistent, parameter-free analysis and improved agreement with the higher-R scattering region (first- and second-shell structure, symmetry, dynamic disorder, and multiple scattering) for this comprehensive series of ions. DFT+GGA MD methods provide a high level of agreement; however, improvements are observed when exact exchange is included. Higher accuracy in the pseudopotential description of the atomic potential, including core polarization and reduced core radii, was necessary for very detailed agreement. The first-principles nature of this approach supports its application to more complex systems.
Ruf, Jacob; Nowadnick, Elizabeth; Park, Hyowon; King, Philip; Millis, Andrew; Schlom, Darrell; Shen, Kyle
Careful exploration of the phase space available for artificially engineering emergent electronic properties in epitaxial thin films and superlattices of transition-metal oxides requires close feedback between materials synthesis, experimental characterization of both electronic and atomic structures, and modeling based on advanced computational methods. Here we apply this general strategy to the perovskite rare-earth nickelate LaNiO3, using molecular-beam epitaxy to synthesize thin films, performing in situ angle-resolved photoemission spectroscopy (ARPES) and low-energy electron diffraction (LEED) measurements, and comparing our results with the predictions of density functional theory plus dynamical mean-field theory (DFT+DMFT). Our study establishes LaNiO3 as a moderately correlated metal in which the quasiparticle mass enhancement can be modeled with quantitative accuracy by DFT+DMFT. Finally, in view of efforts to produce eg orbital polarization in nickelate heterostructures as a means of mimicking single-band cuprate-like physics, we discuss the extent to which our ARPES and LEED results suggest that such effects are intrinsically present at film surfaces due to the existence of polar distortions, as reported by coherent Bragg rod analysis of surface x-ray diffraction.
Modelling Catalyst Surfaces Using DFT Cluster Calculations
Directory of Open Access Journals (Sweden)
Oliver Kröcher
2009-09-01
We review our recent theoretical DFT cluster studies of a variety of industrially relevant catalysts such as TiO2, γ-Al2O3, V2O5-WO3-TiO2 and Ni/Al2O3. Aspects of the metal oxide surface structure and the stability and structure of metal clusters on the support are discussed, as well as the reactivity of surfaces, including their behaviour upon poisoning. We demonstrate by example how such theoretical considerations can be combined with DRIFT and XPS results from experimental studies.
Modelling catalyst surfaces using DFT cluster calculations.
Czekaj, Izabela; Wambach, Jörg; Kröcher, Oliver
2009-11-20
We review our recent theoretical DFT cluster studies of a variety of industrially relevant catalysts such as TiO2, γ-Al2O3, V2O5-WO3-TiO2 and Ni/Al2O3. Aspects of the metal oxide surface structure and the stability and structure of metal clusters on the support are discussed, as well as the reactivity of surfaces, including their behaviour upon poisoning. We demonstrate by example how such theoretical considerations can be combined with DRIFT and XPS results from experimental studies.
Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution
Energy Technology Data Exchange (ETDEWEB)
Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K. [Univ. of Nebraska Medical Center, Omaha, NE (United States)]
1999-01-01
The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
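The 3D-DFT convolution idea above can be sketched in a few lines: the absorbed-dose map is the activity distribution convolved with a dose-point kernel, and doing the convolution as a product in Fourier space makes it O(N log N). This is a hedged illustration: `dose_by_fft` and the 1/(1+r²) kernel are invented for demonstration and are not the I-131 dose-point kernel or the CHT reconstruction used in the study.

```python
import numpy as np

def dose_by_fft(activity, kernel):
    """Circular 3-D convolution of an activity map with a dose-point kernel."""
    assert activity.shape == kernel.shape
    A = np.fft.fftn(activity)
    K = np.fft.fftn(np.fft.ifftshift(kernel))   # move kernel centre to origin
    return np.fft.ifftn(A * K).real

n = 32
z, y, x = np.indices((n, n, n)) - n // 2
r2 = x**2 + y**2 + z**2
kernel = 1.0 / (1.0 + r2)                 # illustrative kernel, peaked at r = 0

activity = np.zeros((n, n, n))
activity[n // 2, n // 2, n // 2] = 1.0    # point source at the grid centre

dose = dose_by_fft(activity, kernel)
# For a point source, the dose map reproduces the kernel itself.
print(dose[n // 2, n // 2, n // 2])
```

In a real dosimetry workflow the activity map would come from quantitative SPECT and the kernel from Monte Carlo transport; the FFT step is what makes voxel-by-voxel dose estimation tractable.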
Energy Technology Data Exchange (ETDEWEB)
Williams, Stephen D.; Johnson, Timothy J.; Sharpe, Steven W.; Yavelak, Veronica; Oats, R. P.; Brauer, Carolyn S.
2013-11-13
Recently recorded quantitative IR spectra of a variety of gas-phase alkanes are shown to have integrated intensities in both the C-H stretching and C-H bending regions that depend linearly on the molecular size, i.e. the number of C-H bonds. This result is well predicted from CH4 to C15H32 by DFT computations of IR spectra at the B3LYP/6-31+G(d,p) level of theory. A simple model predicting the absolute IR band intensities of alkanes based only on structural formula is proposed: for the C-H stretching band near 2930 cm-1 this is given by (in km/mol): CH_str = (34±3)×CH - (41±60), where CH is the number of C-H bonds in the alkane. The linearity is explained in terms of coordinated motion of methylene groups rather than the summed intensities of autonomous -CH2- units. The effect of alkyl chain length on the intensity of a C-H bending mode is explored and interpreted in terms of conformer distribution. The relative intensity contribution of a methyl mode compared to the total C-H stretch intensity is shown to be linear in the number of terminal methyl groups in the alkane, and can be used to predict quantitative spectra a priori based on structure alone.
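As a minimal illustration, the proposed structure-only intensity model can be written directly in code. The coefficients are the central values quoted in the abstract; the function name is our own.

```python
# Structure-only model from the abstract: integrated C-H stretching band
# intensity (km/mol) scales linearly with the number of C-H bonds,
# CH_str = 34*CH - 41 (central values; quoted uncertainties omitted).

def ch_stretch_intensity(n_ch_bonds: int) -> float:
    """Predicted integrated C-H stretch intensity in km/mol."""
    return 34.0 * n_ch_bonds - 41.0

# n-pentane (C5H12) has 12 C-H bonds
print(ch_stretch_intensity(12))   # 34*12 - 41 = 367.0
```

The point of such a model is that a quantitative band intensity follows from the structural formula alone, with no electronic-structure calculation required.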
Zhang, Jianying; Chen, Gangling; Gong, Xuedong
2017-06-01
The quantitative structure-property relationship (QSPR) methodology was applied to describe and explore the relationship between the structures and the energetic properties (and sensitivity) of some common energetic compounds. An extended series of structural and energetic descriptors was obtained with density functional theory (DFT) B3LYP and semi-empirical PM3 approaches. The results indicate that QSPR models constructed from quantum-chemical descriptors can be used to check the reliability of calculated results against experimental data, and can be extended to predict the properties of similar compounds.
DFT modeling of chemistry on the Z machine
Mattsson, Thomas
2013-06-01
Density Functional Theory (DFT) has proven remarkably accurate in predicting properties of matter under shock compression for a wide range of elements and compounds: from hydrogen to xenon via water. Materials where chemistry plays a role are of particular interest for many applications. For example, the deep interiors of Neptune, Uranus, and hundreds of similar exoplanets are composed of molecular ices of carbon, hydrogen, oxygen, and nitrogen at pressures of several hundred GPa and temperatures of many thousand kelvin. High-quality thermophysical experimental data and high-fidelity simulations including chemical reactions are necessary to constrain planetary models over a large range of conditions. As examples of where chemical reactions are important, and as a demonstration of the high fidelity possible for these structurally and chemically complex systems, we discuss shock and re-shock compression of liquid carbon dioxide (CO2) in the range 100 to 800 GPa, shock compression of the hydrocarbon polymers polyethylene (PE) and poly(4-methyl-1-pentene) (PMP), and finally simulations of shock compression of glow discharge polymer (GDP), including the effects of doping with germanium. Experimental results from Sandia's Z machine have time and again validated the DFT simulations at extreme conditions, and the combination of experiment and DFT provides reliable data for evaluating existing and constructing future wide-range equation-of-state models for molecular compounds like CO2 and polymers like PE, PMP, and GDP. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
DFT-based QSAR and QSPR models of several cis-platinum complexes: solvent effect.
Sarmah, Pubalee; Deka, Ramesh C
2009-06-01
Cytotoxic activities of cis-platinum complexes against parental and resistant ovarian cancer cell lines were investigated by quantitative structure-activity relationship (QSAR) analysis using density functional theory (DFT) based descriptors. The calculated parameters were found to increase the predictability of each QSAR model with incorporation of solvent effects indicating its importance in studying biological activity. Given the importance of logarithmic n-octanol/water partition coefficient (log P(o/w)) in drug metabolism and cellular uptake, we modeled the log P(o/w) of 24 platinum complexes with different leaving and carrier ligands by the quantitative structure-property relationship (QSPR) analysis against five different concentrations of MeOH using DFT and molecular mechanics derived descriptors. The log P(o/w) values of an additional set of 20 platinum complexes were also modeled with the same descriptors. We investigated the predictability of the model by calculating log P(o/w) of four compounds in the test set and found their predicted values to be in good agreement with the experimental values. The QSPR analyses performed on 24 complexes, combining the training and test sets, also provided significant values for the statistical parameters. The solvent medium played an important role in QSPR analysis by increasing the internal predictive ability of the models.
Williams, Stephen D.; Johnson, Timothy J.; Sharpe, Steven W.; Yavelak, Veronica; Oates, R. P.; Brauer, Carolyn S.
2013-11-01
Recently recorded quantitative IR spectra of a variety of gas-phase alkanes are shown to have integrated intensities in both the C-H stretching and C-H bending regions that depend linearly on the molecular size, i.e. the number of C-H bonds. This result is well predicted from CH4 to C15H32 by density functional theory (DFT) computations of IR spectra using Becke's three-parameter functional (B3LYP/6-31+G(d,p)). Using the experimental data, a simple model predicting the absolute IR band intensities of alkanes based only on structural formula is proposed: for the C-H stretching band envelope centered near 2930 cm-1 this is given by (km/mol) CH_str = (34±1)×CH - (41±23), where CH is the number of C-H bonds in the alkane. The linearity is explained in terms of coordinated motion of methylene groups rather than the summed intensities of autonomous -CH2- units. The effect of alkyl chain length on the intensity of a C-H bending mode is explored and interpreted in terms of conformer distribution. The relative intensity contribution of a methyl mode compared to the total C-H stretch intensity is shown to be linear in the number of methyl groups in the alkane, and can be used to predict quantitative spectra a priori based on structure alone.
A Conceptual DFT Approach Towards Developing New QSTR Models
Chattaraj, P K; Giri, S; Mukherjee, S; Roy, D R; Subramanian, V; Van Damme, S
2007-01-01
Quantitative structure-toxicity relationship (QSTR) models are developed for predicting the toxicity (pIGC50) of 252 aliphatic compounds toward Tetrahymena pyriformis. Single-parameter models with a simple molecular descriptor, the number of atoms in the molecule, provide remarkably good results. Better two-parameter QSTR models result when the global electrophilicity is used as the second descriptor. In order to tackle both charge- and frontier-controlled reactions, the importance of the local electro(nucleo)philicities and atomic charges is also analyzed. The best possible three-parameter QSTR models are prescribed.
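A one-descriptor QSTR model of this kind is just an ordinary least-squares fit of toxicity against a single structural descriptor. The sketch below uses synthetic data (not the 252-compound Tetrahymena set) purely to show the mechanics; the coefficients are invented.

```python
import numpy as np

# Synthetic data: toxicity generated from a known linear law plus noise,
# with "number of atoms" as the single descriptor.
rng = np.random.default_rng(0)
n_atoms = rng.integers(5, 40, size=60).astype(float)     # descriptor
pigc50 = 0.08 * n_atoms - 1.2 + rng.normal(0, 0.1, 60)   # synthetic pIGC50

# One-descriptor model: pIGC50 ~ slope * n_atoms + intercept
X = np.column_stack([n_atoms, np.ones_like(n_atoms)])
(slope, intercept), *_ = np.linalg.lstsq(X, pigc50, rcond=None)

pred = slope * n_atoms + intercept
ss_res = np.sum((pigc50 - pred) ** 2)
ss_tot = np.sum((pigc50 - pigc50.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"slope={slope:.3f} intercept={intercept:.3f} R^2={r2:.3f}")
```

A two-parameter model of the kind described in the abstract would simply add a second column (e.g. a global electrophilicity descriptor) to the design matrix.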
Chavda, Bhavin R.; Gandhi, Sahaj A.; Dubey, Rahul P.; Patel, Urmila H.; Barot, Vijay M.
2016-05-01
Novel chalcone derivatives have widespread applications in materials science and the medicinal industry. Density functional theory (DFT) is used to optimize the molecular structures of three chalcone derivatives (M-I, II, III). The discrepancies observed between the theoretical and experimental (X-ray) results are attributed to the different environments of the molecules: the experimental values refer to the molecule in the solid state, where it is subject to intermolecular forces such as non-bonded hydrogen-bond interactions, whereas the theoretical studies treat an isolated molecule in the gas phase. The lattice energies of all the molecules have been calculated using the PIXELC module in the Coulomb-London-Pauli (CLP) package and partitioned into coulombic, polarization, dispersion and repulsion contributions. The lattice-energy data confirm and strengthen the X-ray finding that weak but significant intermolecular interactions such as C-H…O, π-π and C-H…π play an important role in stabilizing the crystal packing.
Energy Technology Data Exchange (ETDEWEB)
Chavda, Bhavin R., E-mail: chavdabhavin9@gmail.com; Dubey, Rahul P.; Patel, Urmila H. [Department of Physics, Sardar Patel University, Vallabh Vidyanagar-388120, Gujarat (India)]; Gandhi, Sahaj A. [Bhavan's Shri I.L. Pandya Arts-Science and Smt. J.M. Shah Commerce College, Dakar, Anand-388001, Gujarat (India)]; Barot, Vijay M. [P. G. Center in Chemistry, Smt. S. M. Panchal Science College, Talod, Gujarat 383 215 (India)]
2016-05-06
Novel chalcone derivatives have widespread applications in materials science and the medicinal industry. Density functional theory (DFT) is used to optimize the molecular structures of three chalcone derivatives (M-I, II, III). The discrepancies observed between the theoretical and experimental (X-ray) results are attributed to the different environments of the molecules: the experimental values refer to the molecule in the solid state, where it is subject to intermolecular forces such as non-bonded hydrogen-bond interactions, whereas the theoretical studies treat an isolated molecule in the gas phase. The lattice energies of all the molecules have been calculated using the PIXELC module in the Coulomb-London-Pauli (CLP) package and partitioned into coulombic, polarization, dispersion and repulsion contributions. The lattice-energy data confirm and strengthen the X-ray finding that weak but significant intermolecular interactions such as C-H…O, π-π and C-H…π play an important role in stabilizing the crystal packing.
Directory of Open Access Journals (Sweden)
Navaratnarajah Kuganathan
2010-01-01
Model calculations are performed to predict the nature of the interaction between a SWNT and a tripeptide (Lys-Trp-Lys) and to calculate the binding energies and charge transfer between these two species using density functional theory. The DFT calculations indicate that the interaction is noncovalent in nature. Minimal charge transfer is observed between the SWNT and Lys-Trp-Lys.
Compositional and Quantitative Model Checking
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand
2010-01-01
This paper gives a survey of a compositional model checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based...
Patel, Kinjal D.; Patel, Urmila H.
2017-01-01
Sulfamonomethoxine, 4-amino-N-(6-methoxy-4-pyrimidinyl)benzenesulfonamide (C11H12N4O3S), is investigated by the single-crystal X-ray diffraction technique. Pairs of N-H⋯N and C-H⋯O intermolecular interactions, along with π⋯π interactions, are responsible for the stability of the molecular packing of the structure. In order to understand the nature of the interactions and their quantitative contributions to the crystal packing, 3D Hirshfeld surface and 2D fingerprint plot analyses are carried out. PIXEL calculations are performed to determine the lattice energies corresponding to the intermolecular interactions in the crystal structure. Ab initio quantum chemical calculations of sulfamonomethoxine (SMM) have been performed by the B3LYP method, using the 6-31G** basis set, with the help of the Schrodinger software. The computed geometrical parameters are in good agreement with the experimental data. The Mulliken charge distribution, calculated using the B3LYP method, confirms the presence of electron-acceptor and electron-donor atoms responsible for the intermolecular hydrogen-bond interactions and hence the molecular stability.
Institute of Scientific and Technical Information of China (English)
HAN Xiang-Yun; ZHENG Qing
2007-01-01
Geometrical configurations of 16 substituted biphenyls were computed at the B3LYP/6-311G** level with the Gaussian 98 program. Based on linear solvation energy theory, lgKow together with the structural and thermodynamic parameters obtained at this level were taken as theoretical descriptors, and a corresponding equation predicting the toxicity to Daphnia magna (-lgEC50) was obtained. It contains three parameters: the n-octanol/water partition coefficient (lgKow), the dipole moment of the molecule (μ) and the entropy (S°). For this equation, R2 = 0.9582, q2 = 0.8921 and SD = 0.102. The absolute t-scores of the three variables are larger than the critical value at the 95% confidence level, which confirms the credibility and stability of this model.
Evaluating London Dispersion Interactions in DFT: A Nonlocal Anisotropic Buckingham-Hirshfeld Model.
Krishtal, A; Geldof, D; Vanommeslaeghe, K; Alsenoy, C Van; Geerlings, P
2012-01-10
In this work, we present a novel model, referred to as BH-DFT-D, for the evaluation of London dispersion, with the purpose of correcting the performance of local DFT exchange-correlation functionals for the description of van der Waals interactions. The new BH-DFT-D model combines the equations originally derived by Buckingham [Buckingham, A. D. Adv. Chem. Phys. 1967, 12, 107] with the definition of distributed multipole polarizability tensors within the Hirshfeld method [Hirshfeld, F. L. Theor. Chim. Acta 1977, 44, 129], resulting in nonlocal, fully anisotropic expressions. Since no damping function has yet been introduced into the model, it is suitable in its present form for the evaluation of dispersion interactions in van der Waals dimers with no or negligible overlap. The new method is tested for an extended collection of van der Waals dimers against high-level data, where it is found to reproduce interaction energies at the BH-B3LYP-D/aug-cc-pVTZ level with a mean absolute error (MAE) of 0.20 kcal/mol. Next development steps of the model will consist of adding a damping function, analytical gradients, and generalization to supramolecular systems.
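For orientation, the leading-order isotropic London formula that underlies dispersion models of this family can be sketched as follows. This is the textbook precursor to the fully anisotropic Buckingham-Hirshfeld expressions, not the BH-DFT-D model itself, and the numerical inputs are illustrative atomic-unit values rather than Hirshfeld-partitioned polarizabilities from a real calculation.

```python
# Isotropic London approximation to the dispersion energy between two
# fragments: E_disp ~ -(3/2) * IA*IB/(IA+IB) * alpha_A*alpha_B / R^6,
# where alpha are dipole polarizabilities and I are ionization energies.
# All quantities in atomic units (hartree, bohr).

def london_dispersion(alpha_a, alpha_b, ip_a, ip_b, r):
    """Leading-order -C6/R^6 dispersion energy (atomic units)."""
    c6 = 1.5 * (ip_a * ip_b / (ip_a + ip_b)) * alpha_a * alpha_b
    return -c6 / r**6

# Two argon-like atoms: alpha ~ 11 a.u., I ~ 0.58 hartree, R = 7 bohr
e = london_dispersion(11.0, 11.0, 0.58, 0.58, 7.0)
print(f"{e:.6f} hartree")
```

Anisotropic models such as BH-DFT-D replace the scalar polarizabilities here with distributed polarizability tensors, so the interaction depends on orientation as well as distance.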
Barbosa, Nuno Almeida; Grzeszczuk, Maria; Wieczorek, Robert
2015-01-15
First results of the application of the DFT computational approach to the reversible electrochemistry of polyaniline are presented. A tetrameric chain was used as the simplest model of the polyaniline polymer species. The system under theoretical investigation involved six tetramer species, two electrons, and two protons, taking part in 14 elementary reactions. Moreover, the tetramer species were interacting with two trihalogenoacetic acid molecules. Trifluoroacetic, trichloroacetic, and tribromoacetic acids were found to impact the redox transformation of polyaniline as shown by cyclic voltammetry. The theoretical approach was considered as a powerful tool for investigating the main factors of importance for the experimental behavior. The DFT method provided molecular structures, interaction energies, and equilibrium energies of all of the tetramer-acid complexes. Differences between the energies of the isolated tetramer species and their complexes with acids are discussed in terms of the elementary reactions, that is, ionization potentials and electron affinities, equilibrium constants, electrode potentials, and reorganization energies. The DFT results indicate a high impact of the acid on the reorganization energy of a particular elementary electron-transfer reaction. The ECEC oxidation path was predicted by the calculations. The model of the reacting system must be extended to octamer species and/or dimeric oligomer species to better approximate the real polymer situation.
Chromium-based rings within the DFT and Falicov-Kimball model approach
Brzostowski, B.; Lemański, R.; Ślusarski, T.; Tomecka, D.; Kamieniarz, G.
2013-04-01
We present a comprehensive study of the electronic and magnetic properties of octometallic homo- and heteronuclear chromium-based molecular rings Cr7MF8(O2CH)16 (in short Cr7M, M = Cr, Cd and Ni) by first-principles density functional theory (DFT) and pseudopotential methods. Their radii are around 1 nm. For each Cr7M, the antiferromagnetic configuration corresponds to the ground state and the ferromagnetic (high-spin, HS) configuration to the highest-energy state. Using the broken symmetry (BS) approach, the differences between the total energies of the HS configuration and all the nonequivalent low-spin configurations with s = ±3/2 are calculated and exploited to extract the coupling parameters J between the magnetic ions. Magnetic moments are found to be well localised on the Cr and Ni centres, although the localisation of spin density on Ni is weaker. Having calculated the excess energies for an unprecedented number of configurations, a family of Ising-like models with nearest- and next-nearest-neighbour interactions has been considered. For each Cr7M, the values of the interaction parameters found within the unprojected method are coherent, despite the overdetermination problem, and demonstrate that the next-nearest-neighbour couplings are negligible. The DFT estimates of the nearest-neighbour couplings are overestimated, as is the ratio J_Cr-Cr/J_Cr-Ni, and a description based on the Falicov-Kimball (FK) model is suggested. In our approach, the effective magnetic interactions between ions are generated by local (on-site) Hund couplings between the ions and itinerant electrons. We demonstrate that the BS state energies obtained within DFT for Cr7M can be successfully represented by the FK model with a unique set of parameters.
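The broken-symmetry mapping described above amounts to solving an overdetermined linear system for the coupling constants. The sketch below is a hedged illustration with synthetic energies: a known J generates the "DFT" energies of several collinear spin configurations of an 8-site ring, and a least-squares fit of the nearest-neighbour Ising model recovers it.

```python
import numpy as np

n_sites = 8
s = 1.5                                  # |s| = 3/2 on each magnetic centre
J_true, E0_true = 2.0, -100.0            # toy parameters (arbitrary units)

def ising_energy(config, J, E0):
    """E = E0 + J * sum_i s_i * s_{i+1} on a ring (nearest neighbours only)."""
    spins = s * np.array(config, dtype=float)
    nn = sum(spins[i] * spins[(i + 1) % n_sites] for i in range(n_sites))
    return E0 + J * nn

# A few nonequivalent collinear configurations: HS plus spin flips
configs = [
    [1, 1, 1, 1, 1, 1, 1, 1],      # high-spin (ferromagnetic)
    [-1, 1, 1, 1, 1, 1, 1, 1],     # one flip
    [-1, -1, 1, 1, 1, 1, 1, 1],    # two adjacent flips
    [-1, 1, -1, 1, 1, 1, 1, 1],    # two non-adjacent flips
    [-1, 1, -1, 1, -1, 1, -1, 1],  # Neel-like (antiferromagnetic)
]
energies = np.array([ising_energy(c, J_true, E0_true) for c in configs])

# Design matrix: columns = [sum of s_i*s_{i+1}, 1]; overdetermined in J, E0.
A = np.array([[sum(s * c[i] * s * c[(i + 1) % n_sites] for i in range(n_sites)), 1.0]
              for c in configs])
(J_fit, E0_fit), *_ = np.linalg.lstsq(A, energies, rcond=None)
print(f"J = {J_fit:.3f}, E0 = {E0_fit:.3f}")
```

With real BS-DFT energies the system is noisy rather than exactly linear, which is why the overdetermination (more configurations than parameters) matters: the fit averages out inconsistencies between configurations.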
DFT molecular modeling and NMR conformational analysis of a new longipinenetriolone diester
Cerda-García-Rojas, Carlos M.; Guerra-Ramírez, Diana; Román-Marín, Luisa U.; Hernández-Hernández, Juan D.; Joseph-Nathan, Pedro
2006-05-01
The structure and conformational behavior of the new natural compound (4R,5S,7S,8R,9S,10R,11R)-longipin-2-en-7,8,9-triol-1-one 7-angelate-9-isovalerate (1), isolated from Stevia eupatoria, were studied by molecular modeling and NMR spectroscopy. A Monte Carlo search followed by DFT calculations at the B3LYP/6-31G* level provided the theoretical conformations of the sesquiterpene framework, which were in full agreement with results derived from the 1H-1H coupling constant analysis.
A DFT study of phenol adsorption on a low doping Mn-Ce composite oxide model
D'Alessandro, Oriana; Pintos, Delfina García; Juan, Alfredo; Irigoyen, Beatriz; Sambeth, Jorge
2015-12-01
Density functional theory calculations (DFT + U) were performed on a model of a low-doping Mn-Ce composite oxide constructed from experimental data, including X-ray diffraction (XRD) and temperature-programmed reduction (TPR). We considered a 12.5% Mn-doped CeO2 solid solution with fluorite-type structure, where Mn replaces Ce4+, leading to an oxygen-deficient bulk structure. Then, we modeled the adsorption of phenol on the bare Ce0.875Mn0.125O1.9375(1 1 1) surface. We also studied the effect of water adsorption and dissociation on phenol adsorption on this surface, and compared the predictions of the DFT + U calculations with diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) measurements. The experimental results allowed us both to build a realistic model of the low-doping Mn-Ce composite oxide and to support the prediction that phenol is adsorbed as a phenoxy group with a tilt angle of about 70° with respect to the surface.
Fe doped TiO2-graphene nanostructures: synthesis, DFT modeling and photocatalysis
Farhangi, Nasrin; Ayissi, Serge; Charpentier, Paul A.
2014-08-01
In this work, Fe-doped TiO2 nanoparticles ranging from 0.2 to 1 wt% Fe were grown from the surface of graphene sheet templates containing -COOH functionalities using sol-gel chemistry in a green solvent, a mixture of water/ethanol. The assemblies were characterized by a variety of analytical techniques, with the coordination mechanism examined theoretically using density functional theory (DFT). Scanning electron microscopy and transmission electron microscopy images showed excellent decoration of the Fe-doped TiO2 nanoparticles on the surface of the graphene sheets >5 nm in diameter. The surface area and optical properties of the Fe-doped photocatalysts were measured by BET, UV and PL spectrometry and compared to non-graphene and pure TiO2 analogs, showing a plateau at 0.6% Fe. Interactions between graphene and Fe-doped anatase TiO2 were also studied theoretically using the Vienna ab initio Simulation Package based on DFT. Our first-principles theoretical investigations validated the experimental findings, showing the strength of the physical and chemical adsorption between the graphene and Fe-doped TiO2. The resulting assemblies were tested for photodegradation under visible light using 17β-estradiol (E2) as a model compound, with all investigated catalysts showing significant enhancements in photocatalytic activity in the degradation of E2.
Directory of Open Access Journals (Sweden)
Alejandro Morales-Bayuelo
2013-01-01
Molecular quantum similarity descriptors and density functional theory (DFT) based reactivity descriptors were studied for a series of cholinesterase/monoamine oxidase inhibitors used in the treatment of Alzheimer's disease (AD). This theoretical study is expected to shed some light on molecular aspects that could contribute to understanding the molecular mechanics behind the interactions of these molecules with acetylcholinesterase (AChE) and butyrylcholinesterase (BuChE), as well as with monoamine oxidase (MAO) A and B. The Topogeometrical Superposition Algorithm for flexible molecules (TGSA-Flex) was used to solve the relative-orientation problem in the quantum similarity (QS) field. Using the molecular quantum similarity (MQS) field and DFT-based reactivity descriptors, the steric and electrostatic effects could be quantified through the Coulomb and overlap quantitative convergence scales (alpha and beta). In addition, an analysis of reactivity indices is presented, using global and local descriptors, identifying the binding sites and selectivity of the cholinesterase/monoamine oxidase inhibitors, clarifying the retrodonation process, and offering new insight for drug design for a disease as difficult to control as Alzheimer's.
Watkins, Marquita; Sizochenko, Natalia; Moore, Quentarius; Golebiowski, Marek; Leszczynska, Danuta; Leszczynski, Jerzy
2017-02-01
The presence of chlorophenols in drinking water can be hazardous to human health. Understanding the mechanisms of adsorption under specific experimental conditions would be beneficial when developing methods to remove toxic substances from drinking water during water treatment in order to limit human exposure to these contaminants. In this study, we investigated the sorption of chlorophenols on multi-walled carbon nanotubes using a density functional theory (DFT) approach. This was applied to study selected interactions between six solvents, five types of nanotubes, and six chlorophenols. Experimental data were used to construct structure-adsorption relationship (SAR) models that describe the recovery process. Specific interactions between solvents and chlorophenols were taken into account in the calculations by using novel specific mixture descriptors.
The mathematics of cancer: integrating quantitative models.
Altrock, Philipp M; Liu, Lin L; Michor, Franziska
2015-12-01
Mathematical modelling approaches have become increasingly abundant in cancer research. The complexity of cancer is well suited to quantitative approaches as it provides challenges and opportunities for new developments. In turn, mathematical modelling contributes to cancer research by helping to elucidate mechanisms and by providing quantitative predictions that can be validated. The recent expansion of quantitative models addresses many questions regarding tumour initiation, progression and metastases as well as intra-tumour heterogeneity, treatment responses and resistance. Mathematical models can complement experimental and clinical studies, but also challenge current paradigms, redefine our understanding of mechanisms driving tumorigenesis and shape future research in cancer biology.
Self-interaction error in DFT-based modelling of ionic liquids.
Lage-Estebanez, Isabel; Ruzanov, Anton; García de la Vega, José M; Fedorov, Maxim V; Ivaništšev, Vladislav B
2016-01-21
Modern computer simulations of potential green solvents of the future, the room-temperature ionic liquids, rely heavily on density functional theory (DFT). In order to verify the appropriateness of common DFT methods, we have investigated the effect of the self-interaction error (SIE) on the results of DFT calculations for 24 ionic pairs and 48 ionic associates. The magnitude of the SIE is up to 40 kJ mol(-1), depending on the choice of anion. The SIE most strongly influences the calculated results for ionic associates that contain halide anions. For these associates, range-separated density functionals suppress the SIE; for other cases, the revPBE density functional with dispersion correction and a triple-ζ Slater-type basis is suitable for computationally inexpensive and reasonably accurate DFT calculations.
Mason, Ryan; Si, Meng; Li, Jixiao; Huffman, J. Alex; McCluskey, Christina; Levin, Ezra; Irish, Victoria; Chou, Cédric; Hill, Thomas; Ladino, Luis; Yakobi, Jacqueline; Schiller, Corinne; Abbatt, Jon; DeMott, Paul; Bertram, Allan
2014-05-01
Ice formation within a cloud system can significantly modify its lifetime and radiative forcing. Many current instruments for measuring atmospheric concentrations of ice nuclei (IN) are not capable of providing size-resolved information. Such knowledge is useful in identifying the sources of IN and predicting their transport in the atmosphere. Furthermore, those that use size discrimination to identify IN typically exclude particles with an aerodynamic diameter greater than 2.5 μm from analysis. Several studies have indicated this may be an important size regime for IN, particularly for those activating at warmer temperatures. The recently developed Micro-Orifice Uniform Deposit Impactor-droplet freezing technique (MOUDI-DFT) addresses these limitations by combining sample collection with a cascade impactor and an established immersion-freezing apparatus. Here we present a characterization of the MOUDI-DFT and the development of a modified technique that addresses experimental uncertainties arising from sample deposit inhomogeneity and the droplet freezing method. An intercomparison with a continuous-flow diffusion chamber (CFDC) was performed. We also show preliminary results from a campaign undertaken in a remote coastal region of western Canada. Correlations between atmospheric IN concentrations and the abundance of suspended submicron and supermicron particles, biological aerosols, carbonaceous aerosols, and prevailing meteorological conditions were investigated.
Bobovská, Adela; Tvaroška, Igor; Kóňa, Juraj
2016-05-01
Human Golgi α-mannosidase II (GMII), a zinc-ion-cofactor-dependent glycoside hydrolase (E.C.3.2.1.114), is a pharmaceutical target for the design of inhibitors with anti-cancer activity. The discovery of an effective inhibitor is complicated by the fact that all known potent inhibitors of GMII also cause unwanted co-inhibition of lysosomal α-mannosidase (LMan, E.C.3.2.1.24), a close relative of GMII. Routine empirical QSAR models for both GMII and LMan did not achieve the required accuracy. Therefore, we have developed a fast computational protocol to build predictive models combining interaction energy descriptors from an empirical docking scoring function (Glide-Schrödinger), the Linear Interaction Energy (LIE) method, and quantum mechanical density functional theory (QM-DFT) calculations. The QSAR models were built and validated with a library of structurally diverse GMII and LMan inhibitors and non-active compounds. A critical role of QM-DFT descriptors in the more accurate prediction abilities of the models is demonstrated. The predictive ability of the models improved significantly when going from the empirical docking scoring function to mixed empirical-QM-DFT QSAR models (Q(2)=0.78-0.86 under cross-validation, and R(2)=0.81-0.83 for a testing set). The average error of the predicted ΔGbind decreased to 0.8-1.1 kcal mol(-1). Also, 76-80% of non-active compounds were successfully filtered out from GMII and LMan inhibitors. QSAR models with fragmented QM-DFT descriptors may find useful application in structure-based drug design where purely empirical and force-field methods reach their limits and where quantum mechanical effects are critical for ligand-receptor interactions. The optimized models will be applied in lead-optimization campaigns for GMII drug development.
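The cross-validated Q(2) statistic quoted above can be illustrated with a leave-one-out procedure. The sketch below uses a simple least-squares line and made-up score/affinity pairs; the actual LIE/QM-DFT descriptors and regression are not reproduced here:

```python
def q2_loo(x, y):
    """Leave-one-out cross-validated Q^2 for a simple least-squares line.

    Each point is predicted from a line fitted to the remaining points;
    Q^2 = 1 - PRESS / TSS, i.e. a cross-validated analogue of R^2.
    """
    n = len(x)
    press = 0.0
    for i in range(n):
        xs = [x[j] for j in range(n) if j != i]
        ys = [y[j] for j in range(n) if j != i]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        sxx = sum((v - mx) ** 2 for v in xs)
        sxy = sum((xs[k] - mx) * (ys[k] - my) for k in range(len(xs)))
        slope = sxy / sxx
        intercept = my - slope * mx
        press += (y[i] - (slope * x[i] + intercept)) ** 2
    my_all = sum(y) / n
    tss = sum((v - my_all) ** 2 for v in y)
    return 1.0 - press / tss

# hypothetical docking scores (x) vs. measured binding free energies (y)
scores = [-8.1, -7.4, -6.9, -6.1, -5.5, -4.8]
dg     = [-9.0, -8.2, -7.6, -6.8, -6.0, -5.1]
q2 = q2_loo(scores, dg)
```

A Q^2 close to 1 indicates that each compound is well predicted by a model trained without it, which is the sense in which the abstract's 0.78-0.86 range is reported.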
DFT calculations of EPR parameters for copper(II)-exchanged zeolites using cluster models.
Ames, William M; Larsen, Sarah C
2010-01-14
The coordination environment of Cu(II) in hydrated copper-exchanged zeolites was explored through the use of density functional theory (DFT) calculations of EPR parameters. Extensive experimental EPR data are available in the literature for hydrated copper-exchanged zeolites. The copper complex in hydrated copper-exchanged zeolites was previously proposed to be [Cu(H(2)O)(5)OH](+) based on empirical trends in tetragonal model complex EPR data. In this study, calculated EPR parameters for the previously proposed copper complex, [Cu(H(2)O)(5)OH](+), were compared to model complexes in which Cu(II) was coordinated to small silicate or aluminosilicate clusters as a first approximation of the impact of the zeolitic environment on the copper complex. Interpretation of the results suggests that Cu(II) is coordinated or closely associated with framework oxygen atoms within the zeolite structure. Additionally, it is proposed that the EPR parameters are dependent on the Si/Al ratio of the parent zeolite.
Quantitative structure - mesothelioma potency model ...
Cancer potencies of mineral and synthetic elongated particle (EP) mixtures, including asbestos fibers, are influenced by changes in fiber dose composition, bioavailability, and biodurability in combination with relevant cytotoxic dose-response relationships. A unique and comprehensive rat intra-pleural (IP) dose characterization data set with a wide variety of EP size, shape, crystallographic, chemical, and bio-durability properties facilitated extensive statistical analyses of 50 rat IP exposure test results for evaluation of alternative dose pleural mesothelioma response models. Utilizing logistic regression, maximum likelihood evaluations of thousands of alternative dose metrics based on hundreds of individual EP dimensional variations within each test sample, four major findings emerged: (1) data for simulations of short-term EP dose changes in vivo (mild acid leaching) provide superior predictions of tumor incidence compared to non-acid-leached data; (2) the sum of the EP surface areas (ΣSA) from these mildly acid-leached samples provides the optimum holistic dose-response model; (3) progressive removal of dose associated with very short and/or thin EPs significantly degrades the resultant ΣEP or ΣSA dose-based predictive model fits, as judged by Akaike's Information Criterion (AIC); and (4) alternative, biologically plausible model adjustments provide evidence for reduced potency of EPs with length/width (aspect) ratios 80 µm. Regar
Compositional and Quantitative Model Checking
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand
2010-01-01
on the existence of a quotient construction, allowing a property φ of a parallel system A | B to be transformed into a sufficient and necessary quotient property φ/A to be satisfied by the component B. Given a model checking problem involving a network Π and a property φ, the method gradually moves (by...
What have we learned from modeling carbohydrates using cutting edge (DFT) computational tools?
Over the last decade there have been vast improvements in computer speed and in the development of reliable density functional theory (DFT) methods. These improvements have allowed rigorous and systematic studies of carbohydrates to be carried out. Recent publications from this laboratory, in the ar...
A new era of carbohydrate modeling using cutting edge DFT methods
During the last several years we have seen vast improvements in the quality of carbohydrate simulations, including Density Functional Theory (DFT) structure determination and ab initio molecular dynamics studies of low energy conformers of mono-, di-, and larger saccharides at a level of theory that...
Quantitative Modeling of Earth Surface Processes
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
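As a flavor of the model class such textbooks cover, here is a minimal sketch of linear hillslope diffusion, dz/dt = D d²z/dx², solved with an explicit finite-difference scheme. The grid, diffusivity, and initial scarp profile below are hypothetical, not taken from the book:

```python
def diffuse_hillslope(z, dx, dt, D, steps):
    """Explicit finite-difference solution of dz/dt = D * d2z/dx2.

    Fixed-elevation boundaries; the scheme is stable for D*dt/dx**2 <= 0.5.
    """
    z = list(z)
    for _ in range(steps):
        znew = z[:]
        for i in range(1, len(z) - 1):
            znew[i] = z[i] + D * dt / dx**2 * (z[i+1] - 2*z[i] + z[i-1])
        z = znew
    return z

# hypothetical scarp: a 1 m elevation step in the middle of a 10-node grid
profile = [1.0] * 5 + [0.0] * 5
relaxed = diffuse_hillslope(profile, dx=1.0, dt=0.2, D=1.0, steps=200)
```

With these settings the step relaxes toward a straight ramp between the two fixed boundary elevations, the classic diffusive degradation of a fault scarp.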
Quantitative system validation in model driven design
DEFF Research Database (Denmark)
Hermanns, Hilger; Larsen, Kim Guldstrand; Raskin, Jean-Francois;
2010-01-01
The European STREP project Quasimodo1 develops theory, techniques and tool components for handling quantitative constraints in model-driven development of real-time embedded systems, covering in particular real-time, hybrid and stochastic aspects. This tutorial highlights the advances made, focus...
Recent trends in social systems quantitative theories and quantitative models
Hošková-Mayerová, Šárka; Soitu, Daniela-Tatiana; Kacprzyk, Janusz
2017-01-01
The papers collected in this volume focus on new perspectives on individuals, society, and science, specifically in the field of socio-economic systems. The book is the result of a scientific collaboration among experts from “Alexandru Ioan Cuza” University of Iaşi (Romania), “G. d’Annunzio” University of Chieti-Pescara (Italy), "University of Defence" of Brno (Czech Republic), and "Pablo de Olavide" University of Sevilla (Spain). The heterogeneity of the contributions presented in this volume reflects the variety and complexity of social phenomena. The book is divided into four sections as follows. The first section deals with recent trends in social decisions; specifically, it aims to identify the driving forces behind social decisions. The second section focuses on the social and public sphere and is oriented toward recent developments in social systems and control. Trends in quantitative theories and models are described in Section 3, where many new formal, mathematical-statistical to...
Shukla, Anuradha; Khan, Eram; Tandon, Poonam; Sinha, Kirti
2017-03-01
Ampicillin is a β-lactam antibiotic that is active against both gram-positive and gram-negative bacteria and is widely used for the treatment of infections. In this work, molecular properties of ampicillin are calculated using dimeric and tetrameric models at the DFT/B3LYP/6-311G(d,p) level. The HOMO-LUMO energy gap shows that the chemical reactivity of the tetrameric model of ampicillin is higher than that of the dimeric and monomeric models. To better understand intra- and intermolecular bonding and interactions among bonds, NBO analysis is carried out on the tetrameric model of ampicillin and complemented by a 'quantum theory of atoms in molecules' (QTAIM) analysis. The binding energy of the dimeric model of ampicillin is calculated as -26.84 kcal/mol and -29.34 kcal/mol using AIM and DFT calculations, respectively. The global electrophilicity index (ω = 2.8118 eV) of the tetrameric model shows that it behaves as a stronger electrophile than the dimeric and monomeric models. The FT-Raman and FT-IR spectra were recorded in the solid phase and interpreted in terms of potential energy distribution analysis. A combined theoretical and experimental vibrational analysis confirms the presence of hydrogen bonds in the ampicillin molecule.
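The quoted electrophilicity index follows from the standard conceptual-DFT relations ω = μ²/(2η), with chemical potential μ = (E_HOMO + E_LUMO)/2 and hardness η = (E_LUMO − E_HOMO)/2. A small sketch, using hypothetical orbital energies rather than the ampicillin values:

```python
def reactivity_descriptors(e_homo, e_lumo):
    """Global reactivity descriptors from frontier-orbital energies (eV),
    using the standard Koopmans/finite-difference approximations."""
    gap = e_lumo - e_homo              # HOMO-LUMO gap
    mu = (e_homo + e_lumo) / 2.0       # chemical potential
    eta = (e_lumo - e_homo) / 2.0      # chemical hardness
    omega = mu**2 / (2.0 * eta)        # global electrophilicity index
    return gap, mu, eta, omega

# hypothetical orbital energies (eV), for illustration only
gap, mu, eta, omega = reactivity_descriptors(-6.0, -1.5)
```

A smaller gap lowers η and (for fixed μ) raises ω, which is why the tetramer's narrower gap goes hand in hand with its stronger electrophilic character.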
Quantitative model validation techniques: new insights
Ling, You
2012-01-01
This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing...
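The interval-hypothesis idea can be sketched in a minimal setting: with a flat prior and known observation noise, the posterior probability that the mean model-experiment difference lies inside a tolerance interval is a difference of normal CDFs. The data, tolerance, and noise level below are hypothetical:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Standard normal CDF evaluated via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def interval_hypothesis_prob(differences, tol, sigma):
    """Posterior probability that the mean model-experiment difference
    lies within [-tol, +tol], assuming a flat prior and known noise sigma;
    the posterior for the mean is then N(xbar, sigma/sqrt(n))."""
    n = len(differences)
    xbar = sum(differences) / n
    se = sigma / math.sqrt(n)
    return normal_cdf(tol, xbar, se) - normal_cdf(-tol, xbar, se)

# hypothetical model-vs-experiment differences and a tolerance of 0.5
p_in = interval_hypothesis_prob([0.1, -0.2, 0.15, 0.05], tol=0.5, sigma=0.3)
```

A probability near 1 supports accepting the model at that tolerance; the acceptance threshold on this probability is the quantity the paper tunes to control model-selection risk.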
Building a Database for a Quantitative Model
Kahn, C. Joseph; Kleinhammer, Roger
2014-01-01
A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data is used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
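The metadata-key linking described above can be sketched with plain dictionaries; the keys, sources, rates, and stress factors below are hypothetical:

```python
# Each basic event carries a unique metadata key; the "database" maps that
# key to its data source and the manipulations applied to the base rate.
data_sources = {
    "VLV-001": {"source": "handbook data", "base_rate": 1.2e-6, "stress_factor": 2.0},
    "PMP-014": {"source": "flight history", "base_rate": 5.0e-7, "stress_factor": 1.0},
}

basic_events = [
    {"name": "Valve fails to open", "key": "VLV-001"},
    {"name": "Pump fails to start", "key": "PMP-014"},
]

def resolve_rate(event):
    """Trace a basic event to its data-source record via the shared
    metadata key and apply that record's manipulations."""
    rec = data_sources[event["key"]]
    return rec["base_rate"] * rec["stress_factor"]

rates = {e["name"]: resolve_rate(e) for e in basic_events}
```

The point of the shared key is exactly this traceability: any rate in the model can be walked back to its source record and the calculations applied to it.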
Directory of Open Access Journals (Sweden)
Mami Mutoh
2015-12-01
The ternary interaction system composed of fluorinated ethylene carbonate, denoted EC(F), lithium ion (Li+), and a model of nano-structured graphene has been investigated by means of the density functional theory (DFT) method. For comparison, fluorinated vinylene carbonate, denoted VC(F), was also used. A model of graphene consisting of 14 benzene rings was examined as the nano-structured graphene. The effects of fluorine substitution on the electronic state and binding energy were investigated from a theoretical point of view. It was found that both EC(F) and VC(F) bind to a hexagonal site corresponding to the central benzene ring of the graphene surface model. The binding energies of Li+-EC(F) and Li+-VC(F) to the graphene model decreased with increasing number of fluorine atoms (n).
Dry (CO2) reforming of methane over Pt catalysts studied by DFT and kinetic modeling
Niu, Juntian; Du, Xuesen; Ran, Jingyu; Wang, Ruirui
2016-07-01
Dry reforming of methane (DRM) is a well-studied reaction that is of both scientific and industrial importance. In order to design catalysts that minimize deactivation and improve the selectivity and activity for a high H2/CO yield, it is necessary to understand the elementary reaction steps involved in the activation and conversion of CO2 and CH4. In our present work, a microkinetic model based on density functional theory (DFT) calculations is applied to explore the reaction mechanism for methane dry reforming on Pt catalysts. The adsorption energies of the reactants, intermediates, and products, and the activation barriers for the elementary reactions involved in the DRM process, are calculated over the Pt(1 1 1) surface. In the process of CH4 direct dissociation, the kinetic results show that CH dissociative adsorption on the Pt(1 1 1) surface is the rate-determining step. CH appears to be the most abundant species on the Pt(1 1 1) surface, suggesting that carbon deposits do not form easily during CH4 dehydrogenation on Pt(1 1 1). In the process of CO2 activation, three possible reaction pathways are considered to contribute to CO2 decomposition: (I) CO2* + * → CO* + O*; (II) CO2* + H* → COOH* + * → CO* + OH*; (III) CO2* + H* → mono-HCOO* + * → bi-HCOO* + * [CO2* + H* → bi-HCOO* + *] → CHO* + O*. Path I must overcome an activation barrier of 1.809 eV, and the forward reaction is calculated to be strongly endothermic by 1.430 eV; the kinetic results also indicate that this process does not proceed readily on the Pt(1 1 1) surface. In contrast, CO2 activation by adsorbed H to form the COOH intermediate (Path II) proceeds much more readily, with a lower activation barrier of 0.746 eV. The C-O bond scission is the rate-determining step along this pathway, and the process needs to overcome an activation barrier of 1.522 eV. Path III reveals the CO2 activation through H adsorbed over the catalyst
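Barriers like those quoted can be converted into rate constants with the transition-state-theory expression k = (kB·T/h)·exp(−Ea/(kB·T)). The sketch below neglects the partition-function prefactor ratio and assumes a temperature of 1000 K purely for illustration; only the two barrier values are taken from the abstract:

```python
import math

KB_EV = 8.617333262e-5   # Boltzmann constant, eV/K
H_EV = 4.135667696e-15   # Planck constant, eV*s

def tst_rate(barrier_ev, temperature):
    """Transition-state-theory rate constant (1/s) with the kB*T/h
    prefactor, i.e. neglecting the entropic partition-function ratio."""
    return (KB_EV * temperature / H_EV) * math.exp(
        -barrier_ev / (KB_EV * temperature))

# barriers from the abstract: direct CO2 dissociation vs. the H-assisted path
T = 1000.0  # K, an assumed reforming temperature
k_direct = tst_rate(1.809, T)
k_assisted = tst_rate(0.746, T)
```

Even at this high temperature, the roughly 1 eV difference in barrier height makes the H-assisted route faster by several orders of magnitude, consistent with the abstract's conclusion that Path II dominates.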
Quantitative Models and Analysis for Reactive Systems
DEFF Research Database (Denmark)
Thrane, Claus
phones and websites. Acknowledging that now more than ever, systems come in contact with the physical world, we need to revise the way we construct models and verification algorithms, to take into account the behavior of systems in the presence of approximate, or quantitative information, provided by the environment in which they are embedded. This thesis studies the semantics and properties of a model-based framework for reactive systems, in which models and specifications are assumed to contain quantifiable information, such as references to time or energy. Our goal is to develop a theory of approximation, by studying how small changes to our models affect the verification results. A key source of motivation for this work can be found in The Embedded Systems Design Challenge [HS06] posed by Thomas A. Henzinger and Joseph Sifakis. It contains a call for advances in the state-of-the-art of systems verification...
Gogoi, Dhrubajyoti; Chaliha, Amrita Kashyap; Sarma, Diganta; Kakoti, Bibhuti Bhusan; Buragohain, Alak Kumar
2017-01-01
Ligand- and structure-based pharmacophore models were used to identify the important chemical features of butyrylcholinesterase (BChE) inhibitors. A training set of 16 known, structurally diverse compounds with a wide range of inhibitory activity against BChE was used to develop a quantitative ligand-based pharmacophore model (Hypo1) for identifying novel BChE inhibitors in virtual screening databases. A structure-based pharmacophore hypothesis (Phar1) was also developed, taking the ligand-binding site of BChE into consideration. The models were validated using test-set, Fisher's randomization, and leave-one-out validation methods. The well-validated pharmacophore hypotheses were then used as 3D queries in virtual screening, and 430 compounds were selected for molecular docking analysis. Subsequently, ADMET, DFT, and chemical similarity searches were employed to narrow the list down to seven compounds as potential drug candidates. Analogues of the best hit were further developed through a bioisosterism-guided approach to generate a library of potential BChE inhibitors.
Quantitative Models and Analysis for Reactive Systems
DEFF Research Database (Denmark)
Thrane, Claus
phones and websites. Acknowledging that now more than ever, systems come in contact with the physical world, we need to revise the way we construct models and verification algorithms, to take into account the behavior of systems in the presence of approximate, or quantitative information, provided ... allowing verification procedures to quantify judgements on how suitable a model is for a given specification, hence mitigating the usual harsh distinction between satisfactory and non-satisfactory system designs. This information, among other things, allows us to evaluate the robustness of our framework, by studying how small changes to our models affect the verification results. A key source of motivation for this work can be found in The Embedded Systems Design Challenge [HS06] posed by Thomas A. Henzinger and Joseph Sifakis. It contains a call for advances in the state-of-the-art of systems verification...
DFT MODELING OF BENZOYL PEROXIDE ADSORPTION ON α-Cr2O3 (0001) SURFACE
Maldonado, Frank; Stashans, Arvids
2016-04-01
Density functional theory (DFT) within the generalized gradient approximation (GGA) has been used to investigate possible adsorption configurations of the benzoyl peroxide (BPO) molecule on the chromium oxide (α-Cr2O3) (0001) surface. Two configurations are found to lead to molecular adsorption, with corresponding adsorption energies of -0.16 and -0.48 eV, respectively. Our work describes in detail the atomic displacements of both the crystalline surface and the adsorbate, and discusses the electronic and magnetic properties of the system. The most favorable adsorption case occurs when a chemical bond forms between one of the molecular oxygens and one of the surface Cr atoms.
DFT Study on the Antioxidant Activity of a Modeled p-Terphenyl Derivative
Institute of Scientific and Technical Information of China (English)
WANG Chuan-Ming; PAN Xu-Lin
2012-01-01
Relationships between the structural characteristics of natural p-terphenyl compounds isolated from three edible mushrooms (Thelephora ganbajun, Thelephora aeronautical, and Boletopsis grisea) indigenous to China and their mechanism of antioxidant activity were studied. Geometry structures of the terphenyl molecule and four corresponding radicals, bond dissociation energies (BDE), frontier orbitals (HOMO and LUMO), and single-electron densities were calculated using DFT methods (B3LYP/6-311G**). The computational results, which agree well with the experimental data, show that the terphenyl molecule scavenges the DPPH radical by a hydrogen-abstraction mechanism and that the high antioxidant activity depends on the substitution position of the hydroxyls. The two active 7- and 8-hydroxyls facilitate hydrogen abstraction due to the intramolecular hydrogen bond, and the resonance effect makes the 4-hydroxyl radical more stable.
Quantitative bioluminescence imaging of mouse tumor models.
Tseng, Jen-Chieh; Kung, Andrew L
2015-01-05
Bioluminescence imaging (BLI) has become an essential technique for preclinical evaluation of anticancer therapeutics and provides sensitive and quantitative measurements of tumor burden in experimental cancer models. For light generation, a vector encoding firefly luciferase is introduced into human cancer cells that are grown as tumor xenografts in immunocompromised hosts, and the enzyme substrate luciferin is injected into the host. Alternatively, the reporter gene can be expressed in genetically engineered mouse models to determine the onset and progression of disease. In addition to expression of an ectopic luciferase enzyme, bioluminescence requires oxygen and ATP, thus only viable luciferase-expressing cells or tissues are capable of producing bioluminescence signals. Here, we summarize a BLI protocol that takes advantage of advances in hardware, especially the cooled charge-coupled device camera, to enable detection of bioluminescence in living animals with high sensitivity and a large dynamic range.
Quantitative assessment model for gastric cancer screening
Institute of Scientific and Technical Information of China (English)
Kun Chen; Wei-Ping Yu; Liang Song; Yi-Min Zhu
2005-01-01
AIM: To set up a mathematical model for gastric cancer screening and to evaluate its function in mass screening for gastric cancer. METHODS: A case-control study was carried out on 66 patients and 198 normal people, and the risk and protective factors for gastric cancer were determined, including heavy manual work, foods such as small yellow-fin tuna, dried small shrimps, squills, and crabs, mothers suffering from gastric diseases, spouse alive, use of refrigerators, and hot food, etc. According to principles and methods of probability and fuzzy mathematics, a quantitative assessment model was established as follows: first, we selected factors significant in statistics and calculated a weight coefficient for each one by two different methods; second, the population space was divided into a gastric cancer fuzzy subset and a non-gastric-cancer fuzzy subset, a mathematical model was established for each subset, and we obtained a mathematical expression for the attribute degree (AD). RESULTS: Based on the data of 63 patients and 693 normal people, the AD of each subject was calculated. Considering sensitivity and specificity, the thresholds of the calculated AD values were set at 0.20 and 0.17, respectively. According to these thresholds, the sensitivity and specificity of the quantitative model were about 69% and 63%, respectively. Moreover, a statistical test showed that the identification outcomes of the two different calculation methods were identical (P>0.05). CONCLUSION: The validity of this method is satisfactory. It is convenient, feasible, and economical, and can be used to determine individual and population risks of gastric cancer.
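The reported sensitivity and specificity follow from thresholding the AD values against the true case/control labels. A sketch with hypothetical AD values and the study's 0.20 threshold (the actual AD formula is not reproduced here):

```python
def sensitivity_specificity(scores, labels, threshold):
    """Classify score >= threshold as positive and compare with the true
    labels (1 = case, 0 = control); return (sensitivity, specificity)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical attribute-degree (AD) values; threshold 0.20 as in the study
ad     = [0.31, 0.25, 0.18, 0.22, 0.10, 0.19, 0.05, 0.28]
labels = [1,    1,    1,    0,    0,    0,    0,    1]
sens, spec = sensitivity_specificity(ad, labels, 0.20)
```

Moving the threshold trades sensitivity against specificity, which is how the study arrives at its chosen cut-offs of 0.20 and 0.17.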
Global Quantitative Modeling of Chromatin Factor Interactions
Zhou, Jian; Troyanskaya, Olga G.
2014-01-01
Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the “chromatin codes”) remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles: we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896
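A pairwise maximum-entropy model over binary factor vectors takes the Ising-like form P(x) ∝ exp(Σᵢ hᵢxᵢ + Σᵢ<ⱼ Jᵢⱼxᵢxⱼ). The toy three-factor sketch below, with made-up parameters rather than the modENCODE fit, shows how a positive coupling raises the probability of co-occurrence:

```python
import math
from itertools import product

def maxent_weight(x, h, J):
    """Unnormalized pairwise maximum-entropy weight for a binary vector x."""
    e = sum(h[i] * x[i] for i in range(len(x)))
    e += sum(J[i][j] * x[i] * x[j]
             for i in range(len(x)) for j in range(i + 1, len(x)))
    return math.exp(e)

# toy model: 3 chromatin factors; J[0][1] > 0 couples factors 0 and 1
h = [-1.0, -1.0, -2.0]
J = [[0.0, 2.0, 0.0],
     [0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]

states = list(product([0, 1], repeat=3))
Z = sum(maxent_weight(s, h, J) for s in states)   # partition function
P = {s: maxent_weight(s, h, J) / Z for s in states}
```

Here the state with factors 0 and 1 present together outweighs the state with factor 0 alone, the basic mechanism by which fitted couplings capture combinatorial chromatin patterns.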
Quantitative Modeling of Landscape Evolution, Treatise on Geomorphology
Temme, A.J.A.M.; Schoorl, J.M.; Claessens, L.F.G.; Veldkamp, A.; Shroder, F.S.
2013-01-01
This chapter reviews quantitative modeling of landscape evolution – which means that not just model studies but also modeling concepts are discussed. Quantitative modeling is contrasted with conceptual or physical modeling, and four categories of model studies are presented. Procedural studies focus
The quantitative modelling of human spatial habitability
Wise, J. A.
1985-01-01
A model for the quantitative assessment of human spatial habitability is presented in the space station context. The model comprises three aspects: visual, kinesthetic, and social logic. The visual aspect assesses how interior spaces appear to the inhabitants, with criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements; here, the criteria include privacy, status, social power, and proxemics (the use of space as a medium of social communication).
Toward quantitative modeling of silicon phononic thermocrystals
Energy Technology Data Exchange (ETDEWEB)
Lacatena, V. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France); IEMN UMR CNRS 8520, Institut d'Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d'Ascq (France); Haras, M.; Robillard, J.-F., E-mail: jean-francois.robillard@isen.iemn.univ-lille1.fr; Dubois, E. [IEMN UMR CNRS 8520, Institut d'Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d'Ascq (France); Monfray, S.; Skotnicki, T. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France)]
2015-03-16
The wealth of patterning technologies with deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of 'thermocrystals' or 'nanophononic crystals', which introduce regular nano-scale inclusions with a pitch between the thermal-phonon mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced by up to two orders of magnitude with respect to its bulk value. Beyond the promise these materials hold for overcoming the well-known “electron crystal-phonon glass” dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution: bulk, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After discussing the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act to reduce the thermal conductivity. The further decrease in the phononic-engineered membrane clearly demonstrates that the two phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.
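The Green-Kubo methodology mentioned above estimates thermal conductivity from the time integral of the heat-flux autocorrelation, κ = V/(kB·T²)·∫⟨J(0)J(t)⟩dt. The sketch below applies a rectangle-rule version to a synthetic correlated flux signal; every number here is a placeholder, not the paper's simulation data:

```python
import random

def autocorrelation(series, max_lag):
    """Sample autocorrelation of a time series up to max_lag - 1."""
    n = len(series)
    mean = sum(series) / n
    dev = [s - mean for s in series]
    return [sum(dev[t] * dev[t + lag] for t in range(n - lag)) / (n - lag)
            for lag in range(max_lag)]

def green_kubo_kappa(flux, dt, volume, temperature, kB, max_lag):
    """kappa = V / (kB * T^2) * integral of the heat-flux autocorrelation,
    integrated with a simple rectangle rule up to max_lag steps."""
    acf = autocorrelation(flux, max_lag)
    return volume / (kB * temperature**2) * sum(acf) * dt

# toy correlated "heat flux": an AR(1) process with a fixed seed
rng = random.Random(42)
flux, j = [], 0.0
for _ in range(20000):
    j = 0.8 * j + rng.gauss(0.0, 1.0)
    flux.append(j)

kappa = green_kubo_kappa(flux, dt=1.0, volume=1.0, temperature=300.0,
                         kB=1.380649e-23, max_lag=50)
```

In a real calculation the flux would come from the molecular dynamics trajectory and the integral would be converged with respect to both run length and cutoff lag.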
Lee, Taehun; Soon, Aloysius
2012-02-01
For high-temperature applications, the chemical stability as well as the mechanical integrity of the oxide material used is of utmost importance. Solving these problems demands a thorough and fundamental understanding of their thermal-elastic properties. In this work, we report density-functional theory (DFT) calculations investigating the influence of the xc functional on specific thermal-elastic properties of the common oxides CeO2, Cu2O, and MgO. Namely, we consider the local-density approximation (LDA), the generalized gradient approximation due to Perdew, Burke, and Ernzerhof (GGA-PBE), as well as the recently popularized hybrid functional due to Heyd, Scuseria, and Ernzerhof (HSE06). In addition, we also report DFT+U results in which a Hubbard U term is applied to the Cu 3d and Ce 4f states. Upon obtaining the DFT total energies, we couple them to a volume-dependent Debye-Grüneisen model [1] to determine the thermodynamic quantities of these oxides at arbitrary pressures and temperatures. We find that an explicit description of the strong correlation (e.g., via the DFT+U approach or HSE06) is necessary for good agreement with experimental values. [1] A. Otero-de-la-Roza, D. Abbasi-Pérez et al., Comput. Phys. Commun. 182 (2011) 2232.
CO2 adsorption-assisted CH4 desorption on carbon models of coal surface: A DFT study
Xu, He; Chu, Wei; Huang, Xia; Sun, Wenjing; Jiang, Chengfa; Liu, Zhongqing
2016-07-01
Injection of CO2 into coal is known to improve the yield of coal-bed methane gas. However, the technology of CO2 injection-enhanced coal-bed methane (CO2-ECBM) recovery is still in its infancy, and its mechanism remains unclear. Density functional theory (DFT) calculations were performed to elucidate the mechanism of CO2 adsorption-assisted CH4 desorption (AAD). To simulate coal surfaces, different six-ring aromatic clusters (2 × 2, 3 × 3, 4 × 4, 5 × 5, 6 × 6, and 7 × 7) were used as simplified graphene (Gr) carbon models. The adsorption and desorption of CH4 and/or CO2 on these carbon models were assessed. The results showed that the 4 × 4 six-ring aromatic cluster model can represent the coal surface to a reasonable approximation. CO2 adsorbs more strongly onto these carbon models than CH4 does; the adsorption energies of single CH4 and CO2 at the more stable site were -15.58 and -18.16 kJ/mol, respectively. When the two molecules (CO2 and CH4) interact with the surface, CO2 compels CH4 to adsorb at the less stable site, significantly decreasing the adsorption energy of CH4 on the surface of the carbon model with pre-adsorbed CO2. The Mulliken charges and electrostatic potentials of CH4 and CO2 adsorbed onto the surface of the carbon model were compared to determine their respective adsorption activities and changes. At the molecular level, our results show that adsorption of the injected CO2 promotes the desorption of CH4, which is the underlying mechanism of CO2-ECBM.
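Adsorption energies such as the −15.58 kJ/mol quoted above are differences of DFT total energies, E_ads = E(surface+molecule) − E(surface) − E(molecule). In the sketch below the total energies are invented so that the difference reproduces the quoted CH4 value; they are not the paper's numbers:

```python
HARTREE_TO_KJMOL = 2625.4996  # 1 hartree in kJ/mol

def adsorption_energy(e_total, e_surface, e_molecule):
    """E_ads = E(surface+molecule) - E(surface) - E(molecule), in kJ/mol.
    Negative values indicate favorable adsorption."""
    return (e_total - e_surface - e_molecule) * HARTREE_TO_KJMOL

# hypothetical DFT total energies in hartree, chosen for illustration so
# that the difference matches the quoted CH4 adsorption energy
e_ads_ch4 = adsorption_energy(-1540.005934, -1499.600000, -40.400000)
```

The same expression with the co-adsorbed CO2 present gives the weakened CH4 adsorption energy that underlies the assisted-desorption picture.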
Modeling conflict : research methods, quantitative modeling, and lessons learned.
Energy Technology Data Exchange (ETDEWEB)
Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.
2004-09-01
This study investigates the factors that lead countries into conflict. Specifically, political, social, and economic factors may offer insight into how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both retrospectively and for future insight. The analysis concentrates specifically on the system dynamics paradigm, rather than the political-science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempts at modeling conflict as a result of system-level interactions. This study presents modeling efforts built on limited data and working literature paradigms, along with recommendations for future attempts at modeling conflict.
Hansen, Niels; Kerber, Torsten; Sauer, Joachim; Bell, Alexis T; Keil, Frerich J
2010-08-25
The alkylation of benzene by ethene over H-ZSM-5 is analyzed by means of a hybrid MP2:DFT scheme. Density functional calculations applying periodic boundary conditions (PBE functional) are combined with MP2 energy calculations on a series of cluster models of increasing size, which allows extrapolation to the periodic MP2 limit. Basis set truncation errors are estimated by extrapolation of the MP2 energy to the complete basis set limit. Contributions from higher-order correlation effects are accounted for by CCSD(T) coupled cluster calculations. The sum of all contributions provides the "final estimates" for adsorption energies and energy barriers. Dispersion contributes significantly to the potential energy surface. As a result, the MP2:DFT potential energy profile is shifted downward compared to the PBE profile. More importantly, this shift is not the same for reactants and transition structures, due to different self-interaction correction errors. The final enthalpies for ethene, benzene, and ethylbenzene adsorption on the Brønsted acid site at 298 K are -46, -78, and -110 kJ/mol, respectively. The intrinsic enthalpy barriers at 653 K are 117 and 119/94 kJ/mol for the one- and two-step alkylation, respectively. Intrinsic rate coefficients calculated by means of transition state theory are converted to apparent Arrhenius parameters by means of the multicomponent adsorption equilibrium. The simulated apparent activation energy (66 kJ/mol) agrees with experimental data (58-76 kJ/mol) within the uncertainty limit of the calculations. Adsorption energies obtained by adding a damped dispersion term to the PBE energies (PBE+D) agree within ±7 kJ/mol with the "final estimates", except for physisorption (pi-complex formation) and chemisorption of ethene (ethoxide formation), for which the PBE+D energies are 12.4 and 26.0 kJ/mol larger, respectively, than the "final estimates". For intrinsic energy barriers, the PBE+D approach does not improve on pure PBE results.
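The transition-state-theory step mentioned above can be sketched with the Eyring equation. The barriers and temperature below are the abstract's values; neglecting the activation entropy (so the enthalpy barrier stands in for the free-energy barrier) is a simplifying assumption of mine, not the paper's procedure:

```python
import math

KB = 1.380649e-23          # Boltzmann constant, J/K
H_PLANCK = 6.62607015e-34  # Planck constant, J*s
R = 8.314462618            # gas constant, J/(mol*K)

def eyring_rate(barrier_kj_mol: float, temperature: float) -> float:
    """TST rate constant k = (kB*T/h) * exp(-dH/(R*T)); activation entropy neglected."""
    return (KB * temperature / H_PLANCK) * math.exp(
        -barrier_kj_mol * 1e3 / (R * temperature))

T = 653.0  # K, the temperature quoted for the intrinsic barriers
k_one_step = eyring_rate(117.0, T)  # one-step alkylation barrier
k_two_step = eyring_rate(119.0, T)  # first barrier of the two-step route

# A 2 kJ/mol higher barrier slows the step by a modest factor at 653 K.
assert k_one_step > k_two_step
```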
Qualitative vs. quantitative software process simulation modelling: conversion and comparison
Zhang, He; Kitchenham, Barbara; Jeffery, Ross
2009-01-01
Software Process Simulation Modeling (SPSM) research has increased in the past two decades. However, most of these models are quantitative, requiring detailed understanding and accurate measurement. Continuing our previous studies on qualitative modeling of software processes, this paper aims to investigate the structural equivalence and model conversion between quantitative and qualitative process modeling, and to compare the characteristics and performance o...
Gorgannezhad, Lena; Dehghan, Gholamreza; Ebrahimipour, S. Yousef; Naseri, Abdolhossein; Nazhad Dolatabadi, Jafar Ezzati
2016-04-01
The complex formation between curcumin (Cur) and manganese(II) chloride tetrahydrate (MnCl2·4H2O) was studied by UV-Vis and IR spectroscopy. The spectroscopic data suggest that Cur can chelate manganese cations. A simple multi-wavelength model-based method was used to determine the stability constant of the complexation reaction despite the spectral overlap of the components. Pure spectra and concentration profiles of all components were also extracted using this method. Density functional theory (DFT) was used to gain insight into the complexation mechanism. The antioxidant activity of Cur and the Cur-Mn(II) complex was evaluated using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) scavenging method. The bond dissociation energy (BDE), highest occupied molecular orbital (HOMO), lowest unoccupied molecular orbital (LUMO), and molecular electrostatic potential (MEP) of Cur and the complex were also calculated at the PW91/TZ2P level of theory using the ADF 2009.01 package. The experimental results show that Cur has a higher DPPH radical scavenging activity than Cur-Mn(II). This observation is theoretically justified by the lower BDE and higher HOMO and LUMO energy values of the Cur ligand as compared with those of the Cur-Mn(II) complex.
Quantitative modelling of the biomechanics of the avian syrinx
Elemans, C.P.H.; Larsen, O.N.; Hoffmann, M.R.; Leeuwen, van J.L.
2003-01-01
We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e. the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts
Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization
Directory of Open Access Journals (Sweden)
Xuefeng Yan
2013-01-01
The simulation and optimization of an actual physical system are usually constructed based on stochastic models, which inherently have both qualitative and quantitative characteristics. Most modeling specifications and frameworks find it difficult to describe the qualitative model directly. In order to deal with expert knowledge, uncertain reasoning, and other qualitative information, a combined qualitative and quantitative modeling specification was proposed based on a hierarchical model structure framework. The new modeling approach is based on a hierarchical model structure which includes the meta-meta model, the meta-model, and the high-level model. A description logic system is defined for formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The result shows that the proposed method can describe a complex system more comprehensively, and that introducing qualitative models into quantitative simulation yields a higher survival probability for the target.
Grahnen, Johan A; Amunson, Krista E; Kubelka, Jan
2010-10-14
Infrared (IR) amide I' spectra are widely used for investigations of the structural properties of proteins in aqueous solution. For analysis of the experimental data, it is necessary to separate the spectral features due to the backbone conformation from those arising from other factors, in particular the interaction with solvent. We investigate the effects of solvation on amide I' spectra for a small 40-residue helix-turn-helix protein by theoretical simulations based on density functional theory (DFT). The vibrational force fields and intensity parameters for the protein amide backbone are constructed by transfer from smaller heptaamide fragments; the side chains are neglected in the DFT calculations. Solvent is modeled at two different levels: first as explicit water hydrogen bonded to the surface amide groups, treated at the same DFT level, and, second, using the electrostatic map approach combined with molecular dynamics (MD) simulation. Motional narrowing of the spectral band shapes due to averaging over the fast solvent fluctuation is introduced by use of the time-averaging approximation (TAA). The simulations are compared with the experimental amide I', including two (13)C isotopically edited spectra, corrected for the side-chain signals. Both solvent models are consistent with the asymmetric experimental band shape, which arises from the differential solvation of the amide backbone. However, the effects of (13)C isotopic labeling are best captured by the gas-phase calculations. The limitations of the solvent models and implications for the theoretical simulations of protein amide vibrational spectra are discussed.
Towards a generalized iso-density continuum model for molecular solvents in plane-wave DFT
Gunceler, Deniz; Arias, T. A.
2017-01-01
Implicit electron-density solvation models offer a computationally efficient solution to the problem of calculating thermodynamic quantities of solvated systems from first-principles quantum mechanics. However, despite much recent interest in such models, to date the applicability of such models in the plane-wave context to non-aqueous solvents has been limited because the determination of the model parameters requires fitting to a large database of experimental solvation energies for each new solvent considered. This work presents a simple approach to quickly find approximations to the non-electrostatic contributions to the solvation energy, allowing for development of new iso-density models for a large class of protic and aprotic solvents from only simple, single-molecule ab initio calculations and readily available bulk thermodynamic data. Finally, to illustrate the capabilities of the resulting theory, we also calculate the surface solvation energies of crystalline LiF in various different non-aqueous solvents, and discuss the observed trends and their relevance to lithium battery technology.
DFT modeling of adsorption of formaldehyde and methanediol anion on the (111) face of IB metals
Starodubov, S. S.; Nechaev, I. V.; Vvedenskii, A. V.
2016-01-01
Gas-phase adsorption of formaldehyde and gas- and liquid-phase adsorption of the methanediol anion on the (111) face of copper, silver, and gold were modeled in terms of density functional theory and the cluster model of the metal single-crystal surface. In the gas phase, formaldehyde was found to be physically adsorbed on the metals, while the methanediol anion was found to be chemisorbed; it exists on the surface in two different stable states. In aqueous solution, the H3CO2- anion can spontaneously dissociate into the formate ion and two hydrogen atoms.
Noble gas encapsulation into carbon nanotubes: Predictions from analytical model and DFT studies
Balasubramani, Sree Ganesh; Singh, Devendra; Swathi, R. S.
2014-11-01
The energetics of the interaction of noble gas atoms with carbon nanotubes (CNTs) are investigated using an analytical model and density functional theory calculations. Encapsulation of the noble gas atoms He, Ne, Ar, Kr, and Xe into CNTs of various chiralities is studied in detail using an analytical model developed earlier by Hill and co-workers. The constrained motion of the noble gas atoms along the axes of the CNTs, as well as the off-axis motion, is discussed. Analyses of the forces, interaction energies, and acceptance and suction energies for encapsulation enable us to predict the optimal CNTs that can encapsulate each of the noble gas atoms. We find that CNTs of radii 2.98-4.20 Å (chiral indices (5,4), (6,4), (9,1), (6,6), and (9,3)) can efficiently encapsulate the He, Ne, Ar, Kr, and Xe atoms, respectively. Endohedral adsorption of all the noble gas atoms is preferred over exohedral adsorption on the various CNTs. The results obtained using the analytical model are subsequently compared with calculations performed with dispersion-including density functional theory at the M06-2X level using a triple-zeta basis set, and good qualitative agreement is found. The analytical model is, however, computationally cheap, as its equations can be programmed numerically and the results obtained in far less time.
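Hill-type analytical models of encapsulation are built from the 12-6 Lennard-Jones pair potential integrated over the tube wall. The minimal sketch below shows only the pair potential and its well depth; the He-C parameters are illustrative assumptions of mine, not values from the study:

```python
import math

def lennard_jones(r: float, epsilon: float, sigma: float) -> float:
    """12-6 Lennard-Jones pair potential; epsilon is the well depth, sigma the zero-crossing."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# Illustrative He-C parameters (assumed for the sketch, not taken from the study):
EPS, SIG = 0.12, 3.0  # kJ/mol, Angstrom

# The pair potential has its minimum at r = 2^(1/6) * sigma, with value -epsilon.
r_min = 2.0 ** (1.0 / 6.0) * SIG
assert abs(lennard_jones(r_min, EPS, SIG) + EPS) < 1e-9
```

In the full analytical model this potential is integrated over the nanotube surface to obtain acceptance and suction energies, from which the optimal tube radius for each noble gas follows.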
Modeling the active sites of Co-promoted MoS2 particles by DFT
DEFF Research Database (Denmark)
Šaric, Manuel; Rossmeisl, Jan; Moses, Poul Georg
2017-01-01
The atomic-scale structure of the Co-promoted MoS2 catalyst (CoMoS), used for hydrodesulfurization and as a potential replacement for platinum in the acidic hydrogen evolution reaction has been analyzed by modeling its sites using density functional theory and applying thermochemical corrections...
Shishlov, N. M.; Akhmetzyanov, Sh S.; Khursan, S. L.
2017-02-01
Experimental IR spectra of crystalline dried and non-dried potassium diphenylsulfophthalide (TAC-K) as a model compound for polymeric salts are presented. DFT analysis (B3LYP/6-311G(d,p)) of the structure and IR spectra of a series of compounds similar in structure to TAC-K, as well as their dimers, indicates that the sulfonate group environment strongly affects the positions of the absorption bands (ABs) of S-O bond vibrations, and demonstrates that information on the exact structure of the ion clusters is needed for reliable and unambiguous assignment of the ABs in experimental IR spectra of real sulfonate-ion-containing systems to particular vibrational modes. Various ways of metal ion coordination with the sulfonate ion, as well as their reflection in the IR spectra of the model compounds, are considered and discussed. Using TAC-K as an example, the effect of an intramolecular hydrogen bond on the vibrational modes of the sulfonate and hydroxy groups is considered. The effect of ion aggregation on the shape of the IR spectrum of TAC-K is analyzed for an energetically favorable dimer of this salt as an example. Based on a combination of calculated, literature, and reference data, a number of ABs in the IR spectra of TAC-K have been tentatively assigned. In particular, the bands in the region of 3230-3180 cm-1 have been assigned to ν(O-H); those at 1240-1160 cm-1, to νas(SO3-); the AB at 1080 cm-1, to νs(SO3-); that at 616 cm-1, to δ(oop)s(SO3-); and that at 570 cm-1, to δ(ip)as(SO3-).
Ivanov, Petko
2016-03-01
The balance of interactions was studied by computational methods for the translational isomers of a solvent-switchable fullerene-stoppered [2]rotaxane (1) manifesting unexpected behavior, namely that, due to favorable dispersion interactions, the fullerene stopper becomes the second station upon a change of solvent. For comparison, another system, a pH-switchable molecular shuttle (2), was also examined as an example of prevailing electrostatic interactions. Tested for 1 were five global hybrid Generalized Gradient Approximation functionals (B3LYP, B3LYP-D3, B3LYP-D3BJ, PBEh1PBE and APFD), one long-range corrected, range-separated functional with D2 empirical dispersion correction (ωB97XD), the Zhao-Truhlar hybrid meta-GGA functional M06 with double the amount of nonlocal exchange (2X), and a pure functional, B97, with Grimme's D3BJ dispersion (B97D3). The molecular mechanics method qualitatively correctly reproduced the behavior of the [2]rotaxanes, whereas the DFT models, except for M06-2X to some extent, failed in the case of significant dispersion interactions with participation of the fulleropyrrolidine stopper (rotaxane 1). Unexpectedly, the benzylic amide macrocycle tends to adopt a 'boat'-like conformation in most cases. Four hydrogen bonds interconnect the axle with the wheel in the translational isomer with the macroring at the succinamide station (station II), whereas the number of hydrogen bonds varies for the isomer with the macroring at the fulleropyrrolidine stopper (station I), depending on the computational model used. The B3LYP and PBEh1PBE results show a strong preference for station II in the gas phase and in the model solvent DMSO. After including the empirical dispersion correction, the translational isomer with the macroring at station I has the lower energy with B3LYP, both in the gas phase and in DMSO. The same result, but with a higher preference for station I, was estimated with APFD, ωB97XD and B97D3. Only M06-2X
Quantitative models for sustainable supply chain management
DEFF Research Database (Denmark)
Brandenburg, M.; Govindan, Kannan; Sarkis, J.
2014-01-01
Sustainability, the consideration of environmental factors and social aspects, in supply chain management (SCM) has become a highly relevant topic for researchers and practitioners. The application of operations research methods and related models, i.e. formal modeling, for closed-loop SCM and reverse logistics has been effectively reviewed in previously published research. This situation is in contrast to the understanding and review of mathematical models that focus on environmental or social factors in forward supply chains (SC), which has seen less investigation. To evaluate developments...
Quantitative magnetospheric models: results and perspectives.
Kuznetsova, M.; Hesse, M.; Gombosi, T.; Csem Team
Global magnetospheric models are an indispensable tool that allows multi-point measurements to be put into global context. Significant progress has been achieved in global MHD modeling of magnetosphere structure and dynamics. Medium-resolution simulations confirm the general topological picture suggested by Dungey. State-of-the-art global models with adaptive grids allow simulations with a highly resolved magnetopause and magnetotail current sheet. Advanced high-resolution models are capable of reproducing transient phenomena, such as FTEs associated with the formation of flux ropes or plasma bubbles embedded in the magnetopause, and demonstrate the generation of vortices at the magnetospheric flanks. On the other hand, there is still controversy about the global state of the magnetosphere predicted by MHD models, to the point of questioning the length of the magnetotail and the location of the reconnection sites within it. For example, for steady southward IMF driving conditions, resistive MHD simulations produce a steady configuration with an almost stationary near-Earth neutral line, while there is plenty of observational evidence of a periodic loading-unloading cycle during long periods of southward IMF. Successes and challenges in global modeling of magnetospheric dynamics will be addressed. One of the major challenges is to quantify the interaction between large-scale global magnetospheric dynamics and microphysical processes in diffusion regions near reconnection sites. Possible solutions to these controversies will be discussed.
Hazard Response Modeling Uncertainty (A Quantitative Method)
1988-10-01
... where C_p is the concentration predicted by some component or model. The variance of C_o/C_p is calculated and defined as var(Model I), where Model I could be
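A hedged reading of the surviving fragment: the report's uncertainty measure appears to be the variance of the observed-to-predicted concentration ratio. A minimal sketch of that statistic (function names and data are mine, and hypothetical):

```python
from statistics import pvariance

def model_variance(observed, predicted):
    """Population variance of the ratios C_o / C_p, a var(Model)-style statistic."""
    ratios = [c_o / c_p for c_o, c_p in zip(observed, predicted)]
    return pvariance(ratios)

# Illustrative paired concentrations (hypothetical data):
c_obs = [1.2, 0.9, 1.5, 1.1]
c_pred = [1.0, 1.0, 1.0, 1.0]
print(model_variance(c_obs, c_pred))  # 0 would indicate a perfectly consistent model
```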
The quantitative modelling of human spatial habitability
Wise, James A.
1988-01-01
A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to any size or shape of interior, at any scale of consideration, from the Space Station as a whole down to an individual enclosure or workstation. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the University of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life support concerns.
A Qualitative and Quantitative Evaluation of 8 Clear Sky Models.
Bruneton, Eric
2016-10-27
We provide a qualitative and quantitative evaluation of 8 clear sky models used in Computer Graphics. We compare the models with each other as well as with measurements and with a reference model from the physics community. After a short summary of the physics of the problem, we present the measurements and the reference model, and how we "invert" it to get the model parameters. We then give an overview of each CG model and detail its scope, its algorithmic complexity, and its results using the same parameters as in the reference model. We also compare the models in a perceptual study. Our quantitative results confirm that the fewer simplifications and approximations used to solve the physical equations, the more accurate the results. We conclude with a discussion of the advantages and drawbacks of each model, and of how to further improve their accuracy.
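The quantitative side of such an evaluation typically reduces to an error metric between each model and the measurements. A minimal sketch follows; the choice of RMSE and all data values are my assumptions, since the abstract does not name the exact metric:

```python
import math

def rmse(model_values, measured_values):
    """Root-mean-square error between a model's predictions and measurements."""
    pairs = list(zip(model_values, measured_values))
    return math.sqrt(sum((m, x) and (m - x) ** 2 for m, x in pairs) / len(pairs))

# Hypothetical sky-radiance samples for two models against the same measurements:
measured = [10.0, 12.0, 15.0, 11.0]
model_a = [10.5, 11.5, 15.5, 11.0]  # fewer approximations
model_b = [9.0, 14.0, 13.0, 12.0]   # more aggressive simplifications
assert rmse(model_a, measured) < rmse(model_b, measured)
```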
A Solved Model to Show Insufficiency of Quantitative Adiabatic Condition
Institute of Scientific and Technical Information of China (English)
LIU Long-Jiang; LIU Yu-Zhen; TONG Dian-Min
2009-01-01
The adiabatic theorem is a useful tool for treating slowly evolving quantum systems, but its practical application depends on the quantitative condition expressed in terms of the Hamiltonian's eigenvalues and eigenstates, which is usually taken as a sufficient condition. Recently, the sufficiency of this condition was questioned, and several counterexamples have been reported. Here we present a new solved model to show the insufficiency of the traditional quantitative adiabatic condition.
Quantitative sociodynamics stochastic methods and models of social interaction processes
Helbing, Dirk
1995-01-01
Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...
Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes
Helbing, Dirk
2010-01-01
This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...
Modeling quantitative phase image formation under tilted illuminations.
Bon, Pierre; Wattellier, Benoit; Monneret, Serge
2012-05-15
A generalized product-of-convolution model for simulating quantitative phase microscopy of thick heterogeneous specimens under tilted plane-wave illumination is presented. The simulations are checked against a much more time-consuming commercial finite-difference time-domain method. The modeled data are then compared with experimental measurements made with a quadriwave lateral shearing interferometer.
Generalized PSF modeling for optimized quantitation in PET imaging
Ashrafinia, Saeed; Mohy-ud-Din, Hassan; Karakatsanis, Nicolas A.; Jha, Abhinav K.; Casey, Michael E.; Kadrmas, Dan J.; Rahmim, Arman
2017-06-01
Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modeled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF
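The resolution loss that PSF modeling tries to undo can be illustrated in one dimension: blurring a small high-uptake lesion with a Gaussian kernel pulls the peak value (the SUVmax analogue) below its true value. The sketch below is mine, not the paper's framework, and the FWHM value is illustrative:

```python
import math

def gaussian_kernel(fwhm, radius=6):
    """Normalized 1-D Gaussian kernel; FWHM = 2*sqrt(2*ln 2)*sigma."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Direct 1-D convolution with zero padding outside the signal."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

# A one-pixel-wide "tumour" of uptake 4 on a background of 1:
phantom = [1.0] * 15
phantom[7] = 4.0
blurred = convolve(phantom, gaussian_kernel(fwhm=3.0))

# Resolution loss pulls the peak (SUVmax analogue) well below the true value of 4:
assert 1.0 < max(blurred) < 4.0
```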
Refining the quantitative pathway of the Pathways to Mathematics model.
Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda
2015-03-01
In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task. Copyright © 2014 Elsevier Inc. All rights reserved.
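The measure-combination step above can be approximated by standardizing each measure and averaging. The paper uses principal components analysis; the equal-weight composite below is a simplified stand-in, and all scores are hypothetical:

```python
from statistics import mean, pstdev

def zscores(values):
    """Standardize a list of scores to mean 0 and (population) SD 1."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

def quantitative_composite(subitizing, counting, magnitude):
    """Equal-weight composite of the three quantitative measures (a PCA stand-in)."""
    cols = [zscores(subitizing), zscores(counting), zscores(magnitude)]
    return [sum(triple) / 3.0 for triple in zip(*cols)]

# Hypothetical scores for three children:
composite = quantitative_composite([12, 15, 18], [30, 40, 50], [0.8, 0.9, 1.0])
assert composite[0] < composite[1] < composite[2]  # consistent ordering survives
```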
Basis set dependence using DFT/B3LYP calculations to model the Raman spectrum of thymine.
Bielecki, Jakub; Lipiec, Ewelina
2016-02-01
Raman spectroscopy (including surface enhanced Raman spectroscopy (SERS) and tip enhanced Raman spectroscopy (TERS)) is a highly promising experimental method for investigating biomolecule damage induced by ionizing radiation. However, proper interpretation of changes in experimental spectra for complex systems is often difficult or impossible, so Raman spectra calculations based on density functional theory (DFT) provide an invaluable additional layer of understanding of the underlying processes. Many works address the problem of basis set dependence for energy and bond length considerations; nevertheless, there is still a lack of consistent research on the influence of the basis set on Raman spectral intensities for biomolecules. This study fills this gap by investigating the influence of basis set choice on the interpretation of Raman spectra of the thymine molecule calculated using the DFT/B3LYP framework and comparing these results with experimental spectra. Among the 19 selected Pople basis sets, the best agreement was achieved using the 6-31[Formula: see text](d,p), 6-31[Formula: see text](d,p) and 6-11[Formula: see text]G(d,p) sets. Adding diffuse or polarization functions to a small basis set, or using a medium or large basis set without diffuse or polarization functions, is not sufficient to reproduce Raman intensities correctly. The introduction of diffuse functions ([Formula: see text]) on hydrogen atoms is not necessary for gas-phase calculations. This work serves as a benchmark for further research on the interaction of ionizing radiation with DNA molecules by means of ab initio calculations and Raman spectroscopy. Moreover, it provides a set of new scaling factors for Raman spectra calculated in the framework of the DFT/B3LYP method.
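Harmonic DFT frequencies are conventionally corrected by a uniform multiplicative scaling factor before comparison with experiment. The sketch below shows only the mechanics; the default factor and the example frequencies are illustrative assumptions, not the paper's fitted values:

```python
def scale_frequencies(freqs_cm1, factor=0.97):
    """Apply a uniform scaling factor to harmonic vibrational frequencies (cm^-1).
    The default factor is illustrative, not one of the paper's derived values."""
    return [f * factor for f in freqs_cm1]

# Example: scaling three hypothetical thymine-like modes.
harmonic = [1712.0, 1389.0, 752.0]
scaled = scale_frequencies(harmonic)
assert all(s < h for s, h in zip(scaled, harmonic))  # a factor < 1 red-shifts all modes
```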
Dry (CO2) reforming of methane over Pt catalysts studied by DFT and kinetic modeling
Energy Technology Data Exchange (ETDEWEB)
Niu, Juntian [Key Laboratory of Low-grade Energy Utilization Technologies and Systems, Ministry of Education of PRC, Chongqing University, Chongqing, 400044 (China); College of Power Engineering, Chongqing University, Chongqing, 400044 (China); Du, Xuesen, E-mail: xuesendu@cqu.edu.cn [Key Laboratory of Low-grade Energy Utilization Technologies and Systems, Ministry of Education of PRC, Chongqing University, Chongqing, 400044 (China); College of Power Engineering, Chongqing University, Chongqing, 400044 (China); Ran, Jingyu, E-mail: jyran@189.cn [Key Laboratory of Low-grade Energy Utilization Technologies and Systems, Ministry of Education of PRC, Chongqing University, Chongqing, 400044 (China); College of Power Engineering, Chongqing University, Chongqing, 400044 (China); Wang, Ruirui [Key Laboratory of Low-grade Energy Utilization Technologies and Systems, Ministry of Education of PRC, Chongqing University, Chongqing, 400044 (China); College of Power Engineering, Chongqing University, Chongqing, 400044 (China)
2016-07-15
Graphical abstract: - Highlights: • CH appears to be the most abundant species on the Pt(111) surface in CH4 dissociation. • CO2* + H* → COOH* + * → CO* + OH* is the dominant reaction pathway in CO2 activation. • Major reaction pathway in CH oxidation: CH* + OH* → CHOH* + * → CHO* + H* → CO* + 2H*. • C* + OH* → COH* + * → CO* + H* is the dominant reaction pathway in C oxidation. - Abstract: Dry reforming of methane (DRM) is a well-studied reaction of both scientific and industrial importance. In order to design catalysts that minimize deactivation and improve the selectivity and activity for a high H2/CO yield, it is necessary to understand the elementary reaction steps involved in the activation and conversion of CO2 and CH4. In the present work, a microkinetic model based on density functional theory (DFT) calculations is applied to explore the reaction mechanism of methane dry reforming on Pt catalysts. The adsorption energies of the reactants, intermediates, and products, and the activation barriers for the elementary reactions involved in the DRM process, are calculated for the Pt(111) surface. For direct CH4 dissociation, the kinetic results show that CH dissociative adsorption on the Pt(111) surface is the rate-determining step. CH appears to be the most abundant species on the Pt(111) surface, suggesting that carbon deposits do not form easily during CH4 dehydrogenation on Pt(111). For CO2 activation, three possible reaction pathways are considered to contribute to CO2 decomposition: (I) CO2* + * → CO* + O*; (II) CO2* + H* → COOH* + * → CO* + OH*; (III) CO2* + H* → mono-HCOO* + * → bi-HCOO* + * [CO2* + H* → bi-HCOO* + *] → CHO* + O*. Path I requires overcoming an activation barrier of 1.809 eV, and the forward reaction is calculated to be strongly endothermic by 1.430 eV. In
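The kinetic weight of the quoted 1.809 eV barrier for the direct pathway (Path I) can be sketched with a Boltzmann factor comparison at equal prefactors (an assumption of mine); the second barrier below is hypothetical, chosen only to illustrate why a lower-barrier hydrogen-assisted route can dominate:

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def relative_rate(ea_path, ea_reference, temperature):
    """Rate of one path relative to another, assuming equal pre-exponential factors."""
    return math.exp(-(ea_path - ea_reference) / (KB_EV * temperature))

# Path I (direct CO2 dissociation) barrier from the study; the reference barrier
# is hypothetical, standing in for a lower-barrier hydrogen-assisted route.
ratio = relative_rate(1.809, 1.2, 1000.0)
assert ratio < 1.0  # the higher-barrier direct path is slower at equal prefactors
```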
Endo, Kazunaka; Shimada, Shingo; Kato, Nobuhiko; Ida, Tomonori
2016-10-01
We simulated valence X-ray photoelectron spectra (VXPS) of five polymers [(CH2CH(CH3))n {poly(propylene) PP}, (CH2CH(C5NH4))n {poly(4-vinyl-pyridine) P4VP}, (CH2CHO(CH3))n {poly(vinyl methyl ether) PVME}, (C6H4S)n {poly(phenylene sulphide) PPS}, (CF2CF2)n {poly(tetrafluoroethylene) PTFE}] by density-functional theory (DFT) calculations using model oligomers. The spectra reflect the differences in chemical structure between the polymers, since the peak intensities of the valence-band spectra derive from the photo-ionization cross-sections of the (C, N, O, S, F) atoms, as shown by considering the orbital energies and cross-section values of each polymer model individually. In the Auger electron spectra (AES) simulations, theoretical kinetic energies of the AES are obtained with our modified calculation method. The modified kinetic energies correspond to two final-state holes at the ground state and at the transition state in the DFT calculations, respectively. Experimental (C, N, O)-KVV and S L2,3VV AES peaks for each polymer are discussed in detail using our modified calculation method.
Modeling Logistic Performance in Quantitative Microbial Risk Assessment
Rijgersberg, H.; Tromp, S.O.; Jacxsens, L.; Uyttendaele, M.
2010-01-01
In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage ti
Quantitative modelling in design and operation of food supply systems
Beek, van P.
2004-01-01
During the last two decades food supply systems have attracted interest not only from food technologists but also from the fields of Operations Research and Management Science. Operations Research (OR) is concerned with quantitative modelling and can be used to get insight into the optimal configuration and opera
Lee, Ji-Hwan; Tak, Youngjoo; Lee, Taehun; Soon, Aloysius
Ceria (CeO2-x) is widely studied as a choice electrolyte material for intermediate-temperature (~ 800 K) solid oxide fuel cells. At this temperature, maintaining the chemical stability and thermal-mechanical integrity of this oxide is of utmost importance. To understand its thermal-elastic properties, we first test the influence of various approximations to the density-functional theory (DFT) xc functionals on specific thermal-elastic properties of both CeO2 and Ce2O3. Namely, we consider the local-density approximation (LDA), the generalized gradient approximation (GGA-PBE) with and without an additional Hubbard U applied to the 4 f electrons of Ce, as well as the recently popularized hybrid functional due to Heyd-Scuseria-Ernzerhof (HSE06). We then couple this to a volume-dependent Debye-Grüneisen model to determine the thermodynamic quantities of ceria at arbitrary temperatures. We find that an explicit description of the strong correlation (e.g. via the DFT + U and hybrid functional approach) is necessary for good agreement with experimental values, in contrast to the mean-field treatment in standard xc approximations (such as LDA or GGA-PBE). We acknowledge support from Samsung Research Funding Center of Samsung Electronics (SRFC-MA1501-03).
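The Debye part of such a Debye-Grüneisen treatment reduces to a one-dimensional integral for the heat capacity. A minimal numerical sketch follows; the Debye temperature used below is an arbitrary illustrative value, not one fitted to ceria.

```python
import math

def debye_cv(T, theta_D, n_atoms=1, R=8.314):
    """Molar heat capacity C_v from the Debye model, evaluated by a
    simple midpoint quadrature of the standard Debye integral."""
    x_max = theta_D / T
    N = 2000
    dx = x_max / N
    integral = 0.0
    for i in range(N):
        x = (i + 0.5) * dx                        # midpoint of subinterval
        ex = math.exp(x)
        integral += x**4 * ex / (ex - 1.0) ** 2 * dx
    return 9.0 * n_atoms * R * (T / theta_D) ** 3 * integral

# At high temperature the result approaches the Dulong-Petit value 3nR.
print(debye_cv(3000.0, 300.0))   # close to 24.94 J/(mol K)
print(debye_cv(100.0, 300.0))    # suppressed at low temperature
```

In the volume-dependent variant, theta_D itself depends on volume through the Grüneisen parameter, which is how the model yields thermal expansion and temperature-dependent elastic quantities.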
A GPGPU accelerated modeling environment for quantitatively characterizing karst systems
Myre, J. M.; Covington, M. D.; Luhmann, A. J.; Saar, M. O.
2011-12-01
The ability to derive quantitative information on the geometry of karst aquifer systems is highly desirable. Knowing the geometric makeup of a karst aquifer system enables quantitative characterization of the system's response to hydraulic events. However, the relationship between flow path geometry and karst aquifer response is not well understood. One way to improve this understanding is the use of high-speed modeling environments, which offer great potential in this regard as they allow researchers to improve their understanding of the modeled karst aquifer through fast quantitative characterization. To that end, we have implemented a finite difference model using General Purpose Graphics Processing Units (GPGPUs). GPGPUs are special-purpose accelerators capable of high-speed and highly parallel computation. The GPGPU architecture is a grid-like structure, making it a natural fit for structured systems like finite difference models. To characterize the highly complex nature of karst aquifer systems, our modeling environment uses an inverse method to conduct the parameter tuning. Using an inverse method reduces the total amount of parameter space that must be searched to produce a set of parameters describing a system of good fit. Systems of good fit are determined by comparison to reference storm responses. To obtain reference storm responses we have collected data from a series of data-loggers measuring water depth, temperature, and conductivity at locations along a cave stream with a known geometry in southeastern Minnesota. By comparing the modeled response to the reference responses, the model parameters can be tuned to quantitatively characterize the geometry, and thus the response, of the karst system.
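The inverse step described above, tuning parameters until the modeled response matches a reference storm response, can be sketched with a toy linear-reservoir model. The model, storm pulse, and parameter grid below are illustrative assumptions, not the authors' GPGPU finite-difference code.

```python
import numpy as np

def spring_response(recharge, k, dt=1.0):
    """Explicit-Euler discharge of a linear reservoir: dQ/dt = k*(R - Q)."""
    Q = np.zeros_like(recharge)
    for i in range(1, len(recharge)):
        Q[i] = Q[i - 1] + dt * k * (recharge[i] - Q[i - 1])
    return Q

# "Reference" response generated with a known recession constant, then
# recovered by minimizing the misfit over a parameter grid (the inverse step).
recharge = np.zeros(100)
recharge[10:15] = 1.0                       # synthetic storm pulse
k_true = 0.12
reference = spring_response(recharge, k_true)

grid = np.linspace(0.01, 0.5, 50)
misfit = [np.sum((spring_response(recharge, k) - reference) ** 2) for k in grid]
k_best = grid[int(np.argmin(misfit))]
print(k_best)
```

In practice the forward model is far more expensive, which is exactly why the authors move it to GPGPUs: the inverse loop repeats the forward simulation many times.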
Park, Jong Hoo; Liu, Tianyuan; Kim, Ki Chul; Lee, Seung Woo; Jang, Seung Soon
2017-04-10
The thermodynamic and electrochemical redox properties for a set of ketone derivatives of phenalenyl and anthracene have been investigated to assess their potential application for positive electrode materials in rechargeable lithium-ion batteries. Using first-principles DFT, it was found that 1) the thermodynamic stabilities of ketone derivatives are strongly dependent on the distribution of the carbonyl groups and 2) the redox potential is increased when increasing the number of the incorporated carbonyl groups. The highest values are 3.93 V versus Li/Li(+) for the phenalenyl derivatives and 3.82 V versus Li/Li(+) for the anthracene derivatives. It is further highlighted that the redox potential of an organic molecule is also strongly correlated with its spin state in the thermodynamically stable form. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Reservoir Stochastic Modeling Constrained by Quantitative Geological Conceptual Patterns
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
This paper discusses the principles of geologic constraints on reservoir stochastic modeling. By using the system science theory, two kinds of uncertainties, including random uncertainty and fuzzy uncertainty, are recognized. In order to improve the precision of stochastic modeling and reduce the uncertainty in realization, the fuzzy uncertainty should be stressed, and the "geological genesis-controlled modeling" is conducted under the guidance of a quantitative geological pattern. An example of the Pingqiao horizontal-well division of the Ansai Oilfield in the Ordos Basin is taken to expound the method of stochastic modeling.
Lessons Learned from Quantitative Dynamical Modeling in Systems Biology
Bachmann, Julie; Matteson, Andrew; Schelke, Max; Kaschek, Daniel; Hug, Sabine; Kreutz, Clemens; Harms, Brian D.; Theis, Fabian J.; Klingmüller, Ursula; Timmer, Jens
2013-01-01
Due to the high complexity of biological data it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A Systems Biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and increasing amount of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples, for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows to determine the quality of experimental data in an efficient, objective and automated manner. Using this approach data generated by different measurement techniques and even in single replicates can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations in combination with a multi-start strategy based on latin hypercube sampling outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open source software package that implements the algorithms and examples compared here. PMID:24098642
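The winning strategy reported above, derivative-based local optimization launched from Latin hypercube starting points, can be sketched in a few lines. The toy objective stands in for a model's negative log-likelihood, and this sketch uses numerical gradients rather than the sensitivity equations; objective and bounds are illustrative.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

def multistart_fit(objective, bounds, n_starts=20, seed=0):
    """Run L-BFGS-B from Latin-hypercube starting points and keep the
    best local optimum found."""
    lo, hi = np.array(bounds).T
    sampler = qmc.LatinHypercube(d=len(bounds), seed=seed)
    starts = qmc.scale(sampler.random(n_starts), lo, hi)
    results = [minimize(objective, x0, method="L-BFGS-B", bounds=bounds)
               for x0 in starts]
    return min(results, key=lambda r: r.fun)

# Toy multimodal objective with many local minima in the first coordinate.
def objective(p):
    return (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2 + np.sin(5 * p[0]) ** 2

best = multistart_fit(objective, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
print(best.x, best.fun)
```

The stratified starts are what make this robust: plain random restarts can cluster and repeatedly fall into the same basin, while a Latin hypercube guarantees coverage of every coordinate interval.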
Lessons learned from quantitative dynamical modeling in systems biology.
Directory of Open Access Journals (Sweden)
Andreas Raue
Due to the high complexity of biological data it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A Systems Biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and increasing amount of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples, for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows to determine the quality of experimental data in an efficient, objective and automated manner. Using this approach data generated by different measurement techniques and even in single replicates can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations in combination with a multi-start strategy based on latin hypercube sampling outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open source software package that implements the algorithms and examples compared here.
Quantitative Analysis of Polarimetric Model-Based Decomposition Methods
Directory of Open Access Journals (Sweden)
Qinghua Xie
2016-11-01
In this paper, we analyze the robustness of the parameter inversion provided by general polarimetric model-based decomposition methods from the perspective of a quantitative application. The general model and algorithm we have studied is the method proposed recently by Chen et al., which makes use of the complete polarimetric information and outperforms traditional decomposition methods in terms of feature extraction from land covers. Nevertheless, a quantitative analysis of the retrieved parameters from that approach suggests that further investigations are required in order to fully confirm the links between a physically-based model (i.e., approaches derived from the Freeman–Durden concept) and its outputs as intermediate products before any biophysical parameter retrieval is addressed. To this aim, we propose some modifications to the optimization algorithm employed for model inversion, including redefined boundary conditions, transformation of variables, and a different strategy for value initialization. A number of Monte Carlo simulation tests for typical scenarios are carried out and show that the parameter estimation accuracy of the proposed method is significantly increased with respect to the original implementation. Fully polarimetric airborne datasets at L-band acquired by the German Aerospace Center's (DLR's) experimental synthetic aperture radar (E-SAR) system were also used for testing purposes. The results show different qualitative descriptions of the same cover from six different model-based methods. According to the Bragg coefficient ratio (i.e., β), they are prone to provide wrong numerical inversion results, which could prevent any subsequent quantitative characterization of specific areas in the scene. Besides the particular improvements proposed over an existing polarimetric inversion method, this paper is aimed at pointing out the necessity of checking quantitatively the accuracy of model-based PolSAR techniques for a
Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.
Moray, Neville; Groeger, John; Stanton, Neville
2017-02-01
This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement, and provide quantitative guidance for the design of safer railway systems' speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.
A quantitative comparison of Calvin-Benson cycle models.
Arnold, Anne; Nikoloski, Zoran
2011-12-01
The Calvin-Benson cycle (CBC) provides the precursors for biomass synthesis necessary for plant growth. The dynamic behavior and yield of the CBC depend on the environmental conditions and regulation of the cellular state. Accurate quantitative models hold the promise of identifying the key determinants of the tightly regulated CBC function and their effects on the responses in future climates. We provide an integrative analysis of the largest compendium of existing models for photosynthetic processes. Based on the proposed ranking, our framework facilitates the discovery of best-performing models with regard to metabolomics data and of candidates for metabolic engineering.
Quek, Su Ying; Khoo, Khoong Hong
2014-11-18
CONSPECTUS: The emerging field of flexible electronics based on organics and two-dimensional (2D) materials relies on a fundamental understanding of charge and spin transport at the molecular and nanoscale. It is desirable to make predictions and shine light on unexplained experimental phenomena independently of experimentally derived parameters. Indeed, density functional theory (DFT), the workhorse of first-principles approaches, has been used extensively to model charge/spin transport at the nanoscale. However, DFT is essentially a ground state theory that simply guarantees correct total energies given the correct charge density, while charge/spin transport is a nonequilibrium phenomenon involving the scattering of quasiparticles. In this Account, we critically assess the validity and applicability of DFT to predict charge/spin transport at the nanoscale. We also describe a DFT-based approach, DFT+Σ, which incorporates corrections to Kohn-Sham energy levels based on many-electron calculations. We focus on single-molecule junctions and then discuss how the important considerations for DFT descriptions of transport can differ in 2D materials. We conclude that when used appropriately, DFT and DFT-based approaches can play an important role in making predictions and gaining insight into transport in these materials. Specifically, we shall focus on the low-bias quasi-equilibrium regime, which is also experimentally most relevant for single-molecule junctions. The next question is how well can the scattering of DFT Kohn-Sham particles approximate the scattering of true quasiparticles in the junction? Quasiparticles are electrons (holes) that are surrounded by a constantly changing cloud of holes (electrons), but Kohn-Sham particles have no physical significance. However, Kohn-Sham particles can often be used as a qualitative approximation to quasiparticles. The errors in standard DFT descriptions of transport arise primarily from errors in the Kohn-Sham energy levels
Quantitative modelling in cognitive ergonomics: predicting signals passed at danger
Moray, Neville; Groeger, John; Stanton, Neville
2016-01-01
This paper shows how to combine field observations, experimental data, and mathematical modeling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example we consider a major railway accident. In 1999 a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, "black box" data, and accident and engineering reports, to construct a case history of the accident. We show how t...
Quantitative metal magnetic memory reliability modeling for welded joints
Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng
2016-03-01
Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic fields, and measurement noise make the MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. K_vs is therefore a suitable MMM parameter for establishing a reliability model of welded joints. Finally, an original quantitative MMM reliability model is presented based on the improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with decreasing residual life ratio T, and the maximal error between the predicted reliability degree R_1 and the verified reliability degree R_2 is 9.15%. This method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
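In classical stress-strength interference, reliability is the probability that strength exceeds stress; for two independent Gaussian variables this has a closed form. A minimal sketch follows, with all numerical parameters invented for illustration rather than taken from the paper's fitted K_vs statistics.

```python
from math import erf, sqrt

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """R = P(strength > stress) for independent Gaussian strength and stress:
    R = Phi(z) with z = (mu_s - mu_l) / sqrt(sd_s^2 + sd_l^2)."""
    z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF at z

# Hypothetical Gaussian parameters standing in for a K_vs-based model.
R = interference_reliability(mu_strength=120.0, sd_strength=10.0,
                             mu_stress=80.0, sd_stress=15.0)
print(R)
```

The Gaussian fit of K_vs reported above is what licenses this closed form; with a non-Gaussian index the interference integral would have to be evaluated numerically.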
Quantitative versus qualitative modeling: a complementary approach in ecosystem study.
Bondavalli, C; Favilla, S; Bodini, A
2009-02-01
Natural disturbances or human perturbations act upon ecosystems by changing some dynamical parameters of one or more species. Foreseeing these modifications is necessary before embarking on an intervention: predictions may help to assess management options and define hypotheses for interventions. Models become valuable tools for studying and making predictions only when they capture the types of interactions and their magnitude. Quantitative models are more precise and specific about a system, but require a large effort in model construction. Because of this, ecological systems very often remain only partially specified, and one possible approach to their description and analysis comes from qualitative modelling. Qualitative models yield predictions as directions of change in species abundance, but in complex systems these predictions are often ambiguous, being the result of opposite actions exerted on the same species by way of multiple pathways of interaction. Again, to avoid such ambiguities one needs to know the intensity of all links in the system. One way to make link magnitude explicit in a form that can be used in qualitative analysis is described in this paper and takes advantage of another type of ecosystem representation: ecological flow networks. These flow diagrams contain the structure, the relative position and the connections between the components of a system, and the quantity of matter flowing along every connection. In this paper it is shown how these ecological flow networks can be used to produce a quantitative model similar to the qualitative counterpart. Analyzed through the apparatus of loop analysis, this quantitative model yields predictions that are by no means ambiguous, solving in an elegant way the basic problem of qualitative analysis. The approach adopted in this work is still preliminary and we must be careful in its application.
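Once interaction magnitudes are quantified, press-perturbation predictions in loop analysis reduce to the negative inverse of the community matrix, and the signs of that inverse are the now-unambiguous directions of change. A sketch with an invented three-species matrix:

```python
import numpy as np

# Hypothetical community matrix: A[i, j] is the per-capita effect of
# species j on species i, with self-limitation on the diagonal.
A = np.array([[-1.0, -0.5,  0.0],
              [ 0.5, -1.0, -0.4],
              [ 0.0,  0.4, -1.0]])

# Press-perturbation response: column j gives the change in every
# species' equilibrium abundance when species j receives a sustained
# positive input.
response = -np.linalg.inv(A)
print(np.sign(response))
```

With only the sign structure of A, several of these entries would be ambiguous (opposing pathways); the quantified links resolve each one to a definite sign.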
Energy Technology Data Exchange (ETDEWEB)
He, Yan Bin [School of Chemistry and Materials Science, Shanxi Normal University, Linfen 041004 (China); Pharmaceutical Department, Changzhi Medical College, Changzhi 046000 (China); Jia, Jian Feng, E-mail: jiajf@dns.sxnu.edu.cn [School of Chemistry and Materials Science, Shanxi Normal University, Linfen 041004 (China); Wu, Hai Shun, E-mail: wuhs@mail.sxnu.edu.cn [School of Chemistry and Materials Science, Shanxi Normal University, Linfen 041004 (China)
2015-02-01
Highlights: • We propose a model suitable for simulating the adsorption of hydrazine on rhodium nanoparticles. • We found that inclusion of dispersion correction results in a significant enhancement of the adsorption to the Rh(1 1 1) surface. • Nanoparticle surfaces with lower-coordinated sites are more reactive than those with almost saturated surface sites. - Abstract: In recent years, metal nanoparticles have been found to be excellent catalysts for hydrogen generation from hydrazine for chemical hydrogen storage. In order to gain a better understanding of these catalytic systems, we have simulated the adsorption of hydrazine on rhodium nanoparticle surfaces by density functional theory (DFT) calculations with dispersion correction (DFT-D3 in the method of Grimme). The rhodium nanoparticles were modeled by the Rh(1 1 1) surface; in addition, adsorption at corner and edge sites of nanoparticles was considered by using rhodium adatoms on the surfaces. The calculations showed that hydrazine binds most strongly to the edge of a nanoparticle, with an adsorption energy of −2.48 eV, where the hydrazine bridges adatoms of the edge with the molecule twisted to avoid a cis structure; a similar adsorption energy was found at the corner of a nanoparticle, where the hydrazine bridges a corner atom and a surface atom in a gauche configuration. However, we found that inclusion of the dispersion correction results in a significant enhancement of molecule–substrate binding, thereby increasing the adsorption energy, especially for adsorption to the Rh(1 1 1) surface. The results demonstrate that the surface structure is a key factor in determining the thermodynamics of adsorption, with low-coordinated atoms providing sites of strong adsorption.
QuantUM: Quantitative Safety Analysis of UML Models
Directory of Open Access Journals (Sweden)
Florian Leitner-Fischer
2011-07-01
When developing a safety-critical system it is essential to obtain an assessment of different design alternatives. In particular, an early safety assessment of the architectural design of a system is desirable. In spite of the plethora of available formal quantitative analysis methods it is still difficult for software and system architects to integrate these techniques into their everyday work. This is mainly due to the lack of methods that can be directly applied to architecture-level models, for instance given as UML diagrams. Also, it is necessary that the description methods used do not require a profound knowledge of formal methods. Our approach bridges this gap and improves the integration of quantitative safety analysis methods into the development process. All inputs of the analysis are specified at the level of a UML model. This model is then automatically translated into the analysis model, and the results of the analysis are consequently represented on the level of the UML model. Thus the analysis model and the formal methods used during the analysis are hidden from the user. We illustrate the usefulness of our approach using an industrial-strength case study.
Institute of Scientific and Technical Information of China (English)
周中定; 孙青华; 梁雄健
2003-01-01
This paper presents a data mining model based on discrete Fourier transform (DFT) theory for reliability data of communication networks. It helps to analyze aberrant data and provides a method to analyze and evaluate reliability data for the management of communication networks. A practical application shows the approach to be effective and convenient.
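One common way to screen reliability time series with the DFT is to keep only the dominant frequency components and flag points that deviate strongly from that low-order reconstruction. The abstract gives no algorithmic detail, so the screening rule, thresholds, and data below are all illustrative assumptions.

```python
import numpy as np

def dft_outliers(series, n_keep=3, threshold=3.0):
    """Flag points far from a baseline built from the largest-magnitude
    DFT components (kept in symmetric pairs to stay real-valued)."""
    spec = np.fft.fft(series)
    keep = np.argsort(np.abs(spec))[-2 * n_keep:]
    filtered = np.zeros_like(spec)
    filtered[keep] = spec[keep]
    baseline = np.fft.ifft(filtered).real
    resid = series - baseline
    return np.where(np.abs(resid) > threshold * resid.std())[0]

# Synthetic daily availability signal with one injected aberrant reading.
t = np.arange(128)
signal = 99.0 + 0.5 * np.sin(2 * np.pi * t / 32)
signal[70] -= 5.0   # simulated outage
print(dft_outliers(signal))
```

The low-order reconstruction absorbs the periodic component of the signal, so only the genuinely aberrant reading survives in the residual.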
Quantitative magnetospheric models derived from spacecraft magnetometer data
Mead, G. D.; Fairfield, D. H.
1973-01-01
Quantitative models of the external magnetospheric field were derived by making least-squares fits to magnetic field measurements from four IMP satellites. The data were fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the models contain the effects of seasonal north-south asymmetries. The expansions are divergence-free, but unlike the usual scalar potential expansions, the models contain a nonzero curl representing currents distributed within the magnetosphere. Characteristics of four models are presented, representing different degrees of magnetic disturbance as determined by the range of Kp values. The latitude at the earth separating open polar cap field lines from field lines closing on the dayside is about 5 deg lower than that determined by previous theoretically-derived models. At times of high Kp, additional high latitude field lines are drawn back into the tail.
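The fitting step above, least-squares regression of field measurements onto a power-series expansion in solar magnetic coordinates and dipole tilt angle, looks like this in miniature. The data are synthetic and the expansion is truncated at first order for brevity; the actual models used higher-order terms and real IMP measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for spacecraft field measurements: B depends on
# solar-magnetic coordinates (x, y) and the dipole tilt angle psi.
x, y, psi = rng.uniform(-1, 1, (3, 500))
B = 2.0 + 1.5 * x - 0.7 * y + 0.3 * psi + 0.05 * rng.standard_normal(500)

# Power-series design matrix and least-squares coefficient fit.
A = np.column_stack([np.ones_like(x), x, y, psi])
coef, *_ = np.linalg.lstsq(A, B, rcond=None)
print(np.round(coef, 2))
```

The same machinery extends to the full vector field: each component gets its own expansion, and divergence-freedom is imposed through the choice of basis terms rather than through the fit itself.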
Cross-interaction effects of substituents on N-benzylideneanilines conformation: A DFT investigation
Wu, Feng; Fang, Zhengjun; Yi, Bing; Au, Chaktong; Cao, Chenzhong; Huang, Linjie; Xie, Xin
2017-08-01
The conformations of N-benzylideneanilines (X-PhPhCH = NPh-Y) were explored by the B3LYP density functional theory (DFT) hybrid method in combination with the 6-31G* split valence basis set. The crystal structure of PhPhCH = NPh-OMe was obtained experimentally to assess the accuracy of this DFT approach. It was observed that the twist angles of the benzylidene ring and aniline ring with respect to the rest of the molecule (τ1 and τ2) estimated by the DFT method are highly reliable, and that τ2 can be systematically regulated through X and Y substitution. The substituent effects on τ2 obtained from DFT calculations were investigated. The results show that when substituent Y becomes more electron-withdrawing, there is a decrease of τ2 (i.e. an increase in the distortion of the aniline ring with respect to the rest of the molecule). However, substituent X has the opposite effect on τ2. It is demonstrated that substituent cross-interaction has a certain influence on τ2, and a quantitative model is proposed to express this effect. The findings of the present study illustrate a practical method for expressing the relationship between substituents and molecular conformation of the X-PhPhCH = NPh-Y compounds.
Asynchronous adaptive time step in quantitative cellular automata modeling
Directory of Open Access Journals (Sweden)
Sun Yan
2004-06-01
Background: The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and upon it, how to solve the heavy time consumption issue in simulation. Results: Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time steps in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup rate of 4–5 is achieved in the given example. Conclusions: Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. Distributed and adaptive time steps are a practical solution in a cellular automata environment.
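The core idea, letting each cell pick its own step size from a local error estimate, can be sketched with step-doubling Euler integration. The ODE, rates, and tolerance below are illustrative stand-ins; the paper's language-based system is far richer.

```python
def adaptive_step(f, y, dt, tol=1e-4):
    """One step-doubling Euler step: compare a full step with two half
    steps and halve dt until their difference is within tol."""
    while True:
        full = y + dt * f(y)
        half = y + 0.5 * dt * f(y)
        two_half = half + 0.5 * dt * f(half)
        if abs(two_half - full) <= tol:
            return two_half, dt       # accept the more accurate estimate
        dt *= 0.5

# Each "cell" integrates dy/dt = -k*y with its own rate k, so fast cells
# end up with small steps while slow cells keep large ones -- the
# asynchronous part of the scheme in miniature.
cells = {"fast": (5.0, 1.0), "slow": (0.1, 1.0)}   # rate k, state y
steps = {}
for name, (k, y) in cells.items():
    _, dt_used = adaptive_step(lambda y, k=k: -k * y, y, dt=0.5)
    steps[name] = dt_used
print(steps)
```

The global speedup in such schemes comes from exactly this asymmetry: most cells in a tissue are quiescent at any moment and can be advanced with large steps.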
Model for Quantitative Evaluation of Enzyme Replacement Treatment
Directory of Open Access Journals (Sweden)
Radeva B.
2009-12-01
Gaucher disease is the most frequent lysosomal disorder. Its enzyme replacement treatment (ERT) was a major advance of modern biotechnology, successfully used in recent years. Evaluating the optimal dose for each patient is important for both health and economic reasons: enzyme replacement is a very expensive treatment, and it must be carried out continuously and without interruption. In 2001, enzyme replacement therapy with Cerezyme (Genzyme) was formally introduced in Bulgaria, but after some time it was interrupted for 1-2 months, and the dose given to patients was not optimal. The aim of our work is to find a mathematical model for the quantitative evaluation of ERT in Gaucher disease. The model applies the software package "Statistika 6" to the individual data of 5-year-old children with Gaucher disease treated with Cerezyme. The output of the model allows quantitative evaluation of the individual trends in the development of the disease of each child and their correlations. On the basis of these results, we can recommend suitable changes in ERT.
Quantitative modeling and data analysis of SELEX experiments
Djordjevic, Marko; Sengupta, Anirvan M.
2006-03-01
SELEX (systematic evolution of ligands by exponential enrichment) is an experimental procedure that allows the extraction, from an initially random pool of DNA, of those oligomers with high affinity for a given DNA-binding protein. We address what is a suitable experimental and computational procedure to infer parameters of transcription factor-DNA interaction from SELEX experiments. To answer this, we use a biophysical model of transcription factor-DNA interactions to quantitatively model SELEX. We show that a standard procedure is unsuitable for obtaining accurate interaction parameters. However, we theoretically show that a modified experiment in which chemical potential is fixed through different rounds of the experiment allows robust generation of an appropriate dataset. Based on our quantitative model, we propose a novel bioinformatic method of data analysis for such a modified experiment and apply it to extract the interaction parameters for a mammalian transcription factor CTF/NFI. From a practical point of view, our method results in a significantly improved false positive/false negative trade-off, as compared to both the standard information theory based method and a widely used empirically formulated procedure.
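The fixed-chemical-potential selection advocated above can be caricatured in a few lines: each sequence survives a round with a Fermi-Dirac retention probability set by its binding energy and the fixed chemical potential, and fractions are renormalized between rounds. The energies, initial fractions, and β below are invented for illustration, not parameters from the paper.

```python
import math

def enrich(pool, mu, rounds=3, beta=1.0):
    """pool maps sequence -> (binding energy E, initial fraction).
    Each round retains a sequence with probability 1/(1+exp(beta*(E-mu)))
    at fixed chemical potential mu, then renormalizes the pool."""
    frac = {s: f for s, (E, f) in pool.items()}
    for _ in range(rounds):
        frac = {s: frac[s] / (1.0 + math.exp(beta * (pool[s][0] - mu)))
                for s in frac}
        total = sum(frac.values())
        frac = {s: f / total for s, f in frac.items()}
    return frac

# Toy pool: a strong binder (low E) is rare at the start but is
# progressively enriched over selection rounds.
pool = {"strong": (-2.0, 0.01), "weak": (1.0, 0.99)}
final = enrich(pool, mu=0.0)
print(final["strong"])
```

Holding μ fixed across rounds keeps the retention probabilities constant, which is what makes the round-to-round enrichment analytically tractable for parameter inference.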
Quantitative Methods in Supply Chain Management Models and Algorithms
Christou, Ioannis T
2012-01-01
Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...
A Team Mental Model Perspective of Pre-Quantitative Risk
Cooper, Lynne P.
2011-01-01
This study was conducted to better understand how teams conceptualize risk before it can be quantified, and the processes by which a team forms a shared mental model of this pre-quantitative risk. Using an extreme case, this study analyzes seven months of team meeting transcripts, covering the entire lifetime of the team. Through an analysis of team discussions, a rich and varied structural model of risk emerges that goes significantly beyond classical representations of risk as the product of a negative consequence and a probability. In addition to those two fundamental components, the team conceptualization includes the ability to influence outcomes and probabilities, networks of goals, interaction effects, and qualitative judgments about the acceptability of risk, all affected by associated uncertainties. In moving from individual to team mental models, team members employ a number of strategies to gain group recognition of risks and to resolve or accept differences.
Quantitative risk assessment modeling for nonhomogeneous urban road tunnels.
Meng, Qiang; Qu, Xiaobo; Wang, Xinchang; Yuanita, Vivi; Wong, Siew Chee
2011-03-01
Urban road tunnels provide an increasingly cost-effective engineering solution, especially in compact cities like Singapore. For some urban road tunnels, characteristics such as tunnel configuration, geometry, provision of electrical and mechanical systems, traffic volume, etc. may vary from one section to another. Urban road tunnels characterized by such nonuniform parameters are referred to as nonhomogeneous urban road tunnels. In this study, a novel quantitative risk assessment (QRA) model is proposed for nonhomogeneous urban road tunnels, because the existing QRA models for road tunnels are inapplicable to assessing the risks in such tunnels. The model uses a tunnel segmentation principle whereby a nonhomogeneous urban road tunnel is divided into various homogeneous sections. Individual risk for road tunnel sections as well as integrated risk indices for the entire road tunnel are defined. The article then proceeds to develop a new QRA model for each of the homogeneous sections. Compared to the existing QRA models for road tunnels, this section-based model incorporates one additional top event (toxic gases due to traffic congestion) and employs the Poisson regression method to estimate the vehicle accident frequencies of tunnel sections. The article further presents an aggregated QRA model for nonhomogeneous urban tunnels by integrating the section-based QRA models. Finally, a case study in Singapore is carried out.
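The section-based aggregation idea can be sketched as follows; the Poisson-regression form is standard, but the coefficients, covariates, and consequence probabilities here are illustrative assumptions, not fitted values from the study:

```python
import math

def section_accident_rate(aadt, length_km, grade_pct, b=(-7.0, 0.8, 1.0, 0.05)):
    """Expected accidents/year for one homogeneous section, in the usual
    Poisson-regression form lambda = exp(b0 + b1*ln(AADT) + b2*ln(L) + b3*grade).
    The coefficients b are invented for illustration, not fitted values."""
    b0, b1, b2, b3 = b
    return math.exp(b0 + b1 * math.log(aadt) + b2 * math.log(length_km)
                    + b3 * grade_pct)

def aggregate_risk(sections):
    """Integrated risk index: sum over homogeneous sections of
    (expected accident frequency) * (assumed P(fatal | accident))."""
    return sum(section_accident_rate(s["aadt"], s["len"], s["grade"]) * s["p_fatal"]
               for s in sections)

tunnel = [  # a nonhomogeneous tunnel split into two homogeneous sections
    {"aadt": 40000, "len": 1.2, "grade": 2.0, "p_fatal": 0.01},
    {"aadt": 40000, "len": 0.6, "grade": 5.0, "p_fatal": 0.02},
]
print(aggregate_risk(tunnel))
```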
Quantitative modeling of a gene's expression from its intergenic sequence.
Directory of Open Access Journals (Sweden)
Md Abul Hassan Samee
2014-03-01
Modeling a gene's expression from its intergenic locus and trans-regulatory context is a fundamental goal in computational biology. Owing to the distributed nature of cis-regulatory information and the poorly understood mechanisms that integrate such information, gene locus modeling is a more challenging task than modeling individual enhancers. Here we report the first quantitative model of a gene's expression pattern as a function of its locus. We model the expression readout of a locus in two tiers: (1) combinatorial regulation by transcription factors bound to each enhancer is predicted by a thermodynamics-based model, and (2) independent contributions from multiple enhancers are linearly combined to fit the gene expression pattern. The model does not require any prior knowledge about enhancers contributing toward a gene's expression. We demonstrate that the model captures the complex multi-domain expression patterns of anterior-posterior patterning genes in the early Drosophila embryo. Altogether, we model the expression patterns of 27 genes; these include several gap genes, pair-rule genes, and anterior, posterior, trunk, and terminal genes. We find that the model-selected enhancers for each gene overlap strongly with its experimentally characterized enhancers. Our findings also suggest the presence of sequence segments in the locus that would contribute ectopic expression patterns and hence were "shut down" by the model. We applied our model to identify the transcription factors responsible for forming the stripe boundaries of the studied genes. The resulting network of regulatory interactions exhibits a high level of agreement with known regulatory influences on the target genes. Finally, we analyzed whether and why our assumption of enhancer independence was necessary for the genes we studied. We found a deterioration of expression when binding sites in one enhancer were allowed to influence the readout of another enhancer. Thus, interference
Modeling Error in Quantitative Macro-Comparative Research
Directory of Open Access Journals (Sweden)
Salvatore J. Babones
2015-08-01
Much quantitative macro-comparative research (QMCR) relies on a common set of published data sources to answer similar research questions using a limited number of statistical tools. Since all researchers have access to much the same data, one might expect quick convergence of opinion on most topics. In reality, of course, differences of opinion abound and persist. Many of these differences can be traced, implicitly or explicitly, to the different ways researchers choose to model error in their analyses. Much careful attention has been paid in the political science literature to the error structures characteristic of time-series cross-sectional (TSCE) data, but much less attention has been paid to the modeling of error in broadly cross-national research involving large panels of countries observed at limited numbers of time points. Here, and especially in the sociology literature, multilevel modeling has become a hegemonic but often poorly understood research tool. I argue that widely used types of multilevel models, commonly known as fixed effects models (FEMs) and random effects models (REMs), can produce wildly spurious results when applied to trended data due to mis-specification of error. I suggest that in most commonly encountered scenarios, difference models are more appropriate for use in QMCR.
A quantitative model for integrating landscape evolution and soil formation
Vanwalleghem, T.; Stockmann, U.; Minasny, B.; McBratney, Alex B.
2013-06-01
Landscape evolution is closely related to soil formation. Quantitative modeling of the dynamics of soils and landscapes should therefore be integrated. This paper presents a model, named Model for Integrated Landscape Evolution and Soil Development (MILESD), which describes the interaction between pedogenetic and geomorphic processes. This mechanistic model includes the most significant soil formation processes, ranging from weathering to clay translocation, and combines these with the lateral redistribution of soil particles through erosion and deposition. The model is spatially explicit and simulates the vertical variation in soil horizon depth as well as basic soil properties such as texture and organic matter content. In addition, sediment export and its properties are recorded. The model is applied to a 6.25 km2 area in the Werrikimbe National Park, Australia, simulating soil development over a period of 60,000 years. Comparison with field observations shows that the model accurately predicts trends in total soil thickness along a catena. Soil texture and bulk density are predicted reasonably well, with errors of the order of 10%; however, field observations show a much higher organic carbon content than predicted. At the landscape scale, different scenarios with varying erosion intensity result in only small changes of landscape-averaged soil thickness, while the response of the total organic carbon stored in the system is larger. Rates of sediment export show a highly nonlinear response to soil development stage and the presence of a threshold, corresponding to the depletion of the soil reservoir, beyond which sediment export drops significantly.
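One ingredient of such a coupled model, the balance between soil production and erosion, can be sketched with the standard exponential soil-production law; the law and all parameter values below are common geomorphic assumptions, not constants taken from MILESD:

```python
import math

# Soil thickness h evolves as bedrock weathering (decaying with thickness,
# the classic exponential soil-production function) minus a constant
# erosion rate. Parameter values are illustrative assumptions.
P0 = 0.0002   # bare-bedrock soil production rate, m/yr
H0 = 0.5      # production decay depth, m
E  = 0.00005  # erosion rate, m/yr

def simulate_thickness(years, dt=10.0, h=0.0):
    """Forward-Euler integration of dh/dt = P0*exp(-h/H0) - E."""
    for _ in range(int(years / dt)):
        h += dt * (P0 * math.exp(-h / H0) - E)
        h = max(h, 0.0)        # thickness cannot go negative
    return h

h_end = simulate_thickness(60000)             # the paper's 60,000-yr horizon
h_steady = -H0 * math.log(E / P0)             # analytic steady state
print(h_end, h_steady)
```

At steady state production balances erosion, so the simulated thickness converges on the analytic value; raising E lowers the equilibrium thickness only logarithmically, consistent with the abstract's observation that erosion scenarios change mean soil thickness relatively little.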
Stepwise kinetic equilibrium models of quantitative polymerase chain reaction
Directory of Open Access Journals (Sweden)
Cobbs Gary
2012-08-01
Background: Numerous models for interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume that amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a selected part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Results: Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes that annealing of complementary target strands and annealing of target and primers are both reversible reactions that reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of the kinetic models are the same for solutions that are identical except possibly for different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous nonlinear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the
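The stepwise flavor of such models can be caricatured as below. This toy uses a simple primer-limited saturation for the per-cycle efficiency rather than the annealing-equilibrium formulae of Models 1 and 2, and all concentrations are arbitrary; it only illustrates how a non-constant efficiency produces the familiar sigmoidal curve:

```python
# Stepwise qPCR caricature: each cycle's efficiency is set by the annealing
# step (here a primer-limited saturation, an invented stand-in for the
# equilibrium solution); efficiency falls as primers are consumed, so the
# curve flattens into a plateau instead of staying exponential.
K = 1e10            # assumed half-saturation primer level for annealing

def qpcr_curve(t0=1e3, primers=1e12, cycles=45):
    curve, t = [], t0
    for _ in range(cycles):
        eff = primers / (primers + K)     # annealing-limited efficiency
        new = min(t * eff, primers)       # one primer consumed per new strand
        t += new
        primers -= new
        curve.append(t)
    return curve

curve = qpcr_curve()
print(curve[-1])    # plateau near the initial primer count
```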
Directory of Open Access Journals (Sweden)
Hassan Samuel
2016-03-01
A set of twenty-five polyhalogenated dioxin compounds with toxicity data (EC50) was subjected to quantitative structure-activity relationship (QSAR) studies using Material Studio software 7.0. A large number of molecular descriptors was calculated at the DFT (BLYP/6-31G*) and semi-empirical (AM1) levels of theory using the Spartan 14v1.1.2 and PaDel descriptor software. The correlation between the toxicities and the DFT- and semi-empirically calculated descriptors was examined. The Genetic Function Approximation (GFA) technique was used to generate ten QSAR models for each of the two levels of theory; of these, the model with the highest statistical significance was selected as the best for each method: DFT (R2 = 0.9516, R2 adj = 0.9389, R2 cv = 0.9091, LOF = 0.5882, significance-of-regression F-value = 74.8019) and semi-empirical (R2 = 0.96803, R2 adj = 0.9596, R2 cv = 0.9518, LOF = 0.3877, significance-of-regression F-value = 115.0703). The descriptors found to be responsible for the toxicities of the polyhalogenated dioxins were, for DFT: BCUTc-1h, VP-3, SssssGe, 0ETA_dAlpha_B and ETA_BetaP; and for semi-empirical: EHOMO, SP-7, ETA_Shape_P, ETA_EtaP_L and GRAV-4. Comparing the models generated with the two methods on the basis of their statistical parameters, semi-empirical (AM1) has slightly better predictive power than DFT (BLYP/6-31G*).
Modeling logistic performance in quantitative microbial risk assessment.
Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke
2010-01-01
In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain, with its queues (storages, shelves) and mechanisms for ordering products, is usually not taken into account. As a consequence, storage times, which are mutually dependent across successive steps in the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
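A toy discrete-event sketch of why logistics matters for the exposure tails: a FIFO shelf under an order-up-to policy generates the storage-time distribution, which then drives exponential microbial growth. The policy, demand level, and growth rate are invented for illustration, not parameters from the article:

```python
import math
from collections import deque

ORDER_UP_TO, DAILY_DEMAND, GROWTH_RATE = 20, 8, 0.3  # units, units/day, ln CFU/day

def shelf_storage_times(days):
    """FIFO shelf replenished up to ORDER_UP_TO each day; returns the shelf
    residence time (days) of every unit sold."""
    shelf = deque(0 for _ in range(ORDER_UP_TO))  # arrival day of each unit
    times = []
    for day in range(1, days + 1):
        for _ in range(min(DAILY_DEMAND, len(shelf))):
            times.append(day - shelf.popleft())   # oldest units sell first
        shelf.extend([day] * (ORDER_UP_TO - len(shelf)))  # replenish to target
    return times

times = shelf_storage_times(100)[40:]                 # drop the warm-up transient
growth = [math.exp(GROWTH_RATE * t) for t in times]   # fold increase per unit
mean_time = sum(times) / len(times)
print(mean_time, max(growth))
```

Even this deterministic toy shows that the ordering policy, not an independently drawn storage-time distribution, fixes which residence times actually occur, and hence which growth factors populate the tail.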
Fusing Quantitative Requirements Analysis with Model-based Systems Engineering
Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven
2006-01-01
A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.
Quantitative model studies for interfaces in organic electronic devices
Gottfried, J. Michael
2016-11-01
In organic light-emitting diodes and similar devices, organic semiconductors are typically contacted by metal electrodes. Because the resulting metal/organic interfaces have a large impact on the performance of these devices, their quantitative understanding is indispensable for the further rational development of organic electronics. A study by Kröger et al (2016 New J. Phys. 18 113022) of an important single-crystal based model interface provides detailed insight into its geometric and electronic structure and delivers valuable benchmark data for computational studies. In view of the differences between typical surface-science model systems and real devices, a ‘materials gap’ is identified that needs to be addressed by future research to make the knowledge obtained from fundamental studies even more beneficial for real-world applications.
Quantitative identification of technological discontinuities using simulation modeling
Park, Hyunseok
2016-01-01
The aim of this paper is to develop and test metrics to quantitatively identify technological discontinuities in a knowledge network. We developed five metrics based on innovation theories and tested them on a simulation-model-based knowledge network with a hypothetically designed discontinuity. The designed discontinuity is modeled as a node which combines two different knowledge streams and whose knowledge is dominantly persistent in the knowledge network. The performance of the proposed metrics was evaluated by how well they distinguish the designed discontinuity from other nodes in the knowledge network. The simulation results show that the product of persistence and the number of converging main paths provides the best performance in identifying the designed discontinuity: the designed discontinuity was identified as one of the top 3 patents with 96~99% probability by Metric 5, which is, depending on the size of the domain, 12~34% better than the performance of the second-best metric. Beyond the simulation ...
A Quantitative Theory Model of a Photobleaching Mechanism
Institute of Scientific and Technical Information of China (English)
陈同生; 曾绍群; 周炜; 骆清铭
2003-01-01
A photobleaching model combining D-P (dye-photon interaction) and D-O (dye-oxygen oxidative reaction) photobleaching theory is proposed. The quantitative power dependences of photobleaching rates under both one- and two-photon excitation (1PE and TPE) are obtained. This photobleaching model can account well for our experimental results and those of others. Experimental studies of the photobleaching rates of rhodamine B under TPE in unsaturated conditions reveal that the power dependence of the photobleaching rate increases with increasing dye concentration, and that the photobleaching rate of a single molecule increases as the second power of the excitation intensity, in contrast to the high-order (>3) nonlinear dependence observed for ensembles of molecules.
VLSI Architectures for Computing DFT's
Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.
1986-01-01
Simplifications result from use of residue Fermat number systems. System of finite arithmetic over residue Fermat number systems enables calculation of discrete Fourier transform (DFT) of series of complex numbers with reduced number of multiplications. Computer architectures based on approach suitable for design of very-large-scale integrated (VLSI) circuits for computing DFT's. General approach not limited to DFT's; applicable to decoding of error-correcting codes and other transform calculations. System readily implemented in VLSI.
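The arithmetic idea can be illustrated with a length-8 number-theoretic transform over the Fermat prime F4 = 2^16 + 1, for which 3 is a primitive root: every twiddle factor is an integer power, so the round trip is exact with no complex floating-point multiplications. This naive O(n^2) version demonstrates only the residue arithmetic, not a VLSI-efficient schedule:

```python
P = 2**16 + 1                 # Fermat prime F4; 3 is a primitive root mod P

def ntt(a, invert=False):
    """Naive DFT over the integers mod P: exact integer arithmetic only.
    len(a) must divide P - 1."""
    n = len(a)
    w = pow(3, (P - 1) // n, P)          # primitive n-th root of unity mod P
    if invert:
        w = pow(w, P - 2, P)             # w^(-1) via Fermat's little theorem
    out = [sum(a[j] * pow(w, i * j, P) for j in range(n)) % P
           for i in range(n)]
    if invert:
        n_inv = pow(n, P - 2, P)         # 1/n mod P
        out = [x * n_inv % P for x in out]
    return out

data = [1, 2, 3, 4, 0, 0, 0, 0]
spectrum = ntt(data)
recovered = ntt(spectrum, invert=True)
print(recovered)   # exact round trip: [1, 2, 3, 4, 0, 0, 0, 0]
```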
An infinitesimal model for quantitative trait genomic value prediction.
Directory of Open Access Journals (Sweden)
Zhiqiu Hu
We developed a marker-based infinitesimal model for quantitative trait analysis. In contrast to the classical infinitesimal model, we now have new information about the segregation of every individual locus of the entire genome. Under this new model, we propose that the genetic effect of an individual locus is a function of the genome location (a continuous quantity). The overall genetic value of an individual is the weighted integral of the genetic effect function along the genome. Numerical integration is performed to find the integral, which requires partitioning the entire genome into a finite number of bins. Each bin may contain many markers. The integral is approximated by the weighted sum of all the bin effects. We thus turn the problem of marker analysis into bin analysis, so that the model dimension decreases from a virtual infinity to a finite number of bins. This new approach can efficiently handle a virtually unlimited number of markers without marker selection. The marker-based infinitesimal model requires high linkage disequilibrium of all markers within a bin. For populations with low or no linkage disequilibrium, we develop an adaptive infinitesimal model. Both the original and the adaptive models are tested using simulated data as well as beef cattle data. The simulated data analysis shows that there is always an optimal number of bins at which the predictability of the bin model is much greater than that of the original marker analysis. Results of the beef cattle data analysis indicate that the bin model can increase the predictability from 10% (multiple-marker analysis) to 33% (multiple-bin analysis). The marker-based infinitesimal model paves the way toward the solution of genetic mapping and genomic selection using whole genome sequence data.
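The bin reduction can be sketched as follows. To keep the demonstration exact, the true genetic-effect function is assumed constant within each bin, an idealization of the high within-bin linkage disequilibrium the model requires; all sizes and effect values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_marker, n_bin = 60, 200, 10                  # 20 markers per bin
X = rng.choice([-1.0, 1.0], size=(n_ind, n_marker))   # marker genotypes

# Idealized assumption for the demo: the effect function is constant within
# each bin, so the weighted-sum-of-bins reduction is exact.
bin_effect = rng.normal(0, 0.2, n_bin)
beta = np.repeat(bin_effect, n_marker // n_bin)
y = X @ beta                                          # genomic values, no noise

# Bin analysis: collapse 200 marker columns into 10 bin features, then OLS.
B = X.reshape(n_ind, n_bin, -1).sum(axis=2)           # bin feature = within-bin sum
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
y_hat = B @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(r2)   # ~1.0: 10 bin parameters recover what 200 markers encode
```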
Quantitative modeling of the ionospheric response to geomagnetic activity
Directory of Open Access Journals (Sweden)
T. J. Fuller-Rowell
A physical model of the coupled thermosphere and ionosphere has been used to determine the accuracy of model predictions of the ionospheric response to geomagnetic activity, and assess our understanding of the physical processes. The physical model is driven by empirical descriptions of the high-latitude electric field and auroral precipitation, as measures of the strength of the magnetospheric sources of energy and momentum to the upper atmosphere. Both sources are keyed to the time-dependent TIROS/NOAA auroral power index. The output of the model is the departure of the ionospheric F region from the normal climatological mean. A 50-day interval towards the end of 1997 has been simulated with the model for two cases. The first simulation uses only the electric fields and auroral forcing from the empirical models, and the second has an additional source of random electric field variability. In both cases, output from the physical model is compared with F-region data from ionosonde stations. Quantitative model/data comparisons have been performed to move beyond the conventional "visual" scientific assessment, in order to determine the value of the predictions for operational use. For this study, the ionosphere at two ionosonde stations has been studied in depth, one each from the northern and southern mid-latitudes. The model clearly captures the seasonal dependence in the ionospheric response to geomagnetic activity at mid-latitude, reproducing the tendency for decreased ion density in the summer hemisphere and increased densities in winter. In contrast to the "visual" success of the model, the detailed quantitative comparisons, which are necessary for space weather applications, are less impressive. The accuracy, or value, of the model has been quantified by evaluating the daily standard deviation, the root-mean-square error, and the correlation coefficient between the data and model predictions. The modeled quiet-time variability, or standard
Motta, Alessandro; Cannelli, Oliviero; Boccia, Alice; Zanoni, Robertino; Raimondo, Mariarosa; Caldarelli, Aurora; Veronesi, Federico
2015-09-16
We report a combined X-ray photoelectron spectroscopy and theoretical modeling analysis of hybrid functional coatings constituted by fluorinated alkylsilane monolayers covalently grafted on a nanostructured ceramic oxide (Al2O3) thin film deposited on aluminum alloy substrates. Such engineered surfaces, bearing hybrid coatings obtained via a classic sol-gel route, have been previously shown to possess amphiphobic behavior (superhydrophobicity plus oleophobicity) and excellent durability, even under simulated severe working environments. Starting from XPS, SEM, and contact angle results and analysis, and combining it with DFT results, the present investigation offers a first mechanistic explanation at a molecular level of the peculiar properties of the hybrid organic-inorganic coating in terms of composition and surface structural arrangements. Theoretical modeling shows that the active fluorinated moiety is strongly anchored on the alumina sites with single Si-O-Al bridges and that the residual valence of Si is saturated by Si-O-Si bonds which form a reticulation with two vicinal fluoroalkylsilanes. The resulting hybrid coating consists of stable rows of fluorinated alkyl chains in reciprocal contact, which form well-ordered and packed monolayers.
Automated quantitative gait analysis in animal models of movement disorders
Directory of Open Access Journals (Sweden)
Vandeputte Caroline
2010-08-01
Background: Accurate and reproducible behavioral tests in animal models are of major importance in the development and evaluation of new therapies for central nervous system disease. In this study we investigated for the first time gait parameters of rat models of Parkinson's disease (PD), Huntington's disease (HD) and stroke using the Catwalk method, a novel automated gait analysis test. Static and dynamic gait parameters were measured in all animal models, and these data were compared to readouts of established behavioral tests, such as the cylinder test in the PD and stroke rats and the rotarod test for the HD group. Results: Hemiparkinsonian rats were generated by unilateral injection of the neurotoxin 6-hydroxydopamine in the striatum or in the medial forebrain bundle. For Huntington's disease, a transgenic rat model expressing a truncated huntingtin fragment with multiple CAG repeats was used. Thirdly, a stroke model was generated by a photothrombotically induced infarct in the right sensorimotor cortex. We found that multiple gait parameters were significantly altered in all three disease models compared to their respective controls. Behavioural deficits could be efficiently measured using the cylinder test in the PD and stroke animals, and in the case of the PD model, the deficits in gait essentially confirmed results obtained by the cylinder test. However, in the HD model and the stroke model the Catwalk analysis proved more sensitive than the rotarod test and also added new and more detailed information on specific gait parameters. Conclusion: The automated quantitative gait analysis test may be a useful tool to study both motor impairment and recovery associated with various neurological motor disorders.
Quantitative model of the growth of floodplains by vertical accretion
Moody, J.A.; Troutman, B.M.
2000-01-01
A simple one-dimensional model is developed to quantitatively predict the change in elevation, over a period of decades, for vertically accreting floodplains. This unsteady model approximates the monotonic growth of a floodplain as an incremental but constant increase of net sediment deposition per flood, for those floods of a partial duration series that exceed a threshold discharge corresponding to the elevation of the floodplain. Sediment deposition from each flood increases the elevation of the floodplain, and consequently the magnitude of the threshold discharge, resulting in a decrease in the number of overtopping floods and in the growth rate of the floodplain. Floodplain growth curves predicted by this model are compared to empirical growth curves based on dendrochronology and to direct field measurements at five floodplain sites. The model was used to predict the value of net sediment deposition per flood that best fits (in a least squares sense) the empirical and field measurements; these values fall within the range of independent estimates of net sediment deposition per flood based on empirical equations. These empirical equations permit application of the model to estimate floodplain growth for other floodplains throughout the world that lack detailed data on sediment deposition during individual floods. Copyright (C) 2000 John Wiley and Sons, Ltd.
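The feedback at the heart of the model (deposition raises the floodplain, which raises the threshold discharge, which thins out overtopping floods) can be sketched as below; the rating curve, flood series, and deposition increment are invented for illustration:

```python
# Incremental accretion sketch: every flood in the partial duration series
# whose peak exceeds the threshold discharge (tied to current floodplain
# elevation through an assumed rating curve, stage = sqrt(Q)) deposits a
# constant increment D. All numbers are illustrative.
D = 0.1                                        # net deposition per flood, m
peaks = [1, 2, 5, 1, 3, 10, 2, 4, 1, 6] * 20   # 200 years of annual peak flows

z, history = 0.0, []
for q in peaks:
    if q > z ** 2:        # stage = sqrt(Q): the flood overtops the floodplain
        z += D            # constant net deposition per overtopping flood
    history.append(z)

early = history[99] - history[0]       # growth in the first century
late = history[-1] - history[99]       # growth in the second century
print(history[-1], early, late)        # growth slows as the threshold rises
```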
Quantitative Model for Estimating Soil Erosion Rates Using 137Cs
Institute of Scientific and Technical Information of China (English)
YANGHAO; GHANGQING; et al.
1998-01-01
A quantitative model was developed to relate the amount of 137Cs lost from the soil profile to the rate of soil erosion. Following the mass balance model, the depth distribution pattern of 137Cs in the soil profile, the radioactive decay of 137Cs, the sampling year, and the differences in 137Cs fallout among years were taken into consideration. By introducing typical depth distribution functions of 137Cs into the model, detailed equations were obtained for different soils. The model shows that the rate of soil erosion is mainly controlled by the depth distribution pattern of 137Cs, the year of sampling, and the percentage reduction in total 137Cs. The relationship between the rate of soil loss and 137Cs depletion is neither linear nor logarithmic. The depth distribution pattern of 137Cs is a major factor in estimating the rate of soil loss, and the soil erosion rate is directly related to the fraction of 137Cs content near the soil surface. The influences of the radioactive decay of 137Cs, the sampling year, and the 137Cs input fraction are small compared with the others.
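For the simplest admissible depth distribution, 137Cs mixed uniformly through a cultivated plough layer, the mass-balance bookkeeping can be sketched as below; the plough depth, bulk density, and annual-removal scheme are textbook assumptions, not the model's full equation set:

```python
import math  # not strictly needed; kept for clarity if log-form is preferred

# Uniform plough-layer sketch: each year a thin layer of depth d is eroded
# and the plough layer re-mixes, so the 137Cs inventory decays as
# (1 - d/D)^years. Inverting gives d from the measured fractional loss X.
PLOUGH_DEPTH_CM = 20.0      # assumed plough-layer depth D
BULK_DENSITY = 1.3          # assumed bulk density, g/cm^3

def erosion_rate(cs137_loss_fraction, years):
    """Mean erosion rate (t/ha/yr) from fractional 137Cs depletion."""
    X = cs137_loss_fraction
    d_cm = PLOUGH_DEPTH_CM * (1.0 - (1.0 - X) ** (1.0 / years))
    return d_cm * BULK_DENSITY * 100.0   # 1 cm over 1 ha = density*100 t

print(erosion_rate(0.30, 35))   # ~30% depletion over 35 years
```

Doubling the depletion more than doubles the inferred erosion rate, reflecting the abstract's point that the loss-depletion relation is neither linear nor logarithmic.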
Goal relevance as a quantitative model of human task relevance.
Tanner, James; Itti, Laurent
2017-03-01
The concept of relevance is used ubiquitously in everyday life. However, a general quantitative definition of relevance has been lacking, especially as it pertains to quantifying the relevance of sensory observations to one's goals. We propose a theoretical definition for the information value of data observations with respect to a goal, which we call "goal relevance." We consider the probability distribution of an agent's subjective beliefs over how a goal can be achieved. When new data are observed, their goal relevance is measured as the Kullback-Leibler divergence between belief distributions before and after the observation. Theoretical predictions about the relevance of different obstacles in simulated environments agreed with the majority response of 38 human participants in 83.5% of trials, beating multiple machine-learning models. Our new definition of goal relevance is general, quantitative, explicit, and allows one to put a number onto the previously elusive notion of relevance of observations to a goal. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
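The definition can be sketched directly; the KL direction (posterior relative to prior) and the toy belief numbers are assumptions for illustration, since the abstract does not fix either:

```python
import math

def goal_relevance(prior, posterior):
    """Goal relevance of an observation: KL divergence D(posterior || prior)
    between the agent's belief distributions over ways to achieve the goal,
    after vs. before the observation, in bits. The direction is an
    assumption; the abstract only says 'between before and after'."""
    return sum(p * math.log2(p / q) for p, q in zip(posterior, prior) if p > 0)

# Beliefs over three routes to a goal; an observed obstacle blocks route 0.
prior = [0.5, 0.3, 0.2]
posterior_blocking = [0.0, 0.6, 0.4]    # obstacle on the most likely route
posterior_irrelevant = [0.5, 0.3, 0.2]  # observation changes nothing

print(goal_relevance(prior, posterior_blocking))    # high relevance (1 bit)
print(goal_relevance(prior, posterior_irrelevant))  # zero relevance
```

An observation that forces a large belief revision (here, eliminating the most likely route) scores high; one that leaves beliefs unchanged scores exactly zero.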
Directory of Open Access Journals (Sweden)
MINA HAGHDADI
2010-10-01
Full Text Available In the present work, a quantitative structure-activity relationship (QSAR) method was used to predict the psychometric activity values (as mescaline units, log MU) of 48 phenylalkylamine derivatives from their density functional theory (DFT)-calculated molecular descriptors and an artificial neural network (ANN). In the first step, the molecular descriptors were obtained by DFT calculation at the 6-311G level of theory. Then the stepwise multiple linear regression method was employed to screen the descriptor space. In the next step, artificial neural network and multiple linear regression (MLR) models were developed to construct nonlinear and linear QSAR models, respectively. The standard errors in the prediction of log MU by the MLR model were 0.398, 0.443 and 0.427 for the training, internal and external test sets, respectively, while these values for the ANN model were 0.132, 0.197 and 0.202. The obtained results show the applicability of QSAR approaches using ANN techniques in the prediction of log MU of phenylalkylamine derivatives from their DFT-calculated molecular descriptors.
A Quantitative Model to Estimate Drug Resistance in Pathogens
Directory of Open Access Journals (Sweden)
Frazier N. Baker
2016-12-01
Full Text Available Pneumocystis pneumonia (PCP) is an opportunistic infection that occurs in humans and other mammals with debilitated immune systems. These infections are caused by fungi in the genus Pneumocystis, which are not susceptible to standard antifungal agents. Despite decades of research and drug development, the primary treatment and prophylaxis for PCP remains a combination of trimethoprim (TMP) and sulfamethoxazole (SMX) that targets two enzymes in folic acid biosynthesis, dihydrofolate reductase (DHFR) and dihydropteroate synthase (DHPS), respectively. There is growing evidence of emerging resistance by Pneumocystis jirovecii (the species that infects humans) to TMP-SMX associated with mutations in the targeted enzymes. In the present study, we report the development of an accurate quantitative model to predict changes in the binding affinity of inhibitors (Ki, IC50) to the mutated proteins. The model is based on evolutionary information and amino acid covariance analysis. Predicted changes in binding affinity upon mutations highly correlate with the experimentally measured data. While trained on Pneumocystis jirovecii DHFR/TMP data, the model shows similar or better performance when evaluated on the resistance data for a different inhibitor of PjDHFR, another drug/target pair (PjDHPS/SMX) and another organism (Staphylococcus aureus DHFR/TMP). Therefore, we anticipate that the developed prediction model will be useful in the evaluation of possible resistance of newly sequenced variants of the pathogen and can be extended to other drug targets and organisms.
Quantitative Modeling of Human-Environment Interactions in Preindustrial Time
Sommer, Philipp S.; Kaplan, Jed O.
2017-04-01
Quantifying human-environment interactions and anthropogenic influences on the environment prior to the Industrial Revolution is essential for understanding the current state of the Earth system. This is particularly true for the terrestrial biosphere, but marine ecosystems and even climate were likely modified by human activities centuries to millennia ago. Direct observations are however very sparse in space and time, especially as one considers prehistory. Numerical models are therefore essential to produce a continuous picture of human-environment interactions in the past. Agent-based approaches, while widely applied to quantifying human influence on the environment in localized studies, are unsuitable for global spatial domains and Holocene timescales because of computational demands and large parameter uncertainty. Here we outline a new paradigm for the quantitative modeling of human-environment interactions in preindustrial time that is adapted to the global Holocene. Rather than attempting to simulate agency directly, the model is informed by a suite of characteristics describing those things about society that cannot be predicted on the basis of environment, e.g., diet, presence of agriculture, or range of animals exploited. These categorical data are combined with the properties of the physical environment in a coupled human-environment model. The model is, at its core, a dynamic global vegetation model with a module for simulating crop growth that is adapted for preindustrial agriculture. This allows us to simulate yield and calories for feeding both humans and their domesticated animals. We couple this basic caloric availability with a simple demographic model to calculate potential population, and, constrained by labor requirements and land limitations, we create scenarios of land use and land cover on a moderate-resolution grid. We further implement a feedback loop where anthropogenic activities lead to changes in the properties of the physical
Choubey, Sanjay K.; Mariadasse, Richard; Rajendran, Santhosh; Jeyaraman, Jeyakanthan
2016-12-01
Overexpression of HDAC1, a member of the Class I histone deacetylases, is reported to be implicated in breast cancer. Epigenetic alteration in carcinogenesis has been a thrust of research for a few decades. Increased deacetylation leads to accelerated cell proliferation, cell migration, angiogenesis and invasion. HDAC1 is thus a potential drug target for the treatment of breast cancer. In this study, the biochemical potential of 6-aminonicotinamide derivatives was rationalized. A five-point pharmacophore model with one hydrogen-bond acceptor (A3), two hydrogen-bond donors (D5, D6), one ring (R12) and one hydrophobic group (H8) was developed using 6-aminonicotinamide derivatives. The pharmacophore hypothesis yielded a 3D-QSAR model with correlation coefficients (r2 = 0.977, q2 = 0.801), and it was externally validated (r2pred = 0.929, r2cv = 0.850 and r2m = 0.856), which reveals the statistical significance and high predictive power of the model. The model was then employed as a 3D search query for virtual screening against compound libraries (Zinc, Maybridge, Enamine, Asinex, Toslab, LifeChem and Specs) in order to identify novel scaffolds which can be experimentally validated to design future drug molecules. Density Functional Theory (DFT) at the B3LYP/6-31G* level was employed to explore the electronic features of the ligands involved in the charge transfer reaction during receptor-ligand interaction. Binding free energy (ΔGbind) calculation was done using MM/GBSA, which defines the affinity of ligands towards the receptor.
Mechanics of neutrophil phagocytosis: experiments and quantitative models.
Herant, Marc; Heinrich, Volkmar; Dembo, Micah
2006-05-01
To quantitatively characterize the mechanical processes that drive phagocytosis, we observed the FcγR-driven engulfment of antibody-coated beads of diameters 3 μm to 11 μm by initially spherical neutrophils. In particular, the time course of cell morphology, of bead motion and of cortical tension were determined. Here, we introduce a number of mechanistic models for phagocytosis and test their validity by comparing the experimental data with finite element computations for multiple bead sizes. We find that the optimal models involve two key mechanical interactions: a repulsion or pressure between cytoskeleton and free membrane that drives protrusion, and an attraction between cytoskeleton and membrane newly adherent to the bead that flattens the cell into a thin lamella. Other models such as cytoskeletal expansion or swelling appear to be ruled out as main drivers of phagocytosis because of the characteristics of bead motion during engulfment. We finally show that the protrusive force necessary for the engulfment of large beads points towards storage of strain energy in the cytoskeleton over a large distance from the leading edge (approximately 0.5 μm), and that the flattening force can plausibly be generated by the known concentrations of unconventional myosins at the leading edge.
Energy Technology Data Exchange (ETDEWEB)
Al Alam, A.F.
2009-06-15
This thesis presents an ab initio study of several classes of intermetallics and their hydrides. These compounds are interesting from both a fundamental and an applied point of view. To achieve this aim, two complementary methods, constructed within the DFT, were chosen: (i) pseudopotential-based VASP for geometry optimization, structural investigations and electron localization mapping (ELF), and (ii) the all-electron ASW method for a detailed description of the electronic structure and chemical bonding properties following different schemes, as well as quantities depending on core electrons such as the hyperfine field. Special interest is given to the interplay between magneto-volume and chemical interaction (metal-H) effects within the following hydrided systems: binary Laves (e.g. ScFe{sub 2}) and Haucke (e.g. LaNi{sub 5}) phases on one hand, and ternary cerium-based (e.g. CeRhSn) and uranium-based (e.g. U{sub 2}Ni{sub 2}Sn) alloys on the other hand. (author)
Generalized gravity from modified DFT
Sakatani, Yuho; Yoshida, Kentaroh
2016-01-01
Recently, generalized equations of type IIB supergravity have been derived from the requirement of classical kappa-symmetry of type IIB superstring theory in the Green-Schwarz formulation. These equations are covariant under generalized T-duality transformations and hence one may expect a formulation similar to double field theory (DFT). In this paper, we consider a modification of the DFT equations of motion by relaxing a condition for the generalized covariant derivative with an extra generalized vector. In this modified double field theory (mDFT), we show that the flatness condition of the modified generalized Ricci tensor leads to the NS-NS part of the generalized equations of type IIB supergravity. In particular, the extra vector fields appearing in the generalized equations correspond to the extra generalized vector in mDFT. We also discuss duality symmetries and a modification of the string charge in mDFT.
Quantitative Modeling of the Alternative Pathway of the Complement System.
Zewde, Nehemiah; Gorham, Ronald D; Dorado, Angel; Morikis, Dimitrios
2016-01-01
The complement system is an integral part of innate immunity that detects and eliminates invading pathogens through a cascade of reactions. The destructive effects of complement activation on host cells are inhibited through versatile regulators that are present in plasma and bound to membranes. Impairment in the capacity of these regulators to function in the proper manner results in autoimmune diseases. To better understand the delicate balance between complement activation and regulation, we have developed a comprehensive quantitative model of the alternative pathway. Our model incorporates a system of ordinary differential equations that describes the dynamics of the four steps of the alternative pathway under physiological conditions: (i) initiation (fluid phase), (ii) amplification (surfaces), (iii) termination (pathogen), and (iv) regulation (host cell and fluid phase). We have examined complement activation and regulation on different surfaces, using the cellular dimensions of a characteristic bacterium (E. coli) and host cell (human erythrocyte). In addition, we have incorporated neutrophil-secreted properdin into the model, highlighting the cross talk of neutrophils with the alternative pathway in coordinating innate immunity. Our study yields a series of time-dependent response data for all alternative pathway proteins, fragments, and complexes. We demonstrate the robustness of the alternative pathway on the surface of pathogens, where complement components were able to saturate the entire region in about 54 minutes while occupying less than one percent of the host cell surface over the same time period. Our model reveals that tight regulation of complement starts in the fluid phase, in which propagation of the alternative pathway was inhibited through the dismantlement of fluid-phase convertases. Our model also depicts the intricate role that properdin released from neutrophils plays in initiating and propagating the alternative pathway during bacterial infection.
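A toy mass-action sketch in the spirit of the ODE model described above (the species lumping and every rate constant here are illustrative, not the paper's parameter set): C3 tickover seeds C3b, C3b matures into a convertase that amplifies C3 cleavage, and a fluid-phase regulator dismantles the convertase.

```python
# Normalized concentrations; units and rates are invented for illustration.
k_tick, k_conv, k_amp, k_reg = 1e-3, 0.5, 0.05, 0.02
c3, c3b, conv = 1.0, 0.0, 0.0
dt, t_end = 0.01, 200.0
for _ in range(int(t_end / dt)):      # simple forward-Euler integration
    cleave = (k_tick + k_amp * conv) * c3  # tickover plus convertase-driven cleavage
    c3 += dt * (-cleave)
    c3b += dt * (cleave - k_conv * c3b)    # C3b produced, then consumed by convertase formation
    conv += dt * (k_conv * c3b - k_reg * conv)  # regulator dismantles convertase
print(round(c3, 3), round(conv, 3))
```

Even this caricature reproduces the qualitative behavior the paper quantifies: a slow tickover phase followed by convertase-driven amplification, with the regulation term bounding the convertase pool.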
A quantitative model of technological catch-up
Directory of Open Access Journals (Sweden)
Hossein Gholizadeh
2015-02-01
Full Text Available This article presents a quantitative model for the analysis of technological gaps. The rates of development of technological leaders and followers in nanotechnology are expressed in terms of coupled equations. On the basis of this model, in a first step the comparative technological gap and its rate of change are studied, so that the dynamics of the gap between leader and follower can be calculated. In a second step, we estimate the technology gap using the metafrontier approach and test the relationship between the technology gap and the dimensions of technological catch-up identified in the previous step. The usefulness of this approach is then demonstrated in an analysis of the nanotechnology gap between Iran, the leader in the Middle East, and the world leader. We present the behaviors of the technological leader and followers, analyze Iran's position, and offer suggestions on the effective dimensions of catch-up, which could form a foundation for Iran's long-term policies.
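A minimal sketch of coupled leader/follower growth equations of the kind the article describes (the functional form and every parameter value below are invented for the sketch, not taken from the article):

```python
# dL/dt = a * L * (1 - L/K)                  logistic growth of the leader
# dF/dt = b * F * (1 - F/K) + c * (L - F)    follower growth plus a catch-up
#                                            term proportional to the gap
a, b, c, K = 0.05, 0.04, 0.1, 100.0
L_tech, F_tech = 10.0, 1.0         # illustrative initial technology levels
dt, steps = 0.1, 500               # forward-Euler integration over 50 time units
initial_gap = L_tech - F_tech
for _ in range(steps):
    dL = a * L_tech * (1 - L_tech / K)
    dF = b * F_tech * (1 - F_tech / K) + c * (L_tech - F_tech)
    L_tech += dt * dL
    F_tech += dt * dF
print(round(L_tech - F_tech, 2))   # remaining gap after 50 time units
```

With a catch-up coefficient c of this magnitude the gap narrows over time but stays positive, since the leader's intrinsic growth rate exceeds the follower's; the article's analysis concerns exactly this kind of gap trajectory.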
A quantitative model for assessing community dynamics of pleistocene mammals.
Lyons, S Kathleen
2005-06-01
Previous studies have suggested that species responded individualistically to the climate change of the last glaciation, expanding and contracting their ranges independently. Consequently, many researchers have concluded that community composition is plastic over time. Here I quantitatively assess changes in community composition over broad timescales and assess the effect of range shifts on community composition. Data on Pleistocene mammal assemblages from the FAUNMAP database were divided into four time periods (preglacial, full glacial, postglacial, and modern). Simulation analyses were designed to determine whether the degree of change in community composition is consistent with independent range shifts, given the distribution of range shifts observed. Results indicate that many of the communities examined in the United States were more similar through time than expected if individual range shifts were completely independent. However, in each time transition examined, there were areas of nonanalogue communities. I conducted sensitivity analyses to explore how the results were affected by the assumptions of the null model. Conclusions about changes in mammalian distributions and community composition are robust with respect to the assumptions of the model. Thus, whether because of biotic interactions or because of common environmental requirements, community structure through time is more complex than previously thought.
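A toy version of the null model described above (the species pool, range geometry, and shift magnitudes are invented for illustration): shift each species' one-dimensional range independently, then measure how similar the community at a fixed site is before versus after the shifts.

```python
import random

random.seed(1)

def community(site, ranges):
    """Set of species whose range contains the site."""
    return {sp for sp, (lo, hi) in ranges.items() if lo <= site <= hi}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

ranges = {f"sp{i}": (i, i + 20) for i in range(30)}  # hypothetical 1-D ranges
site = 15
before = community(site, ranges)

null_sims = []
for _ in range(1000):
    shifted = {}
    for sp, (lo, hi) in ranges.items():
        d = random.uniform(-10, 10)   # each species shifts independently
        shifted[sp] = (lo + d, hi + d)
    null_sims.append(jaccard(before, community(site, shifted)))

mean_null = sum(null_sims) / len(null_sims)
print(round(mean_null, 3))
```

An observed before/after similarity sitting above most of `null_sims` would indicate communities more cohesive through time than independent range shifts predict, which is the pattern the study reports for many Pleistocene assemblages.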
Welch, William R W; Kubelka, Jan
2012-09-01
A continuum solvent model was tested for simulations of amide I' IR spectra for a 40-residue subdomain of P22 viral coat protein in aqueous solution. Spectra obtained using DFT (BPW91/6-31G**) parameters for a reduced all-Ala representation of the protein were corrected by an electrostatic potential map obtained from the solvent cavity surface and AMBER99 side-chain atom partial charges. Various cavity sizes derived from van der Waals atomic radii with an added effective solvent radius up to 2.0 Å were tested. The interplay of the side-chain and solvent electrostatic effects was investigated by considering the side chains and solvent separately as well as together. The sensitivity to side-chain conformational fluctuations and to the parametrization of C(β) group partial charges was also tested. Simulation results were compared to the experimental amide I' spectra of P22 subdomain, including two (13)C isotopically edited variants, as well as to the previous simulations based on the molecular dynamics trajectory in explicit solvent. For small cavity sizes, between van der Waals and that with added solvent radius of 0.5 Å, better qualitative agreement with experiment was obtained than with the explicit solvent representation, in particular for the (13)C-labeled spectra. Larger protein cavities led to progressively worse predictions due to increasingly stronger electrostatic effects of side chains, which could no longer be well compensated for by the solvent potential. Balance between side-chain and solvent electrostatic effects is important in determining the width and shape of the simulated amide I', which is also virtually unaffected by side-chain-geometry fluctuations. The continuum solvent model combined with the electrostatic map is a computationally efficient and potentially robust approach for the simulations of IR spectra of proteins in solution.
Quantitative comparisons of analogue models of brittle wedge dynamics
Schreurs, Guido
2010-05-01
Analogue model experiments are widely used to gain insights into the evolution of geological structures. In this study, we present a direct comparison of experimental results of 14 analogue modelling laboratories using prescribed set-ups. A quantitative analysis of the results will document the variability among models and will allow an appraisal of reproducibility and limits of interpretation. This has direct implications for comparisons between structures in analogue models and natural field examples. All laboratories used the same frictional analogue materials (quartz and corundum sand) and prescribed model-building techniques (sieving and levelling). Although each laboratory used its own experimental apparatus, the same type of self-adhesive foil was used to cover the base and all the walls of the experimental apparatus in order to guarantee identical boundary conditions (i.e. identical shear stresses at the base and walls). Three experimental set-ups using only brittle frictional materials were examined. In each of the three set-ups the model was shortened by a vertical wall, which moved with respect to the fixed base and the three remaining sidewalls. The minimum width of the model (dimension parallel to mobile wall) was also prescribed. In the first experimental set-up, a quartz sand wedge with a surface slope of ˜20° was pushed by a mobile wall. All models conformed to the critical taper theory, maintained a stable surface slope and did not show internal deformation. In the next two experimental set-ups, a horizontal sand pack consisting of alternating quartz sand and corundum sand layers was shortened from one side by the mobile wall. In one of the set-ups a thin rigid sheet covered part of the model base and was attached to the mobile wall (i.e. a basal velocity discontinuity distant from the mobile wall). In the other set-up a basal rigid sheet was absent and the basal velocity discontinuity was located at the mobile wall. In both types of experiments
DEFF Research Database (Denmark)
Kuhlman, Thomas Scheby; Mikkelsen, Kurt V.; Møller, Klaus Braagaard;
2009-01-01
We present a study on the excited states of an ethylene dimer to investigate the presence of and perturbation from low-lying charge-resonance states calculated by linear response density functional theory (DFT) using the B3LYP and CAM-B3LYP functionals. The calculations are compared to a refer…
Quantitative property-structural relation modeling on polymeric dielectric materials
Wu, Ke
Nowadays, polymeric materials have attracted more and more attention in dielectric applications. But searching for a material with desired properties is still largely based on trial and error. To facilitate the development of new polymeric materials, heuristic models built using the Quantitative Structure Property Relationships (QSPR) techniques can provide reliable "working solutions". In this thesis, the application of QSPR on polymeric materials is studied from two angles: descriptors and algorithms. A novel set of descriptors, called infinite chain descriptors (ICD), are developed to encode the chemical features of pure polymers. ICD is designed to eliminate the uncertainty of polymer conformations and inconsistency of molecular representation of polymers. Models for the dielectric constant, band gap, dielectric loss tangent and glass transition temperatures of organic polymers are built with high prediction accuracy. Two new algorithms, the physics-enlightened learning method (PELM) and multi-mechanism detection, are designed to deal with two typical challenges in material QSPR. PELM is a meta-algorithm that utilizes the classic physical theory as guidance to construct the candidate learning function. It shows better out-of-domain prediction accuracy compared to the classic machine learning algorithm (support vector machine). Multi-mechanism detection is built based on a cluster-weighted mixing model similar to a Gaussian mixture model. The idea is to separate the data into subsets where each subset can be modeled by a much simpler model. The case study on glass transition temperature shows that this method can provide better overall prediction accuracy even though less data is available for each subset model. In addition, the techniques developed in this work are also applied to polymer nanocomposites (PNC). PNC are new materials with outstanding dielectric properties. As a key factor in determining the dispersion state of nanoparticles in the polymer matrix
Sini, Gjergji
2011-03-08
We have evaluated the performance of several density functional theory (DFT) functionals for the description of the ground-state electronic structure and charge transfer in donor/acceptor complexes. The tetrathiafulvalene-tetracyanoquinodimethane (TTF-TCNQ) complex has been considered as a model test case. Hybrid functionals have been chosen together with recently proposed long-range corrected functionals (ωB97X, ωB97X-D, LRC-ωPBEh, and LC-ωPBE) in order to assess the sensitivity of the results to the treatment and magnitude of exact exchange. The results show an approximately linear dependence of the ground-state charge transfer on the HOMO(TTF)-LUMO(TCNQ) energy gap, which in turn depends linearly on the percentage of exact exchange in the functional. The reliability of ground-state charge transfer values calculated in the framework of a monodeterminantal DFT approach was also examined. © 2011 American Chemical Society.
Toward Quantitative Coarse-Grained Models of Lipids with Fluids Density Functional Theory.
Frink, Laura J Douglas; Frischknecht, Amalie L; Heroux, Michael A; Parks, Michael L; Salinger, Andrew G
2012-04-10
We describe methods to determine optimal coarse-grained models of lipid bilayers for use in fluids density functional theory (fluids-DFT) calculations. Both coarse-grained lipid architecture and optimal parametrizations of the models based on experimental measures are discussed in the context of dipalmitoylphosphatidylcholine (DPPC) lipid bilayers in water. The calculations are based on a combination of the modified-iSAFT theory for bonded systems and an accurate fundamental measures theory (FMT) for hard sphere reference fluids. We furthermore discuss a novel approach for pressure control in the fluids-DFT calculations that facilitates both partitioning studies and zero tension control for the bilayer studies. A detailed discussion of the numerical implementations for both solvers and pressure control capabilities are provided. We show that it is possible to develop a coarse-grained lipid bilayer model that is consistent with experimental properties (thickness and area per lipid) of DPPC provided that the coarse-graining is not too extreme. As a final test of the model, we find that the predicted area compressibility moduli and lateral pressure profiles of the optimized models are in reasonable agreement with prior results.
Quantitative Model for Supply Chain Visibility: Process Capability Perspective
Directory of Open Access Journals (Sweden)
Youngsu Lee
2016-01-01
Full Text Available Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives to achieve operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods to increase supply chain visibility are still ambiguous. Based on the existing research on supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using the Z score from Six Sigma methodology to evaluate and improve the level of supply chain visibility.
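An illustrative process-capability calculation of the sort the article adapts to supply chain visibility (the metric, spec limit, and sample statistics below are invented for the sketch): treat an order-status update delay as the process output and compute a Z score against an upper spec limit.

```python
from statistics import NormalDist

mean_delay, sd_delay = 4.0, 1.5   # hours; hypothetical sample mean and std dev
usl = 8.0                         # hypothetical upper spec limit: update within 8 h

z = (usl - mean_delay) / sd_delay         # Z = (USL - mean) / sigma
defect_rate = 1 - NormalDist().cdf(z)     # expected share of out-of-spec (late) updates
print(round(z, 2), round(defect_rate * 100, 2))
```

In Six Sigma terms, a higher Z means a more capable (more "visible") process; the article's contribution is to define which visibility measurements play the role of the process output here.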
Wałęsa, Roksana; Man, Dariusz; Engel, Grzegorz; Siodłak, Dawid; Kupka, Teobald; Ptak, Tomasz; Broda, Małgorzata A
2015-07-01
Electron spin resonance (ESR), (1)H-NMR, voltage and resistance experiments were performed to explore structural and dynamic changes of the Egg Yolk Lecithin (EYL) bilayer upon addition of model peptides. Two of them are phenylalanine (Phe) derivatives, Ac-Phe-NHMe (1) and Ac-Phe-NMe2 (2), and the third one, Ac-(Z)-ΔPhe-NMe2 (3), is a derivative of (Z)-α,β-dehydrophenylalanine. The ESR results revealed that all compounds reduced the fluidity of the liposome membrane, and the highest activity was observed for compound 2 with the N-methylated C-terminal amide bond (Ac-Phe-NMe2). This compound, being the most hydrophobic, penetrates easily through biological membranes. This was also observed in the voltage and resistance studies. (1)H-NMR studies provided sound evidence of H-bond interactions between the studied diamides and the lecithin polar head. The most significant changes in H-atom chemical shifts and spin-lattice relaxation times T1 were observed for compound 1. Our experimental studies were supported by theoretical calculations. Complexes EYL⋅Ac-Phe-NMe2 and EYL⋅Ac-(Z)-ΔPhe-NMe2, stabilized by NH⋅⋅⋅O or/and CH⋅⋅⋅O H-bonds, were created and optimized at the M06-2X/6-31G(d) level of theory in vacuo and in H2O environment. According to our molecular-modeling studies, the most probable lecithin site of H-bond interaction with the studied diamides is the negatively charged O-atom in the phosphate group, which acts as H-atom acceptor. Moreover, the highest binding energies to hydrocarbon chains were observed in the case of Ac-Phe-NMe2 (2). Copyright © 2015 Verlag Helvetica Chimica Acta AG, Zürich.
Epistasis analysis for quantitative traits by functional regression model.
Zhang, Futao; Boerwinkle, Eric; Xiong, Momiao
2014-06-01
The critical barrier in interaction analysis for rare variants is that most traditional statistical methods for testing interactions were originally designed for testing the interaction between common variants and are difficult to apply to rare variants because of their prohibitive computational time and low statistical power. The great challenges for successful detection of interactions with next-generation sequencing (NGS) data are (1) the lack of methods for interaction analysis with rare variants, (2) severe multiple testing, and (3) time-consuming computations. To meet these challenges, we shift the paradigm of interaction analysis between two loci to interaction analysis between two sets of loci or genomic regions and collectively test interactions between all possible pairs of SNPs within two genomic regions. In other words, we take a genome region as a basic unit of interaction analysis and use high-dimensional data reduction and functional data analysis techniques to develop a novel functional regression model to collectively test interactions between all possible pairs of single nucleotide polymorphisms (SNPs) within two genome regions. By intensive simulations, we demonstrate that the functional regression models for interaction analysis of the quantitative trait have the correct type 1 error rates and a much better ability to detect interactions than the current pairwise interaction analysis. The proposed method was applied to exome sequence data from the NHLBI's Exome Sequencing Project (ESP) and the CHARGE-S study. We discovered 27 pairs of genes showing significant interactions after applying the Bonferroni correction (P-values < 4.58 × 10(-10)) in the ESP, and 11 were replicated in the CHARGE-S study.
Mihaylov, Tz.; Trendafilova, N.; Kostova, I.; Georgieva, I.; Bauer, G.
2006-09-01
The binding mode of coumarin-3-carboxylic acid (HCCA) to La(III) is elucidated at the experimental and theoretical level. The complexation ability of the deprotonated ligand (CCA-) to La(III) is studied using elemental analysis, DTA and TGA data as well as FTIR, 1H NMR and 13C NMR spectra. The experimental data suggest the complex formula La(CCA)2(NO3)(H2O)2. B3LYP, BHLYP, B3P86, B3PW91, PW91P86 and MPW1PW91 functionals are tested for geometry and frequency calculations of the neutral ligand, and all of them show bond length deviations below 1%. The B3LYP/6-31G(d) level combined with a large quasi-relativistic effective core potential for lanthanum is selected to describe the molecular, electronic and vibrational structures as well as the conformational behavior of HCCA, CCA- and the La-CCA complex. The metal-ligand binding mode is predicted through molecular modeling and energy estimation of different La-CCA structures. The calculated atomic charges and the bonding orbital polarizations point to strong ionic metal-ligand bonding in the La-CCA complex and insignificant donor-acceptor interaction. Detailed vibrational analysis of the HCCA, CCA- and La(CCA)2(NO3)(H2O)2 systems, based on both calculated and experimental frequencies, confirms the suggested metal-ligand binding mode.
Energy Technology Data Exchange (ETDEWEB)
Friebel, Daniel; Viswanathan, Venkatasubramanian; Miller, Daniel James; Anniyev, Toyli; Ogasawara, Hirohito; Larsen, Ask Hjorth; O'Grady, Christopher P.; Norskov, Jens K.; Nilsson, Anders
2012-05-31
We have studied the effect of nanostructuring in Pt monolayer model electrocatalysts on a Rh(111) single-crystal substrate on the adsorption strength of chemisorbed species. In situ high energy resolution fluorescence detection X-ray absorption spectroscopy at the Pt L(3) edge reveals characteristic changes of the shape and intensity of the 'white-line' due to chemisorption of atomic hydrogen (H(ad)) at low potentials and oxygen-containing species (O/OH(ad)) at high potentials. On a uniform, two-dimensional Pt monolayer grown by Pt evaporation in ultrahigh vacuum, we observe a significant destabilization of both H(ad) and O/OH(ad) due to strain and ligand effects induced by the underlying Rh(111) substrate. When Pt is deposited via a wet-chemical route, by contrast, three-dimensional Pt islands are formed. In this case, strain and Rh ligand effects are balanced with higher local thickness of the Pt islands as well as higher defect density, shifting H and OH adsorption energies back toward pure Pt. Using density functional theory, we calculate O adsorption energies and corresponding local ORR activities for fcc 3-fold hollow sites with various local geometries that are present in the three-dimensional Pt islands.
Energy Technology Data Exchange (ETDEWEB)
Mihaylov, Tz. [Institute of General and Inorganic Chemistry, Bulgarian Academy of Sciences, 1113 Sofia (Bulgaria); Trendafilova, N. [Institute of General and Inorganic Chemistry, Bulgarian Academy of Sciences, 1113 Sofia (Bulgaria)], E-mail: ntrend@svr.igic.bas.bg; Kostova, I. [Department of Chemistry, Faculty of Pharmacy, Medical University, Sofia 1000 (Bulgaria); Georgieva, I. [Institute of General and Inorganic Chemistry, Bulgarian Academy of Sciences, 1113 Sofia (Bulgaria); Bauer, G. [Institute of Chemical Technologies and Analytics, Technical University of Vienna, Vienna A-1060 (Austria)
2006-09-11
The binding mode of coumarin-3-carboxylic acid (HCCA) to La(III) is elucidated at the experimental and theoretical levels. The complexation ability of the deprotonated ligand (CCA{sup -}) to La(III) is studied using elemental analysis, DTA and TGA data as well as FTIR, {sup 1}H NMR and {sup 13}C NMR spectra. The experimental data suggest the complex formula La(CCA){sub 2}(NO{sub 3})(H{sub 2}O){sub 2}. The B3LYP, BHLYP, B3P86, B3PW91, PW91P86 and MPW1PW91 functionals are tested for geometry and frequency calculations of the neutral ligand, and all of them show bond length deviations below 1%. The B3LYP/6-31G(d) level, combined with a large quasi-relativistic effective core potential for lanthanum, is selected to describe the molecular, electronic and vibrational structures as well as the conformational behavior of HCCA, CCA{sup -} and the La-CCA complex. The metal-ligand binding mode is predicted through molecular modeling and energy estimation of different La-CCA structures. The calculated atomic charges and the bonding orbital polarizations point to strong ionic metal-ligand bonding in the La-CCA complex and insignificant donor-acceptor interaction. Detailed vibrational analysis of the HCCA, CCA{sup -} and La(CCA){sub 2}(NO{sub 3})(H{sub 2}O){sub 2} systems, based on both calculated and experimental frequencies, confirms the suggested metal-ligand binding mode.
Modeling the Effect of Polychromatic Light in Quantitative Absorbance Spectroscopy
Smith, Rachel; Cantrell, Kevin
2007-01-01
A laboratory experiment is conducted to give students practical experience with the principles of electronic absorbance spectroscopy. This straightforward approach creates a powerful tool for exploring many aspects of quantitative absorbance spectroscopy.
DFT modeling on the suitable crown ether architecture for complexation with Cs⁺ and Sr²⁺ metal ions.
Boda, Anil; Ali, Sk Musharaf; Shenoi, Madhav R K; Rao, Hanmanth; Ghosh, Sandip K
2011-05-01
Crown ether architectures were explored for the inclusion of Cs(+) and Sr(2+) ions within the nano-cavity of macrocyclic crown ethers using density functional theory (DFT) modeling. The modeling was undertaken to gain insight into the mechanism of complexation of Cs(+) and Sr(2+) ions with these ligands observed experimentally. The selectivity of Cs(+) and Sr(2+) ions for a particular size of crown ether is explained on the basis of the fit and binding interaction of the guest ions in the narrow cavity of the crown ethers. Although Di-Benzo-18-Crown-6 (DB18C6) and Di-Benzo-21-Crown-7 (DB21C7) provide suitable host architectures for Sr(2+) and Cs(+) ions respectively, since these ion sizes match the host cavities, consideration of the binding interaction together with cavity matching shows that both DB18C6 and DB21C7 prefer the Sr(2+) ion. The calculated values of the binding enthalpy of the Cs metal ion with the crown ethers were found to be in good agreement with the experimental results. The gas phase binding enthalpy for the Sr(2+) ion with crown ether was higher than for the Cs metal ion. The ion exchange reaction between Sr and Cs always favors selection of the Sr metal ion, both in the gas phase and in micro-solvated systems; the gas phase selectivity remains unchanged in the micro-solvated phase. We have demonstrated the effect of micro-solvation on the binding interaction between the metal ions (Cs(+) and Sr(2+)) and the macrocyclic crown ethers by considering micro-solvated metal ions with up to eight water molecules directly attached to the metal ion, and also by considering two water molecules attached to the metal-ion-crown ether complexes. A metal ion exchange reaction involving the replacement of strontium ion in metal ion-crown ether complexes with cesium ion contained within a metal ion-water cluster serves as the basis for modeling binding preferences in solution. The calculated O-H stretching frequency of the H(2)O molecule in micro-solvated metal ion-crown complexes is more red-shifted in comparison to hydrated
DEFF Research Database (Denmark)
ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto
2015-01-01
We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLAN) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLAN semantics based on discrete-time Markov chains. The Maude implementation of PFLAN is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average...
Herd immunity and pneumococcal conjugate vaccine: a quantitative model.
Haber, Michael; Barskey, Albert; Baughman, Wendy; Barker, Lawrence; Whitney, Cynthia G; Shaw, Kate M; Orenstein, Walter; Stephens, David S
2007-07-20
Invasive pneumococcal disease in older children and adults declined markedly after introduction in 2000 of the pneumococcal conjugate vaccine for young children. An empirical quantitative model was developed to estimate the herd (indirect) effects on the incidence of invasive disease among persons ≥5 years of age induced by vaccination of young children with 1, 2, or ≥3 doses of the pneumococcal conjugate vaccine, Prevnar (PCV7), containing serotypes 4, 6B, 9V, 14, 18C, 19F and 23F. From 1994 to 2003, cases of invasive pneumococcal disease were prospectively identified in Georgia Health District-3 (eight metropolitan Atlanta counties) by Active Bacterial Core surveillance (ABCs). From 2000 to 2003, vaccine coverage levels of PCV7 for children aged 19-35 months in Fulton and DeKalb counties (of Atlanta) were estimated from the National Immunization Survey (NIS). Based on incidence data and the estimated average number of doses received by 15 months of age, a Poisson regression model was fit, describing the trend in invasive pneumococcal disease in groups not targeted for vaccination (i.e., adults and older children) before and after the introduction of PCV7. Highly significant declines in all the serotypes contained in PCV7 in all unvaccinated populations (5-19, 20-39, 40-64, and >64 years) from 2000 to 2003 were found under the model. No significant change in incidence was seen from 1994 to 1999, indicating rates were stable prior to vaccine introduction. Among unvaccinated persons ≥5 years of age, the modeled incidence of disease caused by PCV7 serotypes as a group dropped 38.4%, 62.0%, and 76.6% for 1, 2, and 3 doses, respectively, received on average by the population of children by the time they are 15 months of age. Incidence of serotypes 14 and 23F had consistent significant declines in all unvaccinated age groups. In contrast, the herd immunity effects on vaccine-related serotype 6A incidence were inconsistent. Increasing trends of non
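The dose-dependent declines quoted above behave like a log-linear (Poisson regression) dose effect. A minimal sketch of that relationship, using a hypothetical per-dose rate ratio rather than a coefficient fitted in the study:

```python
def percent_decline(rate_ratio_per_dose, doses):
    """Percent decline in incidence implied by a log-linear dose effect,
    incidence proportional to exp(beta * doses), as in a Poisson regression.
    rate_ratio_per_dose = exp(beta) is a hypothetical illustrative value."""
    return 100.0 * (1.0 - rate_ratio_per_dose ** doses)

# With a hypothetical rate ratio of 0.62 per average dose, the implied
# declines for 1, 2, and 3 doses are roughly 38%, 62%, and 76%.
declines = [percent_decline(0.62, d) for d in (1, 2, 3)]
```

The rate ratio 0.62 is chosen for illustration only; the study's reported declines (38.4%, 62.0%, 76.6%) are the modeled estimates themselves.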
Liquid Methanol from DFT and DFT/MM Molecular Dynamics Simulations.
Sieffert, Nicolas; Bühl, Michael; Gaigeot, Marie-Pierre; Morrison, Carole A
2013-01-08
We present a comparative study of computational protocols for the description of liquid methanol from ab initio molecular dynamics simulations, in view of further applications directed at the modeling of chemical reactivity of organic and organometallic molecules in (explicit) methanol solution. We tested density functional theory molecular dynamics (DFT-MD) in its Car-Parrinello Molecular Dynamics (CPMD) and Quickstep/Born-Oppenheimer MD (CP2K) implementations, employing six popular density functionals with and without corrections for dispersion interactions (namely BLYP, BLYP-D2, BLYP-D3, BP86, BP86-D2, and B97-D2). Selected functionals were also tested within the two QM/MM frameworks implemented in CPMD and CP2K, considering one DFT molecule in a MM environment (described by the OPLS model of methanol). The accuracy of each of these methods at describing the bulk liquid phase under ambient conditions was evaluated by analyzing their ability to reproduce (i) the average structure of the liquid, (ii) the mean squared displacement of methanol molecules, (iii) the average molecular dipole moments, and (iv) the gas-to-liquid red-shift observed in their infrared spectra. We show that it is difficult to find a DFT functional that describes these four properties equally well within full DFT-MD simulations, despite a good overall performance of B97-D2. On the other hand, DFT/MM-MD provides a satisfactory description of the solvent-solute polarization effects with all functionals and thus represents a good alternative for the modeling of methanol solutions in the context of chemical reactivity in an explicit environment.
A Quantitative Model-Driven Comparison of Command Approaches in an Adversarial Process Model
2007-06-01
12th ICCRTS, "Adapting C2 to the 21st Century". A Quantitative Model-Driven Comparison of Command Approaches in an Adversarial Process Model. Lenahan identified metrics and techniques for adversarial C2 process modeling. We intend to further that work by developing a set of adversarial process ...
Liu, Hui; Liu, Hongxia; Sun, Ping; Wang, Zunyao
2014-11-01
The bioconcentration factors (BCFs) of 58 polychlorinated biphenyls (PCBs) were modeled by quantitative structure-activity relationship (QSAR) analysis using density functional theory (DFT), position of Cl substitution (PCS) and comparative molecular field analysis (CoMFA) methods. All the models were robust and predictive; in particular, the best CoMFA model was significant, with a correlation coefficient (R(2)) of 0.926, a cross-validation correlation coefficient (Q(2)) of 0.821 and a root mean square error of estimation (RMSE) of 0.235. The results indicate that the electrostatic descriptors play the more significant role in the BCFs of PCBs. Additionally, a test set was used to compare the predictive ability of our models with others, and the results show that our CoMFA model presents the lowest RMSE. Thus, the models obtained in this work can be used to predict the BCFs of the remaining 152 PCBs without available experimental values.
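The fit statistics reported for these QSAR models (R², RMSE) can be computed as below; this is a generic sketch, not the CoMFA software's own routine, and Q² is the same R²-style statistic evaluated on cross-validated predictions:

```python
import numpy as np

def r2_rmse(y_obs, y_pred):
    """Coefficient of determination and root mean squared error between
    observed and predicted activities (e.g. log BCF values)."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_obs - y_pred) ** 2)      # residual sum of squares
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = float(np.sqrt(np.mean((y_obs - y_pred) ** 2)))
    return r2, rmse
```

A perfect model gives R² = 1 and RMSE = 0; the abstract's external test set serves exactly this comparative purpose.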
ASSETS MANAGEMENT - A CONCEPTUAL MODEL DECOMPOSING VALUE FOR THE CUSTOMER AND A QUANTITATIVE MODEL
Directory of Open Access Journals (Sweden)
Susana Nicola
2015-03-01
In this paper we describe the application of a modeling framework, the so-called Conceptual Model Decomposing Value for the Customer (CMDVC), in a footwear industry case study, to ascertain the usefulness of this approach. Value networks were used to identify the participants, both tangible and intangible deliverables/endogenous and exogenous assets, and the analysis of their interactions as the indication for an adequate value proposition. The quantitative model of benefits and sacrifices, using the Fuzzy AHP method, enables the discussion of how the CMDVC can be applied and used in the enterprise environment and provided new relevant relations between perceived benefits (PBs.
Nazarski, Ryszard B; Wałejko, Piotr; Witkowski, Stanisław
2016-03-21
Overall conformations of both anomeric per-O-acetylated glucosyl derivatives of 2,2,5,7,8-pentamethylchroman-6-ol were studied in the context of their high flexibility, on the basis of NMR spectra in CDCl3 solution and related DFT calculation results. A few computational protocols were used, including diverse density functional/basis set combinations with a special emphasis on accounting (at various steps of the study) for the impact of intramolecular London-dispersion (LD) effects on geometries and relative Gibbs free energies (ΔGs) of different conformers coexisting in solution. The solvent effect was simulated by an IEF-PCM approach with the UFF radii; its other variants, including the use of the recently introduced IDSCRF radii, were employed for a few compact B3LYP-GD3BJ optimized structures showing one small imaginary vibrational frequency. The advantage of using IDSCRF radii for such purposes was shown. Of the four tested DFT methods, only the application of the B3LYP/6-31+G(d,p) approximation afforded ensembles of 7-8 single forms for which population-average values of computed NMR parameters (δH, δC and some (n)JHH data) were in close agreement with those measured experimentally; binuclear (δH,C 1 : 1) correlations, rH,C(2) = 0.9998. The associated individual ΔG values, corrected for LD interactions by applying Grimme's DFT-D3 terms, afforded relative contents of different contributors to the analyzed conformational families in much better agreement with pertinent DFT/NMR-derived populations (i.e., both data sets were found to be practically equal within the limits of estimated errors) than those calculated from dispersion uncorrected ΔGs. All these main findings were confirmed by additional results obtained at the MP2 level of theory. Various other aspects of the study such as the crystal vs. solution structure, gg/gt rotamer ratio, diagnostic (de)shielding effects, dihydrogen C-H···H-C contacts, and doubtful applicability of some specialized
Simakov, Andrei N; Chacón, L
2008-09-05
Dissipation-independent, or "fast", magnetic reconnection has been observed computationally in Hall magnetohydrodynamics (MHD) and predicted analytically in electron MHD. However, a quantitative analytical theory of reconnection valid for arbitrary ion inertial lengths, d{i}, has been lacking and is proposed here for the first time. The theory describes a two-dimensional reconnection diffusion region, provides expressions for reconnection rates, and derives a formal criterion for fast reconnection in terms of dissipation parameters and d{i}. It also confirms the electron MHD prediction that both open and elongated diffusion regions allow fast reconnection, and reveals strong dependence of the reconnection rates on d{i}.
2011-05-18
... COMMISSION NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection... issued for public comment a document entitled: NUREG/CR-XXXX, ``Development of Quantitative Software... development of regulatory guidance for using risk information related to digital systems in the...
Computing m DFT's over GF(q) with one DFT over GF(q^m)
Hong, Jonathan; Vetterli, Martin
1993-01-01
Over the field of complex numbers, it is well-known that if the input is real then it is possible to compute 2 real DFT's with one complex DFT. We extend the result to finite fields and show how to compute m DFT's over GF(q) with one DFT over GF(q^m).
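Over the complex field, the packing trick the abstract alludes to looks like this (a sketch of the classical real-input case only, not of the finite-field GF(q^m) construction):

```python
import numpy as np

def two_real_dfts(x, y):
    """Compute the DFTs of two real sequences x and y of equal length
    using a single complex FFT, via the packing z = x + i*y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    Z = np.fft.fft(x + 1j * y)                 # one complex DFT of z
    Zc = np.conj(Z[(-np.arange(n)) % n])       # conj(Z[-k mod n])
    X = 0.5 * (Z + Zc)                         # Hermitian part -> DFT of x
    Y = -0.5j * (Z - Zc)                       # anti-Hermitian part -> DFT of y
    return X, Y
```

The separation relies on the Hermitian symmetry of real-input DFTs; the paper's contribution is the analogous factorization over GF(q).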
Photon-tissue interaction model for quantitative assessment of biological tissues
Lee, Seung Yup; Lloyd, William R.; Wilson, Robert H.; Chandra, Malavika; McKenna, Barbara; Simeone, Diane; Scheiman, James; Mycek, Mary-Ann
2014-02-01
In this study, we describe a direct fit photon-tissue interaction model to quantitatively analyze reflectance spectra of biological tissue samples. The model rapidly extracts biologically-relevant parameters associated with tissue optical scattering and absorption. This model was employed to analyze reflectance spectra acquired from freshly excised human pancreatic pre-cancerous tissues (intraductal papillary mucinous neoplasm (IPMN), a common precursor lesion to pancreatic cancer). Compared to previously reported models, the direct fit model improved fit accuracy and speed. Thus, these results suggest that such models could serve as real-time, quantitative tools to characterize biological tissues assessed with reflectance spectroscopy.
Modular System Modeling for Quantitative Reliability Evaluation of Technical Systems
Directory of Open Access Journals (Sweden)
Stephan Neumann
2016-01-01
In modern times, it is necessary to offer reliable products to match the statutory directives concerning product liability and the high expectations of customers for durable devices. Furthermore, to maintain a high competitiveness, engineers need to know as accurately as possible how long their product will last and how to influence the life expectancy without expensive and time-consuming testing. As the components of a system are responsible for the system reliability, this paper introduces and evaluates calculation methods for life expectancy of common machine elements in technical systems. Subsequently, a method for the quantitative evaluation of the reliability of technical systems is proposed and applied to a heavy-duty power shift transmission.
Improvement of the ID model for quantitative network data
DEFF Research Database (Denmark)
Sørensen, Peter Borgen; Damgaard, Christian Frølund; Dupont, Yoko Luise
2015-01-01
Many interactions are often poorly registered or even unobserved in empirical quantitative networks. Hence, the output of the statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks[1]. This presentation will illustrate the application of the ID method based on a data set which consists of counts of visits by 152 pollinator species to 16 plant species. The method is based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1), pi ... reproduce the high number of zero valued cells in the data set and mimic the sampling distribution. [1] Sørensen et al., Journal of Pollination Ecology, 6(18), 2011, pp. 129-139.
Quantitative assessment of meteorological and tropospheric Zenith Hydrostatic Delay models
Zhang, Di; Guo, Jiming; Chen, Ming; Shi, Junbo; Zhou, Lv
2016-09-01
Tropospheric delay has always been an important issue in GNSS/DORIS/VLBI/InSAR processing. The most commonly used empirical models for the determination of tropospheric Zenith Hydrostatic Delay (ZHD), including three meteorological models and two empirical ZHD models, are carefully analyzed in this paper. The meteorological models are UNB3m, GPT2 and GPT2w, while the ZHD models are Hopfield and Saastamoinen. By reference to in-situ meteorological measurements and ray-traced ZHD values at 91 globally distributed radiosonde sites, over a four-year period from 2010 to 2013, it is found that there is a strong correlation between the errors of model-derived values and latitude. Specifically, the Saastamoinen model shows a systematic error of about -3 mm. Therefore a modified Saastamoinen model is developed based on the "best average" refractivity constant, and is validated by radiosonde data. Among the different models, the GPT2w and the modified Saastamoinen model perform the best. ZHD values derived from their combination have a mean bias of -0.1 mm and a mean RMS of 13.9 mm. Limitations of the present models are discussed and suggestions for further improvements are given.
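For reference, the classical Saastamoinen ZHD takes surface pressure, latitude, and height; this sketch uses the standard refractivity constant 0.0022768, not the paper's modified "best average" value, which the abstract does not report:

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
    """Classical Saastamoinen Zenith Hydrostatic Delay in metres.
    pressure_hpa: surface pressure in hPa; lat_rad: latitude in radians;
    height_m: orthometric height in metres."""
    # Gravity correction factor for latitude and height
    f = 1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.00000028 * height_m
    return 0.0022768 * pressure_hpa / f

# Example: standard atmosphere at 45 degrees latitude, sea level
zhd = saastamoinen_zhd(1013.25, math.radians(45.0), 0.0)  # about 2.31 m
```

The ~ -3 mm systematic bias found in the paper would correspond to a small adjustment of the leading constant.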
Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...
Quantitative statistical assessment of conditional models for synthetic aperture radar.
DeVore, Michael D; O'Sullivan, Joseph A
2004-02-01
Many applications of object recognition in the presence of pose uncertainty rely on statistical models, conditioned on pose, for observations. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level.
A Quantitative Causal Model Theory of Conditional Reasoning
Fernbach, Philip M.; Erb, Christopher D.
2013-01-01
The authors propose and test a causal model theory of reasoning about conditional arguments with causal content. According to the theory, the acceptability of modus ponens (MP) and affirming the consequent (AC) reflect the conditional likelihood of causes and effects based on a probabilistic causal model of the scenario being judged. Acceptability…
Towards the quantitative evaluation of visual attention models.
Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K
2015-11-01
Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations.
Assessment of Quantitative Precipitation Forecasts from Operational NWP Models (Invited)
Sapiano, M. R.
2010-12-01
Previous work has shown that satellite and numerical model estimates of precipitation have complementary strengths, with satellites having greater skill at detecting convective precipitation events and model estimates having greater skill at detecting stratiform precipitation. This is due in part to the challenges associated with retrieving stratiform precipitation from satellites and the difficulty in resolving sub-grid scale processes in models. These complementary strengths can be exploited to obtain new merged satellite/model datasets, and several such datasets have been constructed using reanalysis data. Whilst reanalysis data are stable in a climate sense, they also have relatively coarse resolution compared to the satellite estimates (many of which are now commonly available at quarter degree resolution) and they necessarily use fixed forecast systems that are not state-of-the-art. An alternative to reanalysis data is to use Operational Numerical Weather Prediction (NWP) model estimates, which routinely produce precipitation with higher resolution and using the most modern techniques. Such estimates have not been combined with satellite precipitation and their relative skill has not been sufficiently assessed beyond model validation. The aim of this work is to assess the information content of the models relative to satellite estimates with the goal of improving techniques for merging these data types. To that end, several operational NWP precipitation forecasts have been compared to satellite and in situ data and their relative skill in forecasting precipitation has been assessed. In particular, the relationship between precipitation forecast skill and other model variables will be explored to see if these other model variables can be used to estimate the skill of the model at a particular time. Such relationships would provide a basis for determining weights and errors of any merged products.
A DFT-based QSAR study on inhibition of human dihydrofolate reductase.
Karabulut, Sedat; Sizochenko, Natalia; Orhan, Adnan; Leszczynski, Jerzy
2016-11-01
Diaminopyrimidine derivatives are frequently used as inhibitors of human dihydrofolate reductase, for example in the treatment of patients whose immune systems are affected by the human immunodeficiency virus. Forty-seven dicyclic and tricyclic potential inhibitors of human dihydrofolate reductase were analyzed using quantitative structure-activity analysis supported by DFT-based and DRAGON-based descriptors. The developed model yielded an RMSE of 1.1 and a correlation coefficient of 0.81. The prediction set was characterized by R(2)=0.60 and RMSE=3.59. Factors responsible for the inhibition process were identified and discussed. The resulting model was validated via cross-validation and a Y-scrambling procedure. From the best model, we found several mass-related descriptors and Sanderson electronegativity-related descriptors that have the best correlations with the investigated inhibitory concentration. These descriptors reflect results from QSAR studies based on characteristics of human dihydrofolate reductase inhibitors.
Quantitative Methods for Comparing Different Polyline Stream Network Models
Energy Technology Data Exchange (ETDEWEB)
Danny L. Anderson; Daniel P. Ames; Ping Yang
2014-04-01
Two techniques for exploring relative horizontal accuracy of complex linear spatial features are described and sample source code (pseudo code) is presented for this purpose. The first technique, relative sinuosity, is presented as a measure of the complexity or detail of a polyline network in comparison to a reference network. We term the second technique longitudinal root mean squared error (LRMSE) and present it as a means for quantitatively assessing the horizontal variance between two polyline data sets representing digitized (reference) and derived stream and river networks. Both relative sinuosity and LRMSE are shown to be suitable measures of horizontal stream network accuracy for assessing quality and variation in linear features. Both techniques have been used in two recent investigations involving extraction of hydrographic features from LiDAR elevation data. One confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes yielded better stream network delineations, based on sinuosity and LRMSE, when using LiDAR-derived DEMs. The other demonstrated a new method of delineating stream channels directly from LiDAR point clouds, without the intermediate step of deriving a DEM, showing that the direct delineation from LiDAR point clouds yielded an excellent and much better match, as indicated by the LRMSE.
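A minimal sketch of the sinuosity measure underlying "relative sinuosity" (the paper's LRMSE longitudinal sampling scheme is not reproduced here):

```python
import math

def sinuosity(polyline):
    """Sinuosity of a polyline: traversed length divided by the
    straight-line distance between its endpoints. Points are (x, y)."""
    length = sum(math.dist(a, b) for a, b in zip(polyline, polyline[1:]))
    straight = math.dist(polyline[0], polyline[-1])
    return length / straight

def relative_sinuosity(derived, reference):
    """Relative sinuosity of a derived polyline with respect to a
    reference polyline, as a ratio of the two sinuosities."""
    return sinuosity(derived) / sinuosity(reference)
```

A value near 1.0 indicates the derived network captures roughly the same level of channel detail as the reference.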
Digital clocks: simple Boolean models can quantitatively describe circadian systems.
Akman, Ozgur E; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J; Ghazal, Peter
2012-09-07
The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day-night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we anticipate
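A toy illustration of the Boolean-logic approach, assuming a hypothetical two-gene activator-repressor loop entrained by a 12 h : 12 h light cycle (this is not one of the paper's fitted clock models):

```python
def step(state, light):
    """One synchronous Boolean update: activator A switches on under
    light or in the absence of repressor R; R follows A with a
    one-step delay (a classic delayed negative feedback motif)."""
    a, r = state
    return (light or not r, a)

def simulate(days=2, state=(False, False)):
    """Simulate the two-gene loop hour by hour under a 12:12 cycle."""
    trace = []
    for hour in range(24 * days):
        light = (hour % 24) < 12   # lights on for the first 12 h of each day
        state = step(state, light)
        trace.append(state)
    return trace
```

Because the state space is finite (here only four states), fitting and exhaustive analysis are tractable, which is the efficiency argument the abstract makes.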
Probabilistic Quantitative Precipitation Forecasting Using Ensemble Model Output Statistics
Scheuerer, Michael
2013-01-01
Statistical post-processing of dynamical forecast ensembles is an essential component of weather forecasting. In this article, we present a post-processing method that generates full predictive probability distributions for precipitation accumulations based on ensemble model output statistics (EMOS). We model precipitation amounts by a generalized extreme value distribution that is left-censored at zero. This distribution permits modelling precipitation on the original scale without prior transformation of the data. A closed form expression for its continuous rank probability score can be derived and permits computationally efficient model fitting. We discuss an extension of our approach that incorporates further statistics characterizing the spatial variability of precipitation amounts in the vicinity of the location of interest. The proposed EMOS method is applied to daily 18-h forecasts of 6-h accumulated precipitation over Germany in 2011 using the COSMO-DE ensemble prediction system operated by the Germa...
Quantitative modeling of degree-degree correlation in complex networks
Niño, Alfonso
2013-01-01
This paper presents an approach to the modeling of degree-degree correlation in complex networks. Thus, a simple function, Δ(k', k), describing specific degree-to-degree correlations is considered. The function is well suited to graphically depict assortative and disassortative variations within networks. To quantify degree correlation variations, the joint probability distribution between nodes with arbitrary degrees, P(k', k), is used. Introduction of the end-degree probability function as a basic variable allows using group theory to derive mathematical models for P(k', k). In this form, an expression, representing a family of seven models, is constructed with the needed normalization conditions. Applied to Δ(k', k), this expression predicts a nonuniform distribution of degree correlation in networks, organized in two assortative and two disassortative zones. This structure is actually observed in a set of four modeled, technological, social, and biological networks. A regression study performed...
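Degree-degree correlation of the kind captured by P(k', k) is commonly summarized by the Pearson assortativity coefficient over edges; a self-contained sketch (equivalent in spirit to the coefficient available in network libraries such as NetworkX):

```python
import numpy as np

def degree_assortativity(edges):
    """Pearson correlation between the degrees at either end of each
    edge of an undirected graph; > 0 is assortative, < 0 disassortative."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # Each undirected edge contributes both (ku, kv) and (kv, ku).
    pairs = [(deg[u], deg[v]) for u, v in edges]
    pairs += [(deg[v], deg[u]) for u, v in edges]
    x, y = zip(*pairs)
    return float(np.corrcoef(x, y)[0, 1])
```

A star graph is maximally disassortative (hubs attach only to leaves), giving a coefficient of -1.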
Quantitative modeling of selective lysosomal targeting for drug design
DEFF Research Database (Denmark)
Trapp, Stefan; Rosania, G.; Horobin, R.W.
2008-01-01
Lysosomes are acidic organelles and are involved in various diseases, the most prominent being malaria. Accumulation of molecules in the cell by diffusion from the external solution into the cytosol, lysosomes and mitochondria was calculated with the Fick–Nernst–Planck equation. The cell model considers....... This demonstrates that the cell model can be a useful tool for the design of effective lysosome-targeting drugs with minimal off-target interactions....
Pargett, Michael; Umulis, David M
2013-07-15
Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data; however, much of the data available, or even acquirable, are not quantitative. Data that are not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended both to inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.
Sonneveld, B.G.J.S.; Keyzer, M.A.; Stroosnijder, L.
2011-01-01
This paper tests the candidacy of one qualitative response model and two quantitative models for a nationwide water erosion hazard assessment in Ethiopia. After a descriptive comparison of model characteristics the study conducts a statistical comparison to evaluate the explanatory power of the mode
Statistical analysis of probabilistic models of software product lines with quantitative constraints
DEFF Research Database (Denmark)
Beek, M.H. ter; Legay, A.; Lluch Lafuente, Alberto
2015-01-01
We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra...
A suite of models to support the quantitative assessment of spread in pest risk analysis
Robinet, C.; Kehlenbeck, H.; Werf, van der W.
2012-01-01
In the frame of the EU project PRATIQUE (KBBE-2007-212459 Enhancements of pest risk analysis techniques) a suite of models was developed to support the quantitative assessment of spread in pest risk analysis. This dataset contains the model codes (R language) for the four models in the suite. Three
Gauthier, Sébastien; Caro, Bertrand; Robin-Le Guen, Françoise; Bhuvanesh, Nattamai; Gladysz, John A; Wojcik, Laurianne; Le Poul, Nicolas; Planchat, Aurélien; Pellegrin, Yann; Blart, Errol; Jacquemin, Denis; Odobel, Fabrice
2014-08-07
In this joint experimental-theoretical work, we present the synthesis and optical and electrochemical characterization of five new bis-acetylide platinum complex dyes end capped with diphenylpyranylidene moieties, as well as their performances in dye-sensitized solar cells (DSCs). Theoretical calculations relying on Time-Dependent Density Functional Theory (TD-DFT) and a range-separated hybrid show a very good match with experimental data and allow us to quantify the charge-transfer character of each compound. The photoconversion efficiency obtained reaches 4.7% for 8e (see TOC Graphic) with the tri-thiophene segment, which is among the highest efficiencies reported for platinum complexes in DSCs.
Obot, I. B.; Kaya, Savaş; Kaya, Cemal; Tüzün, Burak
2016-06-01
DFT and Monte Carlo simulations were performed on three Schiff bases, namely 4-(4-bromophenyl)-N′-(4-methoxybenzylidene)thiazole-2-carbohydrazide (BMTC), 4-(4-bromophenyl)-N′-(2,4-dimethoxybenzylidene)thiazole-2-carbohydrazide (BDTC) and 4-(4-bromophenyl)-N′-(4-hydroxybenzylidene)thiazole-2-carbohydrazide (BHTC), recently studied as corrosion inhibitors for steel in acid medium. Electronic parameters relevant to their inhibition activity, such as EHOMO, ELUMO, energy gap (ΔE), hardness (η), softness (σ), absolute electronegativity (χ), proton affinity (PA) and nucleophilicity (ω), were computed and discussed. Monte Carlo simulations were applied to search for the most stable configurations and adsorption energies for the interaction of the inhibitors with the Fe (110) surface. The theoretical data obtained are in most cases in agreement with experimental results.
The power of a good idea: quantitative modeling of the spread of ideas from epidemiological models
Bettencourt, Luís M. A.; Cintrón-Arias, Ariel; Kaiser, David I.; Castillo-Chávez, Carlos
2005-01-01
The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to estimate quantitatively population average parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. We estimate the effectiveness of adoption of the idea in the thr...
Process of quantitative evaluation of validity of rock cutting model
Directory of Open Access Journals (Sweden)
Jozef Futó
2012-12-01
Full Text Available Most complex technical systems, including the rock cutting process, are very difficult to describe mathematically, given the limits of human knowledge in the natural sciences and technology. A confrontation between the conception (model) and the real system often arises in the investigation of the rock cutting process. Identification means determining a system, based on its inputs and outputs within a specified system class, such that the determined system is equivalent to the explored system. In the case of rock cutting, the qualities of the model derived from a conventional energy theory of rock cutting are compared to the qualities of non-standard models obtained by scanning the acoustic signal, an accompanying effect of the surroundings in the rock cutting process, through calculated characteristics of the acoustic signal. The paper focuses on optimization using the specific cutting energy and the possibility of optimization using the accompanying acoustic signal, namely one of its characteristics, the volume of the total signal M, representing the result of the system identification.
Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.
Beattie, Bradley J; Thorek, Daniel L J; Schmidtlein, Charles R; Pentlow, Keith S; Humm, John L; Hielscher, Andreas H
2012-01-01
There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods currently in routine use.
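For context on why some radionuclides produce no CR directly: Cerenkov emission requires the particle to travel faster than the phase velocity of light in the medium, i.e. β > 1/n (the Frank-Tamm condition). A minimal sketch of the resulting kinetic-energy threshold, assuming the standard relativistic relation and an electron rest energy of 511 keV (the function name is illustrative, not from the paper):

```python
import math

def cerenkov_threshold_kev(n, rest_mass_kev=511.0):
    """Kinetic-energy threshold (keV) for Cerenkov emission by a charged
    particle (default: electron) in a medium of refractive index n.
    Emission requires v > c/n, i.e. beta > 1/n."""
    beta = 1.0 / n        # minimal speed, as a fraction of c
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return rest_mass_kev * (gamma - 1.0)

# water (n ~ 1.33): threshold is ~264 keV, so beta emitters whose
# electrons stay below this energy yield no direct CR
print(round(cerenkov_threshold_kev(1.33)))
```

A denser medium (larger n) lowers the threshold, which is why the choice of radionuclide and medium together determine CR production efficiency.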
Exploiting linkage disequilibrium in statistical modelling in quantitative genomics
DEFF Research Database (Denmark)
Wang, Lei
Alleles at two loci are said to be in linkage disequilibrium (LD) when they are correlated or statistically dependent. Genomic prediction and gene mapping rely on the existence of LD between genetic markers and causal variants of complex traits. In the first part of the thesis, a novel method...... to quantify and visualize local variation in LD along chromosomes is described, and applied to characterize LD patterns at the local and genome-wide scale in three Danish pig breeds. In the second part, different ways of taking LD into account in genomic prediction models are studied. One approach is to use...... the recently proposed antedependence models, which treat neighbouring marker effects as correlated; another approach involves use of haplotype block information derived using the program Beagle. The overall conclusion is that taking LD information into account in genomic prediction models potentially improves...
Quantitative phase-field modeling for wetting phenomena.
Badillo, Arnoldo
2015-03-01
A new phase-field model is developed for studying partial wetting. The introduction of a third phase representing a solid wall allows for the derivation of a new surface tension force that accounts for energy changes at the contact line. In contrast to other multi-phase-field formulations, the present model does not need the introduction of surface energies for the fluid-wall interactions. Instead, all wetting properties are included in a unique parameter known as the equilibrium contact angle θ_eq. The model requires the solution of a single elliptic phase-field equation, which, coupled to conservation laws for mass and linear momentum, admits the existence of steady and unsteady compact solutions (compactons). The representation of the wall by an additional phase field allows for the study of wetting phenomena on flat, rough, or patterned surfaces in a straightforward manner. The model contains only two free parameters: a measure of interface thickness, W, and β, which is used in the definition of the mixture viscosity μ = μ_l ϕ_l + μ_v ϕ_v + β μ_l ϕ_w. The former controls the convergence towards the sharp interface limit and the latter the energy dissipation at the contact line. Simulations on rough surfaces show that by taking values for β higher than 1, the model can reproduce, on average, the effects of pinning events of the contact line during its dynamic motion. The model is able to capture, in good agreement with experimental observations, many physical phenomena fundamental to wetting science, such as the wetting transition on micro-structured surfaces and droplet dynamics on solid substrates.
First principles pharmacokinetic modeling: A quantitative study on Cyclosporin
DEFF Research Database (Denmark)
Mošat', Andrej; Lueshen, Eric; Heitzig, Martina
2013-01-01
renal and hepatic clearances, elimination half-life, and mass transfer coefficients, to establish drug biodistribution dynamics in all organs and tissues. This multi-scale model satisfies first principles and conservation of mass, species and momentum. Prediction of organ drug bioaccumulation...... as a function of cardiac output, physiology, pathology or administration route may be possible with the proposed PBPK framework. Successful application of our model-based drug development method may lead to more efficient preclinical trials, accelerated knowledge gain from animal experiments, and shortened time-to-market...
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
A temperature-constrained cascade correlation network (TCCCN), a back-propagation neural network (BP), and multiple linear regression (MLR) models were applied to quantitative structure-activity relationship (QSAR) modeling, on the basis of a set of 35 nitrobenzene derivatives and their acute toxicities. The structural quantum-chemical descriptors were obtained from density functional theory (DFT). Stepwise multiple regression analysis was performed and the model was obtained. The value of the calibration correlation coefficient R is 0.925, and the value of the cross-validation correlation coefficient is 0.87. The standard error S = 0.308 and the cross-validated (leave-one-out) standard error Scv = 0.381. Principal component analysis (PCA) was carried out for parameter selection. RMS errors for the training set via TCCCN and BP are 0.067 and 0.095, respectively, and RMS errors for the testing set via TCCCN and BP are 0.090 and 0.111, respectively. The results show that TCCCN performs better than BP and MLR.
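The leave-one-out cross-validation behind statistics such as Scv above can be illustrated for a single-descriptor linear model. The sketch below is plain Python and a deliberate simplification of the paper's multi-descriptor stepwise MLR; it computes the predicted residual sum of squares (PRESS), from which a cross-validated correlation coefficient follows as R²_cv = 1 − PRESS/SS_tot:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def loo_press(xs, ys):
    """Leave-one-out PRESS: refit with each sample held out and
    accumulate the squared prediction error on that sample."""
    press = 0.0
    for i in range(len(xs)):
        x_tr, y_tr = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        slope, intercept = fit_line(x_tr, y_tr)
        press += (ys[i] - (slope * xs[i] + intercept)) ** 2
    return press

# a perfectly linear toy set has zero cross-validated error;
# any noise makes PRESS strictly positive
press = loo_press([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```

Because each held-out sample is predicted by a model that never saw it, PRESS-based statistics penalize overfitting in a way the calibration R cannot, which is why QSAR studies report both.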
A quantitative magnetospheric model derived from spacecraft magnetometer data
Mead, G. D.; Fairfield, D. H.
1975-01-01
The model is derived by making least squares fits to magnetic field measurements from four Imp satellites. It includes four sets of coefficients, representing different degrees of magnetic disturbance as determined by the range of Kp values. The data are fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the effects of seasonal north-south asymmetries are contained. The expansion is divergence-free, but unlike the usual scalar potential expansion, the model contains a nonzero curl representing currents distributed within the magnetosphere. The latitude at the earth separating open polar cap field lines from field lines closing on the day side is about 5 deg lower than that determined by previous theoretically derived models. At times of high Kp, additional high-latitude field lines extend back into the tail. Near solstice, the separation latitude can be as low as 75 deg in the winter hemisphere. The average northward component of the external field is much smaller than that predicted by theoretical models; this finding indicates the important effects of distributed currents in the magnetosphere.
Quantitative Research: A Dispute Resolution Model for FTC Advertising Regulation.
Richards, Jef I.; Preston, Ivan L.
Noting the lack of a dispute mechanism for determining whether an advertising practice is truly deceptive without generating the costs and negative publicity produced by traditional Federal Trade Commission (FTC) procedures, this paper proposes a model based upon early termination of the issues through jointly commissioned behavioral research. The…
Essays on Quantitative Marketing Models and Monte Carlo Integration Methods
R.D. van Oest (Rutger)
2005-01-01
textabstractThe last few decades have led to an enormous increase in the availability of large detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for addr
Bordon, Jure; Moskon, Miha; Zimic, Nikolaj; Mraz, Miha
2015-01-01
Quantitative modelling of biological systems has become an indispensable computational approach in the design of novel and analysis of existing biological systems. However, kinetic data that describe the system's dynamics need to be known in order to obtain relevant results with the conventional modelling techniques. These data are often hard or even impossible to obtain. Here, we present a quantitative fuzzy logic modelling approach that is able to cope with unknown kinetic data and thus produce relevant results even though kinetic data are incomplete or only vaguely defined. Moreover, the approach can be used in combination with existing state-of-the-art quantitative modelling techniques in only certain parts of the system, i.e., where kinetic data are missing. The case study of the approach proposed here is performed on the model of the three-gene repressilator.
Predictions of Physicochemical Properties of Ionic Liquids with DFT
Directory of Open Access Journals (Sweden)
Karl Karu
2016-07-01
Full Text Available Nowadays, the density functional theory (DFT)-based high-throughput computational approach is becoming more efficient and, thus, attractive for finding advanced materials for electrochemical applications. In this work, we illustrate how theoretical models, computational methods, and informatics techniques can be put together to form a simple DFT-based high-throughput computational workflow for predicting physicochemical properties of room-temperature ionic liquids. The developed workflow has been used for screening a set of 48 ionic pairs and for analyzing the gathered data. The predicted relative electrochemical stabilities, ionic charges and dynamic properties of the investigated ionic liquids are discussed in the light of their potential practical applications.
Modeling the Earth's radiation belts. A review of quantitative data based electron and proton models
Vette, J. I.; Teague, M. J.; Sawyer, D. M.; Chan, K. W.
1979-01-01
The evolution of quantitative models of the trapped radiation belts is traced to show how the knowledge of the various features has developed, or been clarified, by performing the required analysis and synthesis. The Starfish electron injection introduced problems in the time behavior of the inner zone, but this residue decayed away, and a good model of this depletion now exists. The outer zone electrons were handled statistically by a log normal distribution such that above 5 Earth radii there are no long term changes over the solar cycle. The transition region between the two zones presents the most difficulty, therefore the behavior of individual substorms as well as long term changes must be studied. The latest corrections to the electron environment based on new data are outlined. The proton models have evolved to the point where the solar cycle effect at low altitudes is included. Trends for new models are discussed; the feasibility of predicting substorm injections and solar wind high-speed streams make the modeling of individual events a topical activity.
Quantitative comparisons of satellite observations and cloud models
Wang, Fang
Microwave radiation interacts directly with precipitating particles and can therefore be used to compare microphysical properties found in models with those found in nature. Lower frequencies (minimization procedures but produce different CWP and RWP. The similarity in Tb can be attributed to comparable Total Water Path (TWP) between the two retrievals while the disagreement in the microphysics is caused by their different degrees of constraint of the cloud/rain ratio by the observations. This situation occurs frequently and takes up 46.9% in the one month 1D-Var retrievals examined. To attain better constrained cloud/rain ratios and improved retrieval quality, this study suggests the implementation of higher microwave frequency channels in the 1D-Var algorithm. Cloud Resolving Models (CRMs) offer an important pathway to interpret satellite observations of microphysical properties of storms. High frequency microwave brightness temperatures (Tbs) respond to precipitating-sized ice particles and can, therefore, be compared with simulated Tbs at the same frequencies. By clustering the Tb vectors at these frequencies, the scene can be classified into distinct microphysical regimes, in other words, cloud types. The properties for each cloud type in the simulated scene are compared to those in the observation scene to identify the discrepancies in microphysics within that cloud type. A convective storm over the Amazon observed by the Tropical Rainfall Measuring Mission (TRMM) is simulated using the Regional Atmospheric Modeling System (RAMS) in a semi-ideal setting, and four regimes are defined within the scene using cluster analysis: the 'clear sky/thin cirrus' cluster, the 'cloudy' cluster, the 'stratiform anvil' cluster and the 'convective' cluster. The relationship between Tb difference of 37 and 85 GHz and Tb at 85 GHz is found to contain important information of microphysical properties such as hydrometeor species and size distributions. Cluster
Wu, Zujian; Pang, Wei; Coghill, George M
2015-01-01
Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework can learn the relationships between biochemical reactants qualitatively and make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
Quantitative Risk Modeling of Fire on the International Space Station
Castillo, Theresa; Haught, Megan
2014-01-01
The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.
Software applications toward quantitative metabolic flux analysis and modeling.
Dandekar, Thomas; Fieselmann, Astrid; Majeed, Saman; Ahmed, Zeeshan
2014-01-01
Metabolites and their pathways are central for adaptation and survival. Metabolic modeling elucidates in silico all the possible flux pathways (flux balance analysis, FBA) and predicts the actual fluxes under a given situation, further refinement of these models is possible by including experimental isotopologue data. In this review, we initially introduce the key theoretical concepts and different analysis steps in the modeling process before comparing flux calculation and metabolite analysis programs such as C13, BioOpt, COBRA toolbox, Metatool, efmtool, FiatFlux, ReMatch, VANTED, iMAT and YANA. Their respective strengths and limitations are discussed and compared to alternative software. While data analysis of metabolites, calculation of metabolic fluxes, pathways and their condition-specific changes are all possible, we highlight the considerations that need to be taken into account before deciding on a specific software. Current challenges in the field include the computation of large-scale networks (in elementary mode analysis), regulatory interactions and detailed kinetics, and these are discussed in the light of powerful new approaches.
TD-DFT and DFT/MRCI study of electronic excitations in Violaxanthin and Zeaxanthin
Götze, Jan Philipp; Thiel, Walter
2013-03-01
We report vibrationally broadened Franck-Condon (FC) spectra of Violaxanthin (Vx) and Zeaxanthin (Zx) for the lowest-energy 1Ag → 1Bu band that arises from the bright HOMO → LUMO single-electron excitation. Geometries were optimized using standard (1Ag) and time-dependent (1Bu) density functional theory (DFT) at the (TD-)CAM-B3LYP/6-31G(d) level, both in the gas phase and in acetone using a polarizable continuum model (PCM). DFT/MRCI multireference calculations were performed at the optimized (TD)-CAM-B3LYP structures to evaluate the energies of doubly excited states that are not accessible to linear response TD-DFT theory. The FC spectra were calculated using the time-independent (TI) scheme. The calculated spectra of Vx and Zx are very similar, with a red shift of about 0.1 eV for Zx relative to Vx, which is in agreement with the experimental data. The predicted spectral peaks of Vx and Zx deviate from experiment by less than 0.1 eV when performing the calculations in the gas phase. In the presence of acetone (PCM model), there are larger deviations so that a state specific correction scheme needs to be applied, which accounts for non-equilibrium solvent relaxation. The 1Ag → 1Bu vertical absorption energies and the corresponding vertical fluorescence energies from TD-CAM-B3LYP and DFT/MRCI agree reasonably well. The DFT/MRCI absorption and fluorescence energies for the doubly excited 2Ag and 2Bu states are found to be rather sensitive to the underlying geometry, in particular to the bond length alternation in the polyene chain. In acetone (PCM), Vx and Zx show little bond alternation, and thus the doubly excited Bu state becomes the lowest excited Bu state. (TD)-CAM-B3LYP appears to be suitable for generating realistic geometries for higher-level calculations in such molecules.
Dal, H A
2012-01-01
Statistically analyzing Johnson UBVR observations of V1285 Aql during three observing seasons, both the activity level and the behavior of the star are discussed with respect to the obtained results. We also discuss the out-of-flare variation due to rotational modulation. Eighty-three flares were detected in the U-band observations of season 2006. First, depending on statistical analyses using the independent samples t-test, the flares were divided into two classes, fast and slow flares. According to the results of the test, there is a difference of about 73 s between the flare-equivalent durations of slow and fast flares. This difference should correspond to the one predicted by the theoretical models. Second, using the one-phase exponential association function, the distribution of the flare-equivalent durations versus the flare total durations was modeled. Analyzing the model, some parameters such as plateau, half-life values, mean average of the flare-equivalent durations, maximum flare rise, and total durati...
Spine curve modeling for quantitative analysis of spinal curvature.
Hay, Ori; Hershkovitz, Israel; Rivlin, Ehud
2009-01-01
Spine curvature and posture are important for sustaining a healthy back. Incorrect spine configuration can add strain to muscles and put stress on the spine, leading to low back pain (LBP). We propose a new method for analyzing spine curvature in 3D, using CT imaging. The proposed method is based on two novel concepts: the spine curvature is derived from the spinal canal centerline, and evaluation of the curve is carried out against a model based on healthy individuals. We show results of curvature analysis of a healthy population, pathological (scoliosis) patients, and patients having nonspecific chronic LBP.
Modeling Cancer Metastasis using Global, Quantitative and Integrative Network Biology
DEFF Research Database (Denmark)
Schoof, Erwin; Erler, Janine
... cancer networks using Network Biology. Technologies key to this, such as Mass Spectrometry (MS), Next-Generation Sequencing (NGS) and High-Content Screening (HCS), are briefly described. In Chapter II, we cover how signaling networks and mutational data can be modeled in order to gain a better ... phosphorylation dynamics in a given biological sample. In Chapter III, we move into Integrative Network Biology, where, by combining two fundamental technologies (MS & NGS), we can obtain more in-depth insights into the links between cellular phenotype and genotype. Article 4 describes the proof ...
Quantitative Model of microRNA-mRNA interaction
Noorbakhsh, Javad; Lang, Alex; Mehta, Pankaj
2012-02-01
MicroRNAs are short RNA sequences that regulate gene expression and protein translation by binding to mRNA. Experimental data reveals the existence of a threshold linear output of protein based on the expression level of microRNA. To understand this behavior, we propose a mathematical model of the chemical kinetics of the interaction between mRNA and microRNA. Using this model we have been able to quantify the threshold linear behavior. Furthermore, we have studied the effect of internal noise, showing the existence of an intermediary regime where the expression level of mRNA and microRNA has the same order of magnitude. In this crossover regime the mRNA translation becomes sensitive to small changes in the level of microRNA, resulting in large fluctuations in protein levels. Our work shows that chemical kinetics parameters can be quantified by studying protein fluctuations. In the future, studying protein levels and their fluctuations can provide a powerful tool to study the competing endogenous RNA hypothesis (ceRNA), in which mRNA crosstalk occurs due to competition over a limited pool of microRNAs.
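The threshold-linear protein output described above can be reproduced with a minimal mutual-degradation (titration) model of mRNA-microRNA kinetics. The sketch below is illustrative only: the rate constants and the assumption that the bound complex is simply degraded are ours, not parameters taken from the study.

```python
def free_mrna(beta_m, beta_u, gamma=1.0, k_on=50.0, dt=1e-3, steps=50_000):
    """Integrate a minimal mRNA/microRNA titration model to steady state.

    beta_m, beta_u: transcription rates of mRNA and microRNA
    gamma: linear degradation rate (same for both species, for simplicity)
    k_on: binding rate; the bound complex is assumed to be degraded
    Returns the steady-state free mRNA level (protein output is taken to be
    proportional to free mRNA).
    """
    m = u = 0.0
    for _ in range(steps):
        bind = k_on * m * u          # mutual degradation via complex formation
        m += dt * (beta_m - gamma * m - bind)
        u += dt * (beta_u - gamma * u - bind)
    return m
```

With strong binding (`k_on` large), free mRNA stays near zero while microRNA production exceeds mRNA production, then grows roughly linearly with `beta_m` above the threshold `beta_m = beta_u`, reproducing the threshold-linear response.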
Letort, Veronique; Cournède, Paul-Henry; De Reffye, Philippe; Courtois, Brigitte; 10.1093/aob/mcm197
2010-01-01
Background and Aims: Prediction of phenotypic traits from new genotypes under untested environmental conditions is crucial to build simulations of breeding strategies to improve target traits. Although the plant response to environmental stresses is characterized by both architectural and functional plasticity, recent attempts to integrate biological knowledge into genetics models have mainly concerned specific physiological processes or crop models without architecture, and thus may prove limited when studying genotype x environment interactions. Consequently, this paper presents a simulation study introducing genetics into a functional-structural growth model, which gives access to more fundamental traits for quantitative trait loci (QTL) detection and thus to promising tools for yield optimization. Methods: The GreenLab model was selected as a reasonable choice to link growth model parameters to QTL. Virtual genes and virtual chromosomes were defined to build a simple genetic model that drove the settings ...
A quantitative and dynamic model for plant stem cell regulation.
Directory of Open Access Journals (Sweden)
Florian Geier
Plants maintain pools of totipotent stem cells throughout their entire life. These stem cells are embedded within specialized tissues called meristems, which form the growing points of the organism. The shoot apical meristem of the reference plant Arabidopsis thaliana is subdivided into several distinct domains, which execute diverse biological functions, such as tissue organization, cell proliferation and differentiation. The number of cells required for growth and organ formation changes over the course of a plant's life, while the structure of the meristem remains remarkably constant. Thus, regulatory systems must be in place that allow for an adaptation of cell proliferation within the shoot apical meristem while maintaining the organization at the tissue level. To advance our understanding of this dynamic tissue behavior, we measured domain sizes as well as cell division rates of the shoot apical meristem under various environmental conditions that cause adaptations in meristem size. Based on our results we developed a mathematical model that explains the observed changes by a cell-pool-size-dependent regulation of cell proliferation and differentiation, and that correctly predicts CLV3 and WUS over-expression phenotypes. While the model shows stem cell homeostasis under constant growth conditions, it predicts a variation in stem cell number under changing conditions. Consistent with our experimental data, this behavior is correlated with variations in cell proliferation. We therefore investigated different signaling mechanisms that could stabilize stem cell number despite variations in cell proliferation. Our results shed light on the dynamic constraints of stem cell pool maintenance in the shoot apical meristem of Arabidopsis under different environmental conditions and developmental states.
Application of non-quantitative modelling in the analysis of a network warfare environment
CSIR Research Space (South Africa)
Veerasamy, N
2008-07-01
... of the various interacting components, a model to better understand the complexity of a network warfare environment would be beneficial. Non-quantitative modelling is a useful method for better characterizing the field due to the rich ideas that can be generated...
Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van
1999-01-01
A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by
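The EM-facilitated mixture-model idea can be illustrated with a generic two-component, equal-variance Gaussian mixture fitted by expectation-maximization. This is a textbook sketch, not the authors' QTL-specific likelihood for selectively genotyped outbred populations.

```python
import math, random

def em_two_gaussians(data, iters=100):
    """Fit a two-component, equal-variance Gaussian mixture by EM."""
    mu1, mu2 = min(data), max(data)          # crude initialisation
    sigma, w = 1.0, 0.5
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        r = []
        for x in data:
            p1 = w * math.exp(-(x - mu1) ** 2 / (2 * sigma ** 2))
            p2 = (1 - w) * math.exp(-(x - mu2) ** 2 / (2 * sigma ** 2))
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate parameters from the responsibilities
        n1 = sum(r)
        n2 = len(data) - n1
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        var = sum(ri * (x - mu1) ** 2 + (1 - ri) * (x - mu2) ** 2
                  for ri, x in zip(r, data)) / len(data)
        sigma = math.sqrt(var)
        w = n1 / len(data)
    return mu1, mu2, sigma, w
```

In the QTL setting the components correspond to alternative QTL genotypes; here the mixture is fitted to ordinary synthetic data to show the EM mechanics.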
Quantitative hardware prediction modeling for hardware/software co-design
Meeuws, R.J.
2012-01-01
Hardware estimation is an important factor in Hardware/Software Co-design. In this dissertation, we present the Quipu Modeling Approach, a high-level quantitative prediction model for HW/SW Partitioning using statistical methods. Our approach uses linear regression between software complexity metric
Due to a shortage of available phosphorus (P) loss data sets, simulated data from a quantitative P transport model could be used to evaluate a P-index. However, the model would need to accurately predict the P loss data sets that are available. The objective of this study was to compare predictions ...
Douma, J.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Roques, A.; Werf, van der W.
2015-01-01
The aim of this report is to provide EFSA with probabilistic models for quantitative pathway analysis of plant pest introduction for the EU territory through non-edible plant products or plants. We first provide a conceptualization of two types of pathway models. The individual based PM simulates an
Desai, Priyanka Subhash
Rheology properties are sensitive indicators of molecular structure and dynamics. The relationship between rheology and polymer dynamics is captured in the constitutive model, which, if accurate and robust, would greatly aid molecular design and polymer processing. This dissertation is thus focused on building accurate and quantitative constitutive models that can help predict linear and non-linear viscoelasticity. In this work, we have used a multi-pronged approach based on the tube theory, coarse-grained slip-link simulations, and advanced polymeric synthetic and characterization techniques, to confront some of the outstanding problems in entangled polymer rheology. First, we modified simple tube based constitutive equations in extensional rheology and developed functional forms to test the effect of Kuhn segment alignment on a) tube diameter enlargement and b) monomeric friction reduction between subchains. We, then, used these functional forms to model extensional viscosity data for polystyrene (PS) melts and solutions. We demonstrated that the idea of reduction in segmental friction due to Kuhn alignment is successful in explaining the qualitative difference between melts and solutions in extension as revealed by recent experiments on PS. Second, we compiled literature data and used it to develop a universal tube model parameter set and prescribed their values and uncertainties for 1,4-PBd by comparing linear viscoelastic G' and G" mastercurves for 1,4-PBds of various branching architectures. The high frequency transition region of the mastercurves superposed very well for all the 1,4-PBds irrespective of their molecular weight and architecture, indicating universality in high frequency behavior. Therefore, all three parameters of the tube model were extracted from this high frequency transition region alone. Third, we compared predictions of two versions of the tube model, Hierarchical model and BoB model against linear viscoelastic data of blends of 1,4-PBd
High-response piezoelectricity modeled quantitatively near a phase boundary
Newns, Dennis M.; Kuroda, Marcelo A.; Cipcigan, Flaviu S.; Crain, Jason; Martyna, Glenn J.
2017-01-01
Interconversion of mechanical and electrical energy via the piezoelectric effect is fundamental to a wide range of technologies. The discovery in the 1990s of giant piezoelectric responses in certain materials has therefore opened new application spaces, but the origin of these properties remains a challenge to our understanding. A key role is played by the presence of a structural instability in these materials at compositions near the "morphotropic phase boundary" (MPB) where the crystal structure changes abruptly and the electromechanical responses are maximal. Here we formulate a simple, unified theoretical description which accounts for extreme piezoelectric response, its observation at compositions near the MPB, accompanied by ultrahigh dielectric constant and mechanical compliances with rather large anisotropies. The resulting model, based upon a Landau free energy expression, is capable of treating the important domain engineered materials and is found to be predictive while maintaining simplicity. It therefore offers a general and powerful means of accounting for the full set of signature characteristics in these functional materials including volume conserving sum rules and strong substrate clamping effects.
A quantitative confidence signal detection model: 1. Fitting psychometric functions.
Yi, Yongwoo; Merfeld, Daniel M
2016-04-01
Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. Copyright © 2016 the American Physiological Society.
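As a deliberately generic illustration of psychometric-function fitting, the sketch below fits a cumulative-Gaussian psychometric function to binary forced-choice data by maximum likelihood over a coarse grid. It does not implement the paper's confidence signal detection model; the grid ranges, trial counts, and parameter values are arbitrary.

```python
import math, random

def pf(x, mu, sigma):
    """Cumulative-Gaussian psychometric function: P(positive response | x)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_pf(stimuli, responses):
    """Maximum-likelihood estimate of (mu, sigma) via a coarse grid search."""
    best, best_ll = (0.0, 1.0), -float("inf")
    for mu in (i / 10.0 for i in range(-20, 21)):        # bias:   -2.0 .. 2.0
        for sigma in (j / 10.0 for j in range(1, 31)):   # spread:  0.1 .. 3.0
            ll = 0.0
            for x, r in zip(stimuli, responses):
                p = min(max(pf(x, mu, sigma), 1e-9), 1.0 - 1e-9)
                ll += math.log(p if r else 1.0 - p)
            if ll > best_ll:
                best, best_ll = (mu, sigma), ll
    return best
```

The variance of these estimates at, say, 100 versus 20 trials is what the paper's confidence-augmented method is designed to reduce.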
Toward a quantitative model of metamorphic nucleation and growth
Gaidies, F.; Pattison, D. R. M.; de Capitani, C.
2011-11-01
The formation of metamorphic garnet during isobaric heating is simulated on the basis of the classical nucleation and reaction rate theories and Gibbs free energy dissipation in a multi-component model system. The relative influences of interfacial energy, chemical mobility at the surface of garnet clusters, heating rate and pressure on interface-controlled garnet nucleation and growth kinetics are studied. It is found that the interfacial energy controls the departure from equilibrium required to nucleate garnet if attachment and detachment processes at the surface of garnet limit the overall crystallization rate. The interfacial energy for nucleation of garnet in a metapelite of the aureole of the Nelson Batholith, BC, is estimated to range between 0.03 and 0.3 J/m2 at a pressure of ca. 3,500 bar. This corresponds to a thermal overstep of the garnet-forming reaction of ca. 30°C. The influence of the heating rate on thermal overstepping is negligible. A significant feedback is predicted between chemical fractionation associated with garnet formation and the kinetics of nucleation and crystal growth of garnet, giving rise to its lognormal-shaped crystal size distribution.
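The sensitivity to interfacial energy described above follows from classical nucleation theory, where the barrier for a spherical nucleus scales with the cube of the interfacial energy. The sketch below uses the abstract's 0.03-0.3 J/m2 range but an invented bulk driving force, so the absolute numbers are illustrative only.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def nucleation_barrier(sigma, dg_v):
    """Classical nucleation theory barrier dG* = 16*pi*sigma^3 / (3*dg_v^2).

    sigma: interfacial energy (J/m^2)
    dg_v:  bulk free-energy driving force per unit volume (J/m^3)
    """
    return 16.0 * math.pi * sigma ** 3 / (3.0 * dg_v ** 2)

def relative_nucleation_rate(sigma, dg_v, temperature):
    """Boltzmann factor exp(-dG*/kT); the kinetic prefactor is omitted."""
    return math.exp(-nucleation_barrier(sigma, dg_v) / (K_B * temperature))
```

A tenfold increase in sigma raises the barrier a thousandfold, which is why the inferred overstepping is so sensitive to the 0.03-0.3 J/m2 range quoted above.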
Impact of implementation choices on quantitative predictions of cell-based computational models
Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.
2017-09-01
'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
A quantitative quantum chemical model of the Dewar-Knott color rule for cationic diarylmethanes
Olsen, Seth
2012-04-01
We document the quantitative manifestation of the Dewar-Knott color rule in a four-electron, three-orbital state-averaged complete active space self-consistent field (SA-CASSCF) model of a series of bridge-substituted cationic diarylmethanes. We show that the lowest excitation energies calculated using multireference perturbation theory based on the model are linearly correlated with the development of hole density in an orbital localized on the bridge, and the depletion of pair density in the same orbital. We quantitatively express the correlation in the form of a generalized Hammett equation.
A quantitative model of human DNA base excision repair. I. mechanistic insights
Sokhansanj, Bahrad A.; Rodrigue, Garry R.; Fitch, J. Patrick; Wilson, David M.
2002-01-01
Base excision repair (BER) is a multistep process involving the sequential activity of several proteins that cope with spontaneous and environmentally induced mutagenic and cytotoxic DNA damage. Quantitative kinetic data on single proteins of BER have been used here to develop a mathematical model of the BER pathway. This model was then employed to evaluate mechanistic issues and to determine the sensitivity of pathway throughput to altered enzyme kinetics. Notably, the model predicts conside...
A Quantitative Geochemical Target for Modeling the Formation of the Earth and Moon
Boyce, Jeremy W.; Barnes, Jessica J.; McCubbin, Francis M.
2017-01-01
The past decade has been one of geochemical, isotopic, and computational advances that are bringing the laboratory measurement and computational modeling neighborhoods of the Earth-Moon community into ever closer proximity. We are now, however, in a position to become even better neighbors: modelers can generate testable hypotheses for geochemists, and geochemists can provide quantitative targets for modelers. Here we present a robust example of the latter based on Cl isotope measurements of mare basalts.
Wazzan, Nuha A; Richardson, Patricia R; Jones, Anita C
2010-07-30
In a combined experimental and computational study of a group of para-substituted azobenzenes, the effects of substituents and solvent on the kinetics of thermal cis-to-trans isomerisation have been examined and the success of DFT calculations in predicting kinetic parameters assessed. Mono-substituted species are predicted to isomerise by inversion in both non-polar and polar solvent, whereas for push-pull azobenzenes the mechanism is predicted to change from inversion to rotation on going from non-polar to polar solvent. Computed free energies of activation qualitatively reproduce experimental trends but do not quantitatively predict the kinetics of cis-trans isomerisation. The polarisable continuum model of solvation fails to predict the experimentally observed influence of solvent on the entropy of activation.
Quantitative photoacoustic tomography using forward and adjoint Monte Carlo models of radiance
Hochuli, Roman; Arridge, Simon; Cox, Ben
2016-01-01
Forward and adjoint Monte Carlo (MC) models of radiance are proposed for use in model-based quantitative photoacoustic tomography. A 2D radiance MC model using a harmonic angular basis is introduced and validated against analytic solutions for the radiance in heterogeneous media. A gradient-based optimisation scheme is then used to recover 2D absorption and scattering coefficient distributions from simulated photoacoustic measurements. It is shown that the functional gradients, which are a challenge to compute efficiently using MC models, can be calculated directly from the coefficients of the harmonic angular basis used in the forward and adjoint models. This work establishes a framework for transport-based quantitative photoacoustic tomography that can fully exploit emerging highly parallel computing architectures.
A Novel Quantitative Analysis Model for Information System Survivability Based on Conflict Analysis
Institute of Scientific and Technical Information of China (English)
WANG Jian; WANG Huiqiang; ZHAO Guosheng
2007-01-01
This paper describes a novel quantitative analysis model for system survivability based on conflict analysis, which provides a direct view of the survivable situation. Based on the three-dimensional state space of the conflict, each player's efficiency matrix on its credible motion set can be obtained. The player with the strongest desire initiates the move, and the overall state transition matrix of the information system may be obtained. In addition, the process of modeling and stability analysis of the conflict can be converted into a Markov analysis, so the resulting occurrence probabilities of each feasible situation help the players quantitatively judge the likelihood of the situations they pursue in the conflict. Compared with existing methods, which are limited to post-hoc explanation of a system's survivable situation, the proposed model is suitable for quantitatively analyzing and forecasting the future development of system survivability. The experimental results show that the model can be effectively applied to quantitative survivability analysis, and it has good application prospects in practice.
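The Markov-analysis step, obtaining occurrence probabilities of the feasible situations from a state transition matrix, can be sketched with a plain power iteration. The 3x3 transition matrix below is invented for illustration and is not from the paper.

```python
def stationary_distribution(P, iters=1000):
    """Long-run occurrence probabilities of each state of a Markov chain.

    P is a row-stochastic transition matrix; repeatedly propagating a uniform
    start vector converges to the stationary distribution for regular
    (irreducible, aperiodic) chains.
    """
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical transition matrix over three feasible conflict situations.
P = [[0.7, 0.2, 0.1],
     [0.3, 0.5, 0.2],
     [0.2, 0.3, 0.5]]
```

The stationary vector plays the role of the "occurring probability of each feasible situation" used to rank the players' pursued situations.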
Exchange interactions and Tc in rhenium-doped silicon: DFT, DFT + U and Monte Carlo calculations.
Wierzbowska, Małgorzata
2012-03-28
Interactions between rhenium impurities in silicon are investigated by means of density functional theory (DFT) and the DFT + U scheme. All couplings between impurities are ferromagnetic except the Re-Re dimers, which in the DFT method are nonmagnetic due to the formation of a chemical bond supported by substantial relaxation of the geometry. The critical temperature is calculated by means of classical Monte Carlo (MC) simulations with the Heisenberg Hamiltonian. The uniform ferromagnetic phase is obtained with the DFT exchange interactions at room temperature for an impurity concentration of 7%. With the DFT + U exchange interactions, ferromagnetic clusters form above room temperature in MC samples containing only 3% Re.
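The classical MC step can be sketched with a Metropolis simulation of the classical Heisenberg Hamiltonian H = -J * sum(S_i . S_j) on a small cubic lattice. This toy uses a single uniform nearest-neighbour coupling J and arbitrary temperatures, not the ab initio Re-Re exchange interactions or dilute-impurity geometry from the study.

```python
import math, random

def heisenberg_mc(L=4, J=1.0, T=0.1, sweeps=200, seed=0):
    """Metropolis MC for a classical Heisenberg ferromagnet on an L^3 lattice.

    Nearest-neighbour coupling J > 0 (ferromagnetic), periodic boundaries,
    spins started fully ordered. Returns |magnetisation| per spin.
    """
    rng = random.Random(seed)
    N = L ** 3
    spins = [(0.0, 0.0, 1.0)] * N

    def neighbours(i):
        x, y, z = i % L, (i // L) % L, i // (L * L)
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            yield ((x+dx) % L) + ((y+dy) % L) * L + ((z+dz) % L) * L * L

    def rand_spin():
        z = rng.uniform(-1.0, 1.0)           # uniform direction on the sphere
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        return (r * math.cos(phi), r * math.sin(phi), z)

    for _ in range(sweeps * N):
        i = rng.randrange(N)
        new, old = rand_spin(), spins[i]
        h = [0.0, 0.0, 0.0]                  # local exchange field
        for j in neighbours(i):
            for k in range(3):
                h[k] += spins[j][k]
        dE = -J * sum((new[k] - old[k]) * h[k] for k in range(3))
        if dE <= 0.0 or rng.random() < math.exp(-dE / T):
            spins[i] = new

    m = [sum(s[k] for s in spins) / N for k in range(3)]
    return math.sqrt(sum(c * c for c in m))
```

Scanning T for the temperature at which the magnetisation collapses gives a Tc estimate, which is the role the MC simulations play in the abstract.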
Wang, Chunkao; Da, Yang
2014-01-01
The traditional quantitative genetics model was used as the unifying approach to derive six existing and new definitions of genomic additive and dominance relationships. The theoretical differences of these definitions were in the assumptions of equal SNP effects (equivalent to across-SNP standardization), equal SNP variances (equivalent to within-SNP standardization), and expected or sample SNP additive and dominance variances. The six definitions of genomic additive and dominance relationships on average were consistent with the pedigree relationships, but had individual genomic specificity and large variations not observed from pedigree relationships. These large variations may allow finding least related genomes even within the same family for minimizing genomic relatedness among breeding individuals. The six definitions of genomic relationships generally had similar numerical results in genomic best linear unbiased predictions of additive effects (GBLUP) and similar genomic REML (GREML) estimates of additive heritability. Predicted SNP dominance effects and GREML estimates of dominance heritability were similar within definitions assuming equal SNP effects or within definitions assuming equal SNP variance, but had differences between these two groups of definitions. We proposed a new measure of genomic inbreeding coefficient based on parental genomic co-ancestry coefficient and genomic additive correlation as a genomic approach for predicting offspring inbreeding level. This genomic inbreeding coefficient had the highest correlation with pedigree inbreeding coefficient among the four methods evaluated for calculating genomic inbreeding coefficient in a Holstein sample and a swine sample.
Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.
2014-04-01
Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)^-1, cardiac output = 3, 5, 8 L min^-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5%, while the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This suggests that
Identifying systematic DFT errors in catalytic reactions
DEFF Research Database (Denmark)
Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs
2015-01-01
Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence...
Understanding electrode materials of rechargeable lithium batteries via DFT calculations
Institute of Scientific and Technical Information of China (English)
Tianran Zhang; Daixin Li; Zhanliang Tao; Jun Chenn
2013-01-01
Rechargeable lithium batteries have achieved rapid advancement and commercialization in the past decade owing to their high capacity and high power density. Different functional materials have been put forward progressively, each possessing distinguishing structural features and electrochemical properties. By virtue of density functional theory (DFT) calculations, we can start from a specific structure to obtain a deep comprehension and accurate prediction of material properties and reaction mechanisms. In this paper, we review the main progress made by DFT calculations on the electrode materials of rechargeable lithium batteries, aiming at a better understanding of the common electrode materials and insights into battery performance. The applications of DFT calculations include crystal structure modeling and stability investigations of delithiated and lithiated phases, average lithium intercalation voltages, prediction of charge distributions and band structures, and kinetic studies of lithium-ion diffusion processes, which can provide an atomic-level understanding of capacity, reaction mechanism, rate capability, and cycling ability. The results obtained from DFT are valuable for revealing the relationship between structure and properties, promoting the design of new electrode materials.
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott; Morris, Richard V.; Ehlmann, Bethany; Dyar, M. Darby
2017-03-01
Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the laser-induced breakdown spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares (PLS) regression, is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
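The sub-model idea can be sketched in miniature with one predictor variable: two least-squares "sub-models" trained on restricted composition ranges and blended linearly across the boundary. This stand-in uses ordinary linear regression rather than PLS, and all data, ranges, and split points are invented.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (stand-in for a sub-model)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def blended_predict(x, low_model, high_model, split=5.0, width=1.0):
    """Blend low- and high-range sub-models across the composition split."""
    a_lo, b_lo = low_model
    a_hi, b_hi = high_model
    w = min(1.0, max(0.0, (x - split + width / 2.0) / width))  # 0 = low model
    return (1.0 - w) * (a_lo * x + b_lo) + w * (a_hi * x + b_hi)

def rmse(pairs):
    """Root mean squared error over (prediction, truth) pairs."""
    return (sum((p - t) ** 2 for p, t in pairs) / len(pairs)) ** 0.5
```

On data whose response changes regime with composition, the blended sub-models track each regime while a single global model must compromise, mirroring the RMSEP improvements reported for the ChemCam calibration.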
Institute of Scientific and Technical Information of China (English)
WANG Ya-lin; MA Jie; GUI Wei-hua; YANG Chun-hua; ZHANG Chuan-fu
2006-01-01
A multi-objective intelligent coordinating optimization strategy based on a qualitative and quantitative synthetic model of the Pb-Zn sintering blending process was proposed to obtain the optimal mixture ratio. Mechanism and neural network quantitative models for predicting compositions, and rule models for expert reasoning, were constructed based on statistical data and empirical knowledge. An expert reasoning method based on these models was proposed to solve the blending optimization problem, including multi-objective optimization for the first blending process and area optimization for the second blending process, and to determine the optimal mixture ratio that meets the requirement of intelligent coordination. The results show that the qualified rates of agglomerate Pb, Zn and S compositions are increased by 7.1%, 6.5% and 6.9%, respectively, and the fluctuation of sintering permeability is reduced by 7.0%, which effectively stabilizes the agglomerate compositions and the permeability.
Dick, Daniel G; Maxwell, Erin E
2015-07-01
The role of niche specialization and narrowing in the evolution and extinction of the ichthyosaurs has been widely discussed in the literature. However, previous studies have concentrated on a qualitative discussion of these variables only. Here, we use the recently developed approach of quantitative ecospace modelling to provide a high-resolution quantitative examination of the changes in dietary and ecological niche experienced by the ichthyosaurs throughout their evolution in the Mesozoic. In particular, we demonstrate that despite recent discoveries increasing our understanding of taxonomic diversity among the ichthyosaurs in the Cretaceous, when viewed from the perspective of ecospace modelling, a clear trend of ecological contraction is visible as early as the Middle Jurassic. We suggest that this ecospace redundancy, if carried through to the Late Cretaceous, could have contributed to the extinction of the ichthyosaurs. Additionally, our results suggest a novel model to explain ecospace change, termed the 'migration model'.
Quantitative Verification of a Force-based Model for Pedestrian Dynamics
Chraibi, Mohcine; Schadschneider, Andreas; Mackens, Wolfgang
2009-01-01
This paper introduces a spatially continuous force-based model for simulating pedestrian dynamics. The main intention of this work is the quantitative description of pedestrian movement through bottlenecks and in corridors. Measurements of flow and density at bottlenecks are presented and compared with empirical data. Furthermore, the fundamental diagram for movement in a corridor is reproduced. The results of the proposed model show good agreement with empirical data.
Human judgment vs. quantitative models for the management of ecological resources.
Holden, Matthew H; Ellner, Stephen P
2016-07-01
Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed
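The gap between model-based and judgment-based harvesting can be sketched with a toy surplus-production fishery: logistic stock growth under a constant annual quota, where the model-optimal sustainable quota is the maximum sustainable yield r*K/4. The parameters, starting stock, and horizon below are invented and far simpler than the age-structured game used in the study.

```python
def total_catch(quota, r=0.5, K=100.0, n0=70.0, years=200):
    """Cumulative catch from a logistic stock under a constant annual quota.

    Each year the quota is taken (capped at the remaining stock), then the
    stock grows logistically; an exhausted stock yields nothing thereafter.
    """
    n, total = n0, 0.0
    for _ in range(years):
        catch = min(quota, n)
        n -= catch
        total += catch
        n += r * n * (1.0 - n / K)   # logistic growth after harvest
    return total

msy_quota = 0.5 * 100.0 / 4.0        # r*K/4 = 12.5, the model-optimal quota
```

Under-harvesting forgoes yield, while over-harvesting collapses the stock and yields even less in the long run; when the model's growth assumptions are wrong, however, the "optimal" quota can itself drive a collapse, which is the risk the abstract highlights.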
A Classifier Model based on the Features Quantitative Analysis for Facial Expression Recognition
Directory of Open Access Journals (Sweden)
Amir Jamshidnezhad
2011-01-01
Full Text Available In recent decades computer technology has developed considerably in the use of intelligent systems for classification. The development of HCI systems depends highly on accurate understanding of emotions. However, facial expressions are difficult to classify with mathematical models because of their natural variability. In this paper, quantitative analysis is used to find the most effective feature movements between the selected facial feature points. The features are therefore extracted not only on the basis of psychological studies, but also with quantitative methods, to raise the accuracy of recognition. In this model, fuzzy logic and a genetic algorithm are used to classify facial expressions; the genetic algorithm is a distinctive attribute of the proposed model, used for tuning membership functions and increasing the accuracy.
Energy Technology Data Exchange (ETDEWEB)
Put, R. [FABI, Department of Analytical Chemistry and Pharmaceutical Technology, Pharmaceutical Institute, Vrije Universiteit Brussel (VUB), Laarbeeklaan 103, B-1090 Brussels (Belgium); Vander Heyden, Y. [FABI, Department of Analytical Chemistry and Pharmaceutical Technology, Pharmaceutical Institute, Vrije Universiteit Brussel (VUB), Laarbeeklaan 103, B-1090 Brussels (Belgium)], E-mail: yvanvdh@vub.ac.be
2007-10-29
In the literature, an increasing interest in quantitative structure-retention relationships (QSRR) can be observed. After a short introduction to QSRR and other strategies proposed to deal with the starting-point selection problem prior to method development in reversed-phase liquid chromatography, a number of interesting papers dealing with QSRR models for reversed-phase liquid chromatography are reviewed. The main focus of this review is on the different modelling methodologies applied and the molecular descriptors used in the QSRR approaches. Besides two semi-quantitative approaches (principal component analysis and decision trees), these methodologies include artificial neural networks, partial least squares, uninformative variable elimination partial least squares, stochastic gradient boosting for tree-based models, random forests, genetic algorithms, multivariate adaptive regression splines, and two-step multivariate adaptive regression splines.
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby
2017-01-01
Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
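The sub-model idea, training separate regressions on limited composition ranges and then blending their predictions, can be sketched in miniature. The 1-D least-squares fits stand in for the PLS sub-models, and the inverse-distance blending weights are an assumed scheme, simpler than ChemCam's actual blending:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (a 1-D stand-in for the
    multivariate PLS sub-models trained on LIBS spectra)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def blend_predict(x, submodels):
    """Blend sub-model predictions, weighting each by inverse distance to
    the centre of its training composition range (assumed weighting)."""
    weights, preds = [], []
    for (a, b), centre in submodels:
        weights.append(1.0 / (abs(x - centre) + 1e-6))
        preds.append(a * x + b)
    return sum(w * p for w, p in zip(weights, preds)) / sum(weights)

# One sub-model per composition range, each paired with its range centre.
low = (fit_line([0, 1, 2, 3], [0, 2, 4, 6]), 1.5)            # low-range targets
high = (fit_line([10, 11, 12, 13], [20, 23, 26, 29]), 11.5)  # high-range targets
```

A target falling inside one sub-model's training range is dominated by that sub-model's prediction, while targets between ranges get a smooth compromise, which is the behaviour the blending step is meant to provide.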
Manganelli, Serena; Benfenati, Emilio; Manganaro, Alberto; Kulkarni, Sunil; Barton-Maclaren, Tara S; Honma, Masamitsu
2016-10-01
Existing Quantitative Structure-Activity Relationship (QSAR) models have limited predictive capabilities for aromatic azo compounds. In this study, 2 new models were built to predict Ames mutagenicity of this class of compounds. The first one made use of descriptors based on simplified molecular input-line entry system (SMILES), calculated with the CORAL software. The second model was based on the k-nearest neighbors algorithm. The statistical quality of the predictions from single models was satisfactory. The performance further improved when the predictions from these models were combined. The prediction results from other QSAR models for mutagenicity were also evaluated. Most of the existing models were found to be good at finding toxic compounds but resulted in many false positive predictions. The 2 new models specific for this class of compounds avoid this problem thanks to a larger set of related compounds as training set and improved algorithms.
Quantitative 3D investigation of Neuronal network in mouse spinal cord model
Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.
2017-01-01
The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.
Amadon, B
2012-02-22
An implementation of full self-consistency over the electronic density in the DFT + DMFT framework on the basis of a plane wave–projector augmented wave (PAW) DFT code is presented. It allows for an accurate calculation of the total energy in DFT + DMFT within a plane wave approach. In contrast to frameworks based on the maximally localized Wannier function, the method is easily applied to f electron systems, such as cerium, cerium oxide (Ce2O3) and plutonium oxide (Pu2O3). In order to have a correct and physical calculation of the energy terms, we find that the calculation of the self-consistent density is mandatory. The formalism is general and does not depend on the method used to solve the impurity model. Calculations are carried out within the Hubbard I approximation, which is fast to solve, and gives a good description of strongly correlated insulators. We compare the DFT + DMFT and DFT + U solutions, and underline the qualitative differences of their converged densities. We emphasize that in contrast to DFT + U, DFT + DMFT does not break the spin and orbital symmetry. As a consequence, DFT + DMFT implies, on top of a better physical description of correlated metals and insulators, a reduced occurrence of unphysical metastable solutions in correlated insulators in comparison to DFT + U.
[Study on temperature correctional models of quantitative analysis with near infrared spectroscopy].
Zhang, Jun; Chen, Hua-cai; Chen, Xing-dan
2005-06-01
The effect of environment temperature on near-infrared spectroscopic quantitative analysis was studied. The temperature correction model was calibrated with 45 wheat samples at different environment temperatures, with temperature as an external variable. The constant-temperature model was calibrated with 45 wheat samples at the same temperature. The predicted results of the two models for the protein content of wheat samples at different temperatures were compared. The results showed that the mean standard error of prediction (SEP) of the temperature correction model was 0.333, whereas the SEP of the constant-temperature (22 degrees C) model increased as the temperature difference grew, reaching 0.602 when the model was used at 4 degrees C. This suggests that the temperature correction model improves analysis precision.
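Including temperature as an external calibration variable can be illustrated with synthetic data. The drift coefficient, band values, and use of plain least squares (rather than the PLS typically used in NIR work) are all illustrative assumptions:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def fit_ols(rows, ys):
    """Least squares via the normal equations; each row carries a
    trailing 1.0 intercept column."""
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(k)]
    return solve(A, b)

# Synthetic wheat data: the measured band value drifts with temperature.
def band_value(protein, temp_c):
    return protein + 0.05 * (temp_c - 22.0)   # assumed drift per deg C

samples = [(float(p), t) for p in range(10, 18) for t in (4.0, 22.0, 40.0)]
rows = [[band_value(p, t), t, 1.0] for p, t in samples]
w = fit_ols(rows, [p for p, _ in samples])          # temperature-corrected model

rows22 = [[band_value(float(p), 22.0), 1.0] for p in range(10, 18)]
w22 = fit_ols(rows22, [float(p) for p in range(10, 18)])  # constant-T model
```

Predicting a sample measured at 4 degrees C, the temperature-corrected model removes the drift while the constant-temperature model carries it as bias, the same qualitative pattern as the SEP comparison in the abstract.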
Quantitative Safety: Linking Proof-Based Verification with Model Checking for Probabilistic Systems
Ndukwu, Ukachukwu
2009-01-01
This paper presents a novel approach for augmenting proof-based verification with performance-style analysis of the kind employed in state-of-the-art model checking tools for probabilistic systems. Quantitative safety properties, usually specified as probabilistic system invariants and modeled in proof-based environments, are evaluated using bounded model checking techniques. Our specific contributions include the statement of a theorem that is central to model checking safety properties of proof-based systems, the establishment of a procedure, and its full implementation in a prototype system (YAGA) which readily transforms a probabilistic model specified in a proof-based environment into its equivalent verifiable PRISM model equipped with reward structures. The reward structures capture the exact interpretation of the probabilistic invariants and can reveal succinct information about the model during experimental investigations. Finally, we demonstrate the novelty of the technique on a probabilistic library cas...
Directory of Open Access Journals (Sweden)
Sorana D. Bolboaca
2009-01-01
Full Text Available Quantitative structure-activity relationship (QSAR) models are used to understand how the structure and activity of chemical compounds relate. In the present study, 37 carboquinone derivatives were evaluated and two different QSAR models were developed using members of the Molecular Descriptors Family (MDF) and the Molecular Descriptors Family on Vertices (MDFV). The usual parameters of regression models and the following estimators were defined and calculated in order to analyze the validity of, and to compare, the models: Akaike's information criteria (three parameters), the Schwarz (or Bayesian) information criterion, the Amemiya prediction criterion, the Hannan-Quinn criterion, the Kubinyi function, Steiger's Z test, and Akaike's weights. The MDF and MDFV models proved to have the same goodness-of-fit estimation ability according to Steiger's Z test. The MDFV model proved to be the best model for the considered carboquinone derivatives according to the defined information and prediction criteria, the Kubinyi function, and Akaike's weights.
Orfanos, Stelios
2010-01-01
In Greek traditional teaching, many significant concepts are introduced in a sequence that does not provide students with all the information required to comprehend them. We consider that understanding concepts and the relations among them is greatly facilitated by the use of modelling tools, taking into account that the modelling process forces students to turn their vague, imprecise ideas into explicit causal relationships. It is not uncommon to find students who are able to solve problems by using complicated relations without getting a qualitative, in-depth grip on them. Researchers have already shown that students often have formal mathematical and physical knowledge without a qualitative understanding of basic concepts and relations. The aim of this communication is to present some results of our investigation into modelling activities related to kinematical concepts. For this purpose, we have used ModellingSpace, an environment especially designed to allow students from eleven to seventeen years old to express their ideas and gradually develop them. ModellingSpace enables students to build their own models and offers the choice of directly observing simulations of real objects and/or the alternative forms of representation (tables of values, graphic representations and bar-charts). In order to answer the questions, the students formulate hypotheses, create models, compare their hypotheses with the representations of their models, and modify or create other models when their hypotheses do not agree with the representations. In traditional teaching, students are trained to use formulas as their main strategy; they often recall formulas in order to use them without an in-depth understanding of them. Students commonly use quantitative reasoning, since it is the kind primarily used in teaching, although it may not be fully understood by them.
Polymorphic ethyl alcohol as a model system for the quantitative study of glassy behaviour
Energy Technology Data Exchange (ETDEWEB)
Fischer, H.E.; Schober, H.; Gonzalez, M.A. [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France); Bermejo, F.J.; Fayos, R.; Dawidowski, J. [Consejo Superior de Investigaciones Cientificas, Madrid (Spain); Ramos, M.A.; Vieira, S. [Universidad Autonoma de Madrid (Spain)
1997-04-01
The nearly universal transport and dynamical properties of amorphous materials or glasses are investigated. Reasonably successful phenomenological models have been developed to account for these properties as well as the behaviour near the glass transition, but quantitative microscopic models have had limited success. One hindrance to these investigations has been the lack of a material which exhibits glass-like properties in more than one phase at a given temperature. This report presents results of neutron-scattering experiments for one such material, ordinary ethyl alcohol, which promises to be a model system for future investigations of glassy behaviour. (author). 8 refs.
Quantitative explanation of circuit experiments and real traffic using the optimal velocity model
Nakayama, Akihiro; Kikuchi, Macoto; Shibata, Akihiro; Sugiyama, Yuki; Tadaki, Shin-ichi; Yukawa, Satoshi
2016-04-01
We have experimentally confirmed that the occurrence of a traffic jam is a dynamical phase transition (Tadaki et al 2013 New J. Phys. 15 103034, Sugiyama et al 2008 New J. Phys. 10 033001). In this study, we investigate whether the optimal velocity (OV) model can quantitatively explain the results of experiments. The occurrence and non-occurrence of jammed flow in our experiments agree with the predictions of the OV model. We also propose a scaling rule for the parameters of the model. Using this rule, we obtain critical density as a function of a single parameter. The obtained critical density is consistent with the observed values for highway traffic.
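The OV model referred to above can be integrated directly. The sketch below assumes the standard hyperbolic-tangent optimal-velocity function and illustrative parameter values (sensitivity, ring length, time step), not the calibrated values from the paper:

```python
import math

def ov_function(h):
    """Optimal velocity as a function of headway h (standard tanh form)."""
    return math.tanh(h - 2.0) + math.tanh(2.0)

def simulate_ring(n_cars, length, a=1.0, dt=0.05, steps=4000):
    """Euler-integrate the OV model dv_i/dt = a(V(h_i) - v_i) for n_cars
    on a circuit of given length, returning the final velocities."""
    x = [i * length / n_cars for i in range(n_cars)]
    v = [ov_function(length / n_cars)] * n_cars   # uniform-flow start
    x[0] += 0.1   # small perturbation; grows into a jam if flow is unstable
    for _ in range(steps):
        headway = [(x[(i + 1) % n_cars] - x[i]) % length for i in range(n_cars)]
        acc = [a * (ov_function(h) - vi) for h, vi in zip(headway, v)]
        v = [max(0.0, vi + ai * dt) for vi, ai in zip(v, acc)]
        x = [(xi + vi * dt) % length for xi, vi in zip(x, v)]
    return v

final_v = simulate_ring(30, 60.0)   # density 0.5 cars per unit length
```

Sweeping the car number at fixed circuit length reproduces the occurrence/non-occurrence boundary for jammed flow that the paper compares against its experiments.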
Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution.
Rohlfs, Rori V; Nielsen, Rasmus
2015-09-01
A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson-Kreitman-Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity for a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence
Incorporation of caffeine into a quantitative model of fatigue and sleep.
Puckeridge, M; Fulcher, B D; Phillips, A J K; Robinson, P A
2011-03-21
A recent physiologically based model of human sleep is extended to incorporate the effects of caffeine on sleep-wake timing and fatigue. The model includes the sleep-active neurons of the hypothalamic ventrolateral preoptic area (VLPO), the wake-active monoaminergic brainstem populations (MA), their interactions with cholinergic/orexinergic (ACh/Orx) input to MA, and circadian and homeostatic drives. We model two effects of caffeine on the brain due to competitive antagonism of adenosine (Ad): (i) a reduction in the homeostatic drive and (ii) an increase in cholinergic activity. By comparing the model output to experimental data, constraints are determined on the parameters that describe the action of caffeine on the brain. In accord with experiment, the ranges of these parameters imply significant variability in caffeine sensitivity between individuals, with caffeine's effectiveness in reducing fatigue being highly dependent on an individual's tolerance, and past caffeine and sleep history. Although there are wide individual differences in caffeine sensitivity and thus in parameter values, once the model is calibrated for an individual it can be used to make quantitative predictions for that individual. A number of applications of the model are examined, using exemplar parameter values, including: (i) quantitative estimation of the sleep loss and the delay to sleep onset after taking caffeine for various doses and times; (ii) an analysis of the system's stable states showing that the wake state during sleep deprivation is stabilized after taking caffeine; and (iii) comparing model output successfully to experimental values of subjective fatigue reported in a total sleep deprivation study examining the reduction of fatigue with caffeine. This model provides a framework for quantitatively assessing optimal strategies for using caffeine, on an individual basis, to maintain performance during sleep deprivation.
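The two modeled effects of caffeine can be caricatured with a much smaller calculation than the full physiologically based system. Everything below, the first-order elimination, the saturating homeostatic drive, and the competitive-attenuation form with its half-life and K50 constants, is an illustrative assumption, not the paper's model:

```python
import math

def caffeine_level(dose_mg, hours_since, half_life_h=5.0):
    """First-order caffeine elimination (a ~5 h half-life is typical)."""
    return dose_mg * 0.5 ** (hours_since / half_life_h)

def effective_homeostatic_drive(hours_awake, dose_mg=0.0, dose_time_h=0.0,
                                mu=1.0, tau_h=18.0, k50_mg=100.0):
    """Homeostatic sleep drive rising toward mu during wake, attenuated by
    caffeine via competitive adenosine antagonism (illustrative form)."""
    h = mu * (1.0 - math.exp(-hours_awake / tau_h))
    c = 0.0
    if dose_mg:
        c = caffeine_level(dose_mg, max(0.0, hours_awake - dose_time_h))
    return h / (1.0 + c / k50_mg)
```

Even this caricature reproduces the qualitative behaviour the abstract describes: fatigue pressure is reduced after a dose and the reduction wears off as caffeine is eliminated.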
The optimal hyperspectral quantitative models for chlorophyll-a of chlorella vulgaris
Cheng, Qian; Wu, Xiuju
2009-09-01
Chlorophyll-a of Chlorella vulgaris has been related to spectral reflectance. Based on hyperspectral measurements of Chlorella vulgaris, the hyperspectral characteristics of Chlorella vulgaris and the optimal hyperspectral quantitative models for chlorophyll-a (Chla) estimation were investigated in an in situ experiment. The results showed that the optimal hyperspectral quantitative model for Chlorella vulgaris was Chla = 180.5 + 1125787(R700)' + 2.4x10^9[(R700)']^2. For Chlorella vulgaris, two reflectance crests appeared around 540 nm and 700 nm, and their locations moved toward longer wavelengths as the Chla concentration increased. The reflectance of Chlorella vulgaris decreased with increasing Chla concentration at 540 nm, but increased at 700 nm.
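Models of this kind take (R700)', the first derivative of reflectance at 700 nm, as input. A minimal sketch of evaluating the abstract's quadratic model from a sampled spectrum, using a central-difference derivative (the derivative scheme and band spacing are assumptions):

```python
def first_derivative(wavelengths, reflectance, target_nm):
    """Central-difference first derivative of reflectance at the band
    closest to target_nm (wavelengths assumed sorted ascending)."""
    i = min(range(1, len(wavelengths) - 1),
            key=lambda k: abs(wavelengths[k] - target_nm))
    return ((reflectance[i + 1] - reflectance[i - 1])
            / (wavelengths[i + 1] - wavelengths[i - 1]))

def chla_estimate(d700):
    """Evaluate the abstract's quadratic model on (R700)'."""
    return 180.5 + 1125787.0 * d700 + 2.4e9 * d700 ** 2

# Example: a locally linear spectrum around 700 nm.
wl = [696.0, 698.0, 700.0, 702.0, 704.0]
refl = [0.10, 0.11, 0.12, 0.13, 0.14]
d700 = first_derivative(wl, refl, 700.0)
```

The large coefficients in the published model reflect how small the reflectance derivative is per nanometre; units of the input spectrum must match those used in the calibration.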
Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa
We propose a business scenario evaluation method using a qualitative-quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on the propagation and combination of the effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and decide the effect on a destination node based on the divided ranges. In combining effects, we decide the effect of each arc using a contribution degree and sum all effects. Through application to practical models, it is confirmed that there are no differences, at the 5% risk rate, between results obtained from quantitative relations and results obtained by the proposed method.
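The propagate-and-combine idea can be sketched as a small Monte Carlo run over an assumed causal chain. The factor ranges, contribution degrees, and noise terms below are invented for illustration, not taken from the paper's practical models:

```python
import random

def monte_carlo_scenario(n_trials=10000, seed=1):
    """Propagate a qualitative factor's effect along two arcs and combine
    by summation, returning a 95% interval (i.e., at the 5% risk rate)."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_trials):
        demand = rng.uniform(-1.0, 1.0)                    # factor in [-1, 1]
        price_effect = 0.6 * demand + rng.gauss(0.0, 0.1)  # contribution 0.6
        cost_effect = -0.3 * demand + rng.gauss(0.0, 0.1)  # contribution -0.3
        outcomes.append(price_effect + cost_effect)        # combine by summing
    outcomes.sort()
    return outcomes[int(0.025 * n_trials)], outcomes[int(0.975 * n_trials)]
```

The sampled interval plays the role of the statistical values the method attaches to qualitatively related business factors.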
From classical genetics to quantitative genetics to systems biology: modeling epistasis.
Directory of Open Access Journals (Sweden)
David L Aylor
2008-03-01
Full Text Available Gene expression data have been used in lieu of phenotype in both classical and quantitative genetic settings. These two disciplines have separate approaches to measuring and interpreting epistasis, the interaction between alleles at different loci. We propose a framework for estimating and interpreting epistasis from a classical experiment that combines the strengths of each approach. A regression analysis step accommodates the quantitative nature of expression measurements by estimating the effect of gene deletions plus any interaction. Effects are selected by significance such that a reduced model describes each expression trait. We show how the resulting models correspond to specific hierarchical relationships between two regulator genes and a target gene. These relationships are the basic units of genetic pathways and genomic system diagrams. Our approach can be extended to analyze data from a variety of experiments, multiple loci, and multiple environments.
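For a 2x2 deletion design, the regression step reduces to contrasts of condition means. A minimal sketch, with invented expression values chosen to show a masking hierarchy (the data and effect names are illustrative, not from the paper):

```python
def epistasis_effects(expr):
    """Estimate deletion effects and their interaction from a 2x2 design.

    expr maps (delA, delB) in {0,1}^2 to a list of expression replicates.
    Effects follow the regression y = b0 + b1*A + b2*B + b3*A*B, where a
    nonzero b3 is the epistatic departure from additivity.
    """
    m = {k: sum(v) / len(v) for k, v in expr.items()}
    b0 = m[(0, 0)]
    b1 = m[(1, 0)] - b0
    b2 = m[(0, 1)] - b0
    b3 = m[(1, 1)] - b0 - b1 - b2
    return b0, b1, b2, b3

# Example: deleting A or B each abolishes target expression, and the double
# deletion looks like either single one -- a hierarchical (pathway) pattern.
expr = {(0, 0): [10.0, 10.0], (1, 0): [2.0, 2.0],
        (0, 1): [2.0, 2.0], (1, 1): [2.0, 2.0]}
```

A significant interaction term of this shape is what the framework maps onto a hierarchical relationship between the two regulators and the target.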
Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model
Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi
2017-09-01
Residues of harmful dyes such as Auramine O (AO) in herb and food products threaten human health, so fast and sensitive detection techniques for these residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used here for the quantitative determination of AO, combined with an improved partial least-squares regression (PLSR) model. The absorbance of herbal samples with different concentrations was obtained by THz time-domain spectroscopy (THz-TDS) in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences between concentrations, provided a clear criterion for input-interval selection, and improved the accuracy of the detection result. The experimental results indicated that the combination of THz spectroscopy and 2DCOS-PLSR is an excellent quantitative analysis method.
Jin, Yan; Huang, Jing-feng; Peng, Dai-liang
2009-04-01
Ecological compensation is becoming one of the key, multidisciplinary issues in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote-sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile conflicting economic, social, and ecological interests.
Institute of Scientific and Technical Information of China (English)
Yoshito Hirata; Koichiro Akakura; Celestia S.Higano; Nicholas Bruchovsky; Kazuyuki Aihara
2012-01-01
If a mathematical model is to be used in the diagnosis, treatment, or prognosis of a disease, it must describe the inherent quantitative dynamics of the state. An ideal candidate disease is prostate cancer, owing to the fact that it is characterized by an excellent biomarker, prostate-specific antigen (PSA), and also by a predictable response to treatment in the form of androgen suppression therapy. Despite a high initial response rate, the cancer will often relapse to a state of androgen independence which no longer responds to manipulations of the hormonal environment. In this paper, we present relevant background information and a quantitative mathematical model that potentially can be used in the optimal management of patients to cope with biochemical relapse as indicated by a rising PSA.
Seliger, Janez; Žagar, Veselko; Apih, Tomaž; Gregorovič, Alan; Latosińska, Magdalena; Olejniczak, Grzegorz Andrzej; Latosińska, Jolanta Natalia
2016-03-31
The polymorphism of anhydrous caffeine (1,3,7-trimethylxanthine; 1,3,7-trimethyl-1H-purine-2,6-(3H,7H)-dione) has been studied by (1)H-(14)N NMR-NQR (Nuclear Magnetic Resonance-Nuclear Quadrupole Resonance) double resonance and pure (14)N NQR (Nuclear Quadrupole Resonance), followed by solid-state computational modelling (Density Functional Theory, supplemented by the Quantum Theory of Atoms in Molecules and the Reduced Density Gradient approach). For the stable (phase II, form β) and metastable (phase I, form α) polymorphs, complete NQR spectra consisting of 12 lines were recorded. The assignment of the signals detected in experiment to particular nitrogen sites was verified with the help of DFT. The shifts of the NQR frequencies, quadrupole coupling constants and asymmetry parameters at each nitrogen site due to the polymorphic transition were evaluated. The strongest shifts were observed at the N(3) site, the smallest at the N(9) site. The commercial pharmaceutical sample was found to contain approximately 20-25% of phase I and 75-80% of phase II. The orientational disorder in phase II yields a local molecular arrangement that mimics that in phase I. Substantial differences in the intermolecular interactions in phases I and II of caffeine were analysed using a computational (DFT/QTAIM/RDG) approach. The analysis of the local environment of each nitrogen nucleus permitted drawing some conclusions on the topology of interactions in both polymorphs. For the most stable orientations in phases I and II, maps of the principal component qz of the EFG tensor and its asymmetry parameter at each point of the molecular system were calculated and visualized. The relevant maps calculated for both phases indicate small variation in electrostatic potential upon the phase change. The small differences between the packings of the two phases only slightly disturb the neighbourhood of the N(1) and N(7) nitrogens, and are thus insignificant from the biological point of view. The composition of two phases in pharmaceutical material
Igor Shuryak; Ekaterina Dadachova
2016-01-01
Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil co...
DEFF Research Database (Denmark)
Han, Xue; Sandels, Claes; Zhu, Kun;
2013-01-01
, comprising distributed generation, active demand and electric vehicles. Subsequently, quantitative analysis was made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation results show that in general the DER deployment brings possibilities to reduce power losses and voltage drops by compensating with power from local generation and optimizing local load profiles.
Danielson, Steven R.; Held, Jason M.; Oo, May; Riley, Rebeccah; Gibson, Bradford W.; Andersen, Julie K.
2011-01-01
Differential cysteine oxidation within mitochondrial Complex I has been quantified in an in vivo oxidative stress model of Parkinson disease. We developed a strategy that incorporates rapid and efficient immunoaffinity purification of Complex I followed by differential alkylation and quantitative detection using sensitive mass spectrometry techniques. This method allowed us to quantify the reversible cysteine oxidation status of 34 distinct cysteine residues out of a total of 130 present in muri...
Toxicity Mechanisms of the Food Contaminant Citrinin: Application of a Quantitative Yeast Model
Amparo Pascual-Ahuir; Elena Vanacloig-Pedros; Markus Proft
2014-01-01
Mycotoxins are important food contaminants and a serious threat for human nutrition. However, in many cases the mechanisms of toxicity for this diverse group of metabolites are poorly understood. Here we apply live cell gene expression reporters in yeast as a quantitative model to unravel the cellular defense mechanisms in response to the mycotoxin citrinin. We find that citrinin triggers a fast and dose dependent activation of stress responsive promoters such as GRE2 or SOD2. More specifical...
Integration of CFD codes and advanced combustion models for quantitative burnout determination
Energy Technology Data Exchange (ETDEWEB)
Javier Pallares; Inmaculada Arauzo; Alan Williams [University of Zaragoza, Zaragoza (Spain). Centre of Research for Energy Resources and Consumption (CIRCE)
2007-10-15
CFD codes and advanced kinetics combustion models are extensively used to predict coal burnout in large utility boilers. Modelling approaches based on CFD codes can accurately solve the fluid-dynamics equations involved in the problem, but this is usually achieved by including simple combustion models. On the other hand, advanced kinetics combustion models can give a detailed description of coal combustion behaviour by using a simplified description of the flow field, usually obtained from a zone-method approach. Both approaches correctly describe general trends in coal burnout, but fail to predict quantitative values. In this paper a new methodology which takes advantage of both approaches is described. In the first instance, CFD solutions were obtained for the combustion conditions in the furnace of the Lamarmora power plant (ASM Brescia, Italy) for a number of different conditions and for three coals. These furnace conditions were then used as inputs for a more detailed chemical combustion model to predict coal burnout. In this model, devolatilization was modelled using a commercial macromolecular network pyrolysis model (FG-DVC). For char oxidation, an intrinsic reactivity approach including thermal annealing, ash inhibition and maceral effects was used. Results from the simulations were compared against plant experimental values, showing reasonable agreement in trends and quantitative values. 28 refs., 4 figs., 4 tabs.
A quantitative model of human DNA base excision repair. I. Mechanistic insights.
Sokhansanj, Bahrad A; Rodrigue, Garry R; Fitch, J Patrick; Wilson, David M
2002-04-15
Base excision repair (BER) is a multistep process involving the sequential activity of several proteins that cope with spontaneous and environmentally induced mutagenic and cytotoxic DNA damage. Quantitative kinetic data on single proteins of BER have been used here to develop a mathematical model of the BER pathway. This model was then employed to evaluate mechanistic issues and to determine the sensitivity of pathway throughput to altered enzyme kinetics. Notably, the model predicts considerably less pathway throughput than observed in experimental in vitro assays. This finding, in combination with the effects of pathway cooperativity on model throughput, supports the hypothesis of cooperation during abasic site repair and between the apurinic/apyrimidinic (AP) endonuclease, Ape1, and the 8-oxoguanine DNA glycosylase, Ogg1. The quantitative model also predicts that for 8-oxoguanine and hydrolytic AP site damage, short-patch Polbeta-mediated BER dominates, with minimal switching to the long-patch subpathway. Sensitivity analysis of the model indicates that the Polbeta-catalyzed reactions have the most control over pathway throughput, although other BER reactions contribute to pathway efficiency as well. The studies within represent a first step in a developing effort to create a predictive model for BER cellular capacity.
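As a rough illustration of how pathway throughput depends on individual step kinetics, the sketch below integrates a linear chain of first-order repair steps with forward Euler. The reduction of BER to three lumped steps and the rate constants are hypothetical, not the parameters of the published model; the sketch only demonstrates the sensitivity-analysis point that the slowest step controls throughput.

```python
def simulate_pathway(rates, t_end=100.0, dt=0.01):
    """Forward-Euler integration of a linear chain of first-order steps;
    rates[i] converts species i into species i+1 (units: 1/min).
    Returns the fraction of the initial damage fully repaired at t_end."""
    y = [1.0] + [0.0] * len(rates)   # y[0] = damaged DNA, y[-1] = repaired
    for _ in range(int(t_end / dt)):
        flux = [k * y[i] for i, k in enumerate(rates)]
        for i, f in enumerate(flux):
            y[i] -= f * dt
            y[i + 1] += f * dt
    return y[-1]

# Hypothetical rate constants: the bottleneck step dominates throughput,
# echoing the finding that the Polbeta-catalyzed reactions have most control.
fast = simulate_pathway([1.0, 1.0, 1.0])          # all steps fast
slow_middle = simulate_pathway([1.0, 0.01, 1.0])  # one bottleneck step
```

Slowing a single intermediate step by two orders of magnitude sharply reduces the repaired fraction at the same end time, while the other steps barely matter.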
Zhou, Yao; Wang, Huixuan; Lin, Wenshuang; Lin, Liqin; Gao, Yixian; Yang, Feng; Du, Mingming; Fang, Weiping; Huang, Jiale; Sun, Daohua; Li, Qingbiao
2013-10-01
A lack of quantitative experimental data and/or kinetic models that could mathematically depict the redox chemistry and the crystallization issue means that the bottom-up formation kinetics of gold nanoparticles (GNPs) remains a challenge. We measured the dynamic regime of GNPs synthesized by l-ascorbic acid (representing a chemical approach) and/or foliar aqueous extract (a biogenic approach) via in situ spectroscopic characterization and established a redox-crystallization model which allows quantitative and separate parameterization of the nucleation and growth processes. The main results were as follows: (I) an efficient approach, i.e., dynamic in situ spectroscopic characterization assisted by the redox-crystallization model, was established for quantitative analysis of the overall formation kinetics of GNPs in solution; (II) formation of GNPs by both the chemical and the biogenic approaches experienced a slow nucleation stage followed by a growth stage that behaved as a mixed-order reaction, and, unlike the chemical approach, the biogenic method involved heterogeneous nucleation; (III) biosynthesis of flaky GNPs was a kinetically controlled process favored by relatively slow redox chemistry; and (IV) although GNP formation consists of two aspects, namely the redox chemistry and the crystallization issue, the latter was the rate-determining event that controls the dynamic regime of the whole physicochemical process.
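The slow-nucleation/autocatalytic-growth behaviour described in result (II) is often summarized by the two-step Finke-Watzky scheme (A → B for nucleation, A + B → 2B for growth). The sketch below is a generic stand-in for that class of kinetics, not the authors' redox-crystallization model, and the rate constants are invented for illustration.

```python
def finke_watzky(k1, k2, a0=1.0, t_end=50.0, dt=0.001):
    """Integrate A -> B (slow nucleation, rate k1) and A + B -> 2B
    (autocatalytic growth, rate k2) by forward Euler.
    Returns a list of (t, b) samples, one per unit of time."""
    a, b = a0, 0.0
    out = []
    for i in range(int(t_end / dt) + 1):
        if i % int(1.0 / dt) == 0:
            out.append((i * dt, b))
        r = (k1 * a + k2 * a * b) * dt  # total conversion this step
        a -= r
        b += r
    return out

# Slow nucleation (k1) followed by fast autocatalytic growth (k2) gives the
# characteristic induction period and sigmoidal rise of the particle fraction.
curve = finke_watzky(k1=1e-3, k2=2.0)
```

The converted fraction stays near zero during the induction period, then rises steeply once enough nuclei exist, which is the qualitative signature the in situ spectroscopic data are fitted against.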
Hrobárik, Peter; Hrobáriková, Veronika; Meier, Florian; Repiský, Michal; Komorovský, Stanislav; Kaupp, Martin
2011-06-09
State-of-the-art relativistic four-component DFT-GIAO-based calculations of (1)H NMR chemical shifts of a series of 3d, 4d, and 5d transition-metal hydrides have revealed significant spin-orbit-induced heavy atom effects on the hydride shifts, in particular for several 4d and 5d complexes. The spin-orbit (SO) effects provide substantial, in some cases even the dominant, contributions to the well-known characteristic high-field hydride shifts of complexes with a partially filled d-shell, and thereby augment the Buckingham-Stephens model of off-center paramagnetic ring currents. In contrast, complexes with a 4d(10) and 5d(10) configuration exhibit large deshielding SO effects on their hydride (1)H NMR shifts. The differences between the two classes of complexes are attributed to the dominance of π-type d-orbitals for the true transition-metal systems compared to σ-type orbitals for the d(10) systems.
Directory of Open Access Journals (Sweden)
D. N. Huntzinger
2010-10-01
Full Text Available Given the large differences between biospheric model estimates of regional carbon exchange, there is a need to understand and reconcile the predicted spatial variability of fluxes across models. This paper presents a set of quantitative tools that can be applied for comparing flux estimates in light of the inherent differences in model formulation. The presented methods include variogram analysis, variable selection, and geostatistical regression. These methods are evaluated in terms of their ability to assess and identify differences in spatial variability in flux estimates across North America among a small subset of models, as well as differences in the environmental drivers that appear to have the greatest control over the spatial variability of predicted fluxes. The examined models are the Simple Biosphere (SiB) 3.0, Carnegie Ames Stanford Approach (CASA), and CASA coupled with the Global Fire Emissions Database (CASA GFEDv2), and the analyses are performed on model-predicted net ecosystem exchange, gross primary production, and ecosystem respiration. Variogram analysis reveals consistent seasonal differences in spatial variability among modeled fluxes at a 1°×1° spatial resolution. However, significant differences are observed in the overall magnitude of the carbon flux spatial variability across models, in both net ecosystem exchange and component fluxes. Results of the variable selection and geostatistical regression analyses suggest fundamental differences between the models in terms of the factors that control the spatial variability of predicted flux. For example, carbon flux is more strongly correlated with percent land cover in CASA GFEDv2 than in SiB or CASA. Some of these factors can be linked back to model formulation, and would have been difficult to identify simply by comparing net fluxes between models. Overall, the quantitative approach presented here provides a set of tools for comparing predicted grid-scale fluxes across
ANTIBACTERIAL ACTIVITIES, DFT AND QSAR STUDIES OF ...
African Journals Online (AJOL)
geometries have been optimized by using density functional theory (DFT) at ... the molecular weight < 500, molar refractivity from 40-130, having hydrogen bond donor ... Cioslowski, J.; Fox, D.J. Gaussian 09, Revision A.01, Gaussian Inc.: ...
Quantitative Regression Models for the Prediction of Chemical Properties by an Efficient Workflow.
Yin, Yongmin; Xu, Congying; Gu, Shikai; Li, Weihua; Liu, Guixia; Tang, Yun
2015-10-01
Rapid safety assessment is increasingly needed for the growing number of chemicals, both in chemical industries and for regulators around the world. Traditional experimental methods can no longer meet this demand. With the development of information technology and the growth of experimental data, in silico modeling has become a practical and rapid alternative for the assessment of chemical properties, especially for the toxicity prediction of organic chemicals. In this study, a quantitative regression workflow was built with KNIME to predict chemical properties. With this regression workflow, quantitative values of chemical properties can be obtained, in contrast to binary-classification or multi-classification models that can only give qualitative results. To illustrate the usage of the workflow, two predictive models were constructed based on datasets of Tetrahymena pyriformis toxicity and aqueous solubility. The q(cv)(2) and q(test)(2) values of 5-fold cross-validation and external validation for both types of models were greater than 0.7, which implies that our models are robust and reliable, and that the workflow is convenient and efficient for the prediction of various chemical properties. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Forward and adjoint radiance Monte Carlo models for quantitative photoacoustic imaging
Hochuli, Roman; Powell, Samuel; Arridge, Simon; Cox, Ben
2015-03-01
In quantitative photoacoustic imaging, the aim is to recover physiologically relevant tissue parameters such as chromophore concentrations or oxygen saturation. Obtaining accurate estimates is challenging due to the non-linear relationship between the concentrations and the photoacoustic images. Nonlinear least squares inversions designed to tackle this problem require a model of light transport, the most accurate of which is the radiative transfer equation. This paper presents a highly scalable Monte Carlo model of light transport that computes the radiance in 2D using a Fourier basis to discretise in angle. The model was validated against a 2D finite element model of the radiative transfer equation, and was used to compute gradients of an error functional with respect to the absorption and scattering coefficients. It was found that adjoint-based gradient calculations were much more robust to inherent Monte Carlo noise than a finite difference approach. Furthermore, the Fourier angular discretisation allowed very efficient gradient calculations as sums of Fourier coefficients. These advantages, along with the high parallelisability of Monte Carlo models, make this approach an attractive candidate as a light model for quantitative inversion in photoacoustic imaging.
TD-DFT Study on Pyrazoline Derivatives
Institute of Scientific and Technical Information of China (English)
[No author listed]
2005-01-01
The molecular structures of the ground state and the first singlet excited state of pyrazoline derivatives are optimized with the DFT B3LYP method and the ab initio "configuration interaction with single excitations" (CIS) method, respectively. The frontier molecular orbital characteristics have been analyzed systematically, and the electronic transition mechanism has been discussed. Electronic spectra are calculated using the TD-DFT method. These results are consistent with experiment.
Kim, Marlene T; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2016-01-01
Publicly available bioassay data often contain errors. Curating massive bioassay data, especially high-throughput screening (HTS) data, for Quantitative Structure-Activity Relationship (QSAR) modeling requires the assistance of automated data curation tools. Using automated data curation tools is beneficial to users, especially those without prior computer skills, because many platforms have been developed and optimized based on standardized requirements. As a result, users do not need to extensively configure the curation tool prior to the application procedure. In this chapter, a freely available automatic tool to curate and prepare HTS data for QSAR modeling purposes will be described.
Ohno, Munekazu
2012-11-01
A quantitative phase-field model is developed for simulating microstructural pattern formation during nonisothermal solidification in dilute multicomponent alloys with arbitrary thermal and solutal diffusivities. By performing a matched asymptotic analysis, it is shown that the present model with antitrapping current terms reproduces the free-boundary problem of interest in the thin-interface limit. Convergence of the simulation outcome with decreasing interface thickness is demonstrated for nonisothermal free dendritic growth in binary alloys and for isothermal and nonisothermal free dendritic growth in a ternary alloy.
PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool
AlTurki, Musab
2011-01-01
Statistical model checking is an attractive formal analysis method for probabilistic systems such as cyber-physical systems, which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.
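At its core, statistical model checking estimates the probability that a random run of the system satisfies a property, using a sample size chosen from a concentration bound. The single-threaded sketch below uses a Hoeffding-based sample-size rule and a toy probabilistic system, both illustrative assumptions; PVeStA's contribution is distributing exactly this kind of run generation across parallel workers.

```python
import math
import random

def required_samples(eps, delta):
    """Hoeffding bound: n >= ln(2/delta) / (2*eps^2) samples guarantee the
    estimate is within eps of the true probability with confidence 1-delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps * eps))

def estimate_probability(sample_run, n, seed=0):
    """Estimate P(property holds) from n independent simulated runs."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if sample_run(rng))
    return hits / n

# Toy system (hypothetical): a run satisfies the property when at least one of
# four independent retries, each succeeding with probability 0.5, succeeds.
def run(rng):
    return any(rng.random() < 0.5 for _ in range(4))

n = required_samples(eps=0.01, delta=0.05)  # samples for +/-0.01 at 95%
p_hat = estimate_probability(run, n)        # true value is 1 - 0.5**4 = 0.9375
```

Parallelization is straightforward because the runs are independent: each worker evaluates a share of the `n` runs with its own seed and the hit counts are summed.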
Monakhova, Yulia B; Mushtakova, Svetlana P
2017-05-01
A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries generally between 95% and 105% were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on analysis of vitamins and caffeine in energy drinks and aromatic hydrocarbons in motor fuel with 10% error. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence/inaccessibility of reference materials.
Kakimoto, Tetsuhiro; Okada, Kinya; Fujitaka, Keisuke; Nishio, Masashi; Kato, Tsuyoshi; Fukunari, Atsushi; Utsumi, Hiroyuki
2015-02-01
Podocytes are an essential component of the renal glomerular filtration barrier, and their injury plays an early and important role in progressive renal dysfunction. This makes quantification of podocyte marker immunoreactivity important for early detection of glomerular histopathological changes. Here we have applied a state-of-the-art automated computational method of glomerulus recognition, which we recently developed, to quantitatively study podocyte markers in a model with selective podocyte injury, namely the rat puromycin aminonucleoside (PAN) nephropathy model. We also retrospectively investigated mRNA expression levels of these markers in glomeruli isolated from the same formalin-fixed, paraffin-embedded kidney samples by laser microdissection. Among the examined podocyte markers, the immunopositive area and mRNA expression level of both podoplanin and synaptopodin were decreased in PAN glomeruli. The immunopositive area of podocin showed a slight decrease in PAN glomeruli, while its mRNA level showed no change. We have also identified a novel podocyte injury marker, β-enolase, which was increased exclusively in podocytes in PAN glomeruli, similarly to another widely used marker, desmin. Thus, we have shown the specific application of a state-of-the-art computational method and retrospective mRNA expression analysis to quantitatively study changes in various podocyte markers. The proposed methods will open new avenues for quantitative elucidation of renal glomerular histopathology. Copyright © 2014 Elsevier GmbH. All rights reserved.
Sensitive quantitative assays for tau and phospho-tau in transgenic mouse models
Acker, Christopher M.; Forest, Stefanie K.; Zinkowski, Ray; Davies, Peter; d’Abramo, Cristina
2012-01-01
Transgenic mouse models have been an invaluable resource in elucidating the complex roles of Aβ and tau in Alzheimer's disease. While many laboratories rely on qualitative or semi-quantitative techniques when investigating tau pathology, we have developed four Low-Tau Sandwich ELISAs that quantitatively assess different epitopes of tau relevant to Alzheimer's disease: total tau, pSer-202, pThr-231, and pSer-396/404. In this study, after comparing our assays to commercially available ELISAs, we demonstrate our assays' high specificity and quantitative capabilities using brain homogenates from tau transgenic (htau, JNPL3) and tau KO mice. All four ELISAs show excellent specificity for mouse and human tau, with no reactivity to tau KO animals. An age-dependent increase of serum tau in both tau transgenic models was also seen. Taken together, these assays are valuable methods to quantify tau and phospho-tau levels in transgenic animals, by examining tau levels in brain and measuring tau as a potential serum biomarker. PMID:22727277
González, Gloria M; Márquez, Jazmín; Treviño-Rangel, Rogelio de J; Palma-Nicolás, José P; Garza-González, Elvira; Ceceñas, Luis A; Gerardo González, J
2013-10-01
Systemic disease is the most severe clinical form of fusariosis, and its treatment poses a challenge due to the refractory response to antifungals. Treatment for murine Fusarium solani infection has been described in models that employ CFU quantitation in organs as a parameter of therapeutic efficacy. However, CFU counts do not precisely reflect the number of cells for filamentous fungi such as F. solani. In this study, we developed a murine model of disseminated fusariosis and compared the fungal burden with two methods: CFU counts and quantitative PCR. ICR and BALB/c mice received an intravenous injection of 1 × 10(7) conidia of F. solani per mouse. On days 2, 5, 7, and 9, mice from each strain were killed. The spleen and kidneys of each animal were removed and evaluated by qPCR and CFU determinations. Results from the CFU assay indicated that the spleen and kidneys had almost the same fungal burden in both BALB/c and ICR mice over the days of evaluation. In the qPCR assay, the spleen and kidney of each mouse strain showed increased fungal burden at each determination throughout the entire experiment. The fungal load determined by the qPCR assay was significantly greater than that determined from CFU measurements of tissue. qPCR could be considered a tool for quantitative evaluation of fungal burden in experimental disseminated F. solani infection.
Directory of Open Access Journals (Sweden)
Oral Oltulu
2007-02-01
Full Text Available This study presents a Quantitative Structure-Activity Relationships (QSAR) study on a pool of 18 bio-active sulfonamide compounds, which includes five acetazolamide derivatives, eight sulfanilamide derivatives and five clinically used sulfonamide drugs, namely acetazolamide, methazolamide, dichlorophenamide, ethoxolamide and dorzolamide. For all the compounds, initial geometry optimizations were carried out with a molecular mechanics (MM) method using the MM force fields. The lowest energy conformations of the compounds obtained by the MM method were further optimized by the Density Functional Theory (DFT) method, employing Becke's three-parameter hybrid functional (B3LYP) and the 6-31G(d) basis set. Molecular descriptors — dipole moment, electronegativity, total energy at 0 K, entropy at 298 K, and HOMO and LUMO energies — obtained from DFT calculations provide valuable information and have a significant role in the assessment of carbonic anhydrase (CA-II) inhibitory activity of the compounds. Using the multiple linear regression technique, several QSAR models have been drawn up with the help of these calculated descriptors and the carbonic anhydrase (CA-II) inhibitory data of the molecules. Among the QSAR models presented in the study, statistically the most significant one is a five-parameter linear equation with a squared correlation coefficient R2 value of ca. 0.94 and a squared cross-validated correlation coefficient R2CV value of ca. 0.85. The results were discussed in light of the main factors that influence the inhibitory activity of the carbonic anhydrase (CA-II) isozyme.
A hierarchical statistical model for estimating population properties of quantitative genes
Directory of Open Access Journals (Sweden)
Wu Rongling
2002-06-01
Full Text Available Abstract Background: Earlier methods for detecting major genes responsible for a quantitative trait rely critically upon a well-structured pedigree in which the segregation pattern of genes exactly follows Mendelian inheritance laws. However, for many outcrossing species, such pedigrees are not available and genes also display population properties. Results: In this paper, a hierarchical statistical model is proposed to monitor the existence of a major gene based on its segregation and transmission across two successive generations. The model is implemented with an EM algorithm to provide maximum likelihood estimates for genetic parameters of the major locus. This new method is successfully applied to identify an additive gene having a large effect on stem height growth of aspen trees. The estimates of population genetic parameters for this major gene can be generalized to the original breeding population from which the parents were sampled. A simulation study is presented to evaluate finite sample properties of the model. Conclusions: A hierarchical model was derived for detecting major genes affecting a quantitative trait based on progeny tests of outcrossing species. The new model takes into account the population genetic properties of genes and is expected to enhance the accuracy, precision and power of gene detection.
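The EM idea behind such a model can be sketched on the simplest case: a two-component normal mixture, where the mixture proportion plays the role of the segregation ratio and the component means are the genotype effects. This is a minimal stand-in with synthetic data and no transmission across generations, not the paper's hierarchical model.

```python
import math
import random

def em_two_normals(xs, iters=200):
    """EM for a two-component normal mixture. Returns (w, mu1, mu2), where w
    is the proportion of the second component (the 'segregation ratio')."""
    mu1, mu2 = min(xs), max(xs)
    s1 = s2 = (max(xs) - min(xs)) / 4 or 1.0
    w = 0.5
    for _ in range(iters):
        # E-step: posterior responsibility of component 2 for each observation.
        resp = []
        for x in xs:
            p1 = (1 - w) * math.exp(-((x - mu1) ** 2) / (2 * s1 ** 2)) / s1
            p2 = w * math.exp(-((x - mu2) ** 2) / (2 * s2 ** 2)) / s2
            resp.append(p2 / (p1 + p2))
        # M-step: re-estimate proportion, means, and standard deviations.
        n2 = sum(resp)
        n1 = len(xs) - n2
        w = n2 / len(xs)
        mu1 = sum((1 - r) * x for r, x in zip(resp, xs)) / n1
        mu2 = sum(r * x for r, x in zip(resp, xs)) / n2
        s1 = math.sqrt(sum((1 - r) * (x - mu1) ** 2 for r, x in zip(resp, xs)) / n1) or 1e-6
        s2 = math.sqrt(sum(r * (x - mu2) ** 2 for r, x in zip(resp, xs)) / n2) or 1e-6
    return w, mu1, mu2

# Simulated progeny: 1:1 segregation of a hypothetical major gene shifting
# stem height by 5 units against a background standard deviation of 1.
rng = random.Random(1)
data = [rng.gauss(10, 1) for _ in range(300)] + [rng.gauss(15, 1) for _ in range(300)]
w, mu1, mu2 = em_two_normals(data)
```

With well-separated components, the EM iterations recover the 0.5 mixing proportion and the two genotype means, the same quantities the hierarchical model estimates under its segregation constraints.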
Allan, Andrew; Barbour, Emily; Salehin, Mashfiqus; Munsur Rahman, Md.; Hutton, Craig; Lazar, Attila
2016-04-01
A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.
Allan, A.; Barbour, E.; Salehin, M.; Hutton, C.; Lázár, A. N.; Nicholls, R. J.; Rahman, M. M.
2015-12-01
A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.
A quantitative model to assess Social Responsibility in Environmental Science and Technology.
Valcárcel, M; Lucena, R
2014-01-01
The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to environmental research centres and institutions. In addition, a simplified model that facilitates its implementation is presented.
A training set selection strategy for a universal near-infrared quantitative model.
Jia, Yan-Hua; Liu, Xu-Ping; Feng, Yan-Chun; Hu, Chang-Qin
2011-06-01
The purpose of this article is to propose an empirical solution to the problem of how many clusters of complex samples should be selected to construct the training set for a universal near-infrared quantitative model based on the Naes method. The sample spectra were hierarchically classified into clusters by Ward's algorithm and Euclidean distance. If the sample spectra were classified into two clusters, one-fiftieth of the largest heterogeneity value in the cluster with larger variation was set as the threshold to determine the total number of clusters. One sample was then randomly selected from each cluster to construct the training set, so the number of samples in the training set equaled the number of clusters. In this study, 98 batches of rifampicin capsules with API contents ranging from 50.1% to 99.4% were studied with this strategy. The root mean square errors of cross validation and prediction for the rifampicin capsule model were 2.54% and 2.31%, respectively. We then evaluated this model in terms of outlier diagnostics, accuracy, precision, and robustness. We also used the training set selection strategy to revalidate the models for cefradine capsules, roxithromycin tablets, and erythromycin ethylsuccinate tablets, and the results were satisfactory. In conclusion, all results showed that this training set selection strategy assists in the quick and accurate construction of quantitative models using near-infrared spectroscopy.
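The selection idea — cluster the spectra, then take one sample per cluster — can be sketched as follows. The greedy threshold clustering here is a simplified stand-in for Ward's algorithm and the 1/50-heterogeneity rule, and the two-dimensional "spectra" are synthetic.

```python
import math
import random

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster_and_select(samples, threshold, seed=0):
    """Greedy single-linkage clustering with a distance threshold, then one
    randomly chosen representative per cluster (a simplified stand-in for
    the Ward / heterogeneity-threshold procedure described above)."""
    clusters = []
    for s in samples:
        for c in clusters:
            if any(euclid(s, m) < threshold for m in c):
                c.append(s)
                break
        else:
            clusters.append([s])
    rng = random.Random(seed)
    return [rng.choice(c) for c in clusters]

# Three synthetic "spectral" groups around well-separated centers
# (hypothetical data standing in for batches with different API contents).
rng = random.Random(2)
centers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
samples = [tuple(rng.gauss(cx, 0.5) for cx in c)
           for c in centers for _ in range(20)]
training_set = cluster_and_select(samples, threshold=3.0)
```

The size of the returned training set equals the number of clusters found, which is exactly the property the article's threshold rule is designed to control.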
DFT Techniques in DSP Chip Core NDSP25
Institute of Scientific and Technical Information of China (English)
XUE Jing; BAI Yong-qiang; DENG Zheng-hong; ZHENG Wei
2004-01-01
Design for Testability (DFT) is critical in chip design. DFT techniques insert hardware logic into an original design in order to improve the testability of the chip, and thus reduce test cost significantly. In this paper, we introduce the most frequently used DFT techniques, then put emphasis on the DFT policy and the DFT realization of the NDSP25 chip core, and analyze the results.
Szaleniec, Maciej; Witko, Małgorzata; Tadeusiewicz, Ryszard; Goclon, Jakub
2006-03-01
Artificial neural networks (ANNs) are used for classification and prediction of the enzymatic activity of ethylbenzene dehydrogenase from the bacterium Azoarcus sp. EbN1. Ethylbenzene dehydrogenase (EBDH) catalyzes the stereospecific oxidation of ethylbenzene and its derivatives to alcohols, which find application as building blocks in the pharmaceutical industry. The ANN systems are trained on theoretical variables derived from Density Functional Theory (DFT) modeling, topological descriptors, and kinetic parameters measured with a newly developed spectrophotometric assay. The obtained models exhibit a high degree of accuracy (100% correct classifications; correlation between predicted and experimental reaction rates at the 0.97 level). The applicability of ANNs is demonstrated as a useful tool for predicting the biochemical activity of the enzyme toward new substrates based only on quantum chemical calculations and simple structural characteristics. Multiple Linear Regression and Molecular Field Analysis (MFA) are used in order to compare the robustness of the ANN with both classical and 3D quantitative structure-activity relationship (QSAR) approaches.
A semi-quantitative model for risk appreciation and risk weighing
DEFF Research Database (Denmark)
Bos, Peter M.J.; Boon, Polly E.; van der Voet, Hilko
2009-01-01
Risk managers need detailed information on (1) the type of effect, (2) the size (severity) of the expected effect(s) and (3) the fraction of the population at risk to decide on well-balanced risk reduction measures. A previously developed integrated probabilistic risk assessment (IPRA) model provides quantitative information on these three parameters. A semi-quantitative tool is presented that combines information on these parameters into easy-to-read charts that will facilitate risk evaluations of exposure situations and decisions on risk reduction measures. This tool is based on a concept of health impact categorization that has been successfully in force for several years within several emergency planning programs. Four health impact categories are distinguished: No-Health Impact, Low-Health Impact, Moderate-Health Impact and Severe-Health Impact. Two different charts are presented
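The categorization step can be sketched as a small lookup that combines effect severity with the fraction of the population at risk; the thresholds and the escalation rule below are illustrative assumptions, not the IPRA tool's actual cut-offs.

```python
# Four health impact categories, as named in the abstract; the mapping rule
# from (severity, fraction at risk) to a category is a hypothetical example.
SEVERITY_ORDER = ["none", "low", "moderate", "severe"]
CATEGORIES = ["No-Health Impact", "Low-Health Impact",
              "Moderate-Health Impact", "Severe-Health Impact"]

def health_impact_category(severity, fraction_at_risk):
    """Return one of the four categories from the severity of the expected
    effect and the fraction of the population at risk."""
    if severity == "none" or fraction_at_risk <= 0.0:
        return CATEGORIES[0]
    rank = SEVERITY_ORDER.index(severity)
    # Illustrative rule: a large exposed fraction escalates the category.
    if fraction_at_risk > 0.1 and rank < 3:
        rank += 1
    return CATEGORIES[rank]
```

A chart of the kind the abstract describes is then just this function evaluated over a grid of severities and population fractions.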
Modelling bacterial growth in quantitative microbiological risk assessment: is it possible?
Nauta, Maarten J
2002-03-01
Quantitative microbiological risk assessment (QMRA), predictive modelling and HACCP may be used as tools to increase food safety and can be integrated fruitfully for many purposes. However, when QMRA is applied for public health issues like the evaluation of the status of public health, existing predictive models may not be suited to model bacterial growth. In this context, precise quantification of risks is more important than in the context of food manufacturing alone. In this paper, the modular process risk model (MPRM) is briefly introduced as a QMRA modelling framework. This framework can be used to model the transmission of pathogens through any food pathway, by assigning one of six basic processes (modules) to each of the processing steps. Bacterial growth is one of these basic processes. For QMRA, models of bacterial growth need to be expressed in terms of probability, for example to predict the probability that a critical concentration is reached within a certain amount of time. In contrast, available predictive models are developed and validated to produce point estimates of population sizes and therefore do not fit with this requirement. Recent experience from a European risk assessment project is discussed to illustrate some of the problems that may arise when predictive growth models are used in QMRA. It is suggested that a new type of predictive models needs to be developed that incorporates modelling of variability and uncertainty in growth.
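Expressing growth in terms of probability, as argued above, can be sketched with a Monte Carlo model in which lag time and growth rate vary between servings. The distributions and the critical level below are invented for illustration; they are not a validated predictive model.

```python
import random

def p_exceeds_critical(n0_log, crit_log, time_h, n=20000, seed=0):
    """Probability that exponential growth pushes the concentration past a
    critical level (log10 CFU/g) within time_h hours, with variability in
    lag time and growth rate (hypothetical distributions)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        lag = rng.uniform(2.0, 6.0)            # lag phase, hours
        mu = max(rng.gauss(0.2, 0.05), 0.0)    # growth rate, log10 units/h
        growth_time = max(time_h - lag, 0.0)
        if n0_log + mu * growth_time >= crit_log:
            hits += 1
    return hits / n

# Same contamination level, two storage times: the output is a probability of
# exceeding the critical concentration, not a point estimate of population size.
p_short = p_exceeds_critical(n0_log=2.0, crit_log=5.0, time_h=10.0)
p_long = p_exceeds_critical(n0_log=2.0, crit_log=5.0, time_h=40.0)
```

This is the shape of output QMRA needs: the variability in lag and rate turns a deterministic growth curve into a probability that the critical concentration is reached within a given time.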
Krueger, Robert F; Markon, Kristian E; Patrick, Christopher J; Benning, Stephen D; Kramer, Mark D
2007-11-01
Antisocial behavior, substance use, and impulsive and aggressive personality traits often co-occur, forming a coherent spectrum of personality and psychopathology. In the current research, the authors developed a novel quantitative model of this spectrum. Over 3 waves of iterative data collection, 1,787 adult participants selected to represent a range across the externalizing spectrum provided extensive data about specific externalizing behaviors. Statistical methods such as item response theory and semiparametric factor analysis were used to model these data. The model and assessment instrument that emerged from the research shows how externalizing phenomena are organized hierarchically and cover a wide range of individual differences. The authors discuss the utility of this model for framing research on the correlates and the etiology of externalizing phenomena.
Rieger, TR; Musante, CJ
2016-01-01
Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed “virtual patients.” In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777
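The selection-instead-of-weighting idea above can be illustrated with plain rejection sampling: candidate virtual patients drawn from a broad "plausible patient" distribution are accepted with probability proportional to the ratio of the observed clinical density to the proposal density, so the accepted ensemble matches the clinical statistics without carrying weights. This is only the core idea, not the authors' algorithm; all distributions below are invented.

```python
# Rejection sampling of a 1D "virtual patient" parameter: broad proposal
# N(0, 3) for plausible patients, assumed clinical baseline N(1, 1).
import math
import random

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def select_virtual_population(n_candidates=20_000, seed=1):
    rng = random.Random(seed)
    m = 3.5  # envelope constant; the density ratio peaks at ~3.2 near x = 9/8
    accepted = []
    for _ in range(n_candidates):
        x = rng.gauss(0.0, 3.0)  # candidate virtual patient
        ratio = normal_pdf(x, 1.0, 1.0) / (m * normal_pdf(x, 0.0, 3.0))
        if rng.random() < ratio:
            accepted.append(x)  # unweighted member of the virtual population
    return accepted
```

Because accepted patients are unweighted, no single spurious candidate can dominate the population statistics, mirroring the risk-mitigation point made in the abstract.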
Precise Quantitative Analysis of Probabilistic Business Process Model and Notation Workflows
DEFF Research Database (Denmark)
Herbert, Luke Thomas; Sharp, Robin
2013-01-01
We present a framework for modeling and analysis of real-world business workflows. We present a formalized core subset of the business process modeling and notation (BPMN) and then proceed to extend this language with probabilistic nondeterministic branching and general-purpose reward annotations. We present an algorithm for the translation of such models into Markov decision processes (MDP) expressed in the syntax of the PRISM model checker. This enables precise quantitative analysis of business processes for the following properties: transient and steady-state probabilities, the timing, occurrence and ordering of events, reward-based properties, and best- and worst-case scenarios. We develop a simple example of a medical workflow and demonstrate the utility of this analysis in accurate provisioning of drug stocks. Finally, we suggest a path to building upon these techniques to cover…
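A minimal sketch of the kind of computation PRISM performs on the translated model: value iteration on a hand-written MDP standing in for a tiny medical workflow. The states, actions, probabilities and (negative) rewards below are invented for illustration, not taken from the paper.

```python
# Value iteration over an MDP given as: state -> {action: [(prob, next, reward)]}.
# Terminal states map to an empty action dict. Returns, per state, the maximum
# expected total reward over all policies (the "best-case scenario" value).
def value_iteration(mdp, gamma=1.0, eps=1e-9):
    v = {s: 0.0 for s in mdp}
    while True:
        delta = 0.0
        for s, actions in mdp.items():
            if not actions:
                continue  # terminal state keeps value 0
            best = max(sum(p * (r + gamma * v[ns]) for p, ns, r in outs)
                       for outs in actions.values())
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < eps:
            return v

# Toy workflow: triage, then either treat (cheap, may need a retry) or refer
# (costly, certain). Rewards are negative costs.
workflow = {
    "triage": {"assess": [(1.0, "decide", -1.0)]},
    "decide": {
        "treat": [(0.8, "done", -2.0), (0.2, "decide", -2.0)],
        "refer": [(1.0, "done", -4.0)],
    },
    "done": {},
}
values = value_iteration(workflow)
```

Minimizing expected cost instead (worst-case style queries) only requires replacing `max` with `min`; PRISM exposes both via its `Rmax`/`Rmin` operators.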
Quantitative performance metrics for stratospheric-resolving chemistry-climate models
Directory of Open Access Journals (Sweden)
D. W. Waugh
2008-06-01
Full Text Available A set of performance metrics is applied to stratospheric-resolving chemistry-climate models (CCMs) to quantify their ability to reproduce key processes relevant for stratospheric ozone. The same metrics are used to assign a quantitative measure of performance ("grade") to each model-observations comparison shown in Eyring et al. (2006). A wide range of grades is obtained, both for different diagnostics applied to a single model and for the same diagnostic applied to different models, highlighting the wide range in ability of the CCMs to simulate key processes in the stratosphere. No model scores high or low on all tests, but differences in the performance of models can be seen, especially for transport processes, where several models get low grades on multiple tests. The grades are used to assign relative weights to the CCM projections of 21st century total ozone. However, only small differences are found between weighted and unweighted multi-model mean total ozone projections. This study raises several issues with the grading and weighting of CCMs that need further examination, but it does provide a framework that will enable quantification of model improvements and assignment of relative weights to the model projections.
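The weighting step described above reduces to a grade-weighted average of the model projections. The sketch below uses made-up grades and projections; the actual weighting scheme in the study may differ.

```python
# Grade-weighted multi-model mean: weights are grades normalized to sum to 1.
def weighted_mean(projections, grades):
    total = sum(grades)
    return sum(g / total * p for g, p in zip(grades, projections))

proj   = [295.0, 310.0, 305.0, 280.0]   # DU, hypothetical ozone projections
grades = [0.9, 0.6, 0.8, 0.3]           # hypothetical mean grades in [0, 1]

unweighted = sum(proj) / len(proj)
weighted = weighted_mean(proj, grades)
```

With grades spread this widely the two means still differ by only a couple of Dobson units, illustrating how weighting can leave the multi-model mean nearly unchanged when projections cluster.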
Quantitative model of cell cycle arrest and cellular senescence in primary human fibroblasts.
Directory of Open Access Journals (Sweden)
Sascha Schäuble
Full Text Available Primary human fibroblasts in tissue culture undergo a limited number of cell divisions before entering a non-replicative "senescent" state. At early population doublings (PD), fibroblasts are proliferation-competent, displaying exponential growth. During further cell passaging, an increasing number of cells become cell cycle arrested and finally senescent. This transition from proliferating to senescent cells is driven by a number of endogenous and exogenous stress factors. Here, we have developed a new quantitative model for the stepwise transition from proliferating human fibroblasts (P) via reversibly cell cycle arrested (C) to irreversibly arrested senescent cells (S). In this model, the transition from P to C and to S is driven by a stress function γ and a cellular stress response function F, which describes the time-delayed cellular response to experimentally induced irradiation stress. Applying this model to senescence marker quantification at the single-cell level allowed us to discriminate between the cellular states P, C, and S and delivered the transition rates between the P, C and S states for different human fibroblast cell types. Model-derived quantification unexpectedly revealed significant differences in the stress response of different fibroblast cell lines. Evaluating marker specificity, we found that SA-β-Gal is a good quantitative marker for cellular senescence in WI-38 and BJ cells, but much less so in MRC-5 cells. Furthermore, we found that WI-38 cells are more sensitive to stress than BJ and MRC-5 cells. Thus, the explicit separation of stress induction from the cellular stress response, and the differentiation between the three cellular states P, C and S, allows for the first time a quantitative assessment of the response of primary human fibroblasts to endogenous and exogenous stress during cellular ageing.
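A minimal numerical version of the three-state scheme described above can be written as forward-Euler integration of P → C → S kinetics, with both transitions modulated by a delayed stress-response function F(t). The rate constants, irradiation time and response form below are illustrative assumptions, not the fitted values from the paper.

```python
# Euler integration of dP/dt = -k_pc*F*P, dC/dt = k_pc*F*P - k_cs*F*C,
# dS/dt = k_cs*F*C, with F(t) an assumed delayed response to irradiation
# applied at t_irr. Fractions are normalized so P + C + S = 1.
import math

def simulate(t_end=100.0, dt=0.01, k_pc=0.05, k_cs=0.02, t_irr=20.0, tau=10.0):
    p, c, s = 1.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        # delayed stress response: 0 before irradiation, saturating afterwards
        f = 1.0 - math.exp(-(t - t_irr) / tau) if t > t_irr else 0.0
        dp = -k_pc * f * p
        dc = k_pc * f * p - k_cs * f * c
        ds = k_cs * f * c
        p, c, s = p + dt * dp, c + dt * dc, s + dt * ds
        t += dt
    return p, c, s
```

Since dp + dc + ds = 0 at every step, the scheme conserves the total cell fraction by construction, which is a convenient sanity check when fitting transition rates to marker data.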
Korteland, Suze-Anne; Heimovaara, Timo
2015-03-01
Electrical resistivity tomography (ERT) is a geophysical technique that can be used to obtain three-dimensional images of the bulk electrical conductivity of the subsurface. Because the electrical conductivity is strongly related to properties of the subsurface and the flow of water it has become a valuable tool for visualization in many hydrogeological and environmental applications. In recent years, ERT is increasingly being used for quantitative characterization, which requires more detailed prior information than a conventional geophysical inversion for qualitative purposes. In addition, the careful interpretation of measurement and modelling errors is critical if ERT measurements are to be used in a quantitative way. This paper explores the quantitative determination of the electrical conductivity distribution of a cylindrical object placed in a water bath in a laboratory-scale tank. Because of the sharp conductivity contrast between the object and the water, a standard geophysical inversion using a smoothness constraint could not reproduce this target accurately. Better results were obtained by using the ERT measurements to constrain a model describing the geometry of the system. The posterior probability distributions of the parameters describing the geometry were estimated with the Markov chain Monte Carlo method DREAM(ZS). Using the ERT measurements this way, accurate estimates of the parameters could be obtained. The information quality of the measurements was assessed by a detailed analysis of the errors. Even for the uncomplicated laboratory setup used in this paper, errors in the modelling of the shape and position of the electrodes and the shape of the domain could be identified. The results indicate that the ERT measurements have a high information content which can be accessed by the inclusion of prior information and the consideration of measurement and modelling errors.
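The inversion strategy above — estimating a few geometric parameters against ERT data rather than a smoothness-constrained pixel image — can be illustrated with a plain Metropolis sampler. DREAM(ZS) is considerably more sophisticated; this sketch only shows the core idea, and the one-parameter forward model and noise level are invented.

```python
# Metropolis sampling of a single geometry parameter ("object radius") given
# synthetic measurements from a toy linear forward model.
import math
import random

def forward(radius):
    # toy forward model: predicted apparent resistivities for 3 configurations
    return [10.0 + 2.0 * radius, 8.0 + 1.5 * radius, 5.0 + 0.5 * radius]

def metropolis(data, sigma=0.1, n_steps=5000, step=0.05, seed=0):
    rng = random.Random(seed)
    def log_like(r):
        return -0.5 * sum(((d, m)[0] - m) ** 2 / sigma ** 2
                          for d, m in zip(data, forward(r)))
    r = 1.0                      # starting guess
    ll = log_like(r)
    chain = []
    for _ in range(n_steps):
        r_new = r + rng.gauss(0.0, step)
        ll_new = log_like(r_new)
        # Metropolis acceptance (flat prior assumed)
        if ll_new >= ll or rng.random() < math.exp(ll_new - ll):
            r, ll = r_new, ll_new
        chain.append(r)
    return chain

true_r = 1.5
chain = metropolis(forward(true_r))
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
```

The retained chain (after burn-in) approximates the posterior over the geometry parameter, from which credible intervals follow directly; modelling errors of the kind discussed in the abstract would show up as a bias the noise model cannot absorb.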
Directory of Open Access Journals (Sweden)
Zehui Wu
2017-01-01
Full Text Available The SDN-based controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on the security mechanism, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. Based on an analysis of the controller threat model, we formally model the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also that of different controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN controllers: POX, OpenDaylight, Floodlight, and Ryu. The tests, whose outcomes are consistent with traditional qualitative analysis, demonstrate that our approach yields specific security values for different controllers and produces more accurate results.
Flow assignment model for quantitative analysis of diverting bulk freight from road to railway.
Liu, Chang; Lin, Boliang; Wang, Jiaxi; Xiao, Jie; Liu, Siqi; Wu, Jianping; Li, Jian
2017-01-01
Since railway transport possesses the advantages of high volume and low carbon emissions, diverting some freight from road to railway will help reduce the negative environmental impacts associated with transport. This paper develops a flow assignment model for quantitative analysis of diverting truck freight to railway. First, a general network covering road transportation, railway transportation, handling and transferring is established according to all the steps in the whole transportation process. Then generalized cost functions are formulated to embody the factors that shippers consider when choosing a mode and path; these functions capture the congestion cost on roads and the capacity constraints of railways and freight stations. Based on the general network and the generalized cost functions, a user equilibrium flow assignment model is developed to simulate the flow distribution on the general network under the condition that all shippers choose their transportation mode and path independently. Since the model is nonlinear and challenging to solve, we adopt a method that uses tangent lines to construct an envelope curve, thereby linearizing it. Finally, a numerical example is presented to test the model and to demonstrate the quantitative analysis of bulk freight modal shift between road and railway.
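The tangent-line linearization mentioned above can be sketched directly: a convex congestion cost is replaced by the pointwise maximum of tangent lines taken at a few support flows, which turns the nonlinear term into linear constraints for the equilibrium program. The BPR-style cost function and support points below are illustrative assumptions.

```python
# Outer linearization of a convex link cost t(x) = t0*(1 + 0.15*(x/cap)^4)
# by the max of tangents at chosen support flows. Being convex, t(x) lies
# above every tangent, so the envelope under-approximates it exactly at the
# support points and from below elsewhere.
def cost(x, t0=1.0, cap=100.0):
    return t0 * (1.0 + 0.15 * (x / cap) ** 4)

def d_cost(x, t0=1.0, cap=100.0):
    return t0 * 0.6 * x ** 3 / cap ** 4

def tangent_envelope(points):
    tangents = [(d_cost(p), cost(p) - d_cost(p) * p) for p in points]
    return lambda x: max(a * x + b for a, b in tangents)

approx = tangent_envelope([0.0, 50.0, 100.0, 150.0])
```

Adding more support points tightens the envelope wherever the approximation error matters, which is how such schemes are typically refined iteratively.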
Chang, Su-Wei; Choi, Seung Hoan; Li, Ke; Fleur, Rose Saint; Huang, Chengrui; Shen, Tong; Ahn, Kwangmi; Gordon, Derek; Kim, Wonkuk; Wu, Rongling; Mendell, Nancy R; Finch, Stephen J
2009-12-15
We examined the properties of growth mixture modeling in finding longitudinal quantitative trait loci in a genome-wide association study. Two software packages are commonly used in these analyses: Mplus and the SAS TRAJ procedure. We analyzed the 200 replicates of the simulated data with these programs using three tests: the likelihood-ratio test statistic, a direct test of genetic model coefficients, and the chi-square test classifying subjects based on the trajectory model's posterior Bayesian probability. The Mplus program was not effective in this application due to its computational demands. The distributions of these tests applied to genes not related to the trait were sensitive to departures from Hardy-Weinberg equilibrium. The likelihood-ratio test statistic was not usable in this application because its distribution was far from the expected asymptotic distributions when applied to markers with no genetic relation to the quantitative trait. The other two tests were satisfactory. Power was still substantial when we used markers near the gene rather than the gene itself. That is, growth mixture modeling may be useful in genome-wide association studies. For markers near the actual gene, there was somewhat greater power for the direct test of the coefficients and lesser power for the posterior Bayesian probability chi-square test.
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
Research on quantitative models of suspended sediment concentration (SSC) using remote sensing technology is very important for understanding the scouring and siltation variation in harbors and water channels. Based on a laboratory study of the relationship between different suspended sediment concentrations and synchronously measured reflectance spectra, quantitative inversion models of SSC based on a single factor, band ratios and a sediment parameter were developed, which provides an effective method to retrieve the SSC from satellite images. Results show that b1 (430-500 nm) and b3 (670-735 nm) are the optimal wavelengths for the estimation of lower SSCs and b4 (780-835 nm) is the optimal wavelength for estimating higher SSCs. Furthermore, the band ratio B2/B3 better simulates the variation of lower SSCs, while B4/B1 estimates higher SSCs accurately. The inversion models developed from the sediment parameter for higher and lower SSCs also achieve higher accuracy than the single-factor and band-ratio models.
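Calibrating a band-ratio model of the kind described above amounts to regressing measured SSC against a reflectance ratio. The sketch below fits a simple linear form SSC = a·(B2/B3) + b on synthetic "measurements"; the coefficients, ratios and noise are invented, since real models are calibrated against the spectra measured in the study.

```python
# Ordinary least-squares fit of SSC against a band ratio, stdlib only.
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

ratios = [1.2, 1.5, 1.8, 2.1, 2.4]          # B2/B3 reflectance ratios (synthetic)
ssc    = [55.0, 71.0, 83.0, 101.0, 114.0]   # mg/L (synthetic)

a, b = fit_line(ratios, ssc)
predicted = [a * r + b for r in ratios]
```

In practice one such regression would be fitted per concentration regime (lower SSC on B2/B3, higher SSC on B4/B1), matching the regime split reported in the abstract.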
Directory of Open Access Journals (Sweden)
Zhenpo Wang
2013-01-01
Full Text Available In order to meet the matching and planning requirements for charging stations as electric vehicles (EVs) enter the market, a location model for charging stations is established, drawing on layout theories for gas stations and based on electricity consumption along the roads between cities. A quantitative model of charging stations is also presented, based on the conversion of fuel sales in a given area. Both models combine the principle of equivalent energy substitution in the process of replacing conventional vehicles with EVs. Defined data are adopted in the example analysis of the two numerical case models, and the influence of factors such as the proportion of vehicle types and EV energy consumption on charging station layout and quantity is analyzed. The results show that the quantitative model of charging stations is reasonable and feasible. The number of EVs and their energy consumption have a more significant impact on the number of charging stations than the vehicle type proportion, which provides a basis for decision making on the construction and layout of charging stations in practice.
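The fuel-sales conversion underlying the quantity model can be sketched as a short chain of unit conversions: displaced fuel → kilometres driven → electricity demand → stations required. Every number below is hypothetical; the paper's actual conversion factors and station capacities are not reproduced here.

```python
# Equivalent-energy-substitution estimate of the number of charging stations.
def station_count(fuel_litres_per_day, ev_share, km_per_litre=12.0,
                  kwh_per_km=0.15, station_kwh_per_day=4000.0):
    """Fuel displaced by EVs -> km driven -> kWh demand -> stations needed."""
    km = fuel_litres_per_day * ev_share * km_per_litre
    kwh = km * kwh_per_km
    # ceiling division: a fractional station still has to be built
    return -(-kwh // station_kwh_per_day)

n = station_count(fuel_litres_per_day=500_000, ev_share=0.1)
```

Because the EV share and the per-km energy consumption enter multiplicatively, they dominate the station count, consistent with the sensitivity result reported in the abstract.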
Regnier, P.; Dale, A. W.; Arndt, S.; LaRowe, D. E.; Mogollón, J.; Van Cappellen, P.
2011-05-01
Recent developments in the quantitative modeling of methane dynamics and anaerobic oxidation of methane (AOM) in marine sediments are critically reviewed. The first part of the review begins with a comparison of alternative kinetic models for AOM. The roles of bioenergetic limitations, intermediate compounds and biomass growth are highlighted. Next, the key transport mechanisms in multi-phase sedimentary environments affecting AOM and methane fluxes are briefly treated, while attention is also given to additional controls on methane and sulfate turnover, including organic matter mineralization, sulfur cycling and methane phase transitions. In the second part of the review, the structure, forcing functions and parameterization of published models of AOM in sediments are analyzed. The six-orders-of-magnitude range in rate constants reported for the widely used bimolecular rate law for AOM emphasizes the limited transferability of this simple kinetic model and, hence, the need for more comprehensive descriptions of the AOM reaction system. The derivation and implementation of more complete reaction models, however, are limited by the availability of observational data. In this context, we attempt to rank the relative benefits of potential experimental measurements that should help to better constrain AOM models. The last part of the review presents a compilation of reported depth-integrated AOM rates (ΣAOM). These rates reveal the extreme variability of ΣAOM in marine sediments. The model results are further used to derive quantitative relationships between ΣAOM and the magnitude of externally impressed fluid flow, as well as between ΣAOM and the depth of the sulfate-methane transition zone (SMTZ). This review contributes to an improved understanding of the global significance of the AOM process, and helps identify outstanding questions and future directions in the modeling of methane cycling and AOM in marine sediments.
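The widely used bimolecular rate law discussed above, r = k·[CH4]·[SO4], and the depth-integrated rate ΣAOM it implies can be sketched with a simple numerical integral over assumed porewater profiles. The profiles, rate constant and units below are invented for illustration; they are not the reviewed model parameterizations.

```python
# Trapezoidal depth integration of the bimolecular AOM rate over assumed
# piecewise-linear CH4 and SO4 profiles that overlap in a transition zone
# (a caricature of the SMTZ).
def sigma_aom(k=1.0, dz=0.001, z_max=2.0):
    def ch4(z):   # methane rising from below, appearing beneath z = 0.5
        return 2.0 * (z - 0.5) if z > 0.5 else 0.0
    def so4(z):   # sulfate diffusing down from seawater, exhausted at z = 1
        return max(0.0, 1.0 - z)
    n = int(z_max / dz)
    total = 0.0
    for i in range(n):
        z0, z1 = i * dz, (i + 1) * dz
        total += 0.5 * (k * ch4(z0) * so4(z0) + k * ch4(z1) * so4(z1)) * dz
    return total
```

Since ΣAOM here scales linearly with k, the six-orders-of-magnitude spread in reported rate constants translates directly into a six-orders spread in predicted depth-integrated rates unless the concentration profiles adjust, which is exactly why transferability of the simple kinetic model is limited.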
Jesser, Anton; Rohrmüller, Martin; Schmidt, Wolf Gero; Herres-Pawlis, Sonja
2014-01-05
We report a comprehensive computational benchmarking of the structural and optical properties of a bis(chelate) copper(I) guanidine-quinoline complex. Using various (TD-)DFT flavors a strong influence of the basis set is found. Moreover, the amount of exact exchange shifts metal-to-ligand bands by 1 eV through the absorption spectrum. The BP86/6-311G(d) and B3LYP/def2-TZVP functional/basis set combinations were found to yield results in best agreement with the experimental data. In order to probe the general applicability of TD-DFT to excitations of copper bis(chelate) charge-transfer (CT) systems, we studied a small model system that on the one hand is accessible to methods of many-body perturbation theory (MBPT) but still contains simple guanidine and imine groups. These calculations show that large quasiparticle energies of the order of several electronvolts are largely offset by exciton binding energies for optical excitations and that TD-DFT excitation energies deviate from MBPT results by at most 0.5 eV, further corroborating the reliability of our TD-DFT results. The latter result in a multitude of MLCT bands ranging from the visible region at 3.4 eV into the UV at 5.5 eV for the bis(chelate) complex. Molecular orbital analysis provided insight into the CT within these systems but gave mixed transitions. A meaningful transition assignment is possible, however, by using natural transition orbitals. Additionally, we performed a thorough conformational analysis as the correct description of the copper coordination is crucial for the prediction of optical spectra. We found that DFT identifies the correct conformational minimum and that the MLCTs are strongly dependent on the torsion of the chelate angles at the copper center. From the results, it is concluded that extensive benchmarking allows for the quantitative analyses of the CT behavior of copper bis(chelate) complexes within TD-DFT.
Quantitative modelling and analysis of a Chinese smart grid: a stochastic model checking case study
DEFF Research Database (Denmark)
Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming
2014-01-01
Cyber-physical systems integrate information and communication technology with the physical elements of a system, mainly for monitoring and controlling purposes. The conversion of the traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues that require novel methods and applications. One of the important issues in this context is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese smart grid implementation as a case study and address the verification problem for performance and energy…
A Quantitative Model of Keyhole Instability Induced Porosity in Laser Welding of Titanium Alloy
Pang, Shengyong; Chen, Weidong; Wang, Wen
2014-06-01
Quantitative prediction of the porosity defects in deep penetration laser welding has generally been considered as a very challenging task. In this study, a quantitative model of porosity defects induced by keyhole instability in partial penetration CO2 laser welding of a titanium alloy is proposed. The three-dimensional keyhole instability, weld pool dynamics, and pore formation are determined by direct numerical simulation, and the results are compared to prior experimental results. It is shown that the simulated keyhole depth fluctuations could represent the variation trends in the number and average size of pores for the studied process conditions. Moreover, it is found that it is possible to use the predicted keyhole depth fluctuations as a quantitative measure of the average size of porosity. The results also suggest that due to the shadowing effect of keyhole wall humps, the rapid cooling of the surface of the keyhole tip before keyhole collapse could lead to a substantial decrease in vapor pressure inside the keyhole tip, which is suggested to be the mechanism by which shielding gas enters into the porosity.
Lackey, Daniel P; Carruth, Eric D; Lasher, Richard A; Boenisch, Jan; Sachse, Frank B; Hitchcock, Robert W
2011-11-01
Gap junctions play a fundamental role in intercellular communication in cardiac tissue. Various types of heart disease including hypertrophy and ischemia are associated with alterations of the spatial arrangement of gap junctions. Previous studies applied two-dimensional optical and electron-microscopy to visualize gap junction arrangements. In normal cardiomyocytes, gap junctions were primarily found at cell ends, but can be found also in more central regions. In this study, we extended these approaches toward three-dimensional reconstruction of gap junction distributions based on high-resolution scanning confocal microscopy and image processing. We developed methods for quantitative characterization of gap junction distributions based on analysis of intensity profiles along the principal axes of myocytes. The analyses characterized gap junction polarization at cell ends and higher-order statistical image moments of intensity profiles. The methodology was tested in rat ventricular myocardium. Our analysis yielded novel quantitative data on gap junction distributions. In particular, the analysis demonstrated that the distributions exhibit significant variability with respect to polarization, skewness, and kurtosis. We suggest that this methodology provides a quantitative alternative to current approaches based on visual inspection, with applications in particular in characterization of engineered and diseased myocardium. Furthermore, we propose that these data provide improved input for computational modeling of cardiac conduction.
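The profile statistics described above — end polarization plus higher-order moments such as skewness and kurtosis — are straightforward to compute from an intensity profile along a myocyte's principal axis. The profile below is synthetic (intensity concentrated at the cell ends, as in normal myocardium); the moment definitions are the standard normalized central moments, which may differ in detail from the paper's exact estimators.

```python
# Moment statistics of a 1D gap-junction intensity profile.
def moments(profile):
    total = sum(profile)
    xs = range(len(profile))
    mean = sum(x * p for x, p in zip(xs, profile)) / total
    var = sum((x - mean) ** 2 * p for x, p in zip(xs, profile)) / total
    sd = var ** 0.5
    skew = sum((x - mean) ** 3 * p for x, p in zip(xs, profile)) / total / sd ** 3
    kurt = sum((x - mean) ** 4 * p for x, p in zip(xs, profile)) / total / sd ** 4
    return mean, sd, skew, kurt

def end_polarization(profile, end_fraction=0.2):
    """Fraction of total intensity within the two cell-end segments."""
    k = max(1, int(len(profile) * end_fraction))
    return (sum(profile[:k]) + sum(profile[-k:])) / sum(profile)

# synthetic end-polarized profile along the long axis of a myocyte
profile = [9, 8, 1, 1, 0.5, 0.5, 1, 1, 8, 10]
```

For such a bimodal, ends-heavy profile the kurtosis falls well below the Gaussian value of 3, which is one way the "variability with respect to polarization, skewness, and kurtosis" becomes quantifiable.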
Indian Academy of Sciences (India)
Sandeep Kaur-Ghumaan; A Sreenithya; Raghavan B Sunoj
2015-03-01
The reaction of [Fe2(CO)6(μ-toluene-3,4-benzenedithiolate)] 1 with the bidentate diphosphine 1,1′-bis(diphenylphosphino)ferrocene (dppf) has been studied. The new complexes obtained have been characterized by various spectroscopic techniques as bioinspired models of the iron hydrogenase active site. The crystal structure of [Fe2(CO)5(κ1-dppfO)(μ-toluene-3,4-benzenedithiolate)] 4 is reported.
The structure of N2 adsorbed on the rumpled NaCl(100) surface--a combined LEED and DFT-D study.
Vogt, Jochen
2012-11-07
The structure of N2 physisorbed on the NaCl(100) single crystal surface is investigated by means of quantitative low-energy electron diffraction (LEED) in combination with dispersion-corrected density functional theory (DFT-D). In the temperature range between 20 K and 45 K, a p(1 × 1) structure is observed in the LEED experiment. According to the structure analysis based on the measured diffraction spot intensity profiles, the N2 molecules are adsorbed over the topmost Na+ ions. The experimental distance of the lower nitrogen to the Na+ ion underneath is (2.55 ± 0.07) Å; the corresponding DFT-D value is 2.65 Å. The axes of the molecules are tilted (26 ± 3)° with respect to the surface normal, while in the zero Kelvin optimum structure from DFT-D, the molecules have a perpendicular orientation. The experimental monolayer heat of adsorption, deduced from a Fowler-Guggenheim kinetic model of adsorption, is -(13.6 ± 1.6) kJ mol⁻¹, including a lateral molecule-molecule interaction energy of -(2.0 ± 0.4) kJ mol⁻¹. The zero Kelvin adsorption energy from DFT-D, including zero-point energy correction, is -15.6 kJ mol⁻¹; the molecule-molecule interaction is -2.4 kJ mol⁻¹. While the rumpling of the NaCl(100) surface is unchanged upon adsorption of nitrogen, the best-fit root mean square thermal displacements of the ions in the topmost substrate layer are significantly reduced.
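The Fowler-Guggenheim model invoked above describes a lattice gas with lateral interactions: coverage θ at a given pressure satisfies K·p = θ/(1−θ)·exp(−c·θ), where c encodes the attractive molecule-molecule interaction. The sketch below solves this implicit isotherm by bisection; the interaction constant c is an illustrative value, not the fitted one.

```python
# Solve the Fowler-Guggenheim isotherm K*p = th/(1-th)*exp(-c*th) for
# coverage th in (0, 1). For c < 4 the left-hand side is monotonic in th
# (no 2D condensation), so plain bisection applies.
import math

def coverage(kp, c=1.5, tol=1e-12):
    f = lambda th: th / (1.0 - th) * math.exp(-c * th) - kp
    lo, hi = 1e-15, 1.0 - 1e-15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Fitting such isotherms (or the corresponding kinetic model) at several temperatures is what yields the isosteric heat of adsorption and the lateral interaction energy quoted in the abstract.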
Quantitative model for the generic 3D shape of ICMEs at 1 AU
Démoulin, P.; Janvier, M.; Masías-Meza, J. J.; Dasso, S.
2016-10-01
Context. Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs), while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, that is, along a 1D cut. The data therefore only give a partial view of the 3D structures of ICMEs. Aims: By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. Methods: In a first approach we theoretically obtained the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compared their compatibility with observed distributions. In a second approach we used the shock normal and the flux rope axis orientations together with the impact parameter to provide statistical information across the spacecraft trajectory. Results: The study of different 3D shock models shows that the observations are compatible with a shock that is symmetric around the Sun-apex line as well as with an asymmetry up to an aspect ratio of around 3. Moreover, flat or dipped shock surfaces near their apex can only be rare cases. Next, the sheath thickness and the ICME velocity have no global trend along the ICME front. Finally, regrouping all these new results and those of our previous articles, we provide a quantitative ICME generic 3D shape, including the global shape of the shock, the sheath, and the flux rope. Conclusions: The obtained quantitative generic ICME shape will have implications for several aims. For example, it constrains the output of typical ICME numerical simulations. It is also a base for studying the transport of high-energy solar and cosmic particles during an ICME propagation as well as for modeling and forecasting space weather conditions near Earth.
Andrés, Juan; Moliner, Vicente; Safont, Vicent S.; Domingo, Luís R.; Picher, María T.
1996-11-01
As a model of the chemical reactions that take place in the active site of glutathione reductase, the nature of the molecular mechanism of the hydride transfer step has been characterized by means of accurate quantum chemical characterizations of transition structures. The calculations have been carried out with analytical gradients at the AM1 and PM3 semiempirical levels, ab initio at the HF level with 3-21G, 4-31G, 6-31G, and 6-31G basis sets, and with BP86 and BLYP as density functional methods. The results of this study suggest that the endo relative orientation imposed on the substrate by the active site is optimal for polarizing the C4-Ht bond and situating the system in the neighborhood of the quadratic region of the transition structure associated with the hydride transfer step on the potential energy surface. The endo arrangement of the transition structure results in optimal frontier HOMO orbital interaction between the NADH and FAD partners. The geometries of the transition structures and the corresponding transition vectors, which contain the fundamental information relating to reactive fluctuation patterns, are model independent and only weakly dependent on the level of theory used to determine them. A comparison between simple and complex molecular models shows that there is a minimal set of coordinates describing the essentials of the hydride transfer step. The analysis of the transition vector components suggests that the primary and secondary kinetic isotope effects can be strongly coupled, and this prompted the calculation of deuterium and tritium primary, secondary, and combined primary and secondary kinetic isotope effects. The results obtained agree well with experimental data and demonstrate this coupling.
BH-DFTB/DFT calculations for iron clusters
Directory of Open Access Journals (Sweden)
Abdurrahman Aktürk
2016-05-01
Full Text Available We present a study on the structural, electronic, and magnetic properties of Fen (n = 2−20) clusters by performing density functional tight binding (DFTB) calculations within a basin hopping (BH) global optimization search followed by density functional theory (DFT) investigations. The structures, total energies and total spin magnetic moments are calculated and compared with previously reported theoretical and experimental results. Two basis sets, SDD with ECP and 6-31G**, are employed in the DFT calculations together with the BLYP GGA exchange-correlation functional. The results indicate that the offered BH-DFTB/DFT strategy collects all the global minima, for which different minima have been reported in previous studies by different groups. Small Fe clusters have three kinds of packing: icosahedral (Fe9−13), centered hexagonal antiprism (Fe14−17, Fe20), and truncated decahedral (Fe17(2), Fe18−19). It is obtained in qualitative agreement with the time-of-flight mass spectra that the magic numbers for small Fe clusters are 7, 13, 15, and 19, and with the collision-induced dissociation experiments that the sizes 6, 7, 13, 15, and 19 are thermodynamically more stable than their neighboring sizes. The spin magnetic moment per atom of Fen (n = 2−20) clusters is between 2.4 and 3.6 μB for most of the sizes. The antiferromagnetic coupling between the central and surface atoms of the Fe13 icosahedron, which has already been reported in experimental and theoretical studies, is verified by our calculations as well. The quantitative disagreements between the calculated and measured magnetic moments of the individual sizes are still to be resolved.
DEFF Research Database (Denmark)
Rørbech, Jakob Thaysen; Vadenbo, Carl; Hellweg, Stefanie
2014-01-01
Resources have received significant attention in recent years, resulting in the development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment … results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247 … groups, according to method focus and modeling approach, to aid method selection within LCA.
Multi-factor models and signal processing techniques application to quantitative finance
Darolles, Serges; Jay, Emmanuelle
2013-01-01
With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices. This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an interesting …
An equivalent magnetic dipoles model for quantitative damage recognition of broken wire
Institute of Scientific and Technical Information of China (English)
TAN Ji-wen; ZHAN Wei-xia; LI Chun-jing; WEN Yan; SHU Jie
2005-01-01
By idealizing a saturated magnetized wire rope as a chain of magnetic dipoles of equal field strength, an equivalent magnetic dipoles model is developed and the measuring principle for recognizing broken-wire damage is presented. The relevant calculation formulas are deduced, and a composite solution method for the resulting nonlinear optimization problem is given. An example illustrates the use of the equivalent magnetic dipoles method for quantitative damage recognition and demonstrates that its results are consistent with the real situation, so the method is valid and practical.
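The dipole-superposition idea in this abstract can be sketched in a few lines of Python. This is an illustrative toy, not the paper's formulation: the dipole moment, lift-off and unit system are placeholders, and a broken wire is modelled crudely by removing one dipole from an otherwise uniform chain, which produces a local field anomaly at the break.

```python
import math

def dipole_field_x(m, dx, dz):
    """Axial (x) component of a point dipole's field, dipole aligned with the
    rope axis, evaluated at horizontal offset dx and lift-off dz. The mu0/4pi
    prefactor is absorbed into m (arbitrary units)."""
    r = math.hypot(dx, dz)
    return m * (3.0 * dx * dx / r ** 5 - 1.0 / r ** 3)

def leakage_profile(dipole_positions, m, dz, xs):
    """Superpose the fields of equal-strength dipoles placed along the rope;
    returns the axial field sampled at sensor positions xs."""
    return [sum(dipole_field_x(m, x - xp, dz) for xp in dipole_positions)
            for x in xs]
```

Comparing an intact chain with one missing a dipole localizes the anomaly at the break, which is the qualitative behavior the recognition principle relies on.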
Model development for quantitative evaluation of proliferation resistance of nuclear fuel cycles
Energy Technology Data Exchange (ETDEWEB)
Ko, Won Il; Kim, Ho Dong; Yang, Myung Seung
2000-07-01
This study addresses the quantitative evaluation of proliferation resistance, an important factor in alternative nuclear fuel cycle systems. A model was developed to quantitatively evaluate the proliferation resistance of nuclear fuel cycles. The proposed models were then applied to the Korean environment as a sample study, to provide better references for the determination of a future nuclear fuel cycle system in Korea. To quantify the proliferation resistance of a nuclear fuel cycle, a proliferation resistance index was defined in imitation of an electrical circuit with an electromotive force and various electrical resistance components. The analysis of the proliferation resistance of nuclear fuel cycles has shown that the resistance index as defined herein can be used as an international measure of the relative risk of nuclear proliferation if the motivation index is appropriately defined. It has also shown that the proposed model can include political as well as technical issues relevant to proliferation resistance, and can consider all facilities and activities in a specific nuclear fuel cycle (from mining to disposal). In addition, sensitivity analyses on the sample study indicate that the direct disposal option in a country with high nuclear propensity may give rise to a higher risk of nuclear proliferation than the reprocessing option in a country with low nuclear propensity.
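The circuit analogy described here reduces to Ohm's law: a motivation index acts like an electromotive force driving proliferation against the series sum of resistance components, and the resulting "current" is the relative risk. A minimal sketch, with entirely illustrative function names and numbers (the paper's actual index definitions are not reproduced here):

```python
def proliferation_risk(motivation, resistances):
    """Circuit analogy: risk = motivation (EMF) divided by the series sum of
    resistance components (technical, institutional, political, ...).
    Higher total resistance or lower motivation means lower risk."""
    total_r = sum(resistances)
    if total_r <= 0:
        raise ValueError("total resistance must be positive")
    return motivation / total_r
```

Under this analogy, a direct-disposal cycle in a high-motivation country can score a higher risk than a reprocessing cycle in a low-motivation country, matching the sensitivity result quoted in the abstract.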
Directory of Open Access Journals (Sweden)
Qian Wang
2016-01-01
Spectroscopy is an efficient and widely used quantitative analysis method. In this paper, a spectral quantitative analysis model combining wavelength selection and topology structure optimization is proposed. In the proposed method, a backpropagation neural network is adopted for building the component prediction model, and the simultaneous optimization of the wavelength selection and the topology structure of the neural network is realized by nonlinear adaptive evolutionary programming (NAEP). The hybrid chromosome in the binary scheme of NAEP has three parts: the first part represents the topology structure of the neural network, the second part represents the selection of wavelengths in the spectral data, and the third part represents the mutation parameters of NAEP. Two real flue gas datasets are used in the experiments. To demonstrate the effectiveness of the method, the partial least squares with full spectrum, the partial least squares combined with a genetic algorithm, the uninformative variable elimination method, the backpropagation neural network with full spectrum, the backpropagation neural network combined with a genetic algorithm, and the proposed method are each used to build the component prediction model. Experimental results verify that the proposed method predicts more accurately and robustly as a practical spectral analysis tool.
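The three-part hybrid chromosome can be made concrete with a small decoder. The field widths below (4 bits for topology, one bit per wavelength, 4 bits for the mutation parameter) are illustrative assumptions, not the paper's actual encoding:

```python
def decode_chromosome(bits, n_wavelengths, topo_bits=4, mut_bits=4):
    """Split a hybrid binary chromosome into its three parts:
    (1) network topology (hidden-layer size), (2) wavelength-selection mask,
    (3) mutation parameter of the evolutionary programming step."""
    assert len(bits) == topo_bits + n_wavelengths + mut_bits
    topo = bits[:topo_bits]
    mask = bits[topo_bits:topo_bits + n_wavelengths]
    mut = bits[topo_bits + n_wavelengths:]
    hidden_nodes = 1 + int("".join(map(str, topo)), 2)    # at least one node
    selected = [i for i, b in enumerate(mask) if b == 1]  # chosen wavelengths
    mutation_rate = int("".join(map(str, mut)), 2) / (2 ** mut_bits - 1)
    return hidden_nodes, selected, mutation_rate
```

A fitness function would then train a backpropagation network on only the selected wavelengths with the decoded hidden-layer size, closing the loop that NAEP optimizes.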
Sun, Lidan; Sang, Mengmeng; Zheng, Chenfei; Wang, Dongyang; Shi, Hexin; Liu, Kaiyue; Guo, Yanfang; Cheng, Tangren; Zhang, Qixiang; Wu, Rongling
2017-05-30
Heterochrony is known as a developmental change in the timing or rate of ontogenetic events across phylogenetic lineages. It is a key concept integrating development with ecology and evolution to explore the mechanisms by which developmental processes give rise to phenotypic novelties. A number of molecular experiments using organisms contrasting in developmental timing have identified specific genes involved in heterochronic variation. Beyond these classic approaches, which can only identify single genes or pathways, quantitative models derived from current next-generation sequencing data serve as a more powerful tool to precisely capture heterochronic variation and systematically map a complete set of genes that contribute to heterochronic processes. In this opinion note, we discuss a computational framework of genetic mapping that can characterize heterochronic quantitative trait loci that determine the pattern and process of development. We propose a unifying model that charts the genetic architecture of heterochrony that perceives and responds to environmental perturbations and evolves over geologic time. The new model may potentially enhance our understanding of the adaptive value of heterochrony and its evolutionary origins, providing a useful context for designing new organisms that can best use future resources. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models.
Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao
2015-08-01
Meta-analysis of genetic data must account for differences among studies, including study designs, markers genotyped, and covariates. The effects of genetic variants may differ from population to population, i.e., there is heterogeneity. Thus, meta-analysis combining data from multiple studies is difficult, and novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT, and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies.
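The LRT mechanics referred to here are standard: twice the log-likelihood gap between nested models is asymptotically chi-square distributed under the null. A minimal sketch (not the functional linear model itself; the closed-form survival function below is the textbook result for even degrees of freedom):

```python
import math

def lrt_statistic(loglik_null, loglik_alt):
    """Likelihood-ratio test statistic 2*(l1 - l0); under H0 it is
    asymptotically chi-square with df = difference in free parameters."""
    return 2.0 * (loglik_alt - loglik_null)

def chi2_sf_even_df(x, df):
    """Chi-square survival function (P-value) in closed form for even df:
    exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)^k / k!"""
    assert df > 0 and df % 2 == 0
    half = x / 2.0
    term, total = 1.0, 1.0
    for k in range(1, df // 2):
        term *= half / k
        total += term
    return math.exp(-half) * total
```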
Meyer, Karin
2007-11-01
WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual, and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html.
Modeling of microfluidic microbial fuel cells using quantitative bacterial transport parameters
Mardanpour, Mohammad Mahdi; Yaghmaei, Soheila; Kalantar, Mohammad
2017-02-01
The objective of the present study is dynamic modeling of bioelectrochemical processes and improvement of the performance of previous models using quantitative data on bacterial transport parameters. The main deficiency of previous MFC models concerning the spatial distribution of biocatalysts is the assumed initial distribution of attached/suspended bacteria on the electrode or in the anolyte bulk, which is the foundation for biofilm formation. To address this shortcoming, the chemotactic motility governing the distribution of suspended microorganisms in the anolyte and/or their attachment to the anode surface to extend the biofilm is quantified numerically. The spatial and temporal distributions of the bacteria, as well as the dynamic behavior of the anolyte and biofilm, are simulated. The performance of the microfluidic MFC as a chemotaxis assay is assessed by analyzing the bacterial activity, substrate variation, bioelectricity production rate and the influence of external resistance on the biofilm and the anolyte's features.
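Chemotactic transport of suspended bacteria is commonly written as a Keller-Segel-type balance: diffusion down the cell-density gradient plus drift up the substrate gradient. The explicit 1-D step below is a generic sketch under that assumption, with nondimensional placeholder coefficients, not the paper's fitted model:

```python
def chemotaxis_step(b, s, dx, dt, diff=0.1, chi=0.2):
    """One explicit Euler step of db/dt = diff * d2b/dx2 - chi * d/dx(b * ds/dx)
    on a 1-D grid; b is bacterial density, s the substrate field.
    Boundary cells are held fixed for simplicity."""
    n = len(b)
    new = b[:]
    for i in range(1, n - 1):
        diffusion = diff * (b[i + 1] - 2.0 * b[i] + b[i - 1]) / dx ** 2
        drift = chi * (b[i + 1] * (s[i + 1] - s[i])
                       - b[i] * (s[i] - s[i - 1])) / dx ** 2
        new[i] = b[i] + dt * (diffusion - drift)
    return new
```

With a uniform substrate the drift vanishes and the step reduces to pure diffusion, smoothing an initial density peak, which is a quick sanity check on the discretization.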
Varbanov, Hristo P; Jakupec, Michael A; Roller, Alexander; Jensen, Frank; Galanski, Markus; Keppler, Bernhard K
2013-01-10
Octahedral platinum(IV) complexes are promising candidates in the fight against cancer. In order to rationalize the further development of this class of compounds, detailed studies on their mechanisms of action, toxicity, and resistance must be provided and structure-activity relationships must be drawn. Herein, we report on theoretical and QSAR investigations of a series of 53 novel bis-, tris-, and tetrakis(carboxylato)platinum(IV) complexes, synthesized and tested for cytotoxicity in our laboratories. The hybrid DFT functional wb97x was used for optimization of the structure geometry and calculation of the descriptors. Reliable and robust QSAR models with good explanatory and predictive properties were obtained for both the cisplatin sensitive cell line CH1 and the intrinsically cisplatin resistant cell line SW480, with a set of four descriptors.
Directory of Open Access Journals (Sweden)
Tamara Bruna-Larenas
2012-01-01
We report the results of a search for model-based relationships between mu, delta, and kappa opioid receptor binding affinity and molecular structure for a group of molecules having in common a morphine structural core. The wave functions and local reactivity indices were obtained at the ZINDO/1 and B3LYP/6-31 levels of theory for comparison. New developments in the expression for the drug-receptor interaction energy allowed several local atomic reactivity indices to be included, such as the local electronic chemical potential, local hardness, and local electrophilicity. These indices, together with a new proposal for the ordering of the independent variables, were incorporated in the statistical study. We found and discuss several statistically significant relationships for mu, delta, and kappa opioid receptor binding affinity at both levels of theory. Some of the new local reactivity indices incorporated in the theory appear in several equations for the first time in the history of model-based equations. Interaction pharmacophores were generated for mu, delta, and kappa receptors. We discuss possible differences regulating binding and selectivity in opioid receptor subtypes. In contrast to purely statistical studies, this study provides microscopic insight into the mechanisms involved in the binding process.
Shulkind, Gal; Nazarathy, Moshe
2012-11-05
DFT-spread (DFT-S) coherent optical OFDM was numerically and experimentally shown to provide improved nonlinear tolerance over an optically amplified dispersion uncompensated fiber link, relative to both conventional coherent OFDM and single-carrier transmission. Here we provide an analytic model rigorously accounting for this numerical result and precisely predicting the optimal bandwidth per DFT-S sub-band (or equivalently the optimal number of sub-bands per optical channel) required in order to maximize the link non-linear tolerance (NLT). The NLT advantage of DFT-S OFDM is traced to the particular statistical dependency introduced among the OFDM sub-carriers by means of the DFT spreading operation. We further extend DFT-S to a unitary-spread generalized modulation format which includes as special cases the DFT-S scheme as well as a new format which we refer to as wavelet-spread (WAV-S) OFDM, replacing the spreading DFTs by Hadamard matrices which have elements ±1 and hence are multiplier-free. The extra complexity incurred in the spreading operation is almost negligible; however, the performance improvement of WAV-S relative to plain OFDM is more modest than that achieved by DFT-S, which remains the preferred format for nonlinear tolerance improvement, outperforming both plain OFDM and single-carrier schemes.
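A well-known consequence of DFT spreading, which underlies the improved nonlinear tolerance discussed above, is that precoding the symbols with a DFT before the OFDM IFFT restores single-carrier-like statistics and lowers the peak-to-average power ratio (PAPR). A toy illustration with pure-Python transforms (O(n²), fine for a demo; the all-equal symbol vector is the worst-case subcarrier alignment, not a realistic data pattern):

```python
import cmath

def dft(x):
    """Unnormalized forward DFT."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(x):
    """Inverse DFT with 1/n normalization."""
    n = len(x)
    return [sum(x[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def papr(sig):
    """Peak-to-average power ratio of a sampled waveform."""
    p = [abs(s) ** 2 for s in sig]
    return max(p) / (sum(p) / len(p))
```

Plain OFDM maps symbols straight through `idft`; full-size DFT spreading (`idft(dft(sym))`) cancels to the identity, so the transmitted waveform keeps the symbols' constant modulus and its PAPR stays at 1 even when all subcarriers align.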
Institute of Scientific and Technical Information of China (English)
[No author listed]
2007-01-01
It is difficult to identify the source(s) of mixed oils from multiple source rocks, and in particular the relative contribution of each source rock. Artificial mixing experiments using typical crude oils and ratios of different biomarkers show that the relative contribution changes are non-linear when two oils with different concentrations of biomarkers mix with each other. This may result in an incorrect conclusion if ratios of biomarkers and a simple binary linear equation are used to calculate the contribution proportion of each end-member to the mixed oil. The changes of biomarker ratios with the mixing proportion of end-member oils in the ternary mixing model are more complex than in the binary mixing model. When four or more oils mix, the contribution proportion of each end-member oil to the mixed oil cannot be calculated using biomarker ratios and a simple formula. Artificial mixing experiments on typical oils reveal that the absolute concentrations of biomarkers in the mixed oil change linearly with the mixing proportion of each end-member. Mathematical inferences verify such linear changes. Some of the mathematical calculation methods using the absolute concentrations or ratios of biomarkers to quantitatively determine the proportion of each end-member in the mixed oils are deduced from the results of artificial experiments and by theoretical inference. The ratio of two biomarker compounds changes as a hyperbola with the mixing proportion in the binary mixing model, as a hyperboloid in the ternary mixing model, and as a hypersurface when mixing more than three end-members. The mixing proportion of each end-member can be quantitatively determined with these mathematical models, using the absolute concentrations and the ratios of biomarkers. The mathematical calculation model is more economical, convenient, accurate and reliable than conventional artificial mixing methods.
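The core distinction in this abstract (concentrations mix linearly, ratios do not) is easy to demonstrate. In a binary mix with fraction f of end-member 1, each absolute concentration is a linear blend, so f can be recovered by inverting the linear law, while the ratio of two biomarkers is a ratio of two linear functions of f, i.e. a hyperbola. All concentration values below are illustrative:

```python
def mixed_concentration(f, c1, c2):
    """Binary mixing: biomarker concentration in the mixture is the
    fraction-weighted (linear) combination of the end-member values."""
    return f * c1 + (1.0 - f) * c2

def fraction_from_concentration(c_mix, c1, c2):
    """Invert the linear mixing law to recover end-member 1's fraction."""
    return (c_mix - c2) / (c1 - c2)

def mixed_ratio(f, a1, a2, b1, b2):
    """Ratio of biomarkers A/B in the mixture: a ratio of two linear
    functions of f, hence a hyperbola, as the abstract notes."""
    return mixed_concentration(f, a1, a2) / mixed_concentration(f, b1, b2)
```

A 50/50 mix of oils with end-member ratios 10 and 0.5 yields a mixed ratio of 2.4, far from the naive midpoint 5.25, which is exactly why ratio-based binary linear equations mislead.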
Mattsson, T. R.; Shulenburger, L.; Root, S.; Cochrane, K. R.
2011-03-01
Quantitative knowledge of the thermo-physical properties of CO2 at high pressure is required to confidently model the structure of gas giants like Neptune and Uranus and the deep carbon cycle of the Earth. DFT-based molecular dynamics has been established as a method capable of yielding high-fidelity results for many materials, including shocked gases, at high pressure and temperature. We predict the principal Hugoniot for liquid CO2 up to 500 GPa. Our simulations also show that the plateau in shock pressure identified by Nellis and co-workers is the result of dissociation. At low temperatures we validate the DFT results by comparing with diffusion Monte Carlo calculations. This allows for a more accurate determination of the initial conditions for the shock experiments. We also describe the design of upcoming flyer-plate experiments on the Z-machine aimed at providing high-precision shock compression data for CO2 between 150 and 600 GPa. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corp. for the US Dept. of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
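The Hugoniot mentioned here is the locus of shocked states allowed by the Rankine-Hugoniot jump conditions, which convert measured shock and particle velocities into pressure and density. A minimal sketch of those textbook relations (the input numbers in the test are round illustrative values, not CO2 data):

```python
def hugoniot_state(rho0, us, up, p0=0.0):
    """Rankine-Hugoniot jump conditions: given initial density rho0 (kg/m^3),
    shock velocity us and particle velocity up (m/s), return the shocked
    pressure (Pa, momentum conservation) and density (kg/m^3, mass
    conservation) behind the front."""
    p = p0 + rho0 * us * up       # P - P0 = rho0 * Us * up
    rho = rho0 * us / (us - up)   # rho = rho0 * Us / (Us - up)
    return p, rho
```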
Beridze, George; Kowalski, Piotr M
2014-12-18
The ability to perform feasible and reliable computation of thermochemical properties of chemically complex actinide-bearing materials would be of great importance for nuclear engineering. Unfortunately, density functional theory (DFT), which in many instances is the only affordable ab initio method, often fails for actinides. Among various shortcomings, it leads to wrong estimates of enthalpies of reactions between actinide-bearing compounds, putting the applicability of the DFT approach to the modeling of thermochemical properties of actinide-bearing materials into question. Here we test the performance of the DFT+U method, a computationally affordable extension of DFT that explicitly accounts for the correlations between f-electrons, for prediction of the thermochemical properties of simple uranium-bearing molecular compounds and solids. We demonstrate that the DFT+U approach significantly improves the description of reaction enthalpies for the uranium-bearing gas-phase molecular compounds and solids, and that the deviations from the experimental values are comparable to those obtained with much more computationally demanding methods. Good results are obtained with Hubbard U parameter values derived using the linear response method of Cococcioni and de Gironcoli. We found that the value of the Coulomb on-site repulsion, represented by the Hubbard U parameter, strongly depends on the oxidation state of the uranium atom. Last, but not least, we demonstrate that thermochemistry data can be successfully used to estimate the value of the Hubbard U parameter needed for DFT+U calculations.
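The linear-response recipe of Cococcioni and de Gironcoli cited here determines U from the bare and self-consistent responses of the localized orbital occupation to a small potential shift. In the single-site (scalar) version, U = chi0⁻¹ - chi⁻¹. A sketch of that arithmetic (the response values in the test are illustrative, not computed from any DFT run):

```python
def hubbard_u_linear_response(chi0, chi):
    """Single-site linear-response Hubbard U (Cococcioni & de Gironcoli):
    U = 1/chi0 - 1/chi, where chi0 and chi are the bare and fully
    self-consistent occupation responses dn/dalpha. In practice both
    responses are negative and |chi0| > |chi| yields a positive U."""
    return 1.0 / chi0 - 1.0 / chi
```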
Wilson, J. P.; Fischer, W. W.
2010-12-01
Fossil plants provide useful proxies of Earth's climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves, even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 / (MPa·s), whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptations for transport safety, placing the plant's vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon's tracheids suggests that environmental conditions of reduced relative
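The two quantities highlighted in this abstract, hydraulic conductivity and the thickness-to-span ratio, both follow from simple geometry. Assuming the conductivity is the standard Hagen-Poiseuille lumen-area-specific form used in plant hydraulics (an assumption; the paper's exact formulation is not reproduced here), an ~11 µm lumen radius already reaches the 0.015 m²/(MPa·s) threshold quoted:

```python
def lumen_specific_conductivity(radius_m, mu_mpa_s=1.0e-9):
    """Hagen-Poiseuille lumen-area-specific conductivity k = r^2 / (8*mu).
    With water's viscosity (~1e-9 MPa*s) the result comes out in
    m^2 / (MPa*s), the units quoted in the abstract."""
    return radius_m ** 2 / (8.0 * mu_mpa_s)

def thickness_to_span(wall_thickness_m, lumen_diameter_m):
    """(t/b) ratio: thicker walls relative to lumen span resist implosion
    under the tensions generated during drought."""
    return wall_thickness_m / lumen_diameter_m
```

Because conductivity scales with r², doubling the lumen radius quadruples the specific conductivity, which is why the wider Paleozoic conifer tracheids eventually overtake the threshold.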
Göltl, Florian; Sautet, Philippe
2014-04-01
The inclusion of non-local interactions is one of the major challenges in density functional theory. Among the most promising methods are the vdW-DF2 and BEEF-vdW functionals, which combine a semi-local approximation for exchange interactions with a non-local correlation expression. In this work we apply those functionals to model the adsorption of short alkanes in the zeolite SSZ-13. Even though results for energetics are improved with respect to other vdW-DF based methods, we still find a comparatively large error compared to high-level calculations. These errors result from approximations in the determination of the dielectric function and of the van der Waals kernel. The insights presented in this work will help to understand the performance not only of vdW-DF2 and BEEF-vdW, but of all vdW-DF based functionals in various chemically or physically important systems.
The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...
Bani-Yaseen, Abdulilah Dawoud; Al-Balawi, Mona
2014-08-07
The solvatochromic, spectral, and geometrical properties of nifenazone (NIF), a pyrazole-nicotinamide drug, were experimentally and computationally investigated in several neat solvents and in hydro-organic binary systems such as water-acetonitrile and water-dioxane. The bathochromic spectral shift observed in NIF absorption spectra upon reducing the polarity of the solvent was correlated with the orientation polarizability (Δf). Unlike in aprotic solvents, a satisfactory correlation between λ(max) and Δf (linear regression coefficient R = 0.93) was found for polar protic solvents. In addition, the medium-dependent spectral properties were correlated with the Kamlet-Taft solvatochromic parameters (α, β, and π*) by applying multiple linear regression analysis (MLRA). The results obtained from this analysis were then employed to establish MLRA relationships for NIF in order to estimate the spectral shift in different solvents, which in turn exhibited excellent correlation (R > 0.99) with the experimental values of ν(max). Density functional theory (DFT) and time-dependent DFT (TD-DFT) calculations coupled with the integral equation formalism-polarizable continuum model (IEF-PCM) were performed to investigate the solvent-dependent spectral and geometrical properties of NIF. The calculations showed good and poor agreement with the experimental results using the CAM-B3LYP and B3LYP functionals, respectively. Experimental and theoretical results confirmed that the chemical properties of NIF are strongly dependent on the polarity of the chosen medium and its hydrogen bonding capability. This in turn supports the hypothesis of the delocalization of the electron density within the pyrazole ring of NIF.
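The orientation polarizability Δf used in such solvatochromic correlations is the Lippert-Mataga solvent parameter, built from the dielectric constant ε and refractive index n. A one-line implementation (solvent constants in the test are standard handbook values used for illustration):

```python
def orientation_polarizability(eps, n):
    """Lippert-Mataga orientation polarizability:
    delta_f = (eps - 1)/(2*eps + 1) - (n^2 - 1)/(2*n^2 + 1).
    Large for polar solvents like water, near zero for nonpolar ones."""
    return (eps - 1.0) / (2.0 * eps + 1.0) - (n ** 2 - 1.0) / (2.0 * n ** 2 + 1.0)
```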
Modeling Morphogenesis in silico and in vitro: Towards Quantitative, Predictive, Cell-based Modeling
R.M.H. Merks (Roeland); P. Koolwijk
2009-01-01
Cell-based, mathematical models help make sense of morphogenesis (i.e. cells organizing into shape and pattern) by capturing cell behavior in simple, purely descriptive models. Cell-based models then predict the tissue-level patterns the cells produce collectively. The first
Quantitative computational models of molecular self-assembly in systems biology
Thomas, Marcus; Schwartz, Russell
2017-06-01
Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.
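The simplest self-assembly reaction discussed in such models is reversible dimerization, monomer + monomer <-> dimer, under mass-action kinetics. A forward-Euler sketch (rate constants and step sizes are illustrative, and the scheme conserves total monomer mass by construction):

```python
def dimerization(a0, kf, kr, dt, steps):
    """Integrate da/dt = -2*(kf*a^2 - kr*d), dd/dt = kf*a^2 - kr*d
    with forward Euler from a pure-monomer start.
    Returns final (monomer, dimer) concentrations."""
    a, d = a0, 0.0
    for _ in range(steps):
        rate = kf * a * a - kr * d   # net dimer-forming flux
        a -= 2.0 * rate * dt         # two monomers consumed per dimer
        d += rate * dt
    return a, d
```

With a0 = 1 and kf = kr = 1, the conserved total a + 2d = 1 combined with the equilibrium condition kf·a² = kr·d gives a = 0.5, d = 0.25, which the integrator approaches. Even this toy problem hints at the scaling challenge the review describes: realistic assemblies couple thousands of such species and rates.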
Quantitative Validation of a Human Body Finite Element Model Using Rigid Body Impacts.
Vavalle, Nicholas A; Davis, Matthew L; Stitzel, Joel D; Gayzik, F Scott
2015-09-01
Validation is a critical step in finite element model (FEM) development. This study focuses on the validation of the Global Human Body Models Consortium full body average male occupant FEM in five localized loading regimes: a chest impact, a shoulder impact, a thoracoabdominal impact, an abdominal impact, and a pelvic impact. Force and deflection outputs from the model were compared to experimental traces and corridors scaled to the 50th percentile male. Predicted fractures and injury severity measures were compared to evaluate the model's injury prediction capabilities. The methods of ISO/TS 18571 were used to quantitatively assess the fit of model outputs to experimental force and deflection traces. The model produced peak chest, shoulder, thoracoabdominal, abdominal, and pelvis forces of 4.8, 3.3, 4.5, 5.1, and 13.0 kN, compared to 4.3, 3.2, 4.0, 4.0, and 10.3 kN in the experiments, respectively. The model predicted rib and pelvic fractures related to Abbreviated Injury Scale scores within the ranges found experimentally in all cases except the abdominal impact. ISO/TS 18571 scores for the impacts studied had a mean score of 0.73 with a range of 0.57-0.83. Well-validated FEMs are important tools used by engineers in advancing occupant safety.
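Scaling cadaver responses to the 50th percentile male, as mentioned above, is commonly done with equal-stress/equal-velocity mass scaling; whether this study used exactly that scheme is an assumption, and the 77 kg reference mass is the conventional value, not taken from the paper:

```python
def scale_to_50th(force_kn, deflection_mm, subject_mass_kg, ref_mass_kg=77.0):
    """Equal-stress/equal-velocity mass scaling: the length scale factor is
    lambda = (ref_mass / subject_mass)^(1/3); force scales as lambda^2,
    deflection as lambda."""
    lam = (ref_mass_kg / subject_mass_kg) ** (1.0 / 3.0)
    return force_kn * lam ** 2, deflection_mm * lam
```

A subject lighter than the reference yields scaled-up force and deflection, so corridors from small cadavers widen upward when mapped onto the 50th percentile target.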
Institute of Scientific and Technical Information of China (English)
QIN Hong; CHEN JingWen; WANG Ying; WANG Bin; LI XueHua; LI Fei; WANG YaNan
2009-01-01
Bioconcentration factors (BCFs) are of great importance for ecological risk assessment of organic chemicals. In this study, a quantitative structure-activity relationship (QSAR) model for fish BCFs of 8 groups of compounds was developed employing partial least squares (PLS) regression, based on linear solvation energy relationship (LSER) theory and theoretical molecular structural descriptors. The guidelines for development and validation of QSAR models proposed by the Organization for Economic Co-operation and Development (OECD) were followed. The model results show that the main factors governing logBCF are the Connolly molecular area (CMA), average molecular polarizability (α) and molecular weight (Mw). Thus molecular size plays a critical role in affecting the bioconcentration of organic pollutants in fish. For the established model, the squared multiple correlation coefficient R²Y = 0.868, the root mean square error (RMSE) = 0.553 log units, and the leave-many-out cross-validated Q²CUM = 0.860, indicating good goodness-of-fit and robustness. The model predictivity was evaluated by external validation, with the external explained variance Q²EXT = 0.755 and RMSE = 0.647 log units. Moreover, the applicability domain of the developed model was assessed and visualized by the Williams plot. The developed QSAR model can be used to predict fish logBCF for organic chemicals within the applicability domain.
Turco, Simona; Tardy, Isabelle; Frinking, Peter; Wijkstra, Hessel; Mischi, Massimo
2017-03-21
Ultrasound molecular imaging (USMI) is an emerging technique to monitor diseases at the molecular level by the use of novel targeted ultrasound contrast agents (tUCA). These consist of microbubbles functionalized with targeting ligands with high-affinity for molecular markers of specific disease processes, such as cancer-related angiogenesis. Among the molecular markers of angiogenesis, the vascular endothelial growth factor receptor 2 (VEGFR2) is recognized to play a major role. In response, the clinical-grade tUCA BR55 was recently developed, consisting of VEGFR2-targeting microbubbles which can flow through the entire circulation and accumulate where VEGFR2 is over-expressed, thus causing selective enhancement in areas of active angiogenesis. Discrimination between bound and free microbubbles is crucial to assess cancer angiogenesis. Currently, this is done non-quantitatively by looking at the late enhancement, about 10 min after injection, or by calculation of the differential targeted enhancement, requiring the application of a high-pressure ultrasound (US) burst to destroy all the microbubbles in the acoustic field and isolate the signal coming only from bound microbubbles. In this work, we propose a novel method based on mathematical modeling of the binding kinetics during the tUCA first pass, thus reducing the acquisition time and with no need for a destructive US burst. Fitting time-intensity curves measured with USMI by the proposed model enables the assessment of cancer angiogenesis at both the vascular and molecular levels. This is achieved by estimation of quantitative parameters related to the microvascular architecture and microbubble binding. The proposed method was tested in 11 prostate-tumor bearing rats by performing USMI after injection of BR55, and showed good agreement with current USMI methods. The novel information provided by the proposed method, possibly combined with the current non-quantitative methods, may bring deeper insight into
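The first-pass binding-kinetics idea can be sketched with a two-population signal model: freely circulating microbubbles follow a bolus-shaped curve, while targeted bubbles accumulate in proportion to the cumulative exposure to the free pool (an irreversible-binding simplification). The gamma-variate shape and all parameter values below are generic illustrations, not the BR55 model fitted in the paper:

```python
import math

def free_bolus(t, alpha=2.0, beta=3.0, a=1.0):
    """Gamma-variate first-pass curve for freely circulating microbubbles."""
    if t <= 0.0:
        return 0.0
    return a * t ** alpha * math.exp(-t / beta)

def bound_signal(t, kb=0.05, dt=0.01):
    """Bound-bubble contribution: binding-rate constant kb times the
    cumulative integral of the free pool (rectangle rule)."""
    n = int(t / dt)
    return kb * dt * sum(free_bolus(i * dt) for i in range(n + 1))

def usmi_intensity(t):
    """Total time-intensity curve = circulating + bound fractions; the bound
    part persists as a late-time plateau after the bolus washes out."""
    return free_bolus(t) + bound_signal(t)
```

Fitting a curve of this form to the measured first pass separates perfusion-related parameters (bolus shape) from binding-related ones (kb), which is the quantitative discrimination the abstract describes, without a destructive burst.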
Turco, Simona; Tardy, Isabelle; Frinking, Peter; Wijkstra, Hessel; Mischi, Massimo
2017-03-01
Ultrasound molecular imaging (USMI) is an emerging technique to monitor diseases at the molecular level by the use of novel targeted ultrasound contrast agents (tUCA). These consist of microbubbles functionalized with targeting ligands with high-affinity for molecular markers of specific disease processes, such as cancer-related angiogenesis. Among the molecular markers of angiogenesis, the vascular endothelial growth factor receptor 2 (VEGFR2) is recognized to play a major role. In response, the clinical-grade tUCA BR55 was recently developed, consisting of VEGFR2-targeting microbubbles which can flow through the entire circulation and accumulate where VEGFR2 is over-expressed, thus causing selective enhancement in areas of active angiogenesis. Discrimination between bound and free microbubbles is crucial to assess cancer angiogenesis. Currently, this is done non-quantitatively by looking at the late enhancement, about 10 min after injection, or by calculation of the differential targeted enhancement, requiring the application of a high-pressure ultrasound (US) burst to destroy all the microbubbles in the acoustic field and isolate the signal coming only from bound microbubbles. In this work, we propose a novel method based on mathematical modeling of the binding kinetics during the tUCA first pass, thus reducing the acquisition time and with no need for a destructive US burst. Fitting time-intensity curves measured with USMI by the proposed model enables the assessment of cancer angiogenesis at both the vascular and molecular levels. This is achieved by estimation of quantitative parameters related to the microvascular architecture and microbubble binding. The proposed method was tested in 11 prostate-tumor bearing rats by performing USMI after injection of BR55, and showed good agreement with current USMI methods. The novel information provided by the proposed method, possibly combined with the current non-quantitative methods, may bring deeper insight into
Quantitative Modeling of Acid Wormholing in Carbonates: What Are the Gaps to Bridge?
Qiu, Xiangdong
2013-01-01
Carbonate matrix acidization extends a well's effective drainage radius by dissolving rock and forming conductive channels (wormholes) from the wellbore. Wormholing is a dynamic process that involves a balance between the acid injection rate and the reaction rate. Generally, the injection rate is well defined where injection profiles can be controlled, whereas the reaction rate can be difficult to obtain due to its complex dependency on interstitial velocity, fluid composition, rock surface properties, etc. Conventional wormhole propagation models largely ignore the impact of reaction products. When implemented in a job design, this can introduce significant errors in treatment fluid schedule, rate, and volume. A more accurate method to simulate carbonate matrix acid treatments would accommodate the effect of reaction products on reaction kinetics, and it is the purpose of this work to properly account for these effects. This is an important step in achieving quantitative predictability of wormhole penetration during an acidizing treatment. This paper describes the laboratory procedures taken to obtain the reaction-product-impacted kinetics at downhole conditions using a rotating disk apparatus, and how this new set of kinetics data was implemented in a 3D wormholing model to predict wormhole morphology and penetration velocity. The model explains some of the differences in wormhole morphology observed in limestone core flow experiments where injection pressure impacts the mass transfer of hydrogen ions to the rock surface. The model uses a CT-scan-rendered porosity field to capture the finer details of the rock fabric and then simulates the fluid flow through the rock coupled with reactions. Such a validated model can serve as a base to scale up to the near-wellbore reservoir and 3D radial flow geometry, allowing a more quantitative acid treatment design.
DEFF Research Database (Denmark)
van de Streek, Jacco; Neumann, Marcus A
2014-01-01
In 2010 we energy-minimized 225 high-quality single-crystal (SX) structures with dispersion-corrected density functional theory (DFT-D) to establish a quantitative benchmark. For the current paper, 215 organic crystal structures determined from X-ray powder diffraction (XRPD) data and published … minimizations, three structures are re-refined to give more precise atomic coordinates. For six structures our calculations provide the missing positions for the H atoms; for five structures they provide corrected positions for some H atoms. Seven crystal structures showed a minor error for a non-H atom … is the only correction where the experimental data are modified to fit the model. We conclude that molecular crystal structures determined from powder diffraction data that are published in IUCr journals are of high quality, with less than 4% containing an error in a non-H atom.
Antibacterial activities, DFT and QSAR studies of quinazolinone compounds
Directory of Open Access Journals (Sweden)
A. G. Al-Sehemi
2016-08-01
The quinazolinone compounds (1 and 2) in this work were examined for their in vitro antibacterial activities against gram-positive (Staphylococcus aureus) and gram-negative bacteria (Klebsiella pneumoniae, Proteus bacilli and Shigella flexneri). Compared to the reference antibiotic chloramphenicol, these compounds showed high antibacterial activities against the studied strains, with inhibition zones observed. The ground-state geometries were optimized by using density functional theory (DFT) at the B3LYP/6-31G* level of theory. The absorption spectra were calculated by using time-dependent density functional theory (TDDFT) with and without solvent. The effect of different functionals (B3LYP, MPW1PW91, and PBE1PBE) on the absorption wavelengths was studied. The ionization potential (IP), electron affinity (EA), energy gap (Egap), electronegativity (χ), hardness (η), electrophilicity (ω), softness (S) and electrophilicity index (ωi) were computed and discussed. The nonlinear optical (NLO) properties vary with the theory (DFT to HF) or functional (B3LYP to CAM-B3LYP). The physicochemical parameters were studied by quantitative structure–activity relationship (QSAR) analysis. The computed properties of the investigated compounds were compared with chloramphenicol as well as available experimental data.
Geerts, Hugo; Roberts, Patrick; Spiros, Athan; Potkin, Steven
2015-04-01
The concept of targeted therapies remains a holy grail for the pharmaceutical drug industry for identifying responder populations or new drug targets. Here we provide quantitative systems pharmacology as an alternative to the more traditional approach of retrospective responder pharmacogenomics analysis and applied this to the case of iloperidone in schizophrenia. This approach implements the actual neurophysiological effect of genotypes in a computer-based biophysically realistic model of human neuronal circuits, is parameterized with human imaging and pathology, and is calibrated by clinical data. We keep the drug pharmacology constant, but allowed the biological model coupling values to fluctuate in a restricted range around their calibrated values, thereby simulating random genetic mutations and representing variability in patient response. Using hypothesis-free Design of Experiments methods the dopamine D4 R-AMPA (receptor-alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) receptor coupling in cortical neurons was found to drive the beneficial effect of iloperidone, likely corresponding to the rs2513265 upstream of the GRIA4 gene identified in a traditional pharmacogenomics analysis. The serotonin 5-HT3 receptor-mediated effect on interneuron gamma-aminobutyric acid conductance was identified as the process that moderately drove the differentiation of iloperidone versus ziprasidone. This paper suggests that reverse-engineered quantitative systems pharmacology is a powerful alternative tool to characterize the underlying neurobiology of a responder population and possibly identifying new targets. © The Author(s) 2015.
Directory of Open Access Journals (Sweden)
Franceschini Barbara
2005-02-01
Background: Modeling the complex development and growth of tumor angiogenesis using mathematics and biological data is a burgeoning area of cancer research. Architectural complexity is the main feature of every anatomical system, including organs, tissues, cells and sub-cellular entities. The vascular system is a complex network whose geometrical characteristics cannot be properly defined using the principles of Euclidean geometry, which is only capable of interpreting regular and smooth objects that are almost impossible to find in Nature. However, fractal geometry is a more powerful means of quantifying the spatial complexity of real objects. Methods: This paper introduces the surface fractal dimension (Ds) as a numerical index of the two-dimensional (2-D) geometrical complexity of tumor vascular networks, and their behavior during computer-simulated changes in vessel density and distribution. Results: We show that Ds significantly depends on the number of vessels and their pattern of distribution. This demonstrates that the quantitative evaluation of the 2-D geometrical complexity of tumor vascular systems can be useful not only to measure its complex architecture, but also to model its development and growth. Conclusions: Studying the fractal properties of neovascularity induces reflections upon the real significance of the complex form of branched anatomical structures, in an attempt to define more appropriate methods of describing them quantitatively. This knowledge can be used to predict the aggressiveness of malignant tumors and design compounds that can halt the process of angiogenesis and influence tumor growth.
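The surface fractal dimension described above is typically estimated by box counting. Below is a minimal Python sketch of that idea (the function name and box sizes are illustrative, not taken from the paper), applied to a 2-D binary vessel mask:

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the surface fractal dimension Ds of a 2-D binary mask:
    count occupied boxes N(s) at each box size s and fit
    log N(s) = -Ds * log(s) + c."""
    counts = []
    for s in box_sizes:
        # Trim the mask so it tiles exactly into s x s boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(int(boxes.any(axis=(1, 3)).sum()))
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# Sanity check: a filled square is a regular 2-D object, so Ds should be ~2.
mask = np.zeros((128, 128), dtype=bool)
mask[32:96, 32:96] = True
print(round(box_counting_dimension(mask), 2))  # → 2.0
```

An irregular, sparse vessel network would yield a Ds between 1 and 2, which is what makes the index useful as a complexity measure.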
Mustonen, Ville; Kinney, Justin; Callan, Curtis G; Lässig, Michael
2008-08-26
We present a genomewide cross-species analysis of regulation for broad-acting transcription factors in yeast. Our model for binding site evolution is founded on biophysics: the binding energy between transcription factor and site is a quantitative phenotype of regulatory function, and selection is given by a fitness landscape that depends on this phenotype. The model quantifies conservation, as well as loss and gain, of functional binding sites in a coherent way. Its predictions are supported by direct cross-species comparison between four yeast species. We find ubiquitous compensatory mutations within functional sites, such that the energy phenotype and the function of a site evolve in a significantly more constrained way than does its sequence. We also find evidence for substantial evolution of regulatory function involving point mutations as well as sequence insertions and deletions within binding sites. Genes lose their regulatory link to a given transcription factor at a rate similar to the neutral point mutation rate, from which we infer a moderate average fitness advantage of functional over nonfunctional sites. In a wider context, this study provides an example of inference of selection acting on a quantitative molecular trait.
Quantitative assessment of changes in landslide risk using a regional scale run-out model
Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone
2015-04-01
The risk of landslide hazard continuously changes in time and space and is rarely a static or constant phenomenon in an affected area. However, one of the main challenges of quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires the modeling of the landslide intensity (e.g. flow depth, velocities or impact pressures) affecting the elements at risk. Such a quantitative approach is often lacking in medium to regional scale studies in the scientific literature or is left out altogether. In this research we modelled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the North Eastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional scale model. In order to determine debris flow intensities, we used a linear relationship that was found between back-calibrated, physically based Flo-2D simulations (local scale models of five debris flows from 2003) and the probability values of the Flow-R software. This gave us the possibility to assign flow depth to a total of 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature, and one curve derived specifically for our case study area, were used to determine the damage for the different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) with the building area and number of floors.
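The per-building damage computation described above (market value times a vulnerability curve evaluated at the modelled flow depth) can be sketched as follows; the curve points and building value are hypothetical illustrations, not the case-study data:

```python
def building_risk(value_eur, flow_depth_m, vulnerability_curve):
    """Expected loss for one building: market value times the degree of
    loss given by a vulnerability curve evaluated at the debris-flow
    intensity (flow depth) reaching the building."""
    # Piecewise-linear interpolation over (depth_m, damage_fraction) points.
    depths, damages = zip(*vulnerability_curve)
    if flow_depth_m <= depths[0]:
        frac = damages[0]
    elif flow_depth_m >= depths[-1]:
        frac = damages[-1]
    else:
        for (d0, v0), (d1, v1) in zip(vulnerability_curve, vulnerability_curve[1:]):
            if d0 <= flow_depth_m <= d1:
                frac = v0 + (v1 - v0) * (flow_depth_m - d0) / (d1 - d0)
                break
    return value_eur * frac

# Hypothetical vulnerability curve for one building type.
curve = [(0.0, 0.0), (0.5, 0.1), (1.0, 0.4), (2.0, 0.9), (3.0, 1.0)]
print(building_risk(250_000, 1.5, curve))  # interpolated damage fraction 0.65
```

Summing this quantity over all exposed buildings, weighted by the hazard probability, gives the kind of quantitative risk estimate the study aims at.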
Energy Technology Data Exchange (ETDEWEB)
Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.
1998-12-01
The study was comprised of four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.
Modelling and Quantitative Analysis of LTRACK–A Novel Mobility Management Algorithm
Directory of Open Access Journals (Sweden)
Benedek Kovács
2006-01-01
This paper discusses the improvements and parameter optimization issues of LTRACK, a recently proposed mobility management algorithm. Mathematical modelling of the algorithm and of the behavior of the Mobile Node (MN) is used to optimize the parameters of LTRACK. A numerical method is given to determine the optimal values of the parameters. Markov chains are used to model both the base algorithm and the so-called loop removal effect. An extended qualitative and quantitative analysis is carried out to compare LTRACK to existing handover mechanisms such as MIP, Hierarchical Mobile IP (HMIP), Dynamic Hierarchical Mobility Management Strategy (DHMIP), Telecommunication Enhanced Mobile IP (TeleMIP), Cellular IP (CIP) and HAWAII. LTRACK is sensitive to network topology and MN behavior, so MN movement modelling is also introduced and discussed with different topologies. The techniques presented here can be used to model not only the LTRACK algorithm but other algorithms too. Many discussions and calculations support our mathematical model and show that it is adequate in many cases. The model is valid on various network levels, scales vertically in the ISO-OSI layers and also scales well with the number of network elements.
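As an illustration of the Markov-chain machinery such an analysis relies on, the sketch below computes the stationary distribution of a small hypothetical mobility chain; the states and transition probabilities are invented for illustration and are not taken from LTRACK:

```python
import numpy as np

# Hypothetical 3-state chain for a Mobile Node (e.g. idle / moving / looping);
# rows sum to 1, entries are illustrative only.
P = np.array([[0.8, 0.15, 0.05],
              [0.3, 0.6,  0.1 ],
              [0.5, 0.2,  0.3 ]])

# Stationary distribution pi solves pi P = pi with sum(pi) = 1, i.e. it is
# the left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
print(np.round(pi, 3))
```

Quantities such as expected signalling cost per handover follow by weighting per-state costs with `pi`.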
Abram, Sean R; Hodnett, Benjamin L; Summers, Richard L; Coleman, Thomas G; Hester, Robert L
2007-06-01
We have developed Quantitative Circulatory Physiology (QCP), a mathematical model of integrative human physiology containing over 4,000 variables of biological interactions. This model provides a teaching environment that mimics clinical problems encountered in the practice of medicine. The model structure is based on documented physiological responses within peer-reviewed literature and serves as a dynamic compendium of physiological knowledge. The model is solved using a desktop, Windows-based program, allowing students to calculate time-dependent solutions and interactively alter over 750 parameters that modify physiological function. The model can be used to understand proposed mechanisms of physiological function and the interactions among physiological variables that may not be otherwise intuitively evident. In addition to open-ended or unstructured simulations, we have developed 30 physiological simulations, including heart failure, anemia, diabetes, and hemorrhage. Additional simulations include 29 patients in which students are challenged to diagnose the pathophysiology based on their understanding of integrative physiology. In summary, QCP allows students to examine, integrate, and understand a host of physiological factors without causing harm to patients. This model is available as a free download for Windows computers at http://physiology.umc.edu/themodelingworkshop.
An integrated qualitative and quantitative modeling framework for computer‐assisted HAZOP studies
DEFF Research Database (Denmark)
Wu, Jing; Zhang, Laibin; Hu, Jinqiu
2014-01-01
The multilevel flow modeling (MFM) methodology is used to represent the plant goals and functions. First, means‐end analysis is used to identify and formulate the intention of the process design in terms of components, functions … safety critical operations, its causes and consequences. The outcome is a qualitative hazard analysis of selected process deviations from normal operations and their consequences as input to a traditional HAZOP table. The list of unacceptable high risk deviations identified by the qualitative HAZOP analysis is used as input for rigorous analysis and evaluation by the quantitative analysis part of the framework. To this end, dynamic first‐principles modeling is used to simulate the system behavior and thereby complement the results of the qualitative analysis part. The practical framework for computer‐assisted HAZOP studies is presented and validated on a case study concerning a three‐phase separation process.
Stockley, E W; Cole, H M; Brown, A D; Wheal, H V
1993-04-01
A system for accurately reconstructing neurones from optical sections taken at high magnification is described. Cells are digitised on a 68000-based microcomputer to form a database consisting of a series of linked nodes each consisting of x, y, z coordinates and an estimate of dendritic diameter. This database is used to generate three-dimensional (3-D) displays of the neurone and allows quantitative analysis of the cell volume, surface area and dendritic length. Images of the cell can be manipulated locally or transferred to an IBM 3090 mainframe where a wireframe model can be displayed on an IBM 5080 graphics terminal and rotated interactively in real time, allowing visualisation of the cell from all angles. Space-filling models can also be produced. Reconstructions can also provide morphological data for passive electrical simulations of hippocampal pyramidal cells.
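The morphometric quantities mentioned above (dendritic length, surface area and volume) follow directly from such a node database if each segment between linked nodes is treated as a truncated cone; the sketch below makes that assumption explicit (data layout and function name are illustrative, not the authors' software):

```python
import math

def segment_metrics(nodes, edges):
    """Total dendritic length, lateral surface area, and volume of a traced
    neurone stored as nodes (x, y, z, diameter) plus links between nodes,
    treating each segment as a truncated cone (frustum)."""
    length = area = volume = 0.0
    for i, j in edges:
        (x0, y0, z0, d0), (x1, y1, z1, d1) = nodes[i], nodes[j]
        seg_len = math.dist((x0, y0, z0), (x1, y1, z1))
        r0, r1 = d0 / 2, d1 / 2
        slant = math.sqrt(seg_len ** 2 + (r1 - r0) ** 2)
        length += seg_len
        area += math.pi * (r0 + r1) * slant                      # frustum lateral area
        volume += math.pi * seg_len * (r0 * r0 + r0 * r1 + r1 * r1) / 3  # frustum volume
    return length, area, volume

# One cylindrical segment: length 10, constant diameter 2 (radius 1).
nodes = [(0, 0, 0, 2.0), (10, 0, 0, 2.0)]
L, A, V = segment_metrics(nodes, [(0, 1)])
print(round(L, 2), round(A, 2), round(V, 2))
```

The same per-segment geometry is what passive electrical simulators need to compute membrane area and axial resistance per compartment.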
Formal modeling and quantitative evaluation for information system survivability based on PEPA
Institute of Scientific and Technical Information of China (English)
WANG Jian; WANG Hui-qiang; ZHAO Guo-sheng
2008-01-01
Survivability should be considered beyond security for information systems. To assess system survivability accurately, and to improve it, a formal modeling and analysis method based on stochastic process algebra is proposed in this article. By abstracting the interactive behaviors between intruders and the information system, a state-transition graph oriented to survivability is constructed. On that basis, parameters are defined and system behaviors are characterized precisely with performance evaluation process algebra (PEPA), simultaneously considering the influence of different attack modes. Ultimately the formal model for survivability is established and quantitative analysis results are obtained with the PEPA Workbench tool. Simulation experiments show the effectiveness and feasibility of the developed method, and it can help to direct the design of survivable systems.
Quantitative model for the generic 3D shape of ICMEs at 1 AU
Démoulin, P; Masías-Meza, J J; Dasso, S
2016-01-01
Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs) while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, so along a 1D cut. As such, the data only give a partial view of their 3D structures. By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. In a first approach we theoretically obtain the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compare their compatibility with observed distributions. In a second approach we use the shock normal and the flux rope axis orientations, as well as the impact parameter, to provide statistical information across the spacecraft trajectory. The study of different 3D shock models shows that the observations are compatible with a ...
Model exploration and analysis for quantitative safety refinement in probabilistic B
Ndukwu, Ukachukwu; 10.4204/EPTCS.55.7
2011-01-01
The role played by counterexamples in standard system analysis is well known; but less common is a notion of counterexample in probabilistic systems refinement. In this paper we extend previous work using counterexamples to inductive invariant properties of probabilistic systems, demonstrating how they can be used to extend the technique of bounded model checking-style analysis for the refinement of quantitative safety specifications in the probabilistic B language. In particular, we show how the method can be adapted to cope with refinements incorporating probabilistic loops. Finally, we demonstrate the technique on pB models summarising a one-step refinement of a randomised algorithm for finding the minimum cut of undirected graphs, and that for the dependability analysis of a controller design.
Hasegawa, K; Funatsu, K
2000-01-01
Quantitative structure-activity relationship (QSAR) studies based on chemometric techniques are reviewed. Partial least squares (PLS) is introduced as a robust method to replace classical methods such as multiple linear regression (MLR). Advantages of PLS compared to MLR are illustrated with typical applications. The genetic algorithm (GA) is a novel optimization technique that can be used as a search engine in variable selection. A hybrid approach comprising GA and PLS for variable selection developed in our group (GAPLS) is described. A more advanced method for comparative molecular field analysis (CoMFA) modeling, called GA-based region selection (GARGS), is described as well. Applications of GAPLS and GARGS to QSAR and 3D-QSAR problems are shown with representative examples. GA can be hybridized with nonlinear modeling methods such as artificial neural networks (ANN), providing useful tools for chemometrics and QSAR.
Estimation of financial loss ratio for E-insurance:a quantitative model
Institute of Scientific and Technical Information of China (English)
钟元生; 陈德人; 施敏华
2002-01-01
In view of the risks of E-commerce and the response of the insurance industry to them, this paper addresses one important aspect of insurance: estimation of the financial loss ratio, which is one of the most difficult problems facing the E-insurance industry. This paper proposes a quantitative model for estimating the E-insurance financial loss ratio. The model is based on gross income per enterprise and the CSI/FBI computer crime and security survey. The analysis results presented are reasonable and valuable for both insurer and insured, and thus can be accepted by both of them. We must point out that, under our assumptions, the financial loss ratio varied very little (0.233% in 1999 and 0.236% in 2000), although there was much variation in the main data of the CSI/FBI survey.
Levashov, P R; Sin'ko, G V; Smirnov, N A; Minakov, D V; Shemyakin, O P; Khishchenko, K V
2010-12-22
In the present work, we compare the thermal contribution of electrons to thermodynamic functions of metals in different models at high densities and electron temperatures. One of the theoretical approaches, the full-potential linear-muffin-tin-orbital method, treats all electrons in the framework of density functional theory (DFT). The other approach, VASP, uses projector-augmented-wave pseudopotentials for the core electrons and treats the valence electrons also in the context of DFT. We analyze the limitations of the pseudopotential approach and compare the DFT results with a finite-temperature Thomas-Fermi model and two semiempirical equations of state.
Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.
Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong
2015-05-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case.
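The Pillai-Bartlett trace used in these association tests can be computed directly from the hypothesis and error SSCP matrices of a multivariate linear model. The sketch below does this on synthetic pleiotropy-like data; all names and data are illustrative, not the functional-linear-model machinery of the paper.

```python
import numpy as np

def pillai_trace(X, Y):
    """Pillai-Bartlett trace for association between predictors X and
    multivariate responses Y in the model Y = X B + E:
    V = tr(H (H + E)^-1), with hypothesis SSCP H and error SSCP E."""
    n = X.shape[0]
    Xc = np.column_stack([np.ones(n), X])          # add intercept
    B, *_ = np.linalg.lstsq(Xc, Y, rcond=None)
    Yhat = Xc @ B
    Ybar = Y.mean(axis=0)
    H = (Yhat - Ybar).T @ (Yhat - Ybar)            # hypothesis SSCP
    E = (Y - Yhat).T @ (Y - Yhat)                  # error SSCP
    return float(np.trace(H @ np.linalg.inv(H + E)))

rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(200, 4)).astype(float)              # genotype-like predictors
traits = G[:, :1] @ np.ones((1, 3)) + rng.normal(size=(200, 3))  # one pleiotropic variant
print(round(pillai_trace(G, traits), 3))
```

A large trace relative to its null distribution (approximated by an F statistic in practice) indicates joint association of the variants with the trait vector.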
A quantitative quasispecies theory-based model of virus escape mutation under immune selection.
Woo, Hyung-June; Reifman, Jaques
2012-08-07
Viral infections involve a complex interplay of the immune response and escape mutation of the virus quasispecies inside a single host. Although fundamental aspects of such a balance of mutation and selection pressure have been established by the quasispecies theory decades ago, its implications have largely remained qualitative. Here, we present a quantitative approach to model the virus evolution under cytotoxic T-lymphocyte immune response. The virus quasispecies dynamics are explicitly represented by mutations in the combined sequence space of a set of epitopes within the viral genome. We stochastically simulated the growth of a viral population originating from a single wild-type founder virus and its recognition and clearance by the immune response, as well as the expansion of its genetic diversity. Applied to the immune escape of a simian immunodeficiency virus epitope, model predictions were quantitatively comparable to the experimental data. Within the model parameter space, we found two qualitatively different regimes of infectious disease pathogenesis, each representing alternative fates of the immune response: It can clear the infection in finite time or eventually be overwhelmed by viral growth and escape mutation. The latter regime exhibits the characteristic disease progression pattern of human immunodeficiency virus, while the former is bounded by maximum mutation rates that can be suppressed by the immune response. Our results demonstrate that, by explicitly representing epitope mutations and thus providing a genotype-phenotype map, the quasispecies theory can form the basis of a detailed sequence-specific model of real-world viral pathogens evolving under immune selection.
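The two regimes described (clearance in finite time versus escape by mutation) can be reproduced qualitatively with a toy stochastic model. The rates and update rules below are illustrative assumptions, not the authors' epitope-resolved model:

```python
import random

def simulate(mu, clearance, growth=1.8, n0=1000, t_max=60, cap=1e6, seed=0):
    """Toy quasispecies dynamics: a recognized wild-type epitope is cleared
    by the immune response; each replication mutates to an unrecognized
    escape variant with probability mu. Returns 'cleared' or 'escaped'."""
    rng = random.Random(seed)
    wild, mutant = n0, 0
    for _ in range(t_max):
        born = int(wild * (growth - 1))
        escaped = sum(rng.random() < mu for _ in range(born))  # escape mutations
        wild += born - escaped
        mutant = int(mutant * growth) + escaped
        wild = int(wild * (1 - clearance))  # clearance hits only the wild type
        if wild + mutant == 0:
            return "cleared"
        if wild + mutant > cap:
            return "escaped"
    return "cleared" if wild + mutant < n0 else "escaped"

print(simulate(mu=1e-6, clearance=0.9))  # rare mutation: response wins
print(simulate(mu=1e-2, clearance=0.9))  # frequent mutation: escape wins
```

Sweeping `mu` against `clearance` traces out the boundary between the two fates, analogous to the maximum suppressible mutation rate discussed in the abstract.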
Wang, Ling; Zhao, Geng-xing; Zhu, Xi-cun; Lei, Tong; Dong, Fang
2010-10-01
Hyperspectral technique has become a basis of quantitative remote sensing. The hyperspectrum of an apple tree canopy at the prosperous fruit stage comprises complex information from fruits, leaves, stocks, soil and reflecting films, and is mostly affected by the component features of the canopy at this stage. First, the hyperspectra of 18 sample apple trees with reflecting films were compared with those of 44 trees without reflecting films. The impact of reflecting films on reflectance was obvious, so the sample trees with ground reflecting films were analyzed separately from those without. Secondly, nine indexes of canopy components were built based on classified digital photos of the 44 apple trees without ground films. Thirdly, the correlation between the nine indexes and canopy reflectance, including several kinds of transformed data, was analyzed. The results showed that the correlation between reflectance and the fruit-to-leaf ratio was the best, with a maximum coefficient of 0.815, and the correlation between reflectance and the leaf ratio was slightly better than that between reflectance and fruit density. Then models of correlation analysis, linear regression, BP neural network and support vector regression were used to explain the quantitative relationship between hyperspectral reflectance and the fruit-to-leaf ratio, using the DPS and LIBSVM software packages. All four models were feasible for prediction in the 611-680 nm characteristic band, while the accuracy of the BP neural network and support vector regression models was better than that of one-variable and multi-variable linear regression, and the support vector regression model was the most accurate. This study can serve as a reliable theoretical reference for apple yield estimation based on remote sensing data.
Liu, L.; Hu, J.; Zhou, Q.
2016-12-01
The rapid accumulation of geophysical and geological data sets poses an increasing demand for the development of geodynamic models to better understand the evolution of the solid Earth. Consequently, the earlier qualitative physical models are no longer satisfactory. Recent efforts focus on more quantitative simulations and more efficient numerical algorithms. Among these, a particular line of research is the implementation of data-oriented geodynamic modeling, with the purpose of building an observationally consistent and physically correct geodynamic framework. Such models can often catalyze new insights into the functioning mechanisms of the various aspects of plate tectonics, and their predictive nature can also guide future research in a deterministic fashion. Over the years, we have been working on constructing large-scale geodynamic models with both sequential and variational data assimilation techniques. These models act as a bridge between different observational records, and the superposition of the constraining power of different data sets helps reveal unknown processes and mechanisms of the dynamics of the mantle and lithosphere. We simulate the post-Cretaceous subduction history in South America using a forward (sequential) approach. The model is constrained using past subduction history, seafloor age evolution, tectonic architecture of continents, and present-day geophysical observations. Our results quantify the various driving forces shaping the present South American flat slabs, which we found are all internally torn. The 3-D geometry of these torn slabs further explains the abnormal seismicity pattern and enigmatic volcanic history. An inverse (variational) model simulating the late Cenozoic western U.S. mantle dynamics with similar constraints reveals a mechanism for the formation of Yellowstone-related volcanism that differs from the traditional understanding. Furthermore, important insights on the mantle density and viscosity structures
Unbiased Quantitative Models of Protein Translation Derived from Ribosome Profiling Data.
Directory of Open Access Journals (Sweden)
Alexey A Gritsenko
2015-08-01
Full Text Available Translation of RNA to protein is a core process for any living organism. While for some steps of this process the effect on protein production is understood, a holistic understanding of translation still remains elusive. In silico modelling is a promising approach for elucidating the process of protein synthesis. Although a number of computational models of the process have been proposed, their application is limited by the assumptions they make. Ribosome profiling (RP), a relatively new sequencing-based technique capable of recording snapshots of the locations of actively translating ribosomes, is a promising source of information for deriving unbiased data-driven translation models. However, quantitative analysis of RP data is challenging due to high measurement variance and the inability to discriminate between the number of ribosomes measured on a gene and their speed of translation. We propose a solution in the form of a novel multi-scale interpretation of RP data that allows for deriving models with translation dynamics extracted from the snapshots. We demonstrate the usefulness of this approach by simultaneously determining for the first time per-codon translation elongation and per-gene translation initiation rates of Saccharomyces cerevisiae from RP data for two versions of the Totally Asymmetric Exclusion Process (TASEP) model of translation. We do this in an unbiased fashion, by fitting the models using only RP data with a novel optimization scheme based on Monte Carlo simulation to keep the problem tractable. The fitted models match the data significantly better than existing models and their predictions show better agreement with several independent protein abundance datasets than existing models. Results additionally indicate that the tRNA pool adaptation hypothesis is incomplete, with evidence suggesting that tRNA post-transcriptional modifications and codon context may play a role in determining codon elongation rates.
Unbiased Quantitative Models of Protein Translation Derived from Ribosome Profiling Data.
Gritsenko, Alexey A; Hulsman, Marc; Reinders, Marcel J T; de Ridder, Dick
2015-08-01
Translation of RNA to protein is a core process for any living organism. While for some steps of this process the effect on protein production is understood, a holistic understanding of translation still remains elusive. In silico modelling is a promising approach for elucidating the process of protein synthesis. Although a number of computational models of the process have been proposed, their application is limited by the assumptions they make. Ribosome profiling (RP), a relatively new sequencing-based technique capable of recording snapshots of the locations of actively translating ribosomes, is a promising source of information for deriving unbiased data-driven translation models. However, quantitative analysis of RP data is challenging due to high measurement variance and the inability to discriminate between the number of ribosomes measured on a gene and their speed of translation. We propose a solution in the form of a novel multi-scale interpretation of RP data that allows for deriving models with translation dynamics extracted from the snapshots. We demonstrate the usefulness of this approach by simultaneously determining for the first time per-codon translation elongation and per-gene translation initiation rates of Saccharomyces cerevisiae from RP data for two versions of the Totally Asymmetric Exclusion Process (TASEP) model of translation. We do this in an unbiased fashion, by fitting the models using only RP data with a novel optimization scheme based on Monte Carlo simulation to keep the problem tractable. The fitted models match the data significantly better than existing models and their predictions show better agreement with several independent protein abundance datasets than existing models. Results additionally indicate that the tRNA pool adaptation hypothesis is incomplete, with evidence suggesting that tRNA post-transcriptional modifications and codon context may play a role in determining codon elongation rates.
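The TASEP dynamics underlying the models above can be sketched as a minimal stochastic lattice simulation: ribosomes enter at the first codon, hop rightward at site-specific rates, and are blocked by a ribosome already occupying the next site. This is an illustrative toy, not the fitted yeast models from the paper; the site count, initiation rate, uniform elongation rates, and random-sequential update scheme are all assumptions.

```python
import random

def simulate_tasep(n_sites, init_rate, elong_rates, steps, seed=0):
    """Minimal TASEP: a ribosome enters at site 0 with probability
    init_rate per step and hops right with site-specific elongation
    probabilities; a hop is blocked if the next site is occupied
    (the exclusion rule)."""
    rng = random.Random(seed)
    lattice = [0] * n_sites
    completed = 0
    for _ in range(steps):
        # attempted initiation at the first site
        if lattice[0] == 0 and rng.random() < init_rate:
            lattice[0] = 1
        # sweep right-to-left so a ribosome moves at most once per step
        for i in range(n_sites - 1, -1, -1):
            if lattice[i] == 1 and rng.random() < elong_rates[i]:
                if i == n_sites - 1:
                    lattice[i] = 0       # termination: protein completed
                    completed += 1
                elif lattice[i + 1] == 0:
                    lattice[i] = 0       # elongation by one codon
                    lattice[i + 1] = 1
    density = sum(lattice) / n_sites
    return completed, density

completed, density = simulate_tasep(
    n_sites=50, init_rate=0.3, elong_rates=[0.8] * 50, steps=2000)
```

Fitting per-codon rates, as done in the paper, would wrap a simulator like this inside a Monte Carlo optimization loop comparing simulated ribosome densities to RP profiles.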
Simeon, Tomekia M.; Ratner, Mark A.; Schatz, George C.
2013-01-01
The design and assembly of mechanically interlocked molecules, such as catenanes and rotaxanes, are dictated by various types of noncovalent interactions. In particular, [C-H⋯O] hydrogen-bonding and π-π stacking interactions in these supramolecular complexes have been identified as important noncovalent interactions. With this in mind, we examined the [3]catenane 2·4PF6 using molecular mechanics (MM3), ab initio methods (HF, MP2), several versions of density functional theory (DFT) (B3LYP, M0X), and the dispersion-corrected method DFT-D3. Symmetry adapted perturbation theory (DFT-SAPT) provides the highest level of theory considered, and we use the DFT-SAPT results both to calibrate the other electronic structure methods and the empirical MM3 force field that is often used to describe larger catenane and rotaxane structures where [C-H⋯O] hydrogen-bonding and π-π stacking interactions play a role. Our results indicate that the MM3 calculated complexation energies agree qualitatively with the energetic ordering from DFT-SAPT calculations with an aug-cc-pVTZ basis, both for structures dominated by [C-H⋯O] hydrogen-bonding and π-π stacking interactions. When the DFT-SAPT energies are decomposed into components, we find that electrostatic interactions dominate the [C-H⋯O] hydrogen-bonding interactions, while dispersion makes a significant contribution to π-π stacking. Another important conclusion is that DFT-D3 based on M06 or M06-2X provides interaction energies that are in near-quantitative agreement with DFT-SAPT. DFT results without the D3 correction have important differences compared to DFT-SAPT, while HF and even MP2 results are in poor agreement with DFT-SAPT. PMID:23941280
Simeon, Tomekia M; Ratner, Mark A; Schatz, George C
2013-08-22
The design and assembly of mechanically interlocked molecules, such as catenanes and rotaxanes, are dictated by various types of noncovalent interactions. In particular, [C-H···O] hydrogen-bonding and π-π stacking interactions in these supramolecular complexes have been identified as important noncovalent interactions. With this in mind, we examined the [3]catenane 2·4PF6 using molecular mechanics (MM3), ab initio methods (HF, MP2), several versions of density functional theory (DFT) (B3LYP, M0X), and the dispersion-corrected method DFT-D3. Symmetry adapted perturbation theory (DFT-SAPT) provides the highest level of theory considered, and we use the DFT-SAPT results both to calibrate the other electronic structure methods, and the empirical potential MM3 force field that is often used to describe larger catenane and rotaxane structures where [C-H···O] hydrogen-bonding and π-π stacking interactions play a role. Our results indicate that the MM3 calculated complexation energies agree qualitatively with the energetic ordering from DFT-SAPT calculations with an aug-cc-pVTZ basis, both for structures dominated by [C-H···O] hydrogen-bonding and π-π stacking interactions. When the DFT-SAPT energies are decomposed into components, we find that electrostatic interactions dominate the [C-H···O] hydrogen-bonding interactions, while dispersion makes a significant contribution to π-π stacking. Another important conclusion is that DFT-D3 based on M06 or M06-2X provides interaction energies that are in near-quantitative agreement with DFT-SAPT. DFT results without the D3 correction have important differences compared to DFT-SAPT, while HF and even MP2 results are in poor agreement with DFT-SAPT.
Quantitative Agent Based Model of Opinion Dynamics: Polish Elections of 2015.
Directory of Open Access Journals (Sweden)
Pawel Sobkowicz
Full Text Available We present results of an abstract, agent-based model of opinion dynamics based on the emotion/information/opinion (E/I/O) approach, applied to a strongly polarized society, corresponding to the Polish political scene between 2005 and 2015. Under certain conditions the model leads to metastable coexistence of two subcommunities of comparable size (supporting the corresponding opinions), which corresponds to the bipartisan split found in Poland. Spurred by the recent breakdown of this political duopoly, which occurred in 2015, we present a model extension that describes both the long-term coexistence of the two opposing opinions and a rapid, transitory change due to the appearance of a third-party alternative. We provide a quantitative comparison of the model with the results of polls and elections in Poland, testing the assumptions related to the modeled processes and the parameters used in the simulations. It is shown that when the propaganda messages of the two incumbent parties differ in emotional tone, the political status quo may be unstable. The asymmetry of the emotions within the support bases of the two parties allows one of them to be 'invaded' by a newcomer third party very quickly, while the second remains immune to such invasion.
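The asymmetry effect described above can be caricatured with a toy voter-style model rather than the paper's full E/I/O framework: two camps copy opinions from random peers, but one camp switches more readily than the other. The agent count, switching probabilities, and update rule below are invented for illustration.

```python
import random

def simulate(n_agents, p_switch, steps, seed=0):
    """Asymmetric voter-style opinion model: agents hold opinion 'A' or
    'B'; at each step a random agent considers copying a random other
    agent and actually switches with a camp-specific probability.
    Unequal p_switch values mimic the emotional asymmetry between the
    two support bases described in the abstract."""
    rng = random.Random(seed)
    opinions = ['A'] * (n_agents // 2) + ['B'] * (n_agents - n_agents // 2)
    for _ in range(steps):
        i = rng.randrange(n_agents)
        j = rng.randrange(n_agents)
        if opinions[i] != opinions[j] and rng.random() < p_switch[opinions[i]]:
            opinions[i] = opinions[j]
    return opinions.count('A') / n_agents

# symmetric camps vs. one camp that is far easier to 'invade'
share_symmetric = simulate(200, {'A': 0.1, 'B': 0.1}, 5000)
share_asymmetric = simulate(200, {'A': 0.5, 'B': 0.05}, 5000)
```

In runs of this kind, the camp with the higher switching probability tends to lose ground, which is the qualitative instability the abstract attributes to asymmetric emotional tone.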
Quantitative MR application in depression model of rats: a preliminary study
Institute of Scientific and Technical Information of China (English)
Wei Wang; Wenxun Li; Fang Fang; Hao Lei; Xiaoping Yin; Jianpin Qi; Baiseng Wang; Chengyuan Wang
2005-01-01
Objective: To investigate the findings and value of quantitative MR in a rat model of depression. Methods: Twenty male SD rats were randomly divided into a model group and a control group (10 rats in each group). The rat depression model was established by separation and chronic unpredictable stress. Behavior was assessed by the open-field test and sucrose consumption. MR images of brain tissue were acquired in vivo with T2- and diffusion-weighted imaging. Changes in body weight and behavior score and the T2 and ADC values of ROIs were compared between the two groups. Histological verification of hippocampal neuron damage was also performed under ultramicroscopy. Results: Compared with the control group, T2 values in the hippocampus of the model group were prolonged by 5.5% (P < 0.05), and ADC values in the hippocampus and temporal lobe cortex decreased by 11.7% and 10.9% (P < 0.01), respectively. Histologic data confirmed severe neuronal damage in the hippocampus of the model group. Conclusion: This study capitalized on diffusion-weighted imaging as a sensitive technique for the identification of neuronal damage in depression, and it provides experimental evidence for MRI in depression investigation and clinical application.
Directory of Open Access Journals (Sweden)
Aaron Smith
2014-12-01
Full Text Available The accurate characterization of three-dimensional (3D) root architecture, volume, and biomass is important for a wide variety of applications in forest ecology and for better understanding tree and soil stability. Technological advancements have led to increasingly digitized and automated procedures, which have been used to describe the 3D structure of root systems more accurately and quickly. Terrestrial laser scanners (TLS) have successfully been used to describe the aboveground structures of individual trees and stand structure, but have only recently been applied to the 3D characterization of whole root systems. In this study, 13 recently harvested Norway spruce root systems were mechanically pulled from the soil, cleaned, and their volumes were measured by displacement. The root systems were suspended, scanned with TLS from three different angles, and the root surfaces from the co-registered point clouds were modeled with the 3D Quantitative Structure Model to determine root architecture and volume. The modeling procedure facilitated the rapid derivation of root volume, diameters, break point diameters, linear root length, cumulative percentages, and root fraction counts. The modeled root systems underestimated root system volume by 4.4%. The modeling procedure is widely applicable and easily adapted to derive other important topological and volumetric root variables.
A bivariate quantitative genetic model for a linear Gaussian trait and a survival trait
Directory of Open Access Journals (Sweden)
Damgaard Lars
2005-12-01
Full Text Available Abstract With the increasing use of survival models in animal breeding to address the genetic aspects of mainly longevity of livestock but also disease traits, the need for methods to infer genetic correlations and to do multivariate evaluations of survival traits and other types of traits has become increasingly important. In this study we derived and implemented a bivariate quantitative genetic model for a linear Gaussian and a survival trait that are genetically and environmentally correlated. For the survival trait, we considered the Weibull log-normal animal frailty model. A Bayesian approach using Gibbs sampling was adopted. Model parameters were inferred from their marginal posterior distributions. The required fully conditional posterior distributions were derived and issues on implementation are discussed. The two Weibull baseline parameters were updated jointly using a Metropolis-Hastings step. The remaining model parameters with non-normalized fully conditional distributions were updated univariately using adaptive rejection sampling. Simulation results showed that the estimated marginal posterior distributions covered well and placed high density on the true parameter values used in the simulation of data. In conclusion, the proposed method allows inferring additive genetic and environmental correlations, and doing multivariate genetic evaluation of a linear Gaussian trait and a survival trait.
A bivariate quantitative genetic model for a linear Gaussian trait and a survival trait.
Damgaard, Lars Holm; Korsgaard, Inge Riis
2006-01-01
With the increasing use of survival models in animal breeding to address the genetic aspects of mainly longevity of livestock but also disease traits, the need for methods to infer genetic correlations and to do multivariate evaluations of survival traits and other types of traits has become increasingly important. In this study we derived and implemented a bivariate quantitative genetic model for a linear Gaussian and a survival trait that are genetically and environmentally correlated. For the survival trait, we considered the Weibull log-normal animal frailty model. A Bayesian approach using Gibbs sampling was adopted. Model parameters were inferred from their marginal posterior distributions. The required fully conditional posterior distributions were derived and issues on implementation are discussed. The two Weibull baseline parameters were updated jointly using a Metropolis-Hastings step. The remaining model parameters with non-normalized fully conditional distributions were updated univariately using adaptive rejection sampling. Simulation results showed that the estimated marginal posterior distributions covered well and placed high density on the true parameter values used in the simulation of data. In conclusion, the proposed method allows inferring additive genetic and environmental correlations, and doing multivariate genetic evaluation of a linear Gaussian trait and a survival trait.
Gramatica, Paola; Papa, Ester; Luini, Mara; Monti, Elena; Gariboldi, Marzia B; Ravera, Mauro; Gabano, Elisabetta; Gaviglio, Luca; Osella, Domenico
2010-09-01
Several Pt(IV) complexes of the general formula [Pt(L)2(L')2(L'')2] [axial ligands L are Cl-, RCOO-, or OH-; equatorial ligands L' are two am(m)ine or one diamine; and equatorial ligands L'' are Cl- or glycolato] were rationally designed and synthesized in the attempt to develop a predictive quantitative structure-activity relationship (QSAR) model. Numerous theoretical molecular descriptors were used alongside physicochemical data (i.e., reduction peak potential, Ep, and partition coefficient, log Po/w) to obtain a validated QSAR between in vitro cytotoxicity (half maximal inhibitory concentrations, IC50, on A2780 ovarian and HCT116 colon carcinoma cell lines) and some features of Pt(IV) complexes. In the resulting best models, a lipophilic descriptor (log Po/w or the number of secondary sp3 carbon atoms) plus an electronic descriptor (Ep, the number of oxygen atoms, or the topological polar surface area expressed as the N,O polar contribution) is necessary for modeling, supporting the general finding that the biological behavior of Pt(IV) complexes can be rationalized on the basis of their cellular uptake, the Pt(IV)→Pt(II) reduction, and the structure of the corresponding Pt(II) metabolites. Novel compounds were synthesized on the basis of their predicted cytotoxicity in the preliminary QSAR model, and were experimentally tested. A final QSAR model, based solely on theoretical molecular descriptors to ensure its general applicability, is proposed.
Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cell function and predicting cellular responses. Mathematical models based on a logic formalism are relatively simple, yet can describe how signals propagate from one protein to the next; they have led to the construction of models that simulate a cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, yielding quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms that pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed nonlinear programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.
Cassani, Stefano; Kovarich, Simona; Papa, Ester; Roy, Partha Pratim; van der Wal, Leon; Gramatica, Paola
2013-08-15
Due to their chemical properties, synthetic triazoles and benzotriazoles ((B)TAZs) are mainly distributed to the water compartments of the environment, and because of their wide use the potential effects on aquatic organisms are a cause of concern. Non-testing approaches such as those based on quantitative structure-activity relationships (QSARs) are valuable tools to maximize the information contained in existing experimental data and to predict missing information while minimizing animal testing. In the present study, externally validated QSAR models for the prediction of acute (B)TAZ toxicity in Daphnia magna and Oncorhynchus mykiss have been developed according to the principles for the validation of QSARs and their acceptability for regulatory purposes proposed by the Organization for Economic Co-operation and Development (OECD). These models are based on theoretical molecular descriptors, and are statistically robust, externally predictive, and characterized by a verifiable structural applicability domain. They have been applied to predict acute toxicity for over 300 (B)TAZs without experimental data, many of which are on the pre-registration list of the REACH regulation. Additionally, a model based on quantitative activity-activity relationships (QAAR) has been developed, which allows for interspecies extrapolation from daphnids to fish. The importance of QSAR/QAAR, especially when dealing with specific chemical classes like (B)TAZs, for screening and prioritization of pollutants under REACH has been highlighted. Copyright © 2013 Elsevier B.V. All rights reserved.
Examination of Modeling Languages to Allow Quantitative Analysis for Model-Based Systems Engineering
2014-06-01
model of the system (Friedenthal, Moore, and Steiner 2008, 17). The premise is that maintaining a logical and consistent model can be accomplished... Standard for the Exchange of Product data (STEP) subgroup of ISO, and defines a standard data format for certain types of SE information (Johnson 2006)... search.credoreference.com/content/entry/encyccs/formal_languages/0. Friedenthal, Sanford, Alan Moore, and Rick Steiner. 2008. A Practical Guide to SysML.
Directory of Open Access Journals (Sweden)
Wu Wei-Zhong
2011-01-01
Full Text Available Abstract Background Antiangiogenesis is a promising therapy for advanced hepatocellular carcinoma (HCC), but its effects are difficult to evaluate. Pazopanib (GW786034B) is a pan-vascular endothelial growth factor receptor inhibitor whose antitumor and antiangiogenic effects have not been investigated in HCC. Methods In vitro direct effects of pazopanib on human HCC cell lines and endothelial cells were evaluated. In vivo antitumor effects were evaluated in three xenograft nude mouse models. In the subcutaneous HCCLM3 model, intratumoral blood perfusion was detected by contrast-enhanced ultrasonography (CEUS), and serial quantitative parameters were profiled from the time-intensity curves of ultrasonograms. Results In vitro proliferation of various HCC cell lines was not inhibited by pazopanib. Pazopanib significantly inhibited migration and invasion and induced apoptosis in two HCC cell lines, HCCLM3 and PLC/PRF/5. Proliferation, migration, and tubule formation of human umbilical vein endothelial cells were inhibited by pazopanib in a dose-dependent manner. In vivo tumor growth was significantly inhibited by pazopanib in HCCLM3, HepG2, and PLC/PRF/5 xenograft models. Various intratumoral perfusion parameters changed over time, and the signal intensity was significantly impaired in the treated tumors before the treatment effect on tumor size could be observed. Mean transit time of the contrast media in hotspot areas of the tumors was inversely correlated with intratumoral microvessel density. Conclusions The antitumor effects of pazopanib in HCC xenografts may be attributable to its antiangiogenic effects, and these in vivo antiangiogenic effects can be evaluated by quantitative CEUS.
Directory of Open Access Journals (Sweden)
Gao Shouguo
2011-08-01
Full Text Available Abstract Background A Bayesian network (BN) is a powerful approach to reconstructing genetic regulatory networks from gene expression data. However, expression data by itself suffers from high noise and lack of power. Incorporating prior biological knowledge can improve performance. As each type of prior knowledge on its own may be incomplete or limited by quality issues, it is desirable to integrate multiple sources of prior knowledge and utilize their consensus. Results We introduce a new method to incorporate quantitative information from multiple sources of prior knowledge. It first uses a Naïve Bayesian classifier to assess the likelihood of functional linkage between gene pairs based on prior knowledge; in this study we included co-citation in PubMed and semantic similarity in Gene Ontology annotation. A candidate network edge reservoir is then created in which the copy number of each edge is proportional to the estimated likelihood of linkage between the two corresponding genes. In network simulation, a Markov chain Monte Carlo sampling algorithm is adopted that samples from this reservoir at each iteration to generate new candidate networks. We evaluated the new algorithm using both simulated and real gene expression data, including data from a yeast cell cycle and a mouse pancreas development/growth study. Incorporating prior knowledge led to a ~2-fold increase in the number of known transcription regulations recovered, without significant change in the false positive rate. In contrast, without the prior knowledge, BN modeling is not always better than a random selection, demonstrating the necessity of supplementing the gene expression data with additional information in network modeling. Conclusions Our new development provides a statistical means to utilize the quantitative information in prior biological knowledge in BN modeling of gene expression data, which significantly improves performance.
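The two ideas in the Results above can be sketched in a few lines: a Naïve Bayes combination of independent prior-knowledge sources into a linkage likelihood, and an edge reservoir whose copy numbers are proportional to that likelihood. The evidence sources, conditional probabilities, and gene pairs below are hypothetical values chosen for illustration.

```python
import random

def naive_bayes_likelihood(evidence, cond_probs, prior=0.5):
    """Naive Bayes combination of independent evidence sources
    (e.g. PubMed co-citation, GO semantic similarity) into the
    posterior probability of functional linkage for a gene pair."""
    p_link, p_nolink = prior, 1.0 - prior
    for source, present in evidence.items():
        p_on, p_off = cond_probs[source]  # P(evidence | link), P(evidence | no link)
        p_link *= p_on if present else 1.0 - p_on
        p_nolink *= p_off if present else 1.0 - p_off
    return p_link / (p_link + p_nolink)

def build_edge_reservoir(edge_likelihoods, scale=10):
    """Copy number of each candidate edge is proportional to its
    prior-knowledge likelihood, so MCMC proposals drawn from the
    reservoir favor well-supported edges."""
    reservoir = []
    for edge, lik in edge_likelihoods.items():
        reservoir.extend([edge] * max(1, round(lik * scale)))
    return reservoir

# hypothetical evidence for one gene pair
lik = naive_bayes_likelihood(
    {"cocitation": True, "go_similarity": True},
    {"cocitation": (0.7, 0.2), "go_similarity": (0.6, 0.3)})
reservoir = build_edge_reservoir({("A", "B"): lik, ("B", "C"): 0.1})
sampled = random.Random(0).choice(reservoir)
```

An MCMC network search would then draw candidate edges from `reservoir` at each iteration instead of uniformly from all gene pairs.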
A Quantitative, Time-Dependent Model of Oxygen Isotopes in the Solar Nebula: Step one
Nuth, J. A.; Paquette, J. A.; Farquhar, A.; Johnson, N. M.
2011-01-01
The remarkable discovery that oxygen isotopes in primitive meteorites were fractionated along a line of slope 1, rather than along the typical slope-0.52 terrestrial fractionation line, occurred almost 40 years ago. However, a satisfactory, quantitative explanation for this observation has yet to be found, though many different explanations have been proposed. The first of these explanations proposed that the observed line represented the final product produced by mixing molecular cloud dust with a nucleosynthetic component, rich in O-16, possibly resulting from a nearby supernova explosion. Donald Clayton suggested that Galactic Chemical Evolution would gradually change the oxygen isotopic composition of the interstellar grain population by steadily producing O-16 in supernovae, then producing the heavier isotopes as secondary products in lower mass stars. Thiemens and collaborators proposed a chemical mechanism that relied on the availability of additional active rotational and vibrational states in otherwise-symmetric molecules, such as CO2, O3, or SiO2, containing two different oxygen isotopes, and a second, photochemical process that suggested that differential photochemical dissociation processes could fractionate oxygen. This second line of research has been pursued by several groups, though none of the current models is quantitative.
Climate change and dengue: a critical and systematic review of quantitative modelling approaches.
Naish, Suchithra; Dale, Pat; Mackenzie, John S; McBride, John; Mengersen, Kerrie; Tong, Shilu
2014-03-26
Many studies have found associations between climatic conditions and dengue transmission. However, there is a debate about the future impacts of climate change on dengue transmission. This paper reviewed epidemiological evidence on the relationship between climate and dengue with a focus on quantitative methods for assessing the potential impacts of climate change on global dengue transmission. A literature search was conducted in October 2012, using the electronic databases PubMed, Scopus, ScienceDirect, ProQuest, and Web of Science. The search focused on peer-reviewed journal articles published in English from January 1991 through October 2012. Sixteen studies met the inclusion criteria and most studies showed that the transmission of dengue is highly sensitive to climatic conditions, especially temperature, rainfall and relative humidity. Studies on the potential impacts of climate change on dengue indicate increased climatic suitability for transmission and an expansion of the geographic regions at risk during this century. A variety of quantitative modelling approaches were used in the studies. Several key methodological issues and current knowledge gaps were identified through this review. It is important to assemble spatio-temporal patterns of dengue transmission compatible with long-term data on climate and other socio-ecological changes and this would advance projections of dengue risks associated with climate change.
Fourier Series, the DFT and Shape Modelling
DEFF Research Database (Denmark)
Skoglund, Karl
2004-01-01
This report provides an introduction to Fourier series, the discrete Fourier transform, complex geometry and Fourier descriptors for shape analysis. The content is aimed at undergraduate and graduate students who wish to learn about Fourier analysis in general, as well as its application to shape...
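The Fourier-descriptor idea the report introduces can be sketched directly: embed a closed 2-D contour as complex numbers x + iy and take its DFT; the magnitudes of the coefficients (ignoring the zeroth, which only encodes position) serve as translation-invariant shape descriptors. The naive O(N²) DFT and the four-point square contour below are illustrative choices, not the report's own examples.

```python
import cmath

def dft(z):
    """Discrete Fourier transform of a complex sequence (naive O(N^2))."""
    n = len(z)
    return [sum(z[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

def fourier_descriptors(points):
    """Complex-embed a closed 2-D contour and return the magnitudes of
    its DFT coefficients, skipping coefficient 0 (the centroid term),
    which makes the descriptors translation-invariant."""
    z = [complex(x, y) for x, y in points]
    return [abs(c) for c in dft(z)[1:]]

# unit square contour sampled at its 4 corners (a deliberately coarse example)
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
desc = fourier_descriptors(square)
```

Dividing all descriptors by the first additionally yields scale invariance, a standard refinement in shape analysis.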
Shrestha, Kushal; Virgil, Kyle A; Jakubikova, Elena
2016-07-28
Tetrapyrrole-based pigments play a crucial role in photosynthesis as principal light absorbers in light-harvesting chemical systems. As such, accurate theoretical descriptions of the electronic absorption spectra of these pigments will aid in the proper description and understanding of the overall photophysics of photosynthesis. In this work, time-dependent density functional theory (TD-DFT) at the CAM-B3LYP/6-31G* level of theory is employed to produce the theoretical absorption spectra of several tetrapyrrole-based pigments. However, the application of TD-DFT to large systems with several hundreds of atoms can become computationally prohibitive. Therefore, in this study, TD-DFT calculations with reduced orbital spaces (ROSs) that exclude portions of occupied and virtual orbitals are pursued as a viable, computationally cost-effective alternative to conventional TD-DFT calculations. The effects of reducing orbital space size on theoretical spectra are qualitatively and quantitatively described, and both conventional and ROS results are benchmarked against experimental absorption spectra of various tetrapyrrole-based pigments. The orbital reduction approach is also applied to a large natural pigment assembly that comprises the principal light-absorbing component of the reaction center in purple bacteria. Overall, we find that TD-DFT calculations with proper and judicious orbital space reductions can adequately reproduce conventional, full orbital space, TD-DFT results of all pigments studied in this work.
Quantitative Models of the Dose-Response and Time Course of Inhalational Anthrax in Humans
Schell, Wiley A.; Bulmahn, Kenneth; Walton, Thomas E.; Woods, Christopher W.; Coghill, Catherine; Gallegos, Frank; Samore, Matthew H.; Adler, Frederick R.
2013-01-01
Anthrax poses a community health risk due to accidental or intentional aerosol release. Reliable quantitative dose-response analyses are required to estimate the magnitude and timeline of potential consequences and the effect of public health intervention strategies under specific scenarios. Analyses of available data from exposures and infections of humans and non-human primates are often contradictory. We review existing quantitative inhalational anthrax dose-response models in light of criteria we propose for a model to be useful and defensible. To satisfy these criteria, we extend an existing mechanistic competing-risks model to create a novel Exposure–Infection–Symptomatic illness–Death (EISD) model and use experimental non-human primate data and human epidemiological data to optimize parameter values. The best fit to these data leads to estimates of a dose leading to infection in 50% of susceptible humans (ID50) of 11,000 spores (95% confidence interval 7,200–17,000), ID10 of 1,700 (1,100–2,600), and ID1 of 160 (100–250). These estimates suggest that use of a threshold to human infection of 600 spores (as suggested in the literature) underestimates the infectivity of low doses, while an existing estimate of a 1% infection rate for a single spore overestimates low dose infectivity. We estimate the median time from exposure to onset of symptoms (incubation period) among untreated cases to be 9.9 days (7.7–13.1) for exposure to ID50, 11.8 days (9.5–15.0) for ID10, and 12.1 days (9.9–15.3) for ID1. Our model is the first to provide incubation period estimates that are independently consistent with data from the largest known human outbreak. This model refines previous estimates of the distribution of early onset cases after a release and provides support for the recommended 60-day course of prophylactic antibiotic treatment for individuals exposed to low doses. PMID:24058320
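The ID50/ID10/ID1 quantities above can be illustrated with the standard single-hit exponential dose-response form. This is a simplification of the paper's mechanistic EISD competing-risks model, and the per-spore rate below is a hypothetical value chosen only so that the ID50 matches the reported 11,000 spores.

```python
import math

def infection_prob(dose, r):
    """Exponential (single-hit) dose-response: each inhaled spore
    independently initiates infection with per-spore probability r,
    so P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def dose_for_risk(p, r):
    """Invert the model: dose at which a fraction p of exposed
    individuals becomes infected (p = 0.5 gives the ID50)."""
    return -math.log(1.0 - p) / r

# hypothetical per-spore rate calibrated so that ID50 = 11,000 spores
r = math.log(2) / 11000
id50 = dose_for_risk(0.5, r)
id10 = dose_for_risk(0.1, r)
```

Note that with this calibration the implied ID10 falls near the paper's estimate of 1,700 spores, which is why a no-threshold form of this kind, unlike a hard 600-spore threshold, does not underestimate low-dose infectivity.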
Bayesian model choice and search strategies for mapping interacting quantitative trait loci.
Yi, Nengjun; Xu, Shizhong; Allison, David B
2003-01-01
Most complex traits of animals, plants, and humans are influenced by multiple genetic and environmental factors. Interactions among multiple genes play fundamental roles in the genetic control and evolution of complex traits. Statistical modeling of interaction effects in quantitative trait loci (QTL) analysis must accommodate a very large number of potential genetic effects, which presents a major challenge to determining the genetic model with respect to the number of QTL, their positions, and their genetic effects. In this study, we use the methodology of Bayesian model and variable selection to develop strategies for identifying multiple QTL with complex epistatic patterns in experimental designs with two segregating genotypes. Specifically, we develop a reversible jump Markov chain Monte Carlo algorithm to determine the number of QTL and to select main and epistatic effects. With the proposed method, we can jointly infer the genetic model of a complex trait and the associated genetic parameters, including the number, positions, and main and epistatic effects of the identified QTL. Our method can map a large number of QTL with any combination of main and epistatic effects. Utility and flexibility of the method are demonstrated using both simulated data and a real data set. Sensitivity of posterior inference to prior specifications of the number and genetic effects of QTL is investigated. PMID:14573494
Hämmerling, Frank; Ladd Effio, Christopher; Andris, Sebastian; Kittelmann, Jörg; Hubbuch, Jürgen
2017-01-10
Precipitation is considered to be an effective purification method for proteins and has proven its potential to replace costly chromatography processes. Besides salts and polyelectrolytes, polymers such as polyethylene glycol (PEG) are commonly used for precipitation applications under mild conditions. Process development for protein precipitation steps, however, is still based mainly on heuristic approaches and high-throughput experimentation due to a lack of understanding of the underlying mechanisms. In this work we apply quantitative structure-activity relationships (QSARs) to model two parameters, the discontinuity point m* and the β-value, that describe the complete precipitation curve of a protein under defined conditions. The generated QSAR models are sensitive to the protein type, pH, and ionic strength. It was found that the discontinuity point m* depends mainly on protein molecular structure properties and electrostatic surface properties, whereas the β-value is influenced by the variance in electrostatics and hydrophobicity on the protein surface. The models for m* and the β-value exhibit a good correlation between observed and predicted data with a coefficient of determination of R(2)≥0.90 and, hence, are able to accurately predict precipitation curves for proteins. The predictive capabilities were demonstrated for a set of combinations of protein type, pH, and ionic strength not included in the generation of the models, and good agreement between predicted and experimental data was achieved.
Quantitative structure-property relationship modeling of Grätzel solar cell dyes.
Venkatraman, Vishwesh; Åstrand, Per-Olof; Alsberg, Bjørn Kåre
2014-01-30
With fossil fuel reserves on the decline, there is increasing focus on the design and development of low-cost organic photovoltaic devices, in particular, dye-sensitized solar cells (DSSCs). The power conversion efficiency (PCE) of a DSSC is heavily influenced by the chemical structure of the dye. However, as far as we know, no predictive quantitative structure-property relationship models for DSSCs with PCE as one of the response variables have been reported. Thus, we report for the first time the successful application of comparative molecular field analysis (CoMFA) and vibrational frequency-based eigenvalue (EVA) descriptors to model molecular structure-photovoltaic performance relationships for a set of 40 coumarin derivatives. The results show that the models obtained provide statistically robust predictions of important photovoltaic parameters such as PCE, the open-circuit voltage (V(OC)), short-circuit current (J(SC)) and the peak absorption wavelength λ(max). Some of our findings based on the analysis of the models are in accordance with those reported in the literature. These structure-property relationships can be applied to the rational structural design and evaluation of new photovoltaic materials.
Nonlinear quantitative radiation sensitivity prediction model based on NCI-60 cancer cell lines.
Zhang, Chunying; Girard, Luc; Das, Amit; Chen, Sun; Zheng, Guangqiang; Song, Kai
2014-01-01
We proposed a nonlinear model to perform a novel quantitative radiation sensitivity prediction. We used the NCI-60 panel, which consists of nine different cancer types, as the platform to train our model. Important radiation therapy (RT) related genes were selected by significance analysis of microarrays (SAM). Orthogonal latent variables (LVs) were then extracted by the partial least squares (PLS) method as the new compressive input variables. Finally, support vector machine (SVM) regression model was trained with these LVs to predict the SF2 (the surviving fraction of cells after a radiation dose of 2 Gy γ-ray) values of the cell lines. Comparison with the published results showed significant improvement of the new method in various ways: (a) reducing the root mean square error (RMSE) of the radiation sensitivity prediction model from 0.20 to 0.011; and (b) improving prediction accuracy from 62% to 91%. To test the predictive performance of the gene signature, three different types of cancer patient datasets were used. Survival analysis across these different types of cancer patients strongly confirmed the clinical potential utility of the signature genes as a general prognosis platform. The gene regulatory network analysis identified six hub genes that are involved in canonical cancer pathways.
DFT CONFORMATIONAL STUDIES OF ALPHA-MALTOTRIOSE
Recent DFT optimization studies on alpha-maltose improved our understanding of the preferred conformations of alpha-maltose and the present study extends these studies to alpha-maltotriose with three alpha-D-glucopyranose residues linked by two alpha-[1-4] bridges, denoted herein as DP-3's. Combina...
z-transform DFT filters and FFT's
DEFF Research Database (Denmark)
Bruun, G.
1978-01-01
of DFT filter banks which utilize a minimum of complex coefficients. These implementations lead to new forms of FFT's, among which is a cos/sin FFT for a real signal which employs only real coefficients. The new FFT algorithms use only half as many real multiplications as does the classical FFT.
Noise Tracking Using DFT Domain Subspace Decompositions
Hendriks, R.C.; Jensen, J.; Heusdens, R.
2008-01-01
All discrete Fourier transform (DFT) domain-based speech enhancement gain functions rely on knowledge of the noise power spectral density (PSD). Since the noise PSD is unknown in advance, estimation from the noisy speech signal is necessary. An overestimation of the noise PSD will lead to a loss in
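A minimal DFT-domain noise tracker can be sketched as a per-bin running minimum of a recursively smoothed periodogram (the classic minimum-statistics idea). This is an illustrative sketch with arbitrary smoothing and window parameters, not the subspace-decomposition estimator the paper proposes:

```python
class MinStatNoiseTracker:
    """Track the noise PSD per DFT bin via smoothed-periodogram minima.

    Illustrative minimum-statistics sketch; the smoothing factor and
    window length are arbitrary choices, not values from the paper.
    """
    def __init__(self, n_bins, alpha=0.85, window=8):
        self.alpha = alpha            # recursive smoothing factor
        self.window = window          # frames over which minima are taken
        self.smoothed = [0.0] * n_bins
        self.history = []             # recent smoothed frames

    def update(self, periodogram):
        # First-order recursive smoothing of |X(k)|^2 in each bin
        self.smoothed = [self.alpha * s + (1 - self.alpha) * p
                         for s, p in zip(self.smoothed, periodogram)]
        self.history.append(list(self.smoothed))
        if len(self.history) > self.window:
            self.history.pop(0)
        # Noise PSD estimate: per-bin minimum over the recent window
        return [min(frame[k] for frame in self.history)
                for k in range(len(self.smoothed))]
```

During speech activity the smoothed periodogram rises above the noise floor, but the windowed minimum stays near it, which is what lets such trackers follow the noise PSD without an explicit voice-activity detector.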
DFT STUDIES OF DP-3 AMYLOSE FRAGMENTS
This study extends our work on mono- and disaccharides to structures with three glucose residues by two alpha-[1-4] bridges, denoted herein as DP-3's. DFT optimization studies of DP-3 fragments have been carried out at the B3LYP/6-311++G** level of theory. Different hydroxymethyl conformations (gg...
Birkel, Garrett W; Ghosh, Amit; Kumar, Vinay S; Weaver, Daniel; Ando, David; Backman, Tyler W H; Arkin, Adam P; Keasling, Jay D; Martín, Héctor García
2017-04-05
Modeling of microbial metabolism is a topic of growing importance in biotechnology. Mathematical modeling helps provide a mechanistic understanding for the studied process, separating the main drivers from the circumstantial ones, bounding the outcomes of experiments and guiding engineering approaches. Among different modeling schemes, the quantification of intracellular metabolic fluxes (i.e. the rate of each reaction in cellular metabolism) is of particular interest for metabolic engineering because it describes how carbon and energy flow throughout the cell. In addition to flux analysis, new methods for the effective use of the ever more readily available and abundant -omics data (i.e. transcriptomics, proteomics and metabolomics) are urgently needed. The jQMM library presented here provides an open-source, Python-based framework for modeling internal metabolic fluxes and leveraging other -omics data for the scientific study of cellular metabolism and bioengineering purposes. Firstly, it presents a complete toolbox for simultaneously performing two different types of flux analysis that are typically disjoint: Flux Balance Analysis and (13)C Metabolic Flux Analysis. Moreover, it introduces the capability to use (13)C labeling experimental data to constrain comprehensive genome-scale models through a technique called two-scale (13)C Metabolic Flux Analysis (2S-(13)C MFA). In addition, the library includes a demonstration of a method that uses proteomics data to produce actionable insights to increase biofuel production. Finally, the use of the jQMM library is illustrated through the addition of several Jupyter notebook demonstration files that enhance reproducibility and provide the capability to be adapted to the user's specific needs. jQMM will facilitate the design and metabolic engineering of organisms for biofuels and other chemicals, as well as investigations of cellular metabolism and leveraging -omics data. As an open source software project, we hope it
Directory of Open Access Journals (Sweden)
Dmitry M. Yershov
2012-12-01
This paper proposes a method to obtain values of the coefficients of cause-effect relationships between strategic objectives in the form of intervals and to use them in solving the problem of the optimal allocation of an organization's resources. We suggest taking advantage of the interval analytic hierarchy process for obtaining the intervals. The quantitative model of strategic performance developed by M. Hell, S. Vidučić and Ž. Garača is employed for finding the optimal resource allocation. The uncertainty that arises in the optimization problem as a result of the interval character of the cause-effect relationship coefficients is eliminated through the application of maximax and maximin criteria. It is shown that the problem of finding the optimal maximin, maximax, and compromise resource allocation can be represented as a mixed 0-1 linear programming problem. Finally, a numerical example and directions for further research are given.
Reaction pathways of the dissociation of methylal: A DFT study
Energy Technology Data Exchange (ETDEWEB)
Frey, H.-M.; Beaud, P.; Gerber, T.; Mischler, B.; Radi, P.P.; Tzannis, A.-P. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)
1999-08-01
Schemata for modelling combustion processes do not yet include reaction rates for oxygenated fuels like methylal (DMM), which is considered as an additive or replacement for diesel due to its low sooting propensity. Density functional theory (DFT) studies of the possible reaction pathways for different dissociation steps of methylal are presented. Cleavage of a hydrogen bond to the methoxy group or the central carbon atom was simulated at the BLYP/6-311++G** level of theory. The results are compared to experiment when dissociating and/or ionising DMM with femtosecond pulses. (author) 1 fig., 1 tab., 1 ref.
DFT studies of CNT-functionalized uracil-acetate hybrids
Mirzaei, Mahmoud; Gulseren, Oguz
2015-09-01
Calculations based on density functional theory (DFT) have been performed to investigate the stabilities and properties of hybrid structures consisting of a molecular carbon nanotube (CNT) and uracil acetate (UA) counterparts. The investigated models have been relaxed to minimum energy structures and then various physical properties and nuclear magnetic resonance (NMR) properties have been evaluated. The results indicated the effects of functionalized CNT on the properties of hybrids through comparing the results of hybrids and individual structures. The oxygen atoms of uracil counterparts have been seen as the detection points of properties for the CNT-UA hybrids.
Energy Technology Data Exchange (ETDEWEB)
Mintun, M.A.; Raichle, M.E.; Kilbourn, M.R.; Wooten, G.F.; Welch, M.J.
1984-03-01
We propose an in vivo method for use with positron emission tomography (PET) that results in a quantitative characterization of neuroleptic binding sites using radiolabeled spiperone. The data are analyzed using a mathematical model that describes transport, nonspecific binding, and specific binding in the brain. The model demonstrates that the receptor quantities Bmax (i.e., the number of binding sites) and KD^-1 (i.e., the binding affinity) are not separately ascertainable with tracer methodology in human subjects. We have, therefore, introduced a new term, the binding potential, equivalent to the product Bmax·KD^-1, which reflects the capacity of a given tissue, or region of a tissue, for ligand-binding site interaction. The procedure for obtaining these measurements is illustrated with data from sequential PET scans of baboons after intravenous injection of carrier-added (18F)spiperone. From these data we estimate the brain tissue nonspecific binding of spiperone to be in the range of 94.2 to 95.3%, and the regional brain spiperone permeability (measured as the permeability-surface area product) to be in the range of 0.025 to 0.036 cm^3/(s·ml). The binding potential of the striatum ranged from 17.4 to 21.6; these in vivo estimates compare favorably to in vitro values in the literature. To our knowledge this represents the first direct evidence that PET can be used to characterize quantitatively, locally and in vivo, drug binding sites in brain. The ability to make such measurements with PET should permit the detailed investigation of diseases thought to result from disorders of receptor function.
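The binding potential introduced in this abstract is simply the product Bmax·KD^-1, i.e. Bmax/KD. A trivial sketch (the Bmax and KD inputs below are hypothetical, chosen only to land in the reported striatal range, since the method deliberately avoids estimating them separately):

```python
def binding_potential(bmax, kd):
    """Binding potential BP = Bmax * KD^-1 = Bmax / KD.

    bmax and kd must share the same concentration units; only their
    ratio is identifiable with tracer-dose PET, which is the paper's
    motivation for reporting BP rather than Bmax and KD separately.
    """
    return bmax / kd

bp = binding_potential(bmax=19.0, kd=1.0)   # hypothetical inputs
```

With these hypothetical inputs BP = 19.0, inside the 17.4-21.6 striatal range the authors report.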
Authorship of scientific articles within an ethical-legal framework: quantitative model
Directory of Open Access Journals (Sweden)
Martha Y. Vallejo
2012-12-01
Determining authorship and the order of authorship in scientific papers, in modern interdisciplinary and interinstitutional science, has become complex at a legal and ethical level. Failure to define authorship before or during the research creates subsequent problems for those considered authors of a publication or lead authors of a work, particularly so once the project or manuscript is completed. This article proposes a quantitative and qualitative model to determine authorship within a scientific, ethical and legal frame. The principles used for the construction of this design are based on 2 criteria: (a) the stages of research and the scientific method, involving: 1. Planning and development of the research project, 2. Design and data collection, 3. Presentation of results, 4. Interpretation of results, 5. Manuscript preparation to disseminate new knowledge to the scientific community, 6. Administration and management; and (b) weighting coefficients in each phase, to decide on authorship and ownership of the work. The model also considers and distinguishes whether the level and activity performed during the creation of the work and the diffusion of knowledge is an intellectual or practical contribution; this distinction both contrasts and complements the elements protected by copyright laws. The format can be applied a priori and a posteriori to the completion of a project or manuscript and can conform to any research and publication. The use of this format will quantitatively resolve: 1. The order of authorship (first author and co-author order), 2. The inclusion and exclusion of contributors, taking into account ethical and legal principles, and 3. Percentages of economic rights for each author.
Perspective: Treating electron over-delocalization with the DFT+U method.
Kulik, Heather J
2015-06-28
Many people in the materials science and solid-state community are familiar with the acronym "DFT+U." For those less familiar, this technique uses ideas from model Hamiltonians that permit the description of both metals and insulators to address problems of electron over-delocalization in practical implementations of density functional theory (DFT). Exchange-correlation functionals in DFT are often described as belonging to a hierarchical "Jacob's ladder" of increasing accuracy in moving from local to non-local descriptions of exchange and correlation. DFT+U is not on this "ladder" but rather acts as an "elevator" because it systematically tunes relative energetics, typically on a localized subshell (e.g., d or f electrons), regardless of the underlying functional employed. However, this tuning is based on a metric of the local electron density of the subshells being addressed, thus necessitating physical or chemical intuition about the system of interest. I will provide a brief overview of the history of how DFT+U came to be, starting from the origin of the Hubbard and Anderson model Hamiltonians. This history lesson is necessary because it permits us to make the connections between the "Hubbard U" and fundamental outstanding challenges in electronic structure theory, and it helps to explain why this method is so widely applied to transition-metal oxides and organometallic complexes alike.
Wilson, Robert H.; Dooley, Kathryn A.; Morris, Michael D.; Mycek, Mary-Ann
2009-02-01
Light-scattering spectroscopy has the potential to provide information about bone composition via a fiber-optic probe placed on the skin. In order to design efficient probes, one must understand the effect of all tissue layers on photon transport. To quantitatively understand the effect of overlying tissue layers on the detected bone Raman signal, a layered Monte Carlo model was modified for Raman scattering. The model incorporated the absorption and scattering properties of three overlying tissue layers (dermis, subdermis, muscle), as well as the underlying bone tissue. The attenuation of the collected bone Raman signal, predominantly due to elastic light scattering in the overlying tissue layers, affected the carbonate/phosphate (C/P) ratio by increasing the standard deviation of the computational result. Furthermore, the mean C/P ratio varied when the relative thicknesses of the layers were varied and the elastic scattering coefficient at the Raman scattering wavelength of carbonate was modeled to be different from that at the Raman scattering wavelength of phosphate. These results represent the first portion of a computational study designed to predict optimal probe geometry and help to analyze detected signal for Raman scattering experiments involving bone.
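The attenuating effect of overlying layers can be caricatured with a few lines of Monte Carlo: sample an exponential free path in each layer and count photons that traverse all layers without interacting. This Beer-Lambert toy is far simpler than the paper's layered Raman model, and the coefficients below are illustrative, not measured tissue values:

```python
import random
from math import exp, log

def surviving_fraction(layers, n_photons=20_000, seed=1):
    """Fraction of photons that cross all overlying layers unattenuated.

    layers: list of (mu_t, thickness) pairs, where mu_t is a total
    attenuation coefficient (1/mm) and thickness is in mm. The demo
    values below are illustrative, not tissue-measured constants.
    """
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_photons):
        alive = True
        for mu_t, thickness in layers:
            # Sample a free path ~ Exp(mu_t); the photon is counted as
            # lost if it interacts anywhere inside the layer.
            if -log(rng.random()) / mu_t < thickness:
                alive = False
                break
        survived += alive
    return survived / n_photons

# dermis, subdermis, muscle (illustrative coefficients and thicknesses)
frac = surviving_fraction([(0.5, 1.0), (0.3, 2.0), (0.2, 3.0)])
```

The surviving fraction converges to exp(-Σ mu_t·d), about 0.18 for the stack above, which hints at why overlying-layer scattering dominates the attenuation of the collected bone signal.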
Energy Technology Data Exchange (ETDEWEB)
Cheon, Jung Eun; Yoo, Won Joon; Kim, In One; Kim, Woo Sun; Choi, Young Hun [Seoul National University College of Medicine, Seoul (Korea, Republic of)
2015-06-15
To investigate the usefulness of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and diffusion MRI for the evaluation of femoral head ischemia. Unilateral femoral head ischemia was induced by selective embolization of the medial circumflex femoral artery in 10 piglets. All MRIs were performed immediately (1 hour) after embolization and at 1, 2, and 4 weeks. Apparent diffusion coefficients (ADCs) were calculated for the femoral head. The estimated pharmacokinetic parameters (Kep and Ve from a two-compartment model) and semi-quantitative parameters including peak enhancement, time-to-peak (TTP), and contrast washout were evaluated. The epiphyseal ADC values of the ischemic hip decreased immediately (1 hour) after embolization. However, they increased rapidly at 1 week after embolization and remained elevated until 4 weeks after embolization. Perfusion MRI of ischemic hips showed decreased epiphyseal perfusion with decreased Kep immediately after embolization. Signal intensity-time curves showed delayed TTP with limited contrast washout immediately post-embolization. At 1-2 weeks after embolization, spontaneous reperfusion was observed in ischemic epiphyses. The changes in ADC (p = 0.043) and Kep (p = 0.043) between immediately (1 hour) after embolization and 1 week post-embolization were significant. Diffusion MRI and the pharmacokinetic model obtained from DCE-MRI are useful in depicting early changes of perfusion and tissue damage in this model of femoral head ischemia in skeletally immature piglets.
Rørbech, Jakob T; Vadenbo, Carl; Hellweg, Stefanie; Astrup, Thomas F
2014-10-07
Resources have received significant attention in recent years resulting in development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247 individual market inventory data sets covering a wide range of societal activities (ecoinvent database v3.0). Log-linear regression analysis was carried out for all pairwise combinations of the 11 methods for identification of correlations in CFs (resources) and total impacts (inventory data sets) between methods. Significant differences in resource coverage were observed (9-73 resources) revealing a trade-off between resource coverage and model complexity. High correlation in CFs between methods did not necessarily manifest in high correlation in total impacts. This indicates that also resource coverage may be critical for impact assessment results. Although no consistent correlations between methods applying similar assessment models could be observed, all methods showed relatively high correlation regarding the assessment of energy resources. Finally, we classify the existing methods into three groups, according to method focus and modeling approach, to aid method selection within LCA.
A quantitative model for using acridine orange as a transmembrane pH gradient probe.
Clerc, S; Barenholz, Y
1998-05-15
Monitoring the acidification of the internal space of membrane vesicles by proton pumps can be achieved easily with optical probes. Transmembrane pH gradients cause a blue-shift in the absorbance spectrum and the quenching of the fluorescence of the cationic dye acridine orange. It has been postulated that these changes are caused by accumulation and aggregation of the dye inside the vesicles. We tested this hypothesis using liposomes with transmembrane concentration gradients of ammonium sulfate as model system. Fluorescence intensity of acridine orange solutions incubated with liposomes was affected by magnitude of the gradient, volume trapped by vesicles, and temperature. These experimental data were compared to a theoretical model describing the accumulation of acridine orange monomers in the vesicles according to the inside-to-outside ratio of proton concentrations, and the intravesicular formation of sandwich-like piles of acridine orange cations. This theoretical model predicted quantitatively the relationship between the transmembrane pH gradients and spectral changes of acridine orange. Therefore, adequate characterization of aggregation of dye in the lumen of biological vesicles provides the theoretical basis for using acridine orange as an optical probe to quantify transmembrane pH gradients.
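The monomer-accumulation part of such a model reduces to a simple proton-ratio relation: for a protonatable monocation at equilibrium, the inside/outside concentration ratio equals the inside/outside proton ratio. A minimal sketch of that step alone, deliberately ignoring the intravesicular stacking the authors also model:

```python
def accumulation_ratio(ph_out, ph_in):
    """Equilibrium inside/outside ratio of a monoprotic weak base such
    as acridine orange: [AO+]in/[AO+]out = [H+]in/[H+]out
    = 10**(ph_out - ph_in).

    Sketch of the accumulation step only; the sandwich-like pile
    formation inside the vesicle (part of the full model) is omitted.
    """
    return 10.0 ** (ph_out - ph_in)

# A 2-unit acidic-inside gradient concentrates the dye 100-fold
ratio = accumulation_ratio(ph_out=7.4, ph_in=5.4)
```

This exponential dependence on the pH gradient is what makes the dye's absorbance shift and fluorescence quenching usable as a quantitative gradient readout once the aggregation step is also characterized.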
Guo, Jing; Lin, Feng; Zhang, Xiaomeng; Tanavde, Vivek; Zheng, Jie
2017-05-15
Waddington's epigenetic landscape is a powerful metaphor for cellular dynamics driven by gene regulatory networks (GRNs). Its quantitative modeling and visualization, however, remain a challenge, especially when there are more than two genes in the network. A software tool for Waddington's landscape has not been available in the literature. We present NetLand, an open-source software tool for modeling and simulating the kinetic dynamics of GRNs, and visualizing the corresponding Waddington's epigenetic landscape in three dimensions without restriction on the number of genes in a GRN. With an interactive and graphical user interface, NetLand can facilitate knowledge discovery and experimental design in the study of cell fate regulation (e.g. stem cell differentiation and reprogramming). NetLand can run under operating systems including Windows, Linux and OS X. The executable files and source code of NetLand as well as a user manual, example models etc. can be downloaded from http://netland-ntu.github.io/NetLand/ . zhengjie@ntu.edu.sg. Supplementary data are available at Bioinformatics online.
Quantitative models of hydrothermal fluid-mineral reaction: The Ischia case
Di Napoli, Rossella; Federico, Cinzia; Aiuppa, Alessandro; D'Antonio, Massimo; Valenza, Mariano
2013-03-01
The intricate pathways of fluid-mineral reactions occurring underneath active hydrothermal systems are explored in this study by applying reaction path modelling to the Ischia case study. Ischia Island, in Southern Italy, hosts a well-developed and structurally complex hydrothermal system which, because of its heterogeneity in chemical and physical properties, is an ideal test site for evaluating the potentialities/limitations of quantitative geochemical models of hydrothermal reactions. We used the EQ3/6 software package, version 7.2b, to model the reaction of infiltrating waters (mixtures of meteoric water and seawater in variable proportions) with Ischia's reservoir rocks (the Mount Epomeo Green Tuff units; MEGT). The mineral assemblage and composition of the MEGT units were initially characterised by ad hoc designed optical microscopy and electron microprobe analysis, showing that phenocrysts (dominantly alkali-feldspars and plagioclase) are set in a pervasively altered (with abundant clay minerals and zeolites) groundmass. Reaction of infiltrating waters with MEGT minerals was simulated over a range of realistic (for Ischia) temperatures (95-260 °C) and CO2 fugacities (10^-0.2 to 10^0.5 bar). During the model runs, a set of secondary minerals (selected based on independent information from alteration minerals' studies) was allowed to precipitate from model solutions when saturation was achieved. The compositional evolution of model solutions obtained in the 95-260 °C runs was finally compared with compositions of Ischia's thermal groundwaters, demonstrating an overall agreement. Our simulations, in particular, well reproduce the Mg-depleting maturation path of hydrothermal solutions, and have end-of-run model solutions whose Na-K-Mg compositions well reflect attainment of full-equilibrium conditions at run temperature. High-temperature (180-260 °C) model runs are those best matching the Na-K-Mg compositions of Ischia's most chemically mature water samples
Digital Repository Service at National Institute of Oceanography (India)
Chakraborty, B.
For quantitative seafloor roughness characterization and classification using multi-beam processed backscatter data, a good correlation is indicated between the power law parameters (composite roughness model) and hybrid ANN architecture results...
Gao, Y.; Balaram, P.; Islam, S.
2009-12-01
, the knowledge generated from these studies cannot be easily generalized or transferred to other basins. Here, we present an approach to integrate the quantitative and qualitative methods to study water issues and capture the contextual knowledge of water management- by combining the NSSs framework and an area of artificial intelligence called qualitative reasoning. Using the Apalachicola-Chattahoochee-Flint (ACF) River Basin dispute as an example, we demonstrate how quantitative modeling and qualitative reasoning can be integrated to examine the impact of over abstraction of water from the river on the ecosystem and the role of governance in shaping the evolution of the ACF water dispute.
Dissociation curves of diatomic molecules: A DC-DFT study
Energy Technology Data Exchange (ETDEWEB)
Sim, Eunji; Kim, Min-Cheol [Department of Chemistry and Institute of Nano-Bio Molecular Assemblies, Yonsei University, Seoul 120-749 (Korea, Republic of); Burke, Kieron [Department of Chemistry, University of California, Irvine, CA, 92697 (United States)
2015-12-31
We investigate the dissociation of diatomic molecules using standard density functional theory (DFT) and density-corrected density functional theory (DC-DFT), with CCSD(T) results as reference. The results show that the difference between the HOMO values of the dissociated atomic species can often be used as an indicator of whether DFT will predict the correct dissociation limit. DFT predicts incorrect dissociation limits and charge distributions in molecules or molecular ions when the fragments have large HOMO differences, while DC-DFT and CCSD(T) do not. The criterion for a large HOMO difference is roughly 2-4 eV.
Directory of Open Access Journals (Sweden)
Yubo Hou
Quantitative real-time PCR (qPCR) has become a gold standard for the quantification of nucleic acids and microorganism abundances, in which plasmid DNA carrying the target genes is most commonly used as the standard. A recent study showed that the supercoiled circular conformation of DNA appears to suppress PCR amplification. However, the extent to which different structural types of DNA (circular versus linear) used as the standard may affect quantification accuracy has not been evaluated. In this study, we quantitatively compared qPCR accuracies based on circular plasmid (mostly in supercoiled form) and linear DNA standards (linearized plasmid DNA or PCR amplicons), using the proliferating cell nuclear antigen gene (pcna), a ubiquitous eukaryotic gene, in five marine microalgae as a model gene. We observed that PCR using circular plasmids as template gave threshold cycle numbers 2.65-4.38 cycles higher than equimolar linear standards. While the documented genome sequence of the diatom Thalassiosira pseudonana shows a single copy of pcna, qPCR using the circular plasmid as standard yielded an estimate of 7.77 copies of pcna per genome, whereas that using the linear standard gave 1.02 copies per genome. We conclude that circular plasmid DNA is unsuitable as a standard, and linear DNA should be used instead, in absolute qPCR. The serious overestimation by the circular plasmid standard is likely due to the undetected lower efficiency of its amplification in the early stage of PCR, when the supercoiled plasmid is the dominant template.
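The size of the bias follows directly from the Ct shift: with a per-cycle amplification factor E, a standard that crosses threshold ΔCt cycles late appears E**ΔCt-fold less abundant, inflating copy-number estimates by the same factor. A sketch assuming ideal doubling (E = 2):

```python
def apparent_fold_error(delta_ct, efficiency=2.0):
    """Fold by which copy numbers are overestimated when the standard
    amplifies delta_ct cycles later than the target at equal input.
    efficiency is the per-cycle amplification factor (2.0 = ideal doubling).
    """
    return efficiency ** delta_ct

# The circular standards ran 2.65-4.38 cycles behind the linear ones:
low = apparent_fold_error(2.65)    # roughly 6-fold
high = apparent_fold_error(4.38)   # roughly 21-fold
```

The ~7.6-fold discrepancy between the 7.77 and 1.02 copies-per-genome estimates reported above falls inside this range.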
Singh, Kunwar P; Gupta, Shikha; Basant, Nikita; Mohan, Dinesh
2014-09-15
Pesticides are toxic chemicals designed for specific purposes, and they can also harm nontarget species. The honey bee is considered a nontarget test species for the toxicity evaluation of chemicals. Global QSTR (quantitative structure-toxicity relationship) models were established for qualitative and quantitative toxicity prediction of pesticides in the honey bee (Apis mellifera), based on experimental toxicity data for 237 structurally diverse pesticides. Structural diversity of the chemical pesticides and nonlinear dependence in the toxicity data were evaluated using the Tanimoto similarity index and Brock-Dechert-Scheinkman statistics. Probabilistic neural network (PNN) and generalized regression neural network (GRNN) QSTR models were constructed for classification (two and four categories) and function optimization problems using the toxicity end point in honey bees. The predictive power of the QSTR models was tested through rigorous internal and external validation employing a wide series of statistical checks. On the complete data, the PNN-QSTR model rendered a classification accuracy of 96.62% (two-category) and 95.57% (four-category), while the GRNN-QSTR model yielded a correlation (R(2)) of 0.841 between measured and predicted toxicity values, with a mean squared error (MSE) of 0.22. The results suggest that the developed QSTR models can reliably predict the qualitative and quantitative toxicity of pesticides in the honey bee. Both the PNN- and GRNN-based QSTR models constructed here can be useful tools for predicting the qualitative and quantitative toxicity of new chemical pesticides for regulatory purposes.
How plants manage food reserves at night: quantitative models and open questions
Directory of Open Access Journals (Sweden)
Antonio eScialdone
2015-03-01
In order to cope with night-time darkness, plants allocate part of their daytime photosynthate for storage, often as starch. This stored reserve is then degraded at night to sustain metabolism and growth. However, night-time starch degradation must be tightly controlled, as over-rapid turnover results in premature depletion of starch before dawn, leading to starvation. Recent experiments in Arabidopsis have shown that starch degradation proceeds at a constant rate during the night, set such that starch reserves are exhausted almost precisely at dawn. Intriguingly, this pattern is robust: the degradation rate is adjusted to compensate for unexpected changes in the time of darkness onset. While a fundamental role for the circadian clock is well established, the underlying mechanisms controlling starch degradation remain poorly characterized. Here, we discuss recent quantitative models that have been proposed to explain how plants compute the appropriate starch degradation rate, a process that requires an effective arithmetic division calculation. We review experimental confirmation of the models and describe aspects that require further investigation. Overall, night-time starch degradation necessitates a fundamental metabolic role for the circadian clock and, more generally, highlights how cells process information in order to optimally manage their resources.
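The "arithmetic division" idea reviewed above can be made concrete with a toy calculation (our own sketch; the numbers and variable names are illustrative, not taken from any specific model):

```python
# Illustrative sketch: divide the current starch reserve by the time
# remaining until dawn to obtain the constant degradation rate that
# exhausts the reserve exactly at dawn.

def degradation_rate(starch_now, hours_until_dawn):
    """Constant rate (mg/h) that exhausts the reserve exactly at dawn."""
    return starch_now / hours_until_dawn

# Normal 12 h night with 12 mg of starch -> 1 mg/h.
print(degradation_rate(12.0, 12.0))
# Unexpectedly early dusk (16 h of darkness ahead): the rate is revised
# downward so the reserve still lasts until dawn rather than running
# out hours early.
print(degradation_rate(12.0, 16.0))
```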
Deficiencies in quantitative precipitation forecasts. Sensitivity studies using the COSMO model
Energy Technology Data Exchange (ETDEWEB)
Dierer, Silke [Federal Office of Meteorology and Climatology, MeteoSwiss, Zurich (Switzerland); Meteotest, Bern (Switzerland); Arpagaus, Marco [Federal Office of Meteorology and Climatology, MeteoSwiss, Zurich (Switzerland); Seifert, Axel [Deutscher Wetterdienst, Offenbach (Germany); Avgoustoglou, Euripides [Hellenic National Meteorological Service, Hellinikon (Greece); Dumitrache, Rodica [National Meteorological Administration, Bucharest (Romania); Grazzini, Federico [Agenzia Regionale per la Protezione Ambientale Emilia Romagna, Bologna (Italy); Mercogliano, Paola [Italian Aerospace Research Center, Capua (Italy); Milelli, Massimo [Agenzia Regionale per la Protezione Ambientale Piemonte, Torino (Italy); Starosta, Katarzyna [Inst. of Meteorology and Water Management, Warsaw (Poland)
2009-12-15
The quantitative precipitation forecast (QPF) of the COSMO model, like that of other models, reveals some deficiencies. The aim of this study is to investigate which physical and numerical schemes have the strongest impact on QPF and thus the highest potential for improving it. Test cases are selected that reflect typical forecast errors in different countries. The 13 test cases fall into two main groups: overestimation of stratiform precipitation (6 cases) and underestimation of convective precipitation (5 cases). 22 sensitivity experiments, predominantly regarding numerical and physical schemes, are performed. The area-averaged 24 h precipitation sums are evaluated. The results show that the strongest impact on QPF is caused by changes of the initial atmospheric humidity and by using the Kain-Fritsch/Bechtold convection scheme instead of the Tiedtke scheme. Both sensitivity experiments change the area-averaged precipitation in the range of 30-35%. This clearly shows that improved simulation of atmospheric water vapour is of utmost importance for better precipitation forecasts. Significant changes are also caused by using the Runge-Kutta time integration scheme instead of the Leapfrog scheme, and by applying a modified warm rain and snow physics scheme or a modified Tiedtke convection scheme. These changes result in differences of area-averaged precipitation of roughly 20%. Only for the Greek test cases, which all have a strong influence from the sea, is the heat and moisture exchange between surface and atmosphere of great importance; it can cause changes of up to 20%.
Yang, Hao; Cohen, Mitchell Jay; Chen, Wei; Sun, Ming-Wei; Lu, Charles Damien
2014-01-01
Spinal cord injury (SCI) is a devastating event with limited hope for recovery and represents an enormous public health issue. It is crucial to understand the disturbances in the metabolic network after SCI to identify injury mechanisms and opportunities for treatment intervention. Through plasma 1H-nuclear magnetic resonance (NMR) screening, we identified 15 metabolites that made up an "Eigen-metabolome" capable of distinguishing rats with severe SCI from healthy control rats. Forty enzymes regulated these 15 metabolites in the metabolic network. We also found that 16 metabolites regulated by 130 enzymes in the metabolic network impacted neurobehavioral recovery. Using the Eigen-metabolome, we established a linear discrimination model to cluster rats with severe and mild SCI and control rats into separate groups and to identify the interactive relationships between metabolic biomarkers in the global metabolic network. We identified 10 clusters in the global metabolic network and defined them as distinct metabolic disturbance domains of SCI. Metabolic pathways such as retinal, glycerophospholipid, and arachidonic acid metabolism; the NAD-NADPH conversion process; tyrosine metabolism; and cadaverine and putrescine metabolism were included. In summary, we present a novel interdisciplinary method that integrates metabolomics and global metabolic network analysis to visualize metabolic network disturbances after SCI. Our study demonstrates that a systems-biology approach integrating 1H-NMR metabolomics and global metabolic network analysis is useful for visualizing complex metabolic disturbances after severe SCI. Furthermore, our findings may provide a new quantitative injury severity evaluation model for clinical use. PMID:24727691
Brantley, S J; Gufford, B T; Dua, R; Fediuk, D J; Graf, T N; Scarlett, Y V; Frederick, K S; Fisher, M B; Oberlies, N H; Paine, M F
2014-01-01
Herb–drug interaction predictions remain challenging. Physiologically based pharmacokinetic (PBPK) modeling was used to improve prediction accuracy of potential herb–drug interactions using the semipurified milk thistle preparation, silibinin, as an exemplar herbal product. Interactions between silibinin constituents and the probe substrates warfarin (CYP2C9) and midazolam (CYP3A) were simulated. A low silibinin dose (160 mg/day × 14 days) was predicted to increase midazolam area under the curve (AUC) by 1%, which was corroborated with external data; a higher dose (1,650 mg/day × 7 days) was predicted to increase midazolam and (S)-warfarin AUC by 5% and 4%, respectively. A proof-of-concept clinical study confirmed minimal interaction between high-dose silibinin and both midazolam and (S)-warfarin (9 and 13% increase in AUC, respectively). Unexpectedly, (R)-warfarin AUC decreased (by 15%), but this is unlikely to be clinically important. Application of this PBPK modeling framework to other herb–drug interactions could facilitate development of guidelines for quantitative prediction of clinically relevant interactions. PMID:24670388
Caruso, Rosario; Gambino, Grazia Laura; Scordino, Monica; Sabatino, Leonardo; Traulo, Pasqualino; Gagliano, Giacomo
2011-12-01
The influence of the wine distillation process on methanol content was determined by quantitative analysis using gas chromatography with flame ionization detection (GC-FID). A comparative study between direct injection of diluted wine and injection of distilled wine was performed. The distillation process does not affect methanol quantification in wines by more than 10%. While quantification performed on distilled samples gives more reliable results, a screening method based on wine injection after a 1:5 water dilution could be employed. The proposed technique was found to be a compromise between the time-consuming distillation process and direct wine injection. In the studied calibration range, the stability of the volatile compounds in the reference solution is concentration-dependent: stability is higher in the less concentrated reference solution. To shorten the analysis time, a steeper temperature ramp and a higher carrier flow rate were employed. Under these conditions, helium consumption and column thermal stress increased; however, detection limits, calibration limits, and analytical method performance were not affected substantially by changing from normal to forced GC conditions. Statistical data evaluation was performed using both ordinary (OLS) and bivariate least squares (BLS) calibration models. It was further confirmed that limit of detection (LOD) values calculated according to the 3-sigma approach are lower than those obtained with the respective Hubaux-Vos (H-V) calculation method. The H-V LOD depends upon background noise, calibration parameters, and the number of reference standard solutions used to produce the calibration curve. These remarks are confirmed by both calibration models used.
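The 3-sigma detection-limit calculation mentioned above follows directly from the calibration slope. A generic sketch of the approach (all concentrations and signals are invented for illustration, not the paper's data):

```python
# Ordinary least squares (OLS) calibration and a 3-sigma limit of
# detection (LOD); numbers below are invented for the sketch.

def ols(x, y):
    """Return (slope, intercept) of the least-squares calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def lod_3sigma(sigma_blank, slope):
    """LOD as three times the blank noise over the sensitivity (slope)."""
    return 3.0 * sigma_blank / slope

conc = [0.0, 10.0, 20.0, 40.0]     # methanol standards, mg/L (invented)
signal = [0.1, 20.1, 40.2, 80.0]   # detector response (invented)
slope, intercept = ols(conc, signal)
print(round(lod_3sigma(0.5, slope), 2))   # ~0.75 mg/L for sigma_blank = 0.5
```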
Toropova, A P; Toropov, A A; Benfenati, E; Gini, G; Leszczynska, D; Leszczynski, J
2011-09-01
For six random splits, one-variable models of rat toxicity (minus decimal logarithm of the 50% lethal dose [pLD50], oral exposure) have been calculated with CORAL software (http://www.insilico.eu/coral/). The total number of considered compounds is 689. New additional global attributes of the simplified molecular input line entry system (SMILES) have been examined for improvement of the optimal SMILES-based descriptors. These global SMILES attributes are representing the presence of some chemical elements and different kinds of chemical bonds (double, triple, and stereochemical). The "classic" scheme of building up quantitative structure-property/activity relationships and the balance of correlations (BC) with the ideal slopes were compared. For all six random splits, best prediction takes place if the aforementioned BC along with the global SMILES attributes are included in the modeling process. The average statistical characteristics for the external test set are the following: n = 119 ± 6.4, R(2) = 0.7371 ± 0.013, and root mean square error = 0.360 ± 0.037. Copyright © 2011 Wiley Periodicals, Inc.
Modeling and Quantitative Analysis of GNSS/INS Deep Integration Tracking Loops in High Dynamics
Directory of Open Access Journals (Sweden)
Yalong Ban
2017-09-01
To meet the requirements of global navigation satellite system (GNSS) precision applications in high dynamics, this paper describes a study on the carrier phase tracking technology of the GNSS/inertial navigation system (INS) deep integration system. The error propagation models of INS-aided carrier tracking loops are modeled in detail for high dynamics. Additionally, quantitative analysis of carrier phase tracking errors caused by INS error sources is carried out under uniform high-dynamic linear acceleration of 100 g. Results show that the major INS error sources affecting carrier phase tracking accuracy in high dynamics include initial attitude errors, accelerometer scale factors, gyro noise, and gyro g-sensitivity errors. The initial attitude errors usually combine with the receiver acceleration to impact the tracking loop performance, which can easily cause the failure of carrier phase tracking. The main INS error factors vary with the vehicle motion direction and the relative position of the receiver and the satellites. The analysis also indicates that low-cost micro-electro-mechanical system (MEMS) inertial measurement units (IMUs) have the ability to maintain GNSS carrier phase tracking in high dynamics.
Shuryak, Igor; Dadachova, Ekaterina
2016-01-01
Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil contaminated by nuclear waste at the Hanford site (USA); (2) fungi isolated from the Chernobyl nuclear-power plant (Ukraine) buildings after the accident; (3) yeast subjected to continuous γ-irradiation in the laboratory, where radiation dose rate and cell removal rate were independently varied. We applied generalized linear mixed-effects models to describe the first two data sets, whereas the third data set was amenable to mechanistic modeling using differential equations. Machine learning and information-theoretic approaches were used to select the best-supported formalism(s) among biologically-plausible alternatives. Our analysis suggests the following: (1) Both radionuclides and co-occurring chemical contaminants (e.g. NO2) are important for explaining microbial responses to radioactive contamination. (2) Radionuclides may produce non-monotonic dose responses: stimulation of microbial growth at low concentrations vs. inhibition at higher ones. (3) The extinction-defining critical radiation dose rate is dramatically lowered by additional stressors. (4) Reproduction suppression by radiation can be more important for determining the critical dose rate, than radiation-induced cell mortality. In conclusion, the modeling approaches used here on three diverse data sets provide insight into explaining and predicting multi-stressor effects on microbial communities: (1) the most severe effects (e.g. extinction) on microbial populations may occur when unfavorable environmental
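Findings (3) and (4) can be illustrated with a deliberately simple toy model (our own construction, not the equations fitted in the study), in which radiation suppresses reproduction exponentially and an additional stressor adds to the removal rate:

```python
import math

# Toy model: net growth = r * exp(-a * D) - delta, where D is the
# radiation dose rate, r the maximum reproduction rate, a a radiation
# sensitivity, and delta the removal/death rate (all values invented).

def net_growth(D, r=1.0, a=0.5, delta=0.2):
    return r * math.exp(-a * D) - delta

def critical_dose_rate(r=1.0, a=0.5, delta=0.2):
    """Dose rate where net growth crosses zero: D* = ln(r / delta) / a."""
    return math.log(r / delta) / a

# Raising delta (an additional stressor) sharply lowers the critical
# dose rate, echoing finding (3) above.
print(critical_dose_rate(delta=0.2), critical_dose_rate(delta=0.4))
```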
Poppenga, Sandra; Worstell, Bruce B.
2016-01-01
Elevation data derived from light detection and ranging present challenges for hydrologic modeling as the elevation surface includes bridge decks and elevated road features overlaying culvert drainage structures. In reality, water is carried through these structures; however, in the elevation surface these features impede modeled overland surface flow. Thus, a hydrologically-enforced elevation surface is needed for hydrodynamic modeling. In the Delaware River Basin, hydrologic-enforcement techniques were used to modify elevations to simulate how constructed drainage structures allow overland surface flow. By calculating residuals between unfilled and filled elevation surfaces, artificially pooled depressions that formed upstream of constructed drainage structure features were defined, and elevation values were adjusted by generating transects at the location of the drainage structures. An assessment of each hydrologically-enforced drainage structure was conducted using field-surveyed culvert and bridge coordinates obtained from numerous public agencies, but it was discovered the disparate drainage structure datasets were not comprehensive enough to assess all remotely located depressions in need of hydrologic-enforcement. Alternatively, orthoimagery was interpreted to define drainage structures near each depression, and these locations were used as reference points for a quantitative hydrologic-enforcement assessment. The orthoimagery-interpreted reference points resulted in a larger corresponding sample size than the assessment between hydrologic-enforced transects and field-surveyed data. This assessment demonstrates the viability of rules-based hydrologic-enforcement that is needed to achieve hydrologic connectivity, which is valuable for hydrodynamic models in sensitive coastal regions. Hydrologic-enforced elevation data are also essential for merging with topographic/bathymetric elevation data that extend over vulnerable urbanized areas and dynamic coastal
Directory of Open Access Journals (Sweden)
Sette Alessandro
2005-05-01
Background: Many processes in molecular biology involve the recognition of short sequences of nucleic or amino acids, such as the binding of immunogenic peptides to major histocompatibility complex (MHC) molecules. From experimental data, a model of the sequence specificity of these processes can be constructed, such as a sequence motif, a scoring matrix or an artificial neural network. The purpose of these models is two-fold. First, they can provide a summary of experimental results, allowing for a deeper understanding of the mechanisms involved in sequence recognition. Second, such models can be used to predict the experimental outcome for yet untested sequences. In the past we reported the development of a method to generate such models called the Stabilized Matrix Method (SMM). This method has been successfully applied to predicting peptide binding to MHC molecules, peptide transport by the transporter associated with antigen presentation (TAP) and proteasomal cleavage of protein sequences. Results: Herein we report the implementation of the SMM algorithm as a publicly available software package. Specific features determining the type of problems the method is most appropriate for are discussed. Advantageous features of the package are: (1) the output generated is easy to interpret, (2) input and output are both quantitative, (3) specific computational strategies to handle experimental noise are built in, (4) the algorithm is designed to effectively handle bounded experimental data, (5) experimental data from randomized peptide libraries and conventional peptides can easily be combined, and (6) it is possible to incorporate pair interactions between positions of a sequence. Conclusion: Making the SMM method publicly available enables bioinformaticians and experimental biologists to easily access it, to compare its performance to other prediction methods, and to extend it to other applications.
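The kind of scoring-matrix model that SMM produces can be illustrated in a few lines (the matrix values below are invented; real SMM matrices are trained from quantitative binding data):

```python
# Minimal position-specific scoring-matrix prediction: the score of a
# peptide is the sum of per-position, per-residue matrix entries.

def score_peptide(peptide, matrix):
    """Sum per-position scores (sign conventions vary between tools)."""
    return sum(matrix[i][aa] for i, aa in enumerate(peptide))

toy_matrix = [
    {"A": -1.0, "G": 0.5},   # position 1 contributions (invented)
    {"A": 0.2, "G": -0.8},   # position 2 contributions (invented)
]
print(score_peptide("AG", toy_matrix))   # -1.0 + (-0.8)
```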
A bivariate quantitative genetic model for a threshold trait and a survival trait
Directory of Open Access Journals (Sweden)
Damgaard, Lars Holm; Korsgaard, Inge Riis
2006-11-01
Many of the functional traits considered in animal breeding can be analyzed as threshold traits or survival traits with examples including disease traits, conformation scores, calving difficulty and longevity. In this paper we derive and implement a bivariate quantitative genetic model for a threshold character and a survival trait that are genetically and environmentally correlated. For the survival trait, we considered the Weibull log-normal animal frailty model. A Bayesian approach using Gibbs sampling was adopted in which model parameters were augmented with unobserved liabilities associated with the threshold trait. The fully conditional posterior distributions associated with parameters of the threshold trait reduced to well known distributions. For the survival trait the two baseline Weibull parameters were updated jointly by a Metropolis-Hastings step. The remaining model parameters with non-normalized fully conditional distributions were updated univariately using adaptive rejection sampling. The Gibbs sampler was tested in a simulation study and illustrated in a joint analysis of calving difficulty and longevity of dairy cattle. The simulation study showed that the estimated marginal posterior distributions covered well and placed high density to the true values used in the simulation of data. The data analysis of calving difficulty and longevity showed that genetic variation exists for both traits. The additive genetic correlation was moderately favorable with marginal posterior mean equal to 0.37 and 95% central posterior credibility interval ranging between 0.11 and 0.61. Therefore, this study suggests that selection for improving one of the two traits will be beneficial for the other trait as well.
Liu, Chun; Bridges, Melissa E; Kaundun, Shiv S; Glasgow, Les; Owen, Micheal Dk; Neve, Paul
2017-02-01
Simulation models are useful tools for predicting and comparing the risk of herbicide resistance in weed populations under different management strategies. Most existing models assume a monogenic mechanism governing herbicide resistance evolution. However, growing evidence suggests that herbicide resistance is often inherited in a polygenic or quantitative fashion. Therefore, we constructed a generalised modelling framework to simulate the evolution of quantitative herbicide resistance in summer annual weeds. Real-field management parameters based on Amaranthus tuberculatus (Moq.) Sauer (syn. rudis) control with glyphosate and mesotrione in Midwestern US maize-soybean agroecosystems demonstrated that the model can represent evolved herbicide resistance in realistic timescales. Sensitivity analyses showed that genetic and management parameters were impactful on the rate of quantitative herbicide resistance evolution, whilst biological parameters such as emergence and seed bank mortality were less important. The simulation model provides a robust and widely applicable framework for predicting the evolution of quantitative herbicide resistance in summer annual weed populations. The sensitivity analyses identified weed characteristics that would favour herbicide resistance evolution, including high annual fecundity, large resistance phenotypic variance and pre-existing herbicide resistance. Implications for herbicide resistance management and potential use of the model are discussed. © 2016 Society of Chemical Industry.
Knapp, S J
1991-03-01
To maximize parameter estimation efficiency and statistical power and to estimate epistasis, the parameters of multiple quantitative trait loci (QTLs) must be simultaneously estimated. If multiple QTLs affect a trait, then estimates of the means of QTL genotypes from individual-locus models are statistically biased. In this paper, I describe methods for estimating means of QTL genotypes and recombination frequencies between marker and quantitative trait loci using multilocus backcross, doubled haploid, recombinant inbred, and testcross progeny models. Expected values of marker genotype means were defined assuming no double or multiple crossovers, using flanking markers for linked and unlinked quantitative trait loci. The expected values for a particular model comprise a system of nonlinear equations that can be solved using an iterative algorithm, e.g., the Gauss-Newton algorithm. The solutions are maximum likelihood estimates when the errors are normally distributed. A linear model for estimating the parameters of unlinked quantitative trait loci was found by transforming the nonlinear model. Recombination frequency estimators were defined using this linear model. Certain means of linked QTLs are less efficiently estimated than means of unlinked QTLs.
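The central expectation behind these models can be shown with toy numbers (ours, not the paper's): in a backcross, the phenotypic mean of a marker class mixes the QTL-genotype means in proportion to the recombination frequency r.

```python
# Expected mean of the non-recombinant marker class in a backcross:
# a (1 - r) : r mixture of the two QTL genotype means (toy values).

def marker_class_mean(mu_qq, mu_Qq, r):
    return (1 - r) * mu_Qq + r * mu_qq

# Tight linkage keeps the marker-class mean close to the QTL genotype
# mean; loose linkage pulls it toward the overall average, which is the
# bias in single-locus estimates described above.
print(marker_class_mean(10.0, 20.0, 0.05))   # ~19.5
print(marker_class_mean(10.0, 20.0, 0.40))   # ~16.0
```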
Ingeman-Nielsen, Thomas; Brandt, Inooraq
2010-05-01
permafrozen sediments is generally not available in Greenland, and mobilization costs are therefore considerable, thus limiting the use of geotechnical borings to larger infrastructure and construction projects. To overcome these problems, we have tested the use of shallow Transient ElectroMagnetic (TEM) measurements to provide constraints in terms of depth to and resistivity of the conductive saline layer. We have tested such a setup at two field sites in the Ilulissat area (mid-west Greenland), one with available borehole information (site A), the second without (site C). Vertical electrical soundings (VES) and TEM soundings were collected at each site and the respective data sets subsequently inverted using a mutually constrained inversion scheme. At site A, the TEM measurements (20x20m square loop, in-loop configuration) show substantial and repeatable negative amplitude segments, and therefore it has not presently been possible to provide a quantitative interpretation for this location. Negative segments are typically a sign of Induced Polarization or cultural effects. Forward modeling based on inversion of the VES data constrained with borehole information has indicated that IP effects could indeed be the cause of the observed anomaly, although such effects are not normally expected in permafrost or saline deposits. Data from site C have shown that jointly inverting the TEM and VES measurements does provide well-determined estimates for all layer parameters except the thickness of the active layer and the resistivity of the bedrock. The active layer thickness may be easily probed to provide prior information on this parameter, and the bedrock resistivity is of limited interest in technical applications.
Although no confirming borehole information is available at this site, these results indicate that joint or mutually constrained inversion of TEM and VES data is feasible and that this setup may provide a fast and cost effective method for establishing quantitative interpretations of permafrost structure in
Energy Technology Data Exchange (ETDEWEB)
Seperant, Florian
2012-03-21
Aluminum coatings are a promising approach to protect magnesium alloys against corrosion, thereby making them accessible to a variety of technical applications. Thermal treatment enhances the adhesion of the aluminum coating on magnesium by interdiffusion. For a deeper understanding of the diffusion process at the interface, a quantitative description of the Al-Mg system is necessary. On the basis of diffusion experiments with infinite reservoirs of aluminum and magnesium, the interdiffusion coefficients of the intermetallic phases of the Al-Mg system are calculated with the Sauer-Freise method for the first time. To resolve contradictions in the literature concerning the intrinsic diffusion coefficients, the possibility of a bifurcation of the Kirkendall plane is considered. Furthermore, a physico-chemical description of interdiffusion is provided to interpret the observed phase transitions. The developed numerical model is based on a temporally varied discretization of the space coordinate. It exhibits excellent quantitative agreement with the experimentally measured concentration profile, which confirms the validity of the obtained diffusion coefficients. Moreover, the Kirkendall shift in the Al-Mg system is simulated for the first time. Systems with thin aluminum coatings on magnesium also exhibit a good correlation between simulated and experimental concentration profiles; thus, the diffusion coefficients are also valid for Al-coated systems. Hence, it is possible to derive parameters for a thermal treatment by simulation, resulting in an optimized modification of the magnesium surface for technical applications.
Adaptive DFT-based Interferometer Fringe Tracking
Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.
2004-01-01
An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) observatory at Mt. Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse.
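The core of a sliding-window DFT is an O(1) per-sample update of each tracked frequency bin rather than a full recomputation per window. A generic single-bin sketch (our illustration of the technique, not the IOTA implementation; window length and signal are invented):

```python
import cmath

def sliding_dft_bin(samples, N, k):
    """Yield the k-th DFT bin of each length-N window, updated in O(1)
    per new sample via X <- (X + x_new - x_old) * exp(2j*pi*k/N)."""
    w = cmath.exp(2j * cmath.pi * k / N)
    X = sum(samples[n] * cmath.exp(-2j * cmath.pi * k * n / N)
            for n in range(N))               # direct DFT of first window
    yield X
    for n in range(N, len(samples)):
        X = (X + samples[n] - samples[n - N]) * w
        yield X

# Each yielded value equals the direct DFT bin of the current window,
# so tracking a peak in |X| locates the fringe without recomputing an
# N-point transform for every sample.
bins = list(sliding_dft_bin([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 1.0], 4, 1))
print(abs(bins[-1]))
```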
Stals, Ambroos; Jacxsens, Liesbeth; Baert, Leen; Van Coillie, Els; Uyttendaele, Mieke
2015-03-02
Human noroviruses (HuNoVs) are a major cause of foodborne gastroenteritis worldwide. They are often transmitted via infected, shedding food handlers manipulating foods such as deli sandwiches. The present study aimed to simulate HuNoV transmission during the preparation of deli sandwiches in a sandwich bar. A quantitative exposure model was developed by combining the GoldSim® and @Risk® software packages. Input data were collected from the scientific literature and from a two-week observational study performed at two sandwich bars. The model included three food handlers working during a three-hour shift on a shared working surface where deli sandwiches are prepared. The model consisted of three components. The first component simulated the preparation of the deli sandwiches and contained the HuNoV reservoirs, locations within the model allowing the accumulation of HuNoV, and the working of intervention measures. The second component covered the contamination sources, being (1) the initial HuNoV-contaminated lettuce used on the sandwiches and (2) HuNoV originating from a shedding food handler. The third component included four possible intervention measures to reduce HuNoV transmission: hand and surface disinfection during preparation of the sandwiches, hand gloving, and hand washing after a restroom visit. A single HuNoV-shedding food handler could cause mean levels of 43±18, 81±37 and 18±7 HuNoV particles on the deli sandwiches, hands and working surfaces, respectively. Introduction of contaminated lettuce as the only source of HuNoV resulted in the presence of 6.4±0.8 and 4.3±0.4 HuNoV particles on the food and hand reservoirs. The inclusion of hand and surface disinfection and hand gloving as single intervention measures was not effective in the model, as only marginal reductions of HuNoV levels were noticeable in the different reservoirs. High compliance with hand washing after a restroom visit did reduce HuNoV presence substantially on all reservoirs. The
Quantitative polymerase chain reaction (qPCR) is increasingly being used for the quantitative detection of fecal indicator bacteria in beach water. qPCR allows for same-day health warnings, and its application is being considered as an option for recreational water quality testi...
Crotta, Matteo; Rizzi, Rita; Varisco, Giorgio; Daminelli, Paolo; Cunico, Elena Cosciani; Luini, Mario; Graber, Hans Ulrich; Paterlini, Franco; Guitian, Javier
2016-03-01
Quantitative microbial risk assessment (QMRA) models are extensively applied to inform management of a broad range of food safety risks. Inevitably, QMRA modeling involves an element of simplification of the biological process of interest. Two features that are frequently simplified or disregarded are the pathogenicity of multiple strains of a single pathogen and consumer behavior at the household level. In this study, we developed a QMRA model with a multiple-strain approach and a consumer phase module (CPM) based on uncertainty distributions fitted from field data. We modeled exposure to staphylococcal enterotoxin A in raw milk in Lombardy; a specific enterotoxin production module was thus included. The model is adaptable and could be used to assess the risk related to other pathogens in raw milk as well as other staphylococcal enterotoxins. The multiple-strain approach, implemented as a multinomial process, allowed the inclusion of variability and uncertainty with regard to pathogenicity at the bacterial level. Data from 301 questionnaires submitted to raw milk consumers were used to obtain uncertainty distributions for the CPM. The distributions were modeled to be easily updatable with further data or evidence. The sources of uncertainty due to the multiple-strain approach and the CPM were identified, and their impact on the output was assessed by comparing specific scenarios to the baseline. When the distributions reflecting the uncertainty in consumer behavior were fixed to the 95th percentile, the risk of exposure increased up to 160 times. This reflects the importance of taking into consideration the diversity of consumers' habits at the household level and the impact that the lack of knowledge about variables in the CPM can have on the final QMRA estimates. The multiple-strain approach lends itself to use in other food matrices besides raw milk and allows the model to better capture the complexity of the real world and to be capable of geographical
Quantitative assessment of bone defect healing by multidetector CT in a pig model
Energy Technology Data Exchange (ETDEWEB)
Riegger, Carolin; Kroepil, Patric; Lanzman, Rotem S.; Miese, Falk R.; Antoch, Gerald; Scherer, Axel [University Duesseldorf, Medical Faculty, Department of Diagnostic and Interventional Radiology, Duesseldorf (Germany); Jungbluth, Pascal; Hakimi, Mohssen; Wild, Michael [University Duesseldorf, Medical Faculty, Department of Traumatology and Hand Surgery, Duesseldorf (Germany); Hakimi, Ahmad R. [University Duesseldorf, Medical Faculty, Department of Oral Surgery, Duesseldorf (Germany)
2012-05-15
To evaluate multidetector CT volumetry in the assessment of bone defect healing in comparison to histopathological findings in an animal model. In 16 mini-pigs, a circumscribed tibial bone defect was created. Multidetector CT (MDCT) of the tibia was performed on a 64-row scanner 42 days after the operation. The extent of bone healing was estimated quantitatively by MDCT volumetry using a commercially available software programme (syngo Volume, Siemens, Germany). The volume of the entire defect (including all pixels from -100 to 3,000 HU), the nonconsolidated areas (-100 to 500 HU), and areas of osseous consolidation (500 to 3,000 HU) were assessed and the extent of consolidation was calculated. Histomorphometry served as the reference standard. The extent of osseous consolidation in MDCT volumetry ranged from 19 to 92% (mean 65.4 ± 18.5%). There was a significant correlation between histologically visible newly formed bone and the extent of osseous consolidation on MDCT volumetry (r = 0.82, P < 0.0001). A significant negative correlation was detected between osseous consolidation on MDCT and histological areas of persisting defect (r = -0.9, P < 0.0001). MDCT volumetry is a promising tool for noninvasive monitoring of bone healing, showing excellent correlation with histomorphometry. (orig.)
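The volumetric measure described above reduces to counting voxels within the quoted HU ranges. A minimal sketch (the thresholds are those given in the abstract; the function name and toy values are illustrative):

```python
def consolidation_extent(hu_values, defect=(-100, 3000), consolidated=(500, 3000)):
    """Fraction of the bone defect showing osseous consolidation.

    hu_values: Hounsfield-unit values of voxels in the defect region.
    The whole defect spans -100 to 3,000 HU; consolidated (newly formed)
    bone spans 500 to 3,000 HU, as in the study's volumetry protocol.
    """
    in_defect = [v for v in hu_values if defect[0] <= v <= defect[1]]
    in_consolidated = [v for v in hu_values if consolidated[0] <= v <= consolidated[1]]
    if not in_defect:
        return 0.0
    return len(in_consolidated) / len(in_defect)

# Toy region of 6 voxels, 3 of which fall in the consolidated range
print(consolidation_extent([-50, 200, 450, 600, 1200, 2500]))  # 0.5
```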
Sarfraz Iqbal, M; Golsteijn, Laura; Öberg, Tomas; Sahlin, Ullrika; Papa, Ester; Kovarich, Simona; Huijbregts, Mark A J
2013-04-01
In cases in which experimental data on chemical-specific input parameters are lacking, chemical regulations allow the use of alternatives to testing, such as in silico predictions based on quantitative structure-property relationships (QSPRs). Such predictions are often given as point estimates; however, little is known about the extent to which uncertainties associated with QSPR predictions contribute to uncertainty in fate assessments. In the present study, QSPR-induced uncertainty in overall persistence (POV) and long-range transport potential (LRTP) was studied by integrating QSPRs into probabilistic assessments of five polybrominated diphenyl ethers (PBDEs), using the multimedia fate model SimpleBox. The uncertainty analysis considered QSPR predictions of the fate input parameters melting point, water solubility, vapor pressure, organic carbon-water partition coefficient, hydroxyl radical degradation, biodegradation, and photolytic degradation. Uncertainty in POV and LRTP was dominated by the uncertainty in direct photolysis and the biodegradation half-life in water. However, the QSPRs developed specifically for PBDEs had a relatively low contribution to uncertainty. These findings suggest that the reliability of the ranking of PBDEs on the basis of POV and LRTP can be substantially improved by developing better QSPRs to estimate degradation properties. The present study demonstrates the use of uncertainty and sensitivity analyses in nontesting strategies and highlights the need for guidance when compounds fall outside the applicability domain of a QSPR.
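The probabilistic assessment described above amounts to Monte Carlo propagation: each QSPR-predicted input is drawn from its uncertainty distribution and pushed through the fate model. A minimal sketch (the distributions, parameter names, and the linear stand-in for the fate model are all illustrative assumptions, not values from the study):

```python
import random

random.seed(0)

def propagate(samplers, fate_model, n=10_000):
    """Draw each input from its QSPR uncertainty distribution and collect
    the fate-model outputs, yielding a distribution for the endpoint."""
    outputs = []
    for _ in range(n):
        params = {name: draw() for name, draw in samplers.items()}
        outputs.append(fate_model(**params))
    outputs.sort()
    return outputs

# Hypothetical uncertainty distributions for two QSPR-predicted inputs
samplers = {
    "half_life_water": lambda: random.lognormvariate(3.0, 0.8),  # days
    "log_koc": lambda: random.gauss(5.0, 0.3),
}

# Toy linear 'persistence' endpoint, standing in for the multimedia model
pov = propagate(samplers, lambda half_life_water, log_koc: 0.7 * half_life_water + 2.0 * log_koc)
print(f"median: {pov[len(pov) // 2]:.1f}  95th percentile: {pov[int(0.95 * len(pov))]:.1f}")
```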
Model selection for quantitative trait loci mapping in a full-sib family
Directory of Open Access Journals (Sweden)
Chunfa Tong
2012-01-01
Full Text Available Statistical methods for mapping quantitative trait loci (QTLs) in full-sib forest trees, in which the number of alleles and linkage phase can vary from locus to locus, are still not well established. Previous studies assumed that the QTL segregation pattern was fixed throughout the genome in a full-sib family, despite the fact that this pattern can vary among regions of the genome. In this paper, we propose a method for selecting the appropriate model for QTL mapping based on the segregation of different types of markers and QTLs in a full-sib family. The QTL segregation patterns were classified into three types: test cross (1:1 segregation), F2 cross (1:2:1 segregation) and full cross (1:1:1:1 segregation). Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and the Laplace-empirical criterion (LEC) were used to select the most likely QTL segregation pattern. Simulations were used to evaluate the power of these criteria and the precision of parameter estimates. Windows-based software was developed to run the selected QTL mapping method. A real example is presented to illustrate QTL mapping in forest trees based on an integrated linkage map with various segregation markers. The implications of this method for accurate QTL mapping in outbred species are discussed.
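When each candidate segregation pattern fixes its class probabilities outright, the information criteria above reduce to comparing multinomial log-likelihoods, with no penalty term for free parameters. A minimal sketch of the selection step (the reduction to fixed ratios is an illustrative simplification; the paper's models also involve QTL-effect parameters):

```python
import math

# Genotype-class probabilities under each candidate segregation pattern
MODELS = {
    "test cross (1:1)": [0.5, 0.5],
    "F2 cross (1:2:1)": [0.25, 0.5, 0.25],
    "full cross (1:1:1:1)": [0.25, 0.25, 0.25, 0.25],
}

def aic(counts, probs):
    """AIC = -2 ln L + 2k for a fixed-ratio multinomial (k = 0 here).
    A model only applies when its class count matches the data."""
    if len(counts) != len(probs):
        return math.inf
    loglik = sum(n * math.log(p) for n, p in zip(counts, probs))
    return -2.0 * loglik

def select_model(counts):
    """Pick the segregation pattern with the lowest AIC."""
    return min(MODELS, key=lambda m: aic(counts, MODELS[m]))

print(select_model([52, 48]))      # test cross (1:1)
print(select_model([27, 49, 24]))  # F2 cross (1:2:1)
```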
Toxicity mechanisms of the food contaminant citrinin: application of a quantitative yeast model.
Pascual-Ahuir, Amparo; Vanacloig-Pedros, Elena; Proft, Markus
2014-05-22
Mycotoxins are important food contaminants and a serious threat for human nutrition. However, in many cases the mechanisms of toxicity for this diverse group of metabolites are poorly understood. Here we apply live cell gene expression reporters in yeast as a quantitative model to unravel the cellular defense mechanisms in response to the mycotoxin citrinin. We find that citrinin triggers a fast and dose dependent activation of stress responsive promoters such as GRE2 or SOD2. More specifically, oxidative stress responsive pathways via the transcription factors Yap1 and Skn7 are critically implicated in the response to citrinin. Additionally, genes in various multidrug resistance transport systems are functionally involved in the resistance to citrinin. Our study identifies the antioxidant defense as a major physiological response in the case of citrinin. In general, our results show that the use of live cell gene expression reporters in yeast is a powerful tool to identify toxicity targets and detoxification mechanisms of a broad range of food contaminants relevant for human nutrition.
Tominaga, Akiyoshi; Gondo, Takahiro; Akashi, Ryo; Zheng, Shao-Hui; Arima, Susumu; Suzuki, Akihiro
2012-05-01
Many legumes form nitrogen-fixing root nodules. An elevation of nitrogen fixation in such legumes would have significant implications for plant growth and biomass production in agriculture. To identify the genetic basis for the regulation of nitrogen fixation, quantitative trait locus (QTL) analysis was conducted with recombinant inbred lines derived from the cross Miyakojima MG-20 × Gifu B-129 in the model legume Lotus japonicus. This population was inoculated with Mesorhizobium loti MAFF303099 and grown for 14 days in pots containing vermiculite. Phenotypic data were collected for acetylene reduction activity (ARA) per plant (ARA/P), ARA per nodule weight (ARA/NW), ARA per nodule number (ARA/NN), NN per plant, NW per plant, stem length (SL), SL without inoculation (SLbac-), shoot dry weight without inoculation (SWbac-), root length without inoculation (RLbac-), and root dry weight (RWbac-), and finally 34 QTLs were identified. ARA/P, ARA/NN, NW, and SL showed strong correlations and QTL co-localization, suggesting that several plant characteristics important for symbiotic nitrogen fixation are controlled by the same locus. QTLs for ARA/P, ARA/NN, NW, and SL, co-localized around marker TM0832 on chromosome 4, were also co-localized with previously reported QTLs for seed mass. This is the first report of QTL analysis for symbiotic nitrogen fixation activity traits.
Quantitative trait locus analysis of multiple agronomic traits in the model legume Lotus japonicus.
Gondo, Takahiro; Sato, Shusei; Okumura, Kenji; Tabata, Satoshi; Akashi, Ryo; Isobe, Sachiko
2007-07-01
The first quantitative trait locus (QTL) analysis of multiple agronomic traits in the model legume Lotus japonicus was performed with a population of recombinant inbred lines derived from Miyakojima MG-20 x Gifu B-129. Thirteen agronomic traits were evaluated in 2004 and 2005: traits of vegetative parts (plant height, stem thickness, leaf length, leaf width, plant regrowth, plant shape, and stem color), flowering traits (flowering time and degree), and pod and seed traits (pod length, pod width, seeds per pod, and seed mass). A total of 40 QTLs were detected that explained 5%-69% of total variation. The QTL that explained the most variation was that for stem color, which was detected in the same region of chromosome 2 in both years. Some QTLs were colocated, especially those for pod and seed traits. Seed mass QTLs were located at 5 locations that mapped to the corresponding genomic positions of equivalent QTLs in soybean, pea, chickpea, and mung bean. This study provides fundamental information for breeding of agronomically important legume crops.
Caruso, Enrico; Gariboldi, Marzia; Sangion, Alessandro; Gramatica, Paola; Banfi, Stefano
2017-02-01
Here we report the synthesis of eleven new BODIPYs (14-24) characterized by the presence of an aromatic ring at the 8 (meso) position and of iodine atoms at the pyrrolic 2,6 positions. These molecules, together with twelve BODIPYs already reported by us (1-12), represent a large panel of BODIPYs bearing different atoms or groups as substituents on the aromatic moiety. Two physico-chemical features ((1)O2 generation rate and lipophilicity), which can play a fundamental role in the outcome as photosensitizers, have been studied. The in vitro photo-induced cell-killing efficacy of the 23 PSs was studied on the SKOV3 cell line, treating the cells for 24 h in the dark and then irradiating for 2 h with a green LED device (fluence 25.2 J/cm(2)). The cell-killing efficacy was assessed with the MTT test and compared with that of the meso-unsubstituted compound (13). In order to understand the possible effect of the substituents, a predictive quantitative structure-activity relationship (QSAR) regression model, based on theoretical holistic molecular descriptors, was developed. The results clearly indicate that the presence of an aromatic ring is fundamental for an excellent photodynamic response, whereas the electronic effects and the position of the substituents on the aromatic ring do not influence the photodynamic efficacy. Copyright © 2017 Elsevier B.V. All rights reserved.
Baker, Robert L; Leong, Wen Fung; Brock, Marcus T; Markelz, R J Cody; Covington, Michael F; Devisetty, Upendra K; Edwards, Christine E; Maloof, Julin; Welch, Stephen; Weinig, Cynthia
2015-10-01
Improved predictions of fitness and yield may be obtained by characterizing the genetic controls and environmental dependencies of organismal ontogeny. Elucidating the shape of growth curves may reveal novel genetic controls that single-time-point (STP) analyses do not because, in theory, infinite numbers of growth curves can result in the same final measurement. We measured leaf lengths and widths in Brassica rapa recombinant inbred lines (RILs) throughout ontogeny. We modeled leaf growth and allometry as function-valued traits (FVT), and examined genetic correlations between these traits and aspects of phenology, physiology, circadian rhythms and fitness. We used RNA-seq to construct a SNP linkage map and mapped quantitative trait loci (QTL). We found genetic trade-offs between leaf size and growth rate FVT and uncovered differences in genotypic and QTL correlations involving FVT vs STPs. We identified leaf shape (allometry) as a genetic module independent of length and width and identified selection on FVT parameters of development. Leaf shape is associated with venation features that affect desiccation resistance. The genetic independence of leaf shape from other leaf traits may therefore enable crop optimization in leaf shape without negative effects on traits such as size, growth rate, duration or gas exchange.
A quantitative microbiological exposure assessment model for Bacillus cereus in REPFEDs.
Daelman, Jeff; Membré, Jeanne-Marie; Jacxsens, Liesbeth; Vermeulen, An; Devlieghere, Frank; Uyttendaele, Mieke
2013-09-16
One of the pathogens of concern in refrigerated and processed foods of extended durability (REPFED) is psychrotrophic Bacillus cereus, because of its ability to survive pasteurisation and grow at low temperatures. In this study a quantitative microbiological exposure assessment (QMEA) of psychrotrophic B. cereus in REPFEDs is presented. The goal is to quantify (i) the prevalence and concentration of B. cereus during production and shelf life, (ii) the number of packages with potential emetic toxin formation and (iii) the impact of different processing steps and consumer behaviour on the exposure to B. cereus from REPFEDs. The QMEA comprises the entire production and distribution process, from raw materials over pasteurisation and up to the moment it is consumed or discarded. To model this process the modular process risk model (MPRM) was used (Nauta, 2002). The product life was divided into nine modules, each module corresponding to a basic process: (1) raw material contamination, (2) cross contamination during handling, (3) inactivation during preparation, (4) growth during intermediate storage, (5) partitioning of batches in portions, (6) mixing portions to create the product, (7) recontamination during assembly and packaging, (8) inactivation during pasteurisation and (9) growth during shelf life. Each of the modules was modelled and built using a combination of newly gathered and literature data, predictive models and expert opinions. Units (batch/portion/package) with a B. cereus concentration of 10^5 CFU/g or more were considered 'risky' units. Results show that the main drivers of variability and uncertainty are consumer behaviour, strain variability and modelling error. The prevalence of B. cereus in the final products is estimated at 48.6% (±0.01%) and the number of packs with too high B. cereus counts at the moment of consumption is estimated at 4750 packs per million (0.48%). Cold storage at retail and consumer level is vital in limiting the exposure
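The modular chain above can be sketched as a Monte Carlo simulation in log CFU/g, one term per processing step; every distribution below is an illustrative placeholder, not a parameter fitted in the study:

```python
import random

random.seed(1)

def simulate_pack():
    """One iteration of a simplified modular process risk model (after
    Nauta, 2002): contamination, growth, inactivation, growth, in log CFU/g."""
    log_n = random.gauss(1.0, 0.8)     # raw-material contamination (module 1)
    log_n += random.uniform(0.0, 0.5)  # growth during intermediate storage (module 4)
    log_n -= random.uniform(1.0, 3.0)  # inactivation by pasteurisation (module 8)
    log_n += random.gauss(2.0, 1.0)    # growth during shelf life (module 9)
    return log_n

# A unit is 'risky' when it reaches 10^5 CFU/g or more at consumption
runs = 100_000
risky = sum(simulate_pack() >= 5.0 for _ in range(runs))
print(f"risky packs per million: {1e6 * risky / runs:.0f}")
```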
A quantitative analysis to objectively appraise drought indicators and model drought impacts
Bachmair, S.; Svensson, C.; Hannaford, J.; Barker, L. J.; Stahl, K.
2016-07-01
The predictions also provided insights into the EDII, in particular highlighting drought events where missing impact reports may reflect a lack of recording rather than true absence of impacts. Overall, the presented quantitative framework proved to be a useful tool for evaluating drought indicators, and to model impact occurrence. In summary, this study demonstrates the information gain for drought monitoring and early warning through impact data collection and analysis. It highlights the important role that quantitative analysis with impact data can have in providing "ground truth" for drought indicators, alongside more traditional stakeholder-led approaches.
Comparative Assessment of DFT Performances in Ru- and Rh-Promoted σ-Bond Activations.
Sun, Yuanyuan; Hu, Lianrui; Chen, Hui
2015-04-14
In this work, the performances of 19 density functional theory (DFT) methods are calibrated comparatively on Ru- and Rh-promoted σ-bond (C-H, O-H, and H-H) activations. The DFT calibration reference is generated from explicitly correlated coupled cluster CCSD(T)-F12 calculations, and the 4s4p core-valence correlation effect of the two 4d platinum-group transition metals is also included. Generally, the errors of DFT methods for calculating energetics of Ru-/Rh-mediated reactions appear to correlate more with the magnitude of the energetics itself than with other factors such as metal identity. For activation energy calculations, the best performing functional for both Ru and Rh systems is MN12SX. Including DFT empirical dispersion correction is beneficial for most density functionals tested in this work, reducing their mean unsigned deviations (MUDs) to different extents. After including empirical dispersion correction, ωB97XD, B3LYP-D3, and CAM-B3LYP-D3 (PBE0-D3, B3LYP-D3, and ωB97XD) are the three best performing density functionals for activation energy (reaction energy) calculations, of which B3LYP-D3 and ωB97XD can notably be recommended uniformly for both reaction energy and reaction barrier calculations. The good performance of B3LYP-D3 in the quantitative description of the energetic trends further adds value to this functional and singles it out as a reasonable choice for Ru-/Rh-promoted σ-bond activation processes.
Excitation energies from ensemble DFT
Borgoo, Alex; Teale, Andy M.; Helgaker, Trygve
2015-12-01
We study the evaluation of the Gross-Oliveira-Kohn expression for excitation energies, E1 - E0 = ε1 - ε0 + (∂Exc,w[ρ]/∂w)|ρ=ρ0. This expression gives the difference between an excitation energy E1 - E0 and the corresponding Kohn-Sham orbital energy difference ε1 - ε0 as a partial derivative of the exchange-correlation energy of an ensemble of states, Exc,w[ρ]. Through Lieb maximisation on input full-CI density functions, the exchange-correlation energy is evaluated accurately, and the partial derivative is evaluated numerically using finite differences. The equality is studied numerically for different geometries of the H2 molecule and different ensemble weights. We explore the adiabatic connection for the ensemble exchange-correlation energy. The latter may prove useful when modelling the unknown weight dependence of the exchange-correlation energy.
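The numerical procedure, evaluating the weight derivative of Exc,w by finite difference and adding it to the Kohn-Sham orbital energy gap, can be sketched as follows (the toy linear Exc,w and the orbital energies are illustrative stand-ins for the Lieb-maximised quantities used in the paper):

```python
def gok_excitation_energy(exc_w, eps0, eps1, w=0.0, h=1e-4):
    """Gross-Oliveira-Kohn expression via central finite difference:
        E1 - E0 = eps1 - eps0 + dExc,w[rho]/dw  at the ground-state density.
    exc_w: callable returning the ensemble exchange-correlation energy
    as a function of the ensemble weight w."""
    dexc_dw = (exc_w(w + h) - exc_w(w - h)) / (2.0 * h)
    return (eps1 - eps0) + dexc_dw

# Toy model: Exc,w linear in w with slope 0.1 hartree, KS gap 0.4 hartree
print(gok_excitation_energy(lambda w: -0.5 + 0.1 * w, eps0=-0.6, eps1=-0.2))  # ≈ 0.5
```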
A DFT study of temperature dependent dissociation mechanism of HF in HF(H2O)7 cluster
Indian Academy of Sciences (India)
Swatantra K Yadav; Hirdyesh Mishra; Ashwani K Tiwari
2015-10-01
We report a density functional theory (DFT) study of the dissociation of hydrogen fluoride (HF) in the HF(H2O)7 cluster, using the B3LYP functional and the empirical exchange-correlation functional M06-2X along with the 6-31+G(d,p) basis set. The dissociation constant KRP of HF dissociation and pKa values of HF in the cluster at various temperatures are reported. It is found that both KRP and pKa are highly temperature dependent. The variation of pKa with temperature suggests that HF is a strong acid at lower temperatures. Our study also reveals that HF is a stronger acid in the water cluster than in bulk water. Further, the results obtained by DFT calculations have been compared with earlier reported results obtained from Monte Carlo (MC) simulation. The DFT results are qualitatively consistent with the results of MC simulation but quantitatively different.
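The link between the reported quantities is the standard thermodynamic one: a dissociation free energy gives the equilibrium constant via K = exp(-ΔG/RT), and pKa = -log10 K. A minimal sketch (the ΔG values are placeholders, not the cluster energies computed in the paper; note that the temperature dependence of pKa comes both from the explicit 1/T factor and from the variation of ΔG itself with temperature):

```python
import math

R = 8.314462618e-3  # gas constant, kJ/(mol K)

def pka_from_dg(dg_kj_per_mol, temperature):
    """pKa from a dissociation free energy at a given temperature:
        K = exp(-dG/RT)  =>  pKa = -log10(K) = dG / (RT ln 10)."""
    return dg_kj_per_mol / (R * temperature * math.log(10))

# Mechanics only: for a fixed dG, the explicit 1/T factor raises pKa as T drops
print(pka_from_dg(10.0, 200.0))
print(pka_from_dg(10.0, 300.0))
```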
Hirvonen, Petri; Ervasti, Mikko M.; Fan, Zheyong; Jalalvand, Morteza; Seymour, Matthew; Vaez Allaei, S. Mehdi; Provatas, Nikolas; Harju, Ari; Elder, Ken R.; Ala-Nissila, Tapio
2016-07-01
We extend the phase field crystal (PFC) framework to quantitative modeling of polycrystalline graphene. PFC modeling is a powerful multiscale method for finding the ground state configurations of large realistic samples that can be further used to study their mechanical, thermal, or electronic properties. By fitting to quantum-mechanical density functional theory (DFT) calculations, we show that the PFC approach is able to predict realistic formation energies and defect structures of grain boundaries. We provide an in-depth comparison of the formation energies between PFC, DFT, and molecular dynamics (MD) calculations. The DFT and MD calculations are initialized using atomic configurations extracted from PFC ground states. Finally, we use the PFC approach to explicitly construct large realistic polycrystalline samples and characterize their properties using MD relaxation to demonstrate their quality.
Derksen, R.; Scheffer, G.J.; Hoeven, J.G. van der
2006-01-01
Traditional theories of acid-base balance are based on the Henderson-Hasselbalch equation to calculate proton concentration. The recent revival of quantitative acid-base physiology using the Stewart model has increased our understanding of complicated acid-base disorders, but has also led to several
Korstanje, Ron; Desai, Jigar; Lazar, Gloria; King, Benjamin; Rollins, Jarod; Spurr, Melissa; Joseph, Jamie; Kadambi, Sindhuja; Li, Yang; Cherry, Allison; Matteson, Paul G.; Paigen, Beverly; Millonig, James H.
2008-01-01
Korstanje R, Desai J, Lazar G, King B, Rollins J, Spurr M, Joseph J, Kadambi S, Li Y, Cherry A, Matteson PG, Paigen B, Millonig JH. Quantitative trait loci affecting phenotypic variation in the vacuolated lens mouse mutant, a multigenic mouse model of neural tube defects. Physiol Genomics 35: 296-30
Wolusky, G. Anthony
2016-01-01
This quantitative study used a web-based questionnaire to assess the attitudes and perceptions of online and hybrid faculty towards student-centered asynchronous virtual teamwork (AVT) using the technology acceptance model (TAM) of Davis (1989). AVT is online student participation in a team approach to problem-solving culminating in a written…
MODIS volcanic ash retrievals vs FALL3D transport model: a quantitative comparison
Corradini, S.; Merucci, L.; Folch, A.
2010-12-01
Satellite retrievals and transport models represent the key tools to monitor the evolution of volcanic clouds. Because of the harmful effects of fine ash particles on aircraft, real-time tracking and forecasting of volcanic clouds is key for aviation safety. Besides safety, the economic consequences of airport disruption must also be taken into account. The airport closures due to the recent Icelandic Eyjafjöll eruption caused millions of passengers to be stranded not only in Europe but across the world. IATA (the International Air Transport Association) estimates that the worldwide airline industry lost a total of about 2.5 billion euros during the disruption. Both safety and economic issues require reliable and robust ash cloud retrievals and trajectory forecasting, and intercomparison between remote sensing and modeling is required to assure precise and reliable volcanic ash products. In this work we perform a quantitative comparison of Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of volcanic ash cloud mass and aerosol optical depth (AOD) with the FALL3D ash dispersal model. MODIS, aboard the NASA Terra and NASA Aqua polar satellites, is a multispectral instrument with 36 spectral bands operating in the VIS-TIR spectral range and spatial resolution varying between 250 and 1000 m at nadir. The MODIS channels centered around 11 and 12 micron have been used for the ash retrievals through the brightness temperature difference algorithm and MODTRAN simulations. FALL3D is a 3-D time-dependent Eulerian model for the transport and deposition of volcanic particles that outputs, among other variables, cloud column mass and AOD. Three MODIS images collected on October 28, 29 and 30 over Mt. Etna during the 2002 eruption have been considered as test cases. The results show generally good agreement between the retrieved and modeled volcanic clouds in the first 300 km from the vents. Even if the
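The brightness temperature difference test mentioned above exploits the reversed 11/12-micron spectral behaviour of silicate ash relative to water and ice clouds. A minimal sketch (the zero-kelvin threshold is only a nominal default; operational thresholds are scene dependent):

```python
def is_ash_pixel(bt11, bt12, threshold=0.0):
    """Brightness Temperature Difference (BTD) test for volcanic ash:
    silicate ash typically gives BT(11 um) - BT(12 um) below the threshold,
    while meteorological (water/ice) clouds give a positive difference."""
    return (bt11 - bt12) < threshold

print(is_ash_pixel(260.0, 262.5))  # True: negative BTD flags ash
print(is_ash_pixel(275.0, 272.0))  # False: positive BTD, likely water/ice cloud
```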
Directory of Open Access Journals (Sweden)
Igor Shuryak
Full Text Available Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil contaminated by nuclear waste at the Hanford site (USA); (2) fungi isolated from the Chernobyl nuclear power plant (Ukraine) buildings after the accident; (3) yeast subjected to continuous γ-irradiation in the laboratory, where radiation dose rate and cell removal rate were independently varied. We applied generalized linear mixed-effects models to describe the first two data sets, whereas the third data set was amenable to mechanistic modeling using differential equations. Machine learning and information-theoretic approaches were used to select the best-supported formalism(s) among biologically plausible alternatives. Our analysis suggests the following: (1) Both radionuclides and co-occurring chemical contaminants (e.g. NO2) are important for explaining microbial responses to radioactive contamination. (2) Radionuclides may produce non-monotonic dose responses: stimulation of microbial growth at low concentrations vs. inhibition at higher ones. (3) The extinction-defining critical radiation dose rate is dramatically lowered by additional stressors. (4) Reproduction suppression by radiation can be more important for determining the critical dose rate than radiation-induced cell mortality. In conclusion, the modeling approaches used here on three diverse data sets provide insight into explaining and predicting multi-stressor effects on microbial communities: (1) the most severe effects (e.g. extinction) on microbial populations may occur when unfavorable environmental
Fanchiang, Christine
Crew performance, including both accommodation and utilization factors, is an integral part of every human spaceflight mission, from commercial space tourism to the demanding journey to Mars and beyond. Spacecraft were historically built by engineers and technologists trying to adapt the vehicle into cutting-edge rocketry with the assumption that the astronauts could be trained and would adapt to the design. By and large, that is still the current state of the art. It is recognized, however, that poor human-machine design integration can lead to catastrophic and deadly mishaps. The premise of this work relies on the idea that if an accurate predictive model exists to forecast crew performance issues as a result of spacecraft design and operations, it can help designers and managers make better decisions throughout the design process, and ensure that the crewmembers are well-integrated with the system from the very start. The result should be a high-quality, user-friendly spacecraft that optimizes the utilization of the crew while keeping them alive, healthy, and happy during the course of the mission. Therefore, the goal of this work was to develop an integrative framework to quantitatively evaluate a spacecraft design from the crew performance perspective. The approach presented here is done at a very fundamental level, starting with identifying and defining basic terminology, and then builds up important axioms of human spaceflight that lay the foundation for how such a framework can be developed. With the framework established, a methodology for characterizing the outcome using a mathematical model was developed by pulling from existing metrics and data collected on human performance in space. Representative test scenarios were run to show what information could be garnered and how it could be applied as a useful, understandable metric for future spacecraft design. While the model is the primary tangible product from this research, the more interesting outcome of
Directory of Open Access Journals (Sweden)
Lili Jiang
2010-04-01
Escherichia coli chemotactic motion in spatiotemporally varying environments is studied using a computational model based on a coarse-grained description of the intracellular signaling pathway dynamics. We find that the cell's chemotaxis drift velocity v_d is a constant in an exponential attractant concentration gradient [L] ∝ exp(Gx). v_d depends linearly on the exponential gradient G before it saturates when G is larger than a critical value G_C. We find that G_C is determined by the intracellular adaptation rate k_R with a simple scaling law: G_C ∝ k_R^(1/2). The linear dependence of v_d on G = d(ln[L])/dx directly demonstrates E. coli's ability to sense the derivative of the logarithmic attractant concentration. The existence of the limiting gradient G_C and its scaling with k_R are explained by the underlying intracellular adaptation dynamics and the flagellar motor response characteristics. For individual cells, we find that the overall average run length in an exponential gradient is longer than that in a homogeneous environment, which is caused by the constant kinase activity shift (decrease). The forward runs (up the gradient) are longer than the backward runs, as expected; and, depending on the exact gradient, the (shorter) backward runs can be comparable to runs in a spatially homogeneous environment, consistent with previous experiments. In (spatial) ligand gradients that also vary in time, the chemotaxis motion is damped as the frequency ω of the time-varying spatial gradient becomes faster than a critical value ω_c, which is controlled by the cell's chemotaxis adaptation rate k_R. Finally, our model, with no adjustable parameters, agrees quantitatively with the classical capillary assay experiments where the attractant concentration changes both in space and time. Our model can thus be used to study E. coli chemotaxis behavior in arbitrary spatiotemporally varying environments. Further experiments are
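The reported saturation behaviour can be sketched in a few lines. Only the scaling G_C ∝ k_R^(1/2) and the linear-then-saturating dependence of v_d on G come from the abstract; the proportionality constants `a` and `chi` below are illustrative assumptions.

```python
import math

def critical_gradient(k_R, a=1.0):
    """Critical exponential gradient G_C, using the scaling G_C ~ k_R^(1/2).
    `a` is an assumed proportionality constant (not given in the abstract)."""
    return a * math.sqrt(k_R)

def drift_velocity(G, k_R, chi=1.0, a=1.0):
    """Piecewise-linear caricature of the reported behaviour:
    v_d grows linearly with G, then saturates above G_C."""
    G_C = critical_gradient(k_R, a)
    return chi * min(G, G_C)
```

Doubling the adaptation rate k_R raises the saturation gradient by a factor of √2 but leaves the initial slope unchanged.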
Institute of Scientific and Technical Information of China (English)
林畅松; 张燕梅; 李思田; 刘景彦; 仝志刚; 丁孝忠; 李喜臣
2002-01-01
The stretching process of some Tertiary rift basins in eastern China is characterized by multiphase rifting. A multiple instantaneous uniform stretching model is proposed in this paper to simulate the formation of the basins as the rifting process cannot be accurately described by a simple (one episode) stretching model. The study shows that the multiphase stretching model, combined with the back-stripping technique, can be used to reconstruct the subsidence history and the stretching process of the lithosphere, and to evaluate the depth to the top of the asthenosphere and the deep thermal evolution of the basins. The calculated results obtained by applying the quantitative model to the episodic rifting process of the Tertiary Qiongdongnan and Yinggehai basins in the South China Sea are in agreement with geophysical data and geological observations. This provides a new method for quantitative evaluation of the geodynamic process of multiphase rifting occurring during the Tertiary in eastern China.
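A minimal sketch of the kind of calculation described, assuming the classical McKenzie instantaneous-uniform-stretching result for post-rift thermal subsidence and simple linear superposition of episodes. The constants and the superposition are simplifying assumptions for illustration, not the authors' multiphase model, which rescales the thermal state at each event.

```python
import math

TAU_MYR = 62.8   # lithospheric thermal time constant (McKenzie, 1978), Myr
E0_KM = 3.15     # thermal-subsidence amplitude for standard parameters, km

def thermal_subsidence(beta, t_myr):
    """Post-rift thermal subsidence (km) after one instantaneous stretching
    event with stretching factor beta, t_myr after the event."""
    s_inf = E0_KM * (beta / math.pi) * math.sin(math.pi / beta)
    return s_inf * (1.0 - math.exp(-t_myr / TAU_MYR))

def multiphase_subsidence(episodes, t_now_myr):
    """Sum the thermal subsidence of several instantaneous episodes,
    each given as (onset time in Myr after basin initiation, beta)."""
    total = 0.0
    for t_start, beta in episodes:
        if t_now_myr >= t_start:
            total += thermal_subsidence(beta, t_now_myr - t_start)
    return total
```

Comparing such forward-modelled curves with back-stripped subsidence histories is what constrains the stretching factors episode by episode.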
Directory of Open Access Journals (Sweden)
Eric S. Haag
2016-12-01
Quantitative modeling is not a standard part of undergraduate biology education, yet it is routine in the physical sciences. Because of their obvious biophysical aspects, classes in anatomy and physiology offer an opportunity to introduce modeling approaches into the introductory curriculum. Here, we describe two in-class exercises for small groups working within a large-enrollment introductory course in organismal biology. Both build and derive biological insights from quantitative models, implemented using spreadsheets. One exercise models the evolution of anisogamy (i.e., small sperm and large eggs) from an initial state of isogamy. Groups of four students work on Excel spreadsheets (from one to four laptops per group). The other exercise uses an online simulator to generate data related to membrane transport of a solute, and a cloud-based spreadsheet to analyze them. We provide tips for implementing these exercises gleaned from two years of experience.
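The membrane-transport exercise amounts to row-by-row Euler integration of a first-order flux law, which a spreadsheet or a short script can do equally well. The permeability constant, step size and step count below are illustrative, not values from the exercise.

```python
def simulate_diffusion(c_in, c_out, P=0.1, dt=0.1, steps=100):
    """Euler integration of passive solute transport across a membrane,
    dc_in/dt = P * (c_out - c_in), where P lumps permeability and geometry.
    Each loop iteration mirrors one row of the students' spreadsheet."""
    history = [c_in]
    for _ in range(steps):
        c_in += P * (c_out - c_in) * dt
        history.append(c_in)
    return history
```

The internal concentration relaxes exponentially toward the external one, which is the qualitative behaviour students are asked to recover from the simulated data.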
Fischer, A.; Hoffmann, K.-H.
2004-03-01
In this case study a complex Otto engine simulation provides data that include, among other effects, losses due to heat conduction, exhaust losses and frictional losses. These data are used as a benchmark to test whether the Novikov engine with heat leak, a simple endoreversible model, can reproduce the complex engine behavior quantitatively through an appropriate choice of model parameters. The reproduction obtained proves to be of high quality.
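For reference, the endoreversible Novikov engine's efficiency at maximum power is the familiar 1 − √(T_cold/T_hot), and a heat leak lowers the effective efficiency because leaked heat is consumed without producing work. The heat-leak bookkeeping below is a simplified sketch, not the paper's parameterization.

```python
import math

def carnot_efficiency(T_hot, T_cold):
    """Reversible upper bound, for comparison."""
    return 1.0 - T_cold / T_hot

def novikov_efficiency(T_hot, T_cold):
    """Efficiency at maximum power of the endoreversible Novikov engine
    (identical in form to the Curzon-Ahlborn result)."""
    return 1.0 - math.sqrt(T_cold / T_hot)

def novikov_with_heat_leak(T_hot, T_cold, P_out, Q_leak):
    """Heat bypassing the working fluid lowers the effective efficiency:
    eta_eff = P / (Q_in + Q_leak), with Q_in inferred from the leak-free
    efficiency (a simplifying assumption)."""
    eta = novikov_efficiency(T_hot, T_cold)
    Q_in = P_out / eta
    return P_out / (Q_in + Q_leak)
```

With T_hot = 400 K and T_cold = 300 K, the maximum-power efficiency is about 13.4%, well below the Carnot bound of 25%, and any heat leak pushes it lower still.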
Pike, Richard J.
2002-01-01
Terrain modeling, the practice of ground-surface quantification, is an amalgam of Earth science, mathematics, engineering, and computer science. The discipline is known variously as geomorphometry (or simply morphometry), terrain analysis, and quantitative geomorphology. It continues to grow through myriad applications to hydrology, geohazards mapping, tectonics, sea-floor and planetary exploration, and other fields. Dating nominally to the co-founders of academic geography, Alexander von Humboldt (1808, 1817) and Carl Ritter (1826, 1828), the field was revolutionized late in the 20th Century by the computer manipulation of spatial arrays of terrain heights, or digital elevation models (DEMs), which can quantify and portray ground-surface form over large areas (Maune, 2001). Morphometric procedures are implemented routinely by commercial geographic information systems (GIS) as well as specialized software (Harvey and Eash, 1996; Köthe and others, 1996; ESRI, 1997; Drzewiecki et al., 1999; Dikau and Saurer, 1999; Djokic and Maidment, 2000; Wilson and Gallant, 2000; Breuer, 2001; Guth, 2001; Eastman, 2002). The new Earth Surface edition of the Journal of Geophysical Research, specializing in surficial processes, is the latest of many publication venues for terrain modeling. This is the fourth update of a bibliography and introduction to terrain modeling (Pike, 1993, 1995, 1996, 1999) designed to collect the diverse, scattered literature on surface measurement as a resource for the research community. The use of DEMs in science and technology continues to accelerate and diversify (Pike, 2000a). New work appears so frequently that a sampling must suffice to represent the vast literature. This report adds 1636 entries to the 4374 in the four earlier publications. Forty-eight additional entries correct dead Internet links and other errors found in the prior listings. Chronicling the history of terrain modeling, many entries in this report predate the 1999 supplement
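A basic morphometric operation on a DEM, computing slope from finite differences of the elevation grid, can be sketched as follows; the grid spacing and differencing scheme are illustrative choices.

```python
import numpy as np

def slope_degrees(dem, cell=1.0):
    """Slope (degrees) from a DEM grid via central differences on interior
    cells, one of the elementary geomorphometric derivatives."""
    dz_dy, dz_dx = np.gradient(dem, cell)           # gradients along rows, cols
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
```

GIS packages compute the same quantity (often with the Horn weighting) as the first step toward aspect, curvature, and drainage derivatives.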
Quantitative Simulation of Granular Collapse Experiments with Visco-Plastic Models
Mangeney, A.; Ionescu, I. R.; Bouchut, F.; Roche, O.
2014-12-01
One of the key issues in landslide modeling is to define the appropriate rheological behavior of these natural granular flows. In particular, the description of the static and flowing states of granular media is still an open issue, and it plays a crucial role in erosion/deposition processes. A first step to address this issue is to derive models able to reproduce laboratory experiments of granular flows. We propose here a mechanical and numerical model of dry granular flows that quantitatively reproduces granular column collapse over inclined planes, with rheological parameters directly derived from the laboratory experiments. We reformulate the so-called μ(I) rheology proposed by Jop et al. (2006), where I is the inertial number, in the framework of Drucker-Prager plasticity with a yield stress and a viscosity η(||D||, p) depending on both the pressure p and the norm of the strain rate tensor ||D||. The resulting dynamic viscosity varies from very small values near the free surface and near the front to 1.5 Pa·s within the quasi-static zone. We show that taking into account a constant mean viscosity during the flow (η = 1 Pa·s here) provides results very similar to those obtained with the variable viscosity deduced from the μ(I) rheology, while significantly reducing the computational cost. This has important implications for application to real landslides and rock avalanches. The numerical results show that the flow is essentially located in a surface layer behind the front, while the whole granular material is flowing near the front, where basal sliding occurs. The static/flowing interface changes as a function of space and time, in good agreement with experimental observations. Heterogeneities are observed within the flow, with low- and high-pressure zones, localized small upward-velocity zones, and vortices near the transition between the flowing and static grains. These instabilities create 'sucking zones' and have some characteristics similar
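The pressure- and strain-rate-dependent viscosity follows directly from the μ(I) friction law. A sketch, using typical glass-bead parameter values from the literature rather than the values fitted in this study:

```python
import math

def mu_of_I(I, mu_s=0.38, mu_2=0.64, I0=0.28):
    """Jop et al. (2006) friction law mu(I); parameter values are typical
    glass-bead numbers, assumed here for illustration."""
    return mu_s + (mu_2 - mu_s) / (I0 / I + 1.0)

def viscosity(p, norm_D, d=5e-4, rho=2500.0):
    """Drucker-Prager-type effective viscosity eta(||D||, p) = mu(I) p / ||D||,
    with inertial number I = 2 d ||D|| / sqrt(p / rho) for grain diameter d
    and grain density rho (assumed values)."""
    I = 2.0 * d * norm_D / math.sqrt(p / rho)
    return mu_of_I(I) * p / norm_D
```

The viscosity blows up as ||D|| → 0 at fixed pressure, which is how the formulation captures the quasi-static (yielded but barely flowing) regions of the collapse.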
Energy Technology Data Exchange (ETDEWEB)
Yang, Jinzhong, E-mail: jyang4@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Woodward, Wendy A.; Reed, Valerie K.; Strom, Eric A.; Perkins, George H.; Tereffe, Welela; Buchholz, Thomas A. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Zhang, Lifei; Balter, Peter; Court, Laurence E. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Li, X. Allen [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin (United States); Dong, Lei [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Scripps Proton Therapy Center, San Diego, California (United States)
2014-05-01
Purpose: To develop a new approach for interobserver variability analysis. Methods and Materials: Eight radiation oncologists specializing in breast cancer radiation therapy delineated a patient's left breast “from scratch” and from a template that was generated using deformable image registration. Three of the radiation oncologists had previously received training in the Radiation Therapy Oncology Group (RTOG) consensus contouring atlas for breast cancer. The simultaneous truth and performance level estimation algorithm was applied to the 8 contours delineated “from scratch” to produce a group consensus contour. Individual Jaccard scores were fitted to a beta distribution model. We also applied this analysis to 2 more patients, who were contoured by 9 breast radiation oncologists from 8 institutions. Results: The beta distribution model had a mean of 86.2%, a standard deviation (SD) of ±5.9%, a skewness of −0.7, and an excess kurtosis of 0.55, exemplifying broad interobserver variability. The 3 RTOG-trained physicians had higher agreement scores than average, indicating that their contours were close to the group consensus contour. One physician had high sensitivity but lower specificity than the others, which implies that this physician tended to contour a structure larger than those of the others. Two other physicians had low sensitivity but specificity similar to the others, which implies that they tended to contour a structure smaller than the others. With this information, they could adjust their contouring practice to be more consistent with the others if desired. When contouring from the template, the beta distribution model had a mean of 92.3%, an SD of ±3.4%, a skewness of −0.79, and an excess kurtosis of 0.83, which indicated much better consistency among individual contours. Similar results were obtained for the analysis of the 2 additional patients. Conclusions: The proposed statistical approach was able to measure interobserver variability quantitatively.
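The two quantitative ingredients of this analysis, the Jaccard overlap between contours and a method-of-moments fit of the scores to a beta distribution, can be sketched as:

```python
def jaccard(a, b):
    """Jaccard overlap of two voxel sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def beta_from_moments(mean, var):
    """Method-of-moments estimates of Beta(alpha, beta) shape parameters
    from the sample mean and variance of scores in (0, 1)."""
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common
```

Fitting the pooled Jaccard scores to a beta distribution is what lets the mean, SD, skewness and excess kurtosis quoted above summarize the whole group's variability in four numbers.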
Improved DFT Potential Energy Surfaces via Improved Densities.
Kim, Min-Cheol; Park, Hansol; Son, Suyeon; Sim, Eunji; Burke, Kieron
2015-10-01
Density-corrected DFT is a method that cures several failures of self-consistent semilocal DFT calculations by using a more accurate density instead. A novel procedure applies the Hartree-Fock density to bonds that are more severely stretched than ever before. This substantially increases the range of accurate potential energy surfaces obtainable from semilocal DFT for many heteronuclear molecules. We show that this works for both neutral and charged molecules. We explain why, and explore more difficult cases, for example CH+, where density-corrected DFT results are even better than those of sophisticated methods like CCSD. We give a simple criterion, applicable in most cases, for when DC-DFT should be more accurate than self-consistent DFT.
Wang, Ying; Yang, Xianhai; Wang, Juying; Cong, Yi; Mu, Jingli; Jin, Fei
2016-05-05
In the present study, quantitative structure-activity relationship (QSAR) techniques based on the toxicity mechanism and density functional theory (DFT) descriptors were adopted to develop predictive models for the toxicity of alkylated and parent aromatic hydrocarbons to Vibrio fischeri. The acute toxicity data of 17 aromatic hydrocarbons, from both the literature and our experimental results, were used to construct QSAR models by partial least squares (PLS) analysis. Taking into account the toxicity process, the partition of aromatic hydrocarbons between the water phase and the lipid phase, and their interaction with the target biomolecule, the optimal QSAR model was obtained by introducing the aqueous freely dissolved concentration. The high statistical values of R² (0.956) and cumulative Q² (0.942) indicated that the model has good fitting performance, robustness and internal predictive power. The average molecular polarizability (α) and several selected thermodynamic parameters reflecting the intermolecular interactions played important roles in the partition of aromatic hydrocarbons between the water phase and the biomembrane. The energy of the highest occupied molecular orbital (E_HOMO) was the most influential descriptor, dominating the toxicity of aromatic hydrocarbons through the electron-transfer reaction with biomolecules. The results demonstrated that the adoption of the freely dissolved concentration instead of the nominal concentration is a beneficial approach for toxicity QSAR modeling of hydrophobic organic chemicals.
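The fit-and-score workflow can be illustrated with ordinary least squares standing in for PLS (PLS handles collinear descriptors better; this is only a sketch of the pattern, with invented descriptor data):

```python
import numpy as np

def fit_qsar(X, y):
    """Least-squares stand-in for the PLS step: standardize the descriptor
    matrix X, fit linear coefficients against activity y, and report the
    goodness-of-fit R^2."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    A = np.column_stack([np.ones(len(y)), Xs])      # intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return coef, r2
```

In the study itself, X would hold descriptors such as α and E_HOMO, y the log-transformed toxicity endpoints, and a cross-validated Q² would be reported alongside R².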
Mendlinger, Sheryl; Cwikel, Julie
2008-02-01
A double helix spiral model is presented which demonstrates how to combine qualitative and quantitative methods of inquiry in an interactive fashion over time. Using findings on women's health behaviors (e.g., menstruation, breast-feeding, coping strategies), we show how qualitative and quantitative methods highlight the theory of knowledge acquisition in women's health decisions. A rich data set of 48 semistructured, in-depth ethnographic interviews with mother-daughter dyads from six ethnic groups (Israeli, European, North African, Former Soviet Union [FSU], American/Canadian, and Ethiopian), plus seven focus groups, provided the qualitative sources for analysis. This data set formed the basis of research questions used in a quantitative telephone survey of 302 Israeli women from the ages of 25 to 42 from four ethnic groups. We employed multiple cycles of data analysis from both data sets to produce a more detailed and multidimensional picture of women's health behavior decisions through a spiraling process.
Catalytic activity trends of CO oxidation – A DFT study
DEFF Research Database (Denmark)
Jiang, Tao
eigenmodes and eigenvalues, and improving algorithms for geometry optimization in electronic structure calculations. The catalytic activity of gold nanoparticles has received wide attention since the discovery of their activity on CO oxidation by Professor Haruta in 1987. By using density functional theory...... (DFT) and microkinetic modeling, we study the CO oxidation reaction pathway on a number of transition and noble metals, i.e. Au, Ag, Pt, Pd, Cu, Ni, Rh, Ru, with different surface morphologies: close-packed surfaces, stepped surfaces, kinked surfaces, as well as a 12-atom corner model of a larger...... nanoparticle. The upper bound of the catalytic activity (Sabatier activity) is then obtained and shows that at room temperature the gold nanoparticle is the best catalyst for CO oxidation among all the metals considered. Under high-temperature reaction conditions, however, the close-packed Pt surface becomes most...
Energy Technology Data Exchange (ETDEWEB)
Rettmann, Maryam E., E-mail: rettmann.maryam@mayo.edu; Holmes, David R.; Camp, Jon J.; Cameron, Bruce M.; Robb, Richard A. [Biomedical Imaging Resource, Mayo Clinic College of Medicine, Rochester, Minnesota 55905 (United States); Kwartowitz, David M. [Department of Bioengineering, Clemson University, Clemson, South Carolina 29634 (United States); Gunawan, Mia [Department of Biochemistry and Molecular and Cellular Biology, Georgetown University, Washington D.C. 20057 (United States); Johnson, Susan B.; Packer, Douglas L. [Division of Cardiovascular Diseases, Mayo Clinic, Rochester, Minnesota 55905 (United States); Dalegrave, Charles [Clinical Cardiac Electrophysiology, Cardiology Division Hospital Sao Paulo, Federal University of Sao Paulo, 04024-002 Brazil (Brazil); Kolasa, Mark W. [David Grant Medical Center, Fairfield, California 94535 (United States)
2014-02-15
Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved
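The landmark-based step is classically solved with an SVD-based rigid alignment (the Kabsch method); a sketch, with target registration error (TRE) as the accuracy measure. This is a generic implementation of the standard algorithm, not the authors' code.

```python
import numpy as np

def landmark_register(src, dst):
    """Rigid (rotation + translation) landmark registration via SVD,
    returning R, t such that dst ~ R @ src + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def target_registration_error(R, t, targets_src, targets_dst):
    """Mean distance between mapped and true target points (TRE)."""
    mapped = targets_src @ R.T + t
    return float(np.mean(np.linalg.norm(mapped - targets_dst, axis=1)))
```

In the Monte Carlo studies described above, noise would be added to the fiducial and surface points before registration, and the resulting TRE distribution summarizes the sensitivity to each error source.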
Directory of Open Access Journals (Sweden)
Jingpei Wang
2016-01-01
Various P2P trust models have been proposed recently; it is necessary to develop an effective method to evaluate them, in order to resolve both commonality issues (guiding newly generated trust models in theory) and individuality issues (assisting a decision maker in choosing an optimal trust model to implement in a specific context). A new method for analyzing and comparing P2P trust models, based on hierarchical parameter quantization in file-downloading scenarios, is proposed in this paper. Several parameters are extracted from the functional attributes and quality features of the trust relationship, as well as from the requirements of the specific network context and the evaluators. Several distributed P2P trust models are analyzed quantitatively, with the extracted parameters organized into a hierarchical model. The fuzzy inference method is applied to this hierarchy to fuse the evaluated values of the candidate trust models, and the relatively optimal one is then selected based on the sorted overall quantitative values. Finally, analyses and simulation are performed. The results show that the proposed method is reasonable and effective compared with previous algorithms.
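The fusion-and-ranking step can be caricatured with a crisp weighted average standing in for fuzzy inference; the parameter names, weights and model names below are invented for illustration.

```python
def aggregate_scores(scores, weights):
    """Weighted aggregation of per-parameter evaluations (a crisp stand-in
    for the paper's fuzzy-inference fusion step)."""
    total_w = sum(weights.values())
    return sum(scores[k] * w for k, w in weights.items()) / total_w

def rank_trust_models(models, weights):
    """Sort candidate trust models, given as (name, scores) pairs, by their
    fused overall quantitative value, best first."""
    return sorted(models,
                  key=lambda item: aggregate_scores(item[1], weights),
                  reverse=True)
```

The hierarchy in the paper plays the role of the `weights` mapping here: context and evaluator requirements determine how much each extracted parameter counts toward the overall value.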
Moynihan, Glenn; Teobaldi, Gilberto; O'Regan, David D.
2016-12-01
In approximate density-functional theory (DFT), the self-interaction error is an electron delocalization anomaly associated with underestimated insulating gaps. It exhibits a predominantly quadratic energy-density curve that is amenable to correction using efficient, constraint-resembling methods such as DFT + Hubbard U (DFT+U). Constrained DFT (cDFT) enforces conditions on DFT exactly, by means of self-consistently optimized Lagrange multipliers, and while its use to automate error corrections is a compelling possibility, we show that it is limited by a fundamental incompatibility with constraints beyond linear order. We circumvent this problem by utilizing separate linear and quadratic correction terms, which may be interpreted either as distinct constraints, each with its own Hubbard U-type Lagrange multiplier, or as the components of a generalized DFT+U functional. The latter approach prevails in our tests on a model one-electron system, H2+, in that it readily recovers the exact total energy, while symmetry-preserving pure constraints fail to do so. The generalized DFT+U functional moreover enables the simultaneous correction of the total energy and ionization potential, or the correction of either together with the enforcement of Koopmans' condition. For the latter case, we outline a practical, approximate scheme by which the required pair of Hubbard parameters, denoted U1 and U2, may be calculated from first principles.
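The quadratic DFT+U-style correction itself is easy to illustrate on a toy energy curve: adding (U/2) N (1 − N) to an energy with spurious curvature k in the fractional occupation N restores piecewise linearity when U matches k. All numbers below are invented for the illustration.

```python
def e_dft(N, E0=0.0, I=10.0, k=4.0):
    """Toy approximate-DFT total energy with spurious quadratic curvature k
    in the fractional occupation N of a frontier orbital (I ~ ionization
    potential slope)."""
    return E0 - I * N + 0.5 * k * N ** 2

def e_corrected(N, U, **kw):
    """DFT+U-style quadratic correction (U/2) N (1 - N); choosing U equal
    to the spurious curvature makes E linear in N, as the exact functional
    requires between integer occupations."""
    return e_dft(N, **kw) + 0.5 * U * N * (1.0 - N)
```

The point of the paper is that enforcing such behaviour through cDFT-style constraints fails beyond linear order, whereas treating U as a functional parameter (here, simply the coefficient of the added term) works.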
A conceptual DFT approach towards analysing toxicity
Indian Academy of Sciences (India)
U Sarkar; D R Roy; P K Chattaraj; R Parthasarathi; J Padmanabhan; V Subramanian
2005-09-01
The applicability of DFT-based descriptors for the development of toxicological structure-activity relationships is assessed. Emphasis in the present study is on the quality of DFT-based descriptors for the development of toxicological QSARs and, more specifically, on the potential of the electrophilicity concept in predicting the toxicity of benzidine derivatives and of a series of polyaromatic hydrocarbons (PAH) expressed in terms of their biological activity data (50). First, two benzidine derivatives, which act as electron-donating agents in their interactions with biomolecules, are considered. Overall toxicity in general, and the most probable site of reactivity in particular, are effectively described by the global and local electrophilicity parameters respectively. The interaction of the two benzidine derivatives with nucleic acid (NA) bases/selected base pairs is determined using Parr's charge transfer formula. The experimental biological activity data (50) for the PAH family, namely polychlorinated dibenzofurans (PCDF), polyhalogenated dibenzo-p-dioxins (PHDD) and polychlorinated biphenyls (PCB), are taken as dependent variables, and the HF energy, along with DFT-based global and local descriptors, viz. the electrophilicity index (ω) and the local electrophilic power (ω+) respectively, are taken as independent variables. Fairly good correlation is obtained, showing the significance of the selected descriptors in QSARs for toxins that act as electron acceptors in the presence of biomolecules. The effects of population analysis schemes on the calculation of Fukui functions, as well as those of solvation, are probed. Similarly, some electron-donor aliphatic amines are studied in the present work. We see that the global and local electrophilicities, along with the HF energy, are adequate in explaining the toxicity of several substances.
Seamster, Pamela E; Loewenberg, Michael; Pascal, Jennifer; Chauviere, Arnaud; Gonzales, Aaron; Cristini, Vittorio; Bearer, Elaine L
2013-01-01
The kinesins have long been known to drive microtubule-based transport of sub-cellular components, yet the mechanisms of their attachment to cargo remain a mystery. Several different cargo-receptors have been proposed based on their in vitro binding affinities to kinesin-1. Only two of these—phosphatidyl inositol, a negatively charged lipid, and the carboxyl terminus of the amyloid precursor protein (APP-C), a trans-membrane protein—have been reported to mediate motility in living systems. A major question is how these many different cargo, receptors and motors interact to produce the complex choreography of vesicular transport within living cells. Here we describe an experimental assay that identifies cargo–motor receptors by their ability to recruit active motors and drive transport of exogenous cargo towards the synapse in living axons. Cargo is engineered by derivatizing the surface of polystyrene fluorescent nanospheres (100 nm diameter) with charged residues or with synthetic peptides derived from candidate motor receptor proteins, all designed to display a terminal COOH group. After injection into the squid giant axon, particle movements are imaged by laser-scanning confocal time-lapse microscopy. In this report we compare the motility of negatively charged beads with APP-C beads in the presence of glycine-conjugated non-motile beads using new strategies to measure bead movements. The ensuing quantitative analysis of time-lapse digital sequences reveals detailed information about bead movements: instantaneous and maximum velocities, run lengths, pause frequencies and pause durations. These measurements provide parameters for a mathematical model that predicts the spatiotemporal evolution of distribution of the two different types of bead cargo in the axon. The results reveal that negatively charged beads differ from APP-C beads in velocity and dispersion, and predict that at long time points APP-C will achieve greater progress towards the presynaptic
Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin
2016-03-01
How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that, using all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (pchemotherapy.
Mariani, M.; Connor, S. E.; Theuerkauf, M.; Kuneš, P.; Fletcher, M.-S.
2016-12-01
Reconstructing past vegetation abundance and land-cover changes through time has important implications in land management and climate modelling. To date palaeovegetation reconstructions in Australia have been limited to qualitative or semi-quantitative inferences from pollen data. Testing pollen dispersal models constitutes a crucial step in developing quantitative past vegetation and land cover reconstructions. Thus far, the application of quantitative pollen dispersal models has been restricted to regions dominated by wind-pollinated plants (e.g. Europe) and their performance in a landscape dominated by animal-pollinated plant taxa is still unexplored. Here we test, for the first time in Australia, two well-known pollen dispersal models to assess their performance in the wind- and animal-pollinated vegetation mosaics of western Tasmania. We focus on a mix of wind- (6 taxa) and animal- (7 taxa) pollinated species that comprise the most common pollen types and key representatives of the dominant vegetation formations. Pollen Productivity Estimates and Relevant Source Area of Pollen obtained using Lagrangian Stochastic turbulent simulations appear to be more realistic when compared to the results from the widely used Gaussian Plume Model.
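A forward pollen-dispersal model of this kind combines per-taxon pollen productivity with a distance-weighting kernel integrated over the surrounding vegetation. The Gaussian-plume-style kernel below is purely illustrative: its functional form, parameter values, and the taxon name are assumptions, not the models tested in the paper.

```python
import math

def dispersal_kernel(r, v_g=0.03, u=3.0, H=10.0):
    """Assumed plume-style deposition weight at distance r (m) for a grain
    with settling speed v_g released at height H into mean wind u; the
    linear plume spread sigma = 0.3 r is also an assumption."""
    sigma = 0.3 * r
    z = H - v_g * r / u                  # height of the settling grain
    return math.exp(-0.5 * (z / sigma) ** 2) / (u * sigma * math.sqrt(2 * math.pi))

def pollen_loading(abundance_by_ring, ppe):
    """Site pollen loading: pollen-productivity-weighted vegetation abundance
    summed over distance rings, the forward step that ERV-style calibration
    inverts. `abundance_by_ring` maps taxon -> [(distance_m, abundance), ...]."""
    return {taxon: ppe[taxon] * sum(a * dispersal_kernel(r) for r, a in rings)
            for taxon, rings in abundance_by_ring.items()}
```

Swapping `dispersal_kernel` for a Lagrangian-simulation-derived kernel, as the paper does, changes the distance weighting (and hence the relevant source area) without altering this overall structure.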
Energy Technology Data Exchange (ETDEWEB)
Xiao, Zhihua [The Hong Kong Polytechnic University, Shenzhen Research Institute, Shenzhen (China); PolyU Base (Shenzhen) Limited, Shenzhen (China); Department of Mechanical Engineering, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong (China); Hao, Mingjun [The Hong Kong Polytechnic University, Shenzhen Research Institute, Shenzhen (China); Department of Mechanical Engineering, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong (China); Guo, Xianghua [State Key Laboratory of Explosion and Safety Science, Beijing Institute of Technology, Beijing 100081 (China); Tang, Guoyi [Advanced Materials Institute, Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055 (China); Shi, San-Qiang, E-mail: mmsqshi@polyu.edu.hk [The Hong Kong Polytechnic University, Shenzhen Research Institute, Shenzhen (China); PolyU Base (Shenzhen) Limited, Shenzhen (China); Department of Mechanical Engineering, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong (China)
2015-04-15
A quantitative free energy functional developed in Part I (Shi and Xiao, 2014 [1]) was applied to model temperature-dependent δ-hydride precipitation in zirconium at real time and length scales. First, the effect of external tensile load on the reorientation of δ-hydrides was calibrated against experimental observations, which provides a modification factor for the strain energy in the free energy formulation. Then, two types of temperature-related problems were investigated. In the first type, the effect of temperature transients was studied by cooling the Zr–H system at different cooling rates from high temperature while an external tensile stress was maintained; at the end of the transients, the average hydride size as a function of cooling rate was compared to experimental data. In the second type, the effect of temperature gradients was studied in one- or two-dimensional temperature fields under different boundary conditions. The results show that hydride precipitation concentrates in low-temperature regions and eventually leads to the formation of hydride blisters in zirconium. A brief discussion on how to implement the hysteresis of hydrogen solid solubility in hydride precipitation and dissolution within the developed phase-field scheme is also presented.
van de Streek, Jacco; Neumann, Marcus A
2014-12-01
In 2010 we energy-minimized 225 high-quality single-crystal (SX) structures with dispersion-corrected density functional theory (DFT-D) to establish a quantitative benchmark. For the current paper, 215 organic crystal structures determined from X-ray powder diffraction (XRPD) data and published in an IUCr journal were energy-minimized with DFT-D and compared to the SX benchmark. The on average slightly less accurate atomic coordinates of XRPD structures do lead to systematically higher root mean square Cartesian displacement (RMSCD) values upon energy minimization than for SX structures, but the RMSCD value is still a good indicator for the detection of structures that deserve a closer look. The upper RMSCD limit for a correct structure must be increased from 0.25 Å for SX structures to 0.35 Å for XRPD structures; the grey area must be extended from 0.30 to 0.40 Å. Based on the energy minimizations, three structures are re-refined to give more precise atomic coordinates. For six structures our calculations provide the missing positions for the H atoms, for five structures they provide corrected positions for some H atoms. Seven crystal structures showed a minor error for a non-H atom. For five structures the energy minimizations suggest a higher space-group symmetry. For the 225 SX structures, the only deviations observed upon energy minimization were three minor H-atom related issues. Preferred orientation is the most important cause of problems. A preferred-orientation correction is the only correction where the experimental data are modified to fit the model. We conclude that molecular crystal structures determined from powder diffraction data that are published in IUCr journals are of high quality, with less than 4% containing an error in a non-H atom.
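The RMSCD criterion described above is straightforward to compute. This minimal sketch (coordinates assumed to be already overlaid in a common Cartesian frame) applies the 0.35/0.40 Å limits quoted in the abstract for XRPD structures:

```python
import math

# Limits from the abstract: up to 0.35 A is "correct" for XRPD structures,
# 0.35-0.40 A is the grey area; beyond that a structure deserves a closer look.
CORRECT_LIMIT = 0.35
GREY_LIMIT = 0.40

def rmscd(coords_exp, coords_min):
    """Root mean square Cartesian displacement (Angstrom) between experimental
    and DFT-D energy-minimized atomic coordinates, given as (x, y, z) tuples."""
    n = len(coords_exp)
    total = sum((xe - xm) ** 2 + (ye - ym) ** 2 + (ze - zm) ** 2
                for (xe, ye, ze), (xm, ym, zm) in zip(coords_exp, coords_min))
    return math.sqrt(total / n)

def classify(value):
    """Flag a structure according to its RMSCD value."""
    if value <= CORRECT_LIMIT:
        return "correct"
    if value <= GREY_LIMIT:
        return "grey area"
    return "deserves a closer look"
```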
DFT calculations with the exact functional
Burke, Kieron
2014-03-01
I will discuss several works in which we calculate the exact exchange-correlation functional of density functional theory, mostly using the density-matrix renormalization group method invented by Steve White, our collaborator. We demonstrate that a Mott-Hubbard insulator is a band metal. We also perform Kohn-Sham DFT calculations with the exact functional and prove that a simple algorithm always converges, but we find that convergence becomes harder as correlations get stronger. An example from transport through molecular wires may also be discussed. Work supported by DOE grant DE-SC008696.
Chen, Feifei; Wang, Yujiao; Xie, Xiaomei; Chen, Meng; Li, Wei
2014-07-15
A comparative study of DFT and DFT-D3 has been carried out on the UV-vis absorption of permethrin, cypermethrin and their β-cyclodextrin inclusion complexes. The TDDFT method with the PCM (or COSMO) solvation model was adopted, and the B3LYP, BLYP and BLYP-D3 functionals were selected. Comparing the simulated spectra with the experimental ones shows that the pure BLYP functional reproduces the UV-vis spectra better than hybrid B3LYP, while the empirically dispersion-corrected BLYP-D3 performs better still. BLYP-D3 calculations reveal that the main absorption bands of permethrin and cypermethrin arise from π→π(*) transitions; after encapsulation by β-CD to form inclusion complexes, host-guest intermolecular charge transfer (ICT) changes the main absorption bands significantly in both wavelength and intensity.
DEFF Research Database (Denmark)
Vinggaard, Annemarie; Niemelä, Jay Russell; Wedebye, Eva Bay;
2008-01-01
We have screened 397 chemicals for human androgen receptor (AR) antagonism by a sensitive reporter gene assay to generate data for the development of a quantitative structure-activity relationship (QSAR) model. A total of 523 chemicals comprising data on 292 chemicals from our laboratory and data...... by the synthetic androgen R1881. The MultiCASE expert system was used to construct a QSAR model for AR antagonizing potential. A "5 Times, 2-Fold 50% Cross Validation" of the model showed a sensitivity of 64%, a specificity of 84%, and a concordance of 76%. Data for 102 chemicals were generated for an external...... validation of the model, resulting in a sensitivity of 57%, a specificity of 98%, and a concordance of 92%. The model was run on a set of 176103 chemicals, and 47% were within the domain of the model. Approximately 8% of the chemicals were predicted active for AR antagonism. We conclude......
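The validation figures quoted above (sensitivity, specificity, concordance) follow directly from confusion-matrix counts. A minimal sketch, with hypothetical counts rather than the study's actual data:

```python
def classification_stats(tp, fn, tn, fp):
    """Sensitivity, specificity and concordance (overall accuracy) from
    confusion-matrix counts, as used in QSAR model validation.
    tp/fn: active chemicals predicted active/inactive;
    tn/fp: inactive chemicals predicted inactive/active."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    concordance = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, concordance
```

For example, `classification_stats(8, 2, 9, 1)` gives a sensitivity of 0.80, a specificity of 0.90 and a concordance of 0.85.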
Gramatica, Paola; Papa, Ester; Marrocchi, Assunta; Minuti, Lucio; Taticchi, Aldo
2007-03-01
Various polycyclic aromatic hydrocarbons (PAHs), ubiquitous environmental pollutants, are recognized mutagens and carcinogens. A homogeneous set of mutagenicity data (TA98 and TA100,+S9) for 32 benzocyclopentaphenanthrenes/chrysenes was modeled by the quantitative structure-activity relationship classification methods k-nearest neighbor and classification and regression tree, using theoretical holistic molecular descriptors. Genetic algorithm provided the selection of the best subset of variables for modeling mutagenicity. The models were validated by leave-one-out and leave-50%-out approaches and have good performance, with sensitivity and specificity ranges of 90-100%. Mutagenicity assessment for these PAHs requires only a few theoretical descriptors of their molecular structure.
Sostelly, Alexandre; Payen, Léa; Guitton, Jérôme; Di Pietro, Attilio; Falson, Pierre; Honorat, Mylène; Boumendjel, Ahcène; Gèze, Annabelle; Freyer, Gilles; Tod, Michel
2014-04-01
ATP-binding cassette transporters such as ABCG2 confer resistance to various anticancer drugs, including irinotecan and its active metabolite, SN38. Early quantitative evaluation of efflux-transporter-inhibitor/cytotoxic combinations requires quantitative drug-disease models. A proof-of-concept study was carried out on the effect of a new ABCG2 transporter inhibitor, MBLI87, combined with irinotecan in mice xenografted with cells overexpressing ABCG2. Mice were treated with irinotecan alone or combined with MBLI87, and tumour size was measured periodically. To model those data, a tumour growth inhibition model was developed. Unperturbed tumour growth was modelled using Simeoni's model, and drug effect kinetics were accounted for by a kinetic-pharmacodynamic approach. The effect of the inhibitor was described with a pharmacodynamic interaction model in which the inhibitor enhances the activity of the cytotoxic agent. This model correctly predicted tumour growth dynamics from our study; MBLI87 increased irinotecan potency by 20% per μmol of MBLI87. The model retains enough complexity to simultaneously describe tumour growth and the effect of this type of drug combination, and can thus be used as a template for early in-vivo evaluation of efflux transporter inhibitors.
Fahmi, Rachid; Eck, Brendan L.; Levi, Jacob; Fares, Anas; Dhanantwari, Amar; Vembar, Mani; Bezerra, Hiram G.; Wilson, David L.
2016-03-01
We optimized and evaluated dynamic myocardial CT perfusion (CTP) imaging on a prototype spectral detector CT (SDCT) scanner. Simultaneous acquisition of energy sensitive projections on the SDCT system enabled projection-based material decomposition, which typically performs better than image-based decomposition required by some other system designs. In addition to virtual monoenergetic, or keV images, the SDCT provided conventional (kVp) images, allowing us to compare and contrast results. Physical phantom measurements demonstrated linearity of keV images, a requirement for quantitative perfusion. Comparisons of kVp to keV images demonstrated very significant reductions in tell-tale beam hardening (BH) artifacts in both phantom and pig images. In phantom images, consideration of iodine contrast to noise ratio and small residual BH artifacts suggested optimum processing at 70 keV. The processing pipeline for dynamic CTP measurements included 4D image registration, spatio-temporal noise filtering, and model-independent singular value decomposition deconvolution, automatically regularized using the L-curve criterion. In normal pig CTP, 70 keV perfusion estimates were homogeneous throughout the myocardium. At 120 kVp, flow was reduced by more than 20% on the BH-hypo-enhanced myocardium, a range that might falsely indicate actionable ischemia, considering the 0.8 threshold for actionable FFR. With partial occlusion of the left anterior descending (LAD) artery (FFR < 0.8), perfusion defects at 70 keV were correctly identified in the LAD territory. At 120 kVp, BH affected the size and flow in the ischemic area; e.g. with FFR ≈ 0.65, the anterior-to-lateral flow ratio was 0.29 ± 0.01, over-estimating stenosis severity as compared to 0.42 ± 0.01 (p < 0.05) at 70 keV. On the non-ischemic inferior wall (not a LAD territory), the flow ratio was 0.50 ± 0.04 falsely indicating an actionable ischemic condition in a healthy
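The model-independent deconvolution step described above solves tissue(t) = AIF ⊛ R(t) for the flow-scaled residue function R. A minimal unregularized sketch via forward substitution; real pipelines regularize this ill-conditioned inversion, e.g. with truncated SVD and the L-curve criterion as in the paper:

```python
def deconvolve_residue(aif, tissue, dt):
    """Recover R from tissue[i] = dt * sum_{j<=i} aif[i-j] * R[j]
    (discrete convolution with a lower-triangular Toeplitz matrix of the
    arterial input function). Plain forward substitution, assuming aif[0] != 0;
    the perfusion estimate is max(R). Illustrative only: noisy data require
    regularization (e.g. truncated SVD with L-curve selection)."""
    n = len(tissue)
    R = [0.0] * n
    for i in range(n):
        acc = tissue[i] / dt
        for j in range(i):
            acc -= aif[i - j] * R[j]
        R[i] = acc / aif[0]
    return R
```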
Perspective: How good is DFT for water?
Gillan, Michael J.; Alfè, Dario; Michaelides, Angelos
2016-04-01
Kohn-Sham density functional theory (DFT) has become established as an indispensable tool for investigating aqueous systems of all kinds, including those important in chemistry, surface science, biology, and the earth sciences. Nevertheless, many widely used approximations for the exchange-correlation (XC) functional describe the properties of pure water systems with an accuracy that is not fully satisfactory. The explicit inclusion of dispersion interactions generally improves the description, but there remain large disagreements between the predictions of different dispersion-inclusive methods. We present here a review of DFT work on water clusters, ice structures, and liquid water, with the aim of elucidating how the strengths and weaknesses of different XC approximations manifest themselves across this variety of water systems. Our review highlights the crucial role of dispersion in describing the delicate balance between compact and extended structures of many different water systems, including the liquid. By referring to a wide range of published work, we argue that the correct description of exchange-overlap interactions is also extremely important, so that the choice of semi-local or hybrid functional employed in dispersion-inclusive methods is crucial. The origins and consequences of beyond-2-body errors of approximate XC functionals are noted, and we also discuss the substantial differences between different representations of dispersion. We propose a simple numerical scoring system that rates the performance of different XC functionals in describing water systems, and we suggest possible future developments.
A DFT investigation of methanolysis and hydrolysis of triacetin
Limpanuparb, Taweetham; Tantirungrotechai, Yuthana; doi:10.1016/j.theochem.2010.05.022
2012-01-01
The thermodynamic and kinetic aspects of the methanolysis and hydrolysis reactions of glycerol triacetate or triacetin, a model triacylglycerol compound, were investigated by using Density Functional Theory (DFT) at the B3LYP/6-31++G(d,p) level of calculation. Twelve elementary steps of triacetin methanolysis were studied under acid-catalyzed and base-catalyzed conditions. The mechanism of acid-catalyzed methanolysis reaction which has not been reported yet for any esters was proposed. The effects of substitution, methanolysis/hydrolysis position, solvent and face of nucleophilic attack on the free energy of reaction and activation energy were examined. The prediction confirmed the facile position at the middle position of glycerol observed by NMR techniques. The calculated activation energy and the trends of those factors agree with existing experimental observations in biodiesel production.
Institute of Scientific and Technical Information of China (English)
Yan JIN; Jing-feng HUANG; Dai-liang PENG
2009-01-01
Ecological compensation is becoming one of the key multidisciplinary issues in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) estimated from remote sensing, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile conflicting economic, social, and ecological interests.
Turnhout, van M.C.; Kranenbarg, S.; Leeuwen, van J.L.
2009-01-01
Quantitative polarized light microscopy (qPLM) is a popular tool for the investigation of birefringent architectures in biological tissues. Collagen, the most abundant protein in mammals, is such a birefringent material. Interpretation of results of qPLM in terms of collagen network architecture and
Standardized methods are often used to assess the likelihood of a human-health effect from exposure to a specified hazard, and inform opinions and decisions about risk management and communication. A Quantitative Microbial Risk Assessment (QMRA) is specifically adapted to detail potential human-heal...
Studying Biology to Understand Risk: Dosimetry Models and Quantitative Adverse Outcome Pathways
Confidence in the quantitative prediction of risk is increased when the prediction is based to as great an extent as possible on the relevant biological factors that constitute the pathway from exposure to adverse outcome. With the first examples now over 40 years old, physiologi...
Millimeter-resolution acousto-optic quantitative imaging in a tissue model system
Bratchenia, A.; Molenaar, R.; van Leeuwen, T.G.; Kooyman, R.P.H.
2009-01-01
We have investigated the application of ultrasound modulated coherent light for quantitative determination of the ratio of dye concentrations and total concentration of absorbers in a blood vessel mimicking sample. A 3-mm-diam tube containing the mixture of dyes inside an Intralipid-based gel with o
Pradeep, Prachi; Povinelli, Richard J; Merrill, Stephen J; Bozdag, Serdar; Sem, Daniel S
2015-04-01
The availability of large in vitro datasets enables better insight into the mode of action of chemicals and better identification of potential mechanism(s) of toxicity. Several studies have shown that not all in vitro assays can contribute as equal predictors of in vivo carcinogenicity for development of hybrid Quantitative Structure Activity Relationship (QSAR) models. We propose two novel approaches for the use of mechanistically relevant in vitro assay data in the identification of relevant biological descriptors and development of Quantitative Biological Activity Relationship (QBAR) models for carcinogenicity prediction. We demonstrate that in vitro assay data can be used to develop QBAR models for in vivo carcinogenicity prediction via two case studies corroborated with firm scientific rationale. The case studies demonstrate the similarities between QBAR and QSAR modeling in: (i) the selection of relevant descriptors to be used in the machine learning algorithm, and (ii) the development of a computational model that maps chemical or biological descriptors to a toxic endpoint. The results of both the case studies show: (i) improved accuracy and sensitivity which is especially desirable under regulatory requirements, and (ii) overall adherence with the OECD/REACH guidelines. Such mechanism based models can be used along with QSAR models for prediction of mechanistically complex toxic endpoints. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2017-05-07
A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Thanks to the database, the presented method is also capable of automatically adjusting the models using a multi-thread grid parameter-searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, finding a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust, reproducible and user-independent solution for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
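A minimal sketch of the iterative-fitting baseline the paper improves on: a one-tissue compartment model integrated by Euler's method and fitted by least-squares grid search. The model choice, grid and arterial input function (AIF) below are illustrative, not the paper's:

```python
def one_tissue_ct(K1, k2, aif, dt):
    """Euler integration of the one-tissue compartment model
    dC_t/dt = K1*Ca(t) - k2*C_t(t), a common kinetic model for dynamic PET."""
    ct, c = [], 0.0
    for ca in aif:
        c += dt * (K1 * ca - k2 * c)
        ct.append(c)
    return ct

def grid_fit(aif, measured, dt, k1_grid, k2_grid):
    """Least-squares grid search over (K1, k2) -- a simple stand-in for the
    iterative fitting (IF) step; the paper's ML method instead searches
    parameters against a historical reference database."""
    best, best_sse = None, float("inf")
    for K1 in k1_grid:
        for k2 in k2_grid:
            model = one_tissue_ct(K1, k2, aif, dt)
            sse = sum((m - d) ** 2 for m, d in zip(model, measured))
            if sse < best_sse:
                best, best_sse = (K1, k2), sse
    return best
```

On noise-free synthetic data the grid search recovers the generating parameters exactly; with noise, the overfitting and reproducibility issues described above appear.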
Morris, Melody K; Shriver, Zachary; Sasisekharan, Ram; Lauffenburger, Douglas A
2012-01-01
Mathematical models have substantially improved our ability to predict the response of a complex biological system to perturbation, but their use is typically limited by difficulties in specifying model topology and parameter values. Additionally, incorporating entities across different biological scales ranging from molecular to organismal in the same model is not trivial. Here, we present a framework called “querying quantitative logic models” (Q2LM) for building and asking questions of constrained fuzzy logic (cFL) models. cFL is a recently developed modeling formalism that uses logic gates to describe influences among entities, with transfer functions to describe quantitative dependencies. Q2LM does not rely on dedicated data to train the parameters of the transfer functions, and it permits straightforward incorporation of entities at multiple biological scales. The Q2LM framework can be employed to ask questions such as: Which therapeutic perturbations accomplish a designated goal, and under what environmental conditions will these perturbations be effective? We demonstrate the utility of this framework for generating testable hypotheses in two examples: (i) an intracellular signaling network model; and (ii) a model for pharmacokinetics and pharmacodynamics of cell-cytokine interactions; in the latter, we validate hypotheses concerning molecular design of granulocyte colony stimulating factor. PMID:22125256
Shaik, O. S.; Kammerer, J.; Gorecki, J.; Lebiedz, D.
2005-12-01
Accurate experimental data increasingly allow the development of detailed elementary-step mechanisms for complex chemical and biochemical reaction systems. Model reduction techniques are widely applied to obtain representations in lower-dimensional phase space which are more suitable for mathematical analysis, efficient numerical simulation, and model-based control tasks. Here, we exploit a recently implemented numerical algorithm for error-controlled computation of the minimum dimension required for a still accurate reduced mechanism based on automatic time scale decomposition and relaxation of fast modes. We determine species contributions to the active (slow) dynamical modes of the reaction system and exploit this information in combination with quasi-steady-state and partial-equilibrium approximations for explicit model reduction of a novel detailed chemical mechanism for the Ru-catalyzed light-sensitive Belousov-Zhabotinsky reaction. The existence of a minimum dimension of seven is demonstrated to be mandatory for the reduced model to show good quantitative consistency with the full model in numerical simulations. We derive such a maximally reduced seven-variable model from the detailed elementary-step mechanism and demonstrate that it reproduces quantitatively accurately the dynamical features of the full model within a given accuracy tolerance.
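The quasi-steady-state idea used above can be illustrated on the textbook mechanism A → B → C with a fast intermediate B (k2 ≫ k1). This toy sketch is not the Belousov-Zhabotinsky mechanism itself; it only shows how relaxing the fast mode reduces the model's dimension:

```python
def full_model(a0, k1, k2, t_end, dt):
    """Euler integration of A -> B -> C; B is a fast intermediate when k2 >> k1."""
    a, b, c, t = a0, 0.0, 0.0, 0.0
    while t < t_end:
        da = -k1 * a
        db = k1 * a - k2 * b
        dc = k2 * b
        a += dt * da
        b += dt * db
        c += dt * dc
        t += dt
    return a, b, c

def qssa_model(a0, k1, t_end, dt):
    """Reduced model after the quasi-steady-state approximation d[B]/dt ~ 0:
    the fast mode is relaxed, leaving one slow variable A; B tracks it as
    B_ss = k1*A/k2 and C follows by mass balance."""
    a, t = a0, 0.0
    while t < t_end:
        a += dt * (-k1 * a)
        t += dt
    return a
```

With k1 = 1 and k2 = 100, the intermediate stays small and tracks its quasi-steady value, so the one-variable reduced model reproduces the slow dynamics of the three-variable mechanism.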
Martins, C J A P
2016-01-01
This book sheds new light on topological defects in widely differing systems, using the Velocity-Dependent One-Scale Model to better understand their evolution. Topological defects – cosmic strings, monopoles, domain walls or others - necessarily form at cosmological (and condensed matter) phase transitions. If they are stable and long-lived they will be fossil relics of higher-energy physics. Understanding their behaviour and consequences is a key part of any serious attempt to understand the universe, and this requires modelling their evolution. The velocity-dependent one-scale model is the only fully quantitative model of defect network evolution, and the canonical model in the field. This book provides a review of the model, explaining its physical content and describing its broad range of applicability.
Directory of Open Access Journals (Sweden)
Panpan Hou
Kv1.3 is a delayed-rectifier channel abundant in human T lymphocytes. Chronic inflammatory and autoimmune disorders lead to the over-expression of Kv1.3 in T cells. To quantitatively study the regulatory mechanism and physiological function of Kv1.3 in T cells, it is necessary to have a precise kinetic model of Kv1.3. In this study, we first established a kinetic model capable of precisely replicating all the kinetic features of Kv1.3 channels, and then constructed a T-cell model composed of ion channels, including the Ca2+-release-activated calcium (CRAC) channel, the intermediate-conductance K+ (IK) channel, the TASK channel and the Kv1.3 channel, for quantitatively simulating the changes in membrane potential and local Ca2+ signaling messengers during activation of T cells. Based on experimental data from current-clamp recordings, we demonstrated that Kv1.3 dominates the membrane potential of T cells and thereby controls Ca2+ influx via the CRAC channel. Our results revealed that deficient expression of the Kv1.3 channel weakens the Ca2+ signal, reducing secretion efficiency. This was the first successful attempt to simulate membrane potential in non-excitable cells, laying a solid basis for quantitatively studying the regulatory mechanism and physiological role of channels in non-excitable cells.
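A single Hodgkin-Huxley-style activation gate conveys the flavor of the delayed-rectifier kinetics modeled above. The voltage-dependence parameters below are illustrative placeholders, not the fitted Kv1.3 constants from the study:

```python
import math

def simulate_kv(v_mV, t_end=50.0, dt=0.01, g_max=1.0, E_K=-80.0):
    """Minimal delayed-rectifier sketch: one Hodgkin-Huxley-style activation
    gate n with a voltage-dependent steady state and a fixed time constant,
    integrated by Euler's method from n = 0. Returns (n, current) at t_end.
    Half-activation voltage, slope and tau are assumed values."""
    n_inf = 1.0 / (1.0 + math.exp(-(v_mV + 30.0) / 8.0))  # steady-state activation
    tau_n = 5.0                                            # ms, assumed constant
    n, t = 0.0, 0.0
    while t < t_end:
        n += dt * (n_inf - n) / tau_n
        t += dt
    current = g_max * n * (v_mV - E_K)   # outward K+ current above E_K
    return n, current
```

At a depolarized potential the gate opens almost fully and passes a large outward K+ current; near rest it stays mostly closed, which is how such a channel can clamp the membrane potential of a non-excitable cell.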
Choi, Hyungwon; Kim, Sinae; Gingras, Anne-Claude; Nesvizhskii, Alexey I
2010-06-22
Affinity purification followed by mass spectrometry (AP-MS) has become a common approach for identifying protein-protein interactions (PPIs) and complexes. However, data analysis and visualization often rely on generic approaches that do not take advantage of the quantitative nature of AP-MS. We present a novel computational method, nested clustering, for biclustering of label-free quantitative AP-MS data. Our approach forms bait clusters based on the similarity of quantitative interaction profiles and identifies submatrices of prey proteins showing consistent quantitative association within bait clusters. In doing so, nested clustering effectively addresses the problem of overrepresentation of interactions involving baits proteins as compared with proteins only identified as preys. The method does not require specification of the number of bait clusters, which is an advantage against existing model-based clustering methods. We illustrate the performance of the algorithm using two published intermediate scale human PPI data sets, which are representative of the AP-MS data generated from mammalian cells. We also discuss general challenges of analyzing and interpreting clustering results in the context of AP-MS data.
Downsampling of DFT Precoded Signals for the AWGN Channel
DEFF Research Database (Denmark)
Jensen, Tobias Lindstrøm; Fyhn, Karsten; Arildsen, Thomas;
2012-01-01
In this paper we propose and analyze a method for downsampling discrete Fourier transform (DFT) precoded signals. Since the symbols (in frequency) are in the constellation set, which is a subset of the entire complex plane, it is possible to detect N symbols from the DFT precoded signal when...
A Quantitative bgl Operon Model for E. coli Requires BglF Conformational Change for Sugar Transport
Chopra, Paras; Bender, Andreas
The bgl operon is responsible for the metabolism of β-glucoside sugars such as salicin or arbutin in E. coli. Its regulatory system involves both positive and negative feedback mechanisms and it can be assumed to be more complex than that of the more closely studied lac and trp operons. We have developed a quantitative model for the regulation of the bgl operon which is subject to in silico experiments investigating its behavior under different hypothetical conditions. Upon administration of 5mM salicin as an inducer our model shows 80-fold induction, which compares well with the 60-fold induction measured experimentally. Under practical conditions 5-10mM inducer are employed, which is in line with the minimum inducer concentration of 1mM required by our model. The necessity of BglF conformational change for sugar transport has been hypothesized previously, and in line with those hypotheses our model shows only minor induction if conformational change is not allowed. Overall, this first quantitative model for the bgl operon gives reasonable predictions that are close to experimental results (where measured). It will be further refined as values of the parameters are determined experimentally. The model was developed in Systems Biology Markup Language (SBML) and it is available from the authors and from the Biomodels repository [www.ebi.ac.uk/biomodels].
A DFT+nonhomogeneous DMFT approach for finite systems.
Kabir, Alamgir; Turkowski, Volodymyr; Rahman, Talat S
2015-04-01
For reliable and efficient inclusion of electron-electron correlation effects in nanosystems, we formulate a combined density functional theory/nonhomogeneous dynamical mean-field theory (DFT+DMFT) approach which employs an approximate iterated perturbation theory impurity solver. We apply the method to examine the size-dependent magnetic properties of iron nanoparticles containing 11-100 atoms. We show that for the majority of clusters the DFT+DMFT solution is in very good agreement with experimental data, much better than the DFT and DFT+U results. In particular, it reproduces the oscillations in magnetic moment with size that are observed experimentally. We thus demonstrate that the DFT+DMFT approach can be used for an accurate and realistic description of nanosystems containing about a hundred atoms.
Jorge, Inmaculada; Navarro, Pedro; Martínez-Acedo, Pablo; Núñez, Estefanía; Serrano, Horacio; Alfranca, Arántzazu; Redondo, Juan Miguel; Vázquez, Jesús
2009-01-01
Statistical models for the analysis of protein expression changes by stable isotope labeling are still poorly developed, particularly for data obtained by 16O/18O labeling. Moreover, large-scale test experiments to validate the null hypothesis are lacking. Although the study of mechanisms underlying biological actions promoted by vascular endothelial growth factor (VEGF) on endothelial cells is of considerable interest, quantitative proteomics studies on this subject are scarce and have been performed after exposing cells to the factor for long periods of time. In this work we present the largest quantitative proteomics study to date on the short-term effects of VEGF on human umbilical vein endothelial cells by 18O/16O labeling. Current statistical models based on normality and variance homogeneity were found unsuitable to describe the null hypothesis in a large-scale test experiment performed on these cells, producing false expression changes. A random effects model was developed including four different sources of variance at the spectrum-fitting, scan, peptide, and protein levels. With the new model the number of outliers at the scan and peptide levels was negligible in three large-scale experiments, and only one false protein expression change was observed in the test experiment among more than 1000 proteins. The new model allowed the detection of significant protein expression changes upon VEGF stimulation for 4 and 8 h. The consistency of the changes observed at 4 h was confirmed by a replica at a smaller scale and further validated by Western blot analysis of some proteins. Most of the observed changes have not been described previously and are consistent with a pattern of protein expression that dynamically changes over time following the evolution of the angiogenic response. With this statistical model the 18O labeling approach emerges as a very promising and robust alternative to perform quantitative