WorldWideScience

Sample records for scale biomolecular simulations

  1. Development of an informatics infrastructure for data exchange of biomolecular simulations: Architecture, data models and ontology.

    Science.gov (United States)

    Thibault, J C; Roe, D R; Eilbeck, K; Cheatham, T E; Facelli, J C

    2015-01-01

    Biomolecular simulations aim to simulate the structure, dynamics, interactions, and energetics of complex biomolecular systems. With recent advances in hardware, it is now possible not only to use more complex and accurate models, but also to reach time scales that are biologically significant. Molecular simulations have become a standard tool for toxicology and pharmacology research, but organizing and sharing data - both within the same organization and among different ones - remains a substantial challenge. In this paper we review our recent work leading to the development of a comprehensive informatics infrastructure to facilitate the organization and exchange of biomolecular simulation data. Our efforts include the design of data models and dictionary tools that standardize the metadata used to describe biomolecular simulations, the development of a thesaurus and an ontology to support computational reasoning when searching for biomolecular simulations in distributed environments, and the development of systems based on these models to manage and share the data at large scale (iBIOMES) and within smaller groups of researchers at laboratory scale (iBIOMES Lite).
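
    The metadata standardization described above can be illustrated with a minimal sketch; the field names below are hypothetical, not the actual iBIOMES data model.

```python
import json

# Hypothetical minimal metadata record for one MD simulation; the real
# iBIOMES data model is far richer, this only illustrates the idea of
# standardized, machine-readable descriptors.
def make_simulation_record(system, force_field, timestep_fs, length_ns, engine):
    """Bundle descriptors needed to find and compare simulations."""
    return {
        "system": system,            # e.g. a PDB ID or molecule name
        "force_field": force_field,  # e.g. "AMBER ff14SB"
        "timestep_fs": timestep_fs,  # integration step (femtoseconds)
        "length_ns": length_ns,      # simulated time (nanoseconds)
        "engine": engine,            # MD package used
    }

record = make_simulation_record("1UBQ", "AMBER ff14SB", 2.0, 100.0, "Amber")
serialized = json.dumps(record, sort_keys=True)  # exchangeable form
```
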

  2. Stochastic Simulation of Biomolecular Reaction Networks Using the Biomolecular Network Simulator Software

    National Research Council Canada - National Science Library

    Frazier, John; Chusak, Yaroslav; Foy, Brent

    2008-01-01

    .... The software uses either exact or approximate stochastic simulation algorithms for generating Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks...
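
    A minimal sketch of an exact stochastic simulation in the spirit of the abstract (Gillespie's direct method for a single decay reaction A -> B; a toy example, not the Biomolecular Network Simulator itself).

```python
import math
import random

# Gillespie's direct method for a single first-order reaction A -> B
# with stochastic rate constant c (a toy example; names illustrative).
def simulate_decay(n_a0, c, t_end, seed=0):
    rng = random.Random(seed)
    t, n_a = 0.0, n_a0
    while n_a > 0:
        a0 = c * n_a                         # total propensity
        tau = -math.log(rng.random()) / a0   # exponential waiting time
        if t + tau > t_end:
            break                            # no firing before t_end
        t += tau
        n_a -= 1                             # the reaction fires once
    return n_a

# About n_a0 * exp(-c * t_end) ~ 368 molecules of A should remain
remaining = simulate_decay(n_a0=1000, c=1.0, t_end=1.0)
```
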

  3. Biomolecular simulations on petascale: promises and challenges

    International Nuclear Information System (INIS)

    Agarwal, Pratul K; Alam, Sadaf R

    2006-01-01

    Proteins work as highly efficient machines at the molecular level and are responsible for a variety of processes in all living cells. There is wide interest in understanding these machines, with implications for the biochemical/biotechnology industries as well as for health-related fields. Over the last century, investigations of proteins based on a variety of experimental techniques have provided a wealth of information. More recently, theoretical and computational modeling using large-scale simulations has been providing novel insights into the functioning of these machines. The next generation of supercomputers, with petascale computing power, holds great promise as well as challenges for biomolecular simulation scientists. We briefly discuss the progress being made in this area.

  4. Biomolecular simulation: historical picture and future perspectives.

    Science.gov (United States)

    van Gunsteren, Wilfred F; Dolenc, Jozica

    2008-02-01

    Over the last 30 years, computation based on molecular models has come to play an increasingly important role in biology, biological chemistry and biophysics. Since only a very limited number of properties of biomolecular systems are actually accessible to measurement by experimental means, computer simulation complements experiments by providing not only averages, but also distributions and time series of any definable quantity, observable or not. Biomolecular simulation may be used (i) to interpret experimental data, (ii) to provoke new experiments, (iii) to replace experiments and (iv) to protect intellectual property. Progress over the last 30 years is sketched and perspectives for the future are outlined.

  5. Biomolecular modelling and simulations

    CERN Document Server

    Karabencheva-Christova, Tatyana

    2014-01-01

    Published continuously since 1944, the Advances in Protein Chemistry and Structural Biology series is the essential resource for protein chemists. Each volume brings forth new information about protocols and analysis of proteins. Each thematically organized volume is guest edited by leading experts in a broad range of protein-related topics. Describes advances in biomolecular modelling and simulations Chapters are written by authorities in their field Targeted to a wide audience of researchers, specialists, and students The information provided in the volume is well supported by a number of high quality illustrations, figures, and tables.

  6. GROMOS++ Software for the Analysis of Biomolecular Simulation Trajectories

    NARCIS (Netherlands)

    Eichenberger, A.P.; Allison, J.R.; Dolenc, J.; Geerke, D.P.; Horta, B.A.C.; Meier, K; Oostenbrink, B.C.; Schmid, N.; Steiner, D; Wang, D.; van Gunsteren, W.F.

    2011-01-01

    GROMOS++ is a set of C++ programs for pre- and postprocessing of molecular dynamics simulation trajectories and as such is part of the GROningen MOlecular Simulation software for (bio)molecular simulation. It contains more than 70 programs that can be used to prepare data for the production of

  7. Biomolecular structure refinement using the GROMOS simulation software

    International Nuclear Information System (INIS)

    Schmid, Nathan; Allison, Jane R.; Dolenc, Jožica; Eichenberger, Andreas P.; Kunz, Anna-Pitschna E.; Gunsteren, Wilfred F. van

    2011-01-01

    For the understanding of cellular processes, the molecular structure of biomolecules has to be accurately determined. Initial models can be significantly improved by structure refinement techniques. Here, we present the refinement methods and analysis techniques implemented in the GROMOS software for biomolecular simulation. The methodology and some implementation details of the computation of NMR NOE data, ³J-couplings and residual dipolar couplings, X-ray scattering intensities from crystals and solutions, and neutron scattering intensities used in GROMOS are described, and refinement strategies and concepts are discussed using example applications. The GROMOS software allows structure refinement combining different types of experimental data with different types of restraining functions, while using a variety of methods to enhance conformational searching and sampling and the thermodynamically calibrated GROMOS force field for biomolecular simulation.
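
    The ³J-couplings mentioned above are related to torsion angles through the Karplus relation; a sketch with illustrative coefficients (refinement codes such as GROMOS use values calibrated per coupling type).

```python
import math

# Karplus relation linking a 3J-coupling constant to a torsion angle:
# J(phi) = A cos^2(phi) + B cos(phi) + C. The coefficients below are
# illustrative; refinement codes use values calibrated per coupling type.
def karplus_j(phi_deg, A=6.4, B=-1.4, C=1.9):
    phi = math.radians(phi_deg)
    return A * math.cos(phi) ** 2 + B * math.cos(phi) + C

j_trans = karplus_j(180.0)  # the trans conformation maximizes J
```
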

  8. Biomolecular structure refinement using the GROMOS simulation software

    Energy Technology Data Exchange (ETDEWEB)

    Schmid, Nathan; Allison, Jane R.; Dolenc, Jozica; Eichenberger, Andreas P.; Kunz, Anna-Pitschna E.; Gunsteren, Wilfred F. van, E-mail: wfvgn@igc.phys.chem.ethz.ch [Swiss Federal Institute of Technology ETH, Laboratory of Physical Chemistry (Switzerland)

    2011-11-15

    For the understanding of cellular processes, the molecular structure of biomolecules has to be accurately determined. Initial models can be significantly improved by structure refinement techniques. Here, we present the refinement methods and analysis techniques implemented in the GROMOS software for biomolecular simulation. The methodology and some implementation details of the computation of NMR NOE data, ³J-couplings and residual dipolar couplings, X-ray scattering intensities from crystals and solutions, and neutron scattering intensities used in GROMOS are described, and refinement strategies and concepts are discussed using example applications. The GROMOS software allows structure refinement combining different types of experimental data with different types of restraining functions, while using a variety of methods to enhance conformational searching and sampling and the thermodynamically calibrated GROMOS force field for biomolecular simulation.

  9. Application of Hidden Markov Models in Biomolecular Simulations.

    Science.gov (United States)

    Shukla, Saurabh; Shamsi, Zahra; Moffett, Alexander S; Selvam, Balaji; Shukla, Diwakar

    2017-01-01

    Hidden Markov models (HMMs) provide a framework for analyzing large trajectories from biomolecular simulation datasets. HMMs decompose the conformational space of a biological molecule into a finite number of states that interconvert with certain rates. HMMs simplify long-timescale trajectories for human comprehension and allow comparison of simulations with experimental data. In this chapter, we provide an overview of building HMMs for analyzing biomolecular simulation datasets. We demonstrate the procedure for building a hidden Markov model for a Met-enkephalin peptide simulation dataset and compare the timescales of the process.
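
    A minimal sketch of the core HMM machinery (the forward algorithm on a hypothetical two-state model; real analyses use many states fitted to trajectory data).

```python
# Forward algorithm for a hidden Markov model: likelihood of an observed
# sequence given initial distribution pi, transition matrix T and
# emission matrix E. The two-state model below is a toy stand-in for
# interconverting conformational states.
def forward_likelihood(obs, pi, T, E):
    n = len(pi)
    alpha = [pi[s] * E[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * T[r][s] for r in range(n)) * E[s][o]
                 for s in range(n)]
    return sum(alpha)

pi = [0.5, 0.5]
T = [[0.9, 0.1], [0.2, 0.8]]   # slow interconversion between two states
E = [[0.8, 0.2], [0.3, 0.7]]   # each state prefers one observable
lik = forward_likelihood([0, 0, 1], pi, T, E)
```
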

  10. Theoretical restrictions on longest implicit time scales in Markov state models of biomolecular dynamics

    Science.gov (United States)

    Sinitskiy, Anton V.; Pande, Vijay S.

    2018-01-01

    Markov state models (MSMs) have been widely used to analyze computer simulations of various biomolecular systems. They can capture conformational transitions much slower than the average or maximal length of a single molecular dynamics (MD) trajectory from the set of trajectories used to build the MSM. A rule of thumb claiming that the slowest implicit time scale captured by an MSM should be comparable in order of magnitude to the aggregate duration of all MD trajectories used to build this MSM has been known in the field. However, this rule has never been formally proved. In this work, we present analytical results for the slowest time scale in several types of MSMs, supporting the above rule. We conclude that the slowest implicit time scale equals the product of the aggregate sampling and four factors that quantify: (1) how much statistics on the conformational transitions corresponding to the longest implicit time scale is available, (2) how good the sampling of the destination Markov state is, (3) the gain in statistics from using a sliding window for counting transitions between Markov states, and (4) a bias in the estimate of the implicit time scale arising from finite sampling of the conformational transitions. We demonstrate that in many practically important cases all these four factors are on the order of unity, and we analyze possible scenarios that could lead to their significant deviation from unity. Overall, we provide for the first time analytical results on the slowest time scales captured by MSMs. These results can guide further practical applications of MSMs to biomolecular dynamics and allow for higher computational efficiency of simulations.
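
    For a two-state MSM, the slowest implied timescale discussed above has a closed form; a sketch with illustrative transition probabilities.

```python
import math

# Closed-form slowest implied timescale of a two-state MSM with lag tau:
# the transition matrix [[1-p12, p12], [p21, 1-p21]] has second
# eigenvalue lambda_2 = 1 - p12 - p21, and t_2 = -tau / ln(lambda_2).
# The probabilities below are illustrative.
def implied_timescale(p12, p21, tau):
    lam2 = 1.0 - p12 - p21
    return -tau / math.log(lam2)

t2 = implied_timescale(p12=0.01, p21=0.02, tau=1.0)  # slow interconversion
```
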

  11. Converting biomolecular modelling data based on an XML representation.

    Science.gov (United States)

    Sun, Yudong; McKeever, Steve

    2008-08-25

    Biomolecular modelling has provided computational simulation based methods for investigating biological processes, from the quantum chemical to the cellular level. Modelling such microscopic processes requires an atomic-level description of the biological system and proceeds in fine timesteps; consequently the simulations are extremely computationally demanding. To tackle this limitation, different biomolecular models have to be integrated in order to achieve high-performance simulations. Integrating diverse biomolecular models requires converting molecular data between the different data representations of the different models. This data conversion is often non-trivial, requires extensive human input and is inevitably error prone. In this paper we present an automated data conversion method for biomolecular simulations between molecular dynamics and quantum mechanics/molecular mechanics models. Our approach is developed around an XML data representation called BioSimML (Biomolecular Simulation Markup Language). BioSimML provides a domain-specific data representation for biomolecular modelling which can efficiently support data interoperability between different biomolecular simulation models and data formats.
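
    The XML-mediated conversion can be sketched with Python's standard library; the element and attribute names below are hypothetical, not the published BioSimML schema.

```python
import xml.etree.ElementTree as ET

# Sketch of XML-mediated data conversion in the spirit of BioSimML;
# the element and attribute names here are hypothetical, not the
# published schema.
def atoms_to_xml(atoms):
    """atoms: list of (name, element, x, y, z) tuples -> XML string."""
    root = ET.Element("molecule")
    for name, elem, x, y, z in atoms:
        a = ET.SubElement(root, "atom", name=name, element=elem)
        ET.SubElement(a, "position", x=str(x), y=str(y), z=str(z))
    return ET.tostring(root, encoding="unicode")

def xml_to_atoms(xml_text):
    """Inverse conversion, e.g. feeding a different simulation model."""
    root = ET.fromstring(xml_text)
    return [(a.get("name"), a.get("element"),
             float(a.find("position").get("x")),
             float(a.find("position").get("y")),
             float(a.find("position").get("z")))
            for a in root.iter("atom")]

atoms = [("CA", "C", 1.0, 2.0, 3.0)]
roundtrip = xml_to_atoms(atoms_to_xml(atoms))
```
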

  12. Converting Biomolecular Modelling Data Based on an XML Representation

    Directory of Open Access Journals (Sweden)

    Sun Yudong

    2008-06-01

    Full Text Available Biomolecular modelling has provided computational simulation based methods for investigating biological processes, from the quantum chemical to the cellular level. Modelling such microscopic processes requires an atomic-level description of the biological system and proceeds in fine timesteps; consequently the simulations are extremely computationally demanding. To tackle this limitation, different biomolecular models have to be integrated in order to achieve high-performance simulations. Integrating diverse biomolecular models requires converting molecular data between the different data representations of the different models. This data conversion is often non-trivial, requires extensive human input and is inevitably error prone. In this paper we present an automated data conversion method for biomolecular simulations between molecular dynamics and quantum mechanics/molecular mechanics models. Our approach is developed around an XML data representation called BioSimML (Biomolecular Simulation Markup Language). BioSimML provides a domain-specific data representation for biomolecular modelling which can efficiently support data interoperability between different biomolecular simulation models and data formats.

  13. New Distributed Multipole Methods for Accurate Electrostatics for Large-Scale Biomolecular Simulations

    Science.gov (United States)

    Sagui, Celeste

    2006-03-01

    An accurate and numerically efficient treatment of electrostatics is essential for biomolecular simulations, as this stabilizes much of the delicate 3-d structure associated with biomolecules. Currently, force fields such as AMBER and CHARMM assign ``partial charges'' to every atom in a simulation in order to model the interatomic electrostatic forces, so that the calculation of the electrostatics rapidly becomes the computational bottleneck in large-scale simulations. There are two main issues associated with the current treatment of classical electrostatics: (i) how does one eliminate, in a physically meaningful way, the artifacts associated with the point charges used in the force fields (e.g., the underdetermined nature of the current RESP fitting procedure for large, flexible molecules)? (ii) how does one efficiently simulate the very costly long-range electrostatic interactions? Recently, we have dealt with both of these challenges as follows. In order to improve the description of molecular electrostatic potentials (MEPs), a new distributed multipole analysis based on localized functions -- Wannier, Boys, and Edmiston-Ruedenberg -- was introduced, which allows for a first-principles calculation of the partial charges and multipoles. Through a suitable generalization of the particle mesh Ewald (PME) and multigrid methods, one can treat electrostatic multipoles all the way to hexadecapoles without prohibitive extra costs. The importance of these methods for large-scale simulations will be discussed and exemplified by simulations of polarizable DNA models.
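
    The lowest terms of a distributed multipole description can be sketched directly from partial charges; a toy example, not the Wannier/Boys-based analysis of the abstract.

```python
# Monopole (net charge) and dipole vector of a set of point charges:
# the first two terms of a distributed multipole description. Higher
# moments (quadrupole ... hexadecapole) follow the same pattern.
def monopole_dipole(charges, positions):
    q_total = sum(charges)
    dipole = [sum(q * r[k] for q, r in zip(charges, positions))
              for k in range(3)]
    return q_total, dipole

# Toy system: +0.5 e and -0.5 e separated by 2 units along x
q, mu = monopole_dipole([0.5, -0.5], [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)])
```
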

  14. ANCA: Anharmonic Conformational Analysis of Biomolecular Simulations.

    Science.gov (United States)

    Parvatikar, Akash; Vacaliuc, Gabriel S; Ramanathan, Arvind; Chennubhotla, S Chakra

    2018-05-08

    Anharmonicity in time-dependent conformational fluctuations is noted to be a key feature of the functional dynamics of biomolecules. Although anharmonic events are rare, long-timescale (μs-ms and beyond) simulations facilitate probing of such events. We have previously developed quasi-anharmonic analysis to resolve higher-order spatial correlations and characterize anharmonicity in biomolecular simulations. In this article, we have extended this toolbox to resolve higher-order temporal correlations and built a scalable Python package called anharmonic conformational analysis (ANCA). ANCA has modules to: 1) measure anharmonicity in the form of higher-order statistics and its variation as a function of time, 2) output a storyboard representation of the simulations to identify key anharmonic conformational events, and 3) identify putative anharmonic conformational substates and visualize transitions between these substates. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
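
    The higher-order statistics ANCA relies on can be illustrated with excess kurtosis, which vanishes for Gaussian (harmonic) fluctuations; a toy sketch, not the ANCA package itself.

```python
# Excess kurtosis as a simple higher-order (fourth moment) statistic:
# zero for Gaussian (harmonic) fluctuations, so large deviations flag
# anharmonic dynamics. The two-state signal below is illustrative.
def excess_kurtosis(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (m2 * m2) - 3.0

# A symmetric two-state (bimodal) fluctuation has excess kurtosis -2
bimodal = [-1.0] * 500 + [1.0] * 500
k = excess_kurtosis(bimodal)
```
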

  15. Electrostatics in biomolecular simulations: where are we now and where are we heading?

    NARCIS (Netherlands)

    Karttunen, M.E.J.; Rottler, J.; Vattulainen, I.; Sagui, C.

    2008-01-01

    Chapter 2. In this review, we discuss current methods and developments in the treatment of electrostatic interactions in biomolecular and soft matter simulations. We review the current ‘work horses’, namely Ewald summation based methods such as the Particle-Mesh Ewald, and others, and also newer

  16. Perspective: Markov models for long-timescale biomolecular dynamics

    International Nuclear Information System (INIS)

    Schwantes, C. R.; McGibbon, R. T.; Pande, V. S.

    2014-01-01

    Molecular dynamics simulations have the potential to provide atomic-level detail and insight into important questions in chemical physics that cannot be observed in typical experiments. However, simply generating a long trajectory is insufficient, as researchers must be able to transform the data in a simulation trajectory into specific scientific insights. Although this analysis step has often been taken for granted, it deserves further attention as large-scale simulations become increasingly routine. In this perspective, we discuss the application of Markov models to the analysis of large-scale biomolecular simulations. We draw attention to recent improvements in the construction of these models as well as several important open issues. In addition, we highlight recent theoretical advances that pave the way for a new generation of models of molecular kinetics.
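
    The model-construction step discussed above starts from transition counts in a discretized trajectory; a minimal sketch (sliding-window counting at lag tau, then row normalization; the toy trajectory is illustrative).

```python
# Estimating a Markov model from a discretized trajectory: count
# transitions at lag tau with a sliding window, then row-normalize.
def transition_matrix(dtraj, n_states, tau):
    counts = [[0] * n_states for _ in range(n_states)]
    for t in range(len(dtraj) - tau):     # sliding window over the data
        counts[dtraj[t]][dtraj[t + tau]] += 1
    T = []
    for row in counts:
        tot = sum(row)
        T.append([c / tot for c in row] if tot else [0.0] * n_states)
    return T

dtraj = [0, 0, 1, 1, 0, 0, 1, 1]          # toy two-state trajectory
T = transition_matrix(dtraj, n_states=2, tau=1)
```
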

  17. Perspective: Markov models for long-timescale biomolecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Schwantes, C. R.; McGibbon, R. T. [Department of Chemistry, Stanford University, Stanford, California 94305 (United States); Pande, V. S., E-mail: pande@stanford.edu [Department of Chemistry, Stanford University, Stanford, California 94305 (United States); Department of Computer Science, Stanford University, Stanford, California 94305 (United States); Department of Structural Biology, Stanford University, Stanford, California 94305 (United States); Biophysics Program, Stanford University, Stanford, California 94305 (United States)

    2014-09-07

    Molecular dynamics simulations have the potential to provide atomic-level detail and insight into important questions in chemical physics that cannot be observed in typical experiments. However, simply generating a long trajectory is insufficient, as researchers must be able to transform the data in a simulation trajectory into specific scientific insights. Although this analysis step has often been taken for granted, it deserves further attention as large-scale simulations become increasingly routine. In this perspective, we discuss the application of Markov models to the analysis of large-scale biomolecular simulations. We draw attention to recent improvements in the construction of these models as well as several important open issues. In addition, we highlight recent theoretical advances that pave the way for a new generation of models of molecular kinetics.

  18. A fast mollified impulse method for biomolecular atomistic simulations

    Energy Technology Data Exchange (ETDEWEB)

    Fath, L., E-mail: lukas.fath@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Hochbruck, M., E-mail: marlis.hochbruck@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Singh, C.V., E-mail: chandraveer.singh@utoronto.ca [Department of Materials Science & Engineering, University of Toronto (Canada)

    2017-03-15

    Classical integration methods for molecular dynamics are inherently limited by resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear systems arising from constraints. In this work we follow a different approach, based on corotation, for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and its ease of implementation in standard software without Hessians or constraint solvers. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computation of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.
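
    The impulse idea underlying the mollified method can be sketched for a 1D toy problem (the plain, unmollified impulse/r-RESPA splitting; parameters are illustrative).

```python
# Impulse-style multiple time stepping (the unmollified ancestor of the
# mollified impulse method): the cheap fast force is integrated with a
# small inner step, the expensive slow force is applied as a kick at
# each outer step. Parameters below are illustrative.
def impulse_step(x, v, m, f_fast, f_slow, dt, inner):
    v += 0.5 * (dt / m) * f_slow(x)       # outer half-kick (slow force)
    h = dt / inner
    for _ in range(inner):                # inner velocity-Verlet loop
        v += 0.5 * (h / m) * f_fast(x)
        x += h * v
        v += 0.5 * (h / m) * f_fast(x)
    v += 0.5 * (dt / m) * f_slow(x)       # outer half-kick
    return x, v

# Toy split: stiff spring (fast) plus weak spring (slow), total k = 101
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -1.0 * x
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = impulse_step(x, v, 1.0, f_fast, f_slow, dt=0.05, inner=10)
energy = 0.5 * v * v + 0.5 * 101.0 * x * x   # should stay near 50.5
```
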

  19. RPYFMM: Parallel adaptive fast multipole method for Rotne-Prager-Yamakawa tensor in biomolecular hydrodynamics simulations

    Science.gov (United States)

    Guan, W.; Cheng, X.; Huang, J.; Huber, G.; Li, W.; McCammon, J. A.; Zhang, B.

    2018-06-01

    RPYFMM is a software package for the efficient evaluation of the potential field governed by the Rotne-Prager-Yamakawa (RPY) tensor interactions in biomolecular hydrodynamics simulations. In our algorithm, the RPY tensor is decomposed as a linear combination of four Laplace interactions, each of which is evaluated using the adaptive fast multipole method (FMM) (Greengard and Rokhlin, 1997) where the exponential expansions are applied to diagonalize the multipole-to-local translation operators. RPYFMM offers a unified execution on both shared and distributed memory computers by leveraging the DASHMM library (DeBuhr et al., 2016, 2018). Preliminary numerical results show that the interactions for a molecular system of 15 million particles (beads) can be computed within one second on a Cray XC30 cluster using 12,288 cores, while achieving approximately 54% strong-scaling efficiency.
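
    For reference, the pairwise RPY tensor that RPYFMM evaluates can be written down directly for the non-overlapping case (r >= 2a); direct evaluation of all pairs is O(N^2), which is the cost the FMM removes.

```python
import math

# Pairwise Rotne-Prager-Yamakawa mobility tensor for two beads of
# radius a in a solvent of viscosity eta, non-overlapping case r >= 2a:
# M = [ (1 + 2a^2/3r^2) I + (1 - 2a^2/r^2) rhat rhat ] / (8 pi eta r)
def rpy_tensor(r_vec, a, eta):
    r = math.sqrt(sum(c * c for c in r_vec))
    rhat = [c / r for c in r_vec]
    c1 = (1.0 + 2.0 * a * a / (3.0 * r * r)) / (8.0 * math.pi * eta * r)
    c2 = (1.0 - 2.0 * a * a / (r * r)) / (8.0 * math.pi * eta * r)
    return [[c1 * (1.0 if i == j else 0.0) + c2 * rhat[i] * rhat[j]
             for j in range(3)] for i in range(3)]

M = rpy_tensor([3.0, 0.0, 0.0], a=1.0, eta=1.0)  # reduced units
```
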

  20. Biochemical Stability Analysis of Nano Scaled Contrast Agents Used in Biomolecular Imaging Detection of Tumor Cells

    Science.gov (United States)

    Kim, Jennifer; Kyung, Richard

    Imaging contrast agents are materials used to improve the visibility of internal body structures during imaging. Many agents used for contrast enhancement are now studied empirically and computationally by researchers. Among the various imaging techniques, magnetic resonance imaging (MRI) has become a major diagnostic tool in many clinical specialties due to its non-invasive character and its safety with regard to ionizing radiation exposure. Recently, researchers have prepared aqueous fullerene nanoparticles using electrochemical methods. In this paper, computational simulations of the thermodynamic stabilities of nanoscale contrast agents that can be used in biomolecular imaging detection of tumor cells are presented, using nanomaterials such as fluorescent functionalized fullerenes. In addition, the stability and safety of different types of contrast agents composed of metal oxide a, b, and c are tested in the imaging process. Through analysis of the computational simulations, the stabilities of the contrast agents, determined by the optimized energies of the conformations, are presented, and the resulting numerical data are compared. In addition, Density Functional Theory (DFT) is used to model the electronic properties of the compound.

  1. Modeling, Analysis, Simulation, and Synthesis of Biomolecular Networks

    National Research Council Canada - National Science Library

    Ruben, Harvey; Kumar, Vijay; Sokolsky, Oleg

    2006-01-01

    ...) a first example of reachability analysis applied to a biomolecular system (lactose induction), 4) a model of tetracycline resistance that discriminates between two possible mechanisms for tetracycline diffusion through the cell membrane, and 5...

  2. Biomolecular condensates: organizers of cellular biochemistry.

    Science.gov (United States)

    Banani, Salman F; Lee, Hyun O; Hyman, Anthony A; Rosen, Michael K

    2017-05-01

    Biomolecular condensates are micron-scale compartments in eukaryotic cells that lack surrounding membranes but function to concentrate proteins and nucleic acids. These condensates are involved in diverse processes, including RNA metabolism, ribosome biogenesis, the DNA damage response and signal transduction. Recent studies have shown that liquid-liquid phase separation driven by multivalent macromolecular interactions is an important organizing principle for biomolecular condensates. With this physical framework, it is now possible to explain how the assembly, composition, physical properties and biochemical and cellular functions of these important structures are regulated.

  3. Thermodynamic properties of water solvating biomolecular surfaces

    Science.gov (United States)

    Heyden, Matthias

    Changes in the potential energy and entropy of water molecules hydrating biomolecular interfaces play a significant role in biomolecular solubility and association. Free energy perturbation and thermodynamic integration methods allow the calculation of free energy differences between two states from simulations. However, these methods are computationally demanding and do not provide insights into individual thermodynamic contributions, i.e. changes in the solvent energy or entropy. Here, we employ methods to spatially resolve distributions of hydration water thermodynamic properties in the vicinity of biomolecular surfaces. This allows direct insights into thermodynamic signatures of the hydration of hydrophobic and hydrophilic solvent accessible sites of proteins and small molecules, and comparisons to ideal model surfaces. We correlate dynamic properties of hydration water molecules, i.e. translational and rotational mobility, with their thermodynamics. The latter can be used as a guide to extract thermodynamic information from experimental measurements of site-resolved water dynamics. Further, we study energy-entropy compensation of water at different hydration sites of biomolecular surfaces. This work is supported by the Cluster of Excellence RESOLV (EXC 1069) funded by the Deutsche Forschungsgemeinschaft.

  4. Biomolecular engineering for nanobio/bionanotechnology

    Science.gov (United States)

    Nagamune, Teruyuki

    2017-04-01

    Biomolecular engineering can be used to purposefully manipulate biomolecules, such as peptides, proteins, nucleic acids and lipids, within the framework of the relations among their structures, functions and properties, as well as their applicability to such areas as developing novel biomaterials, biosensing, bioimaging, and clinical diagnostics and therapeutics. Nanotechnology can also be used to design and tune the sizes, shapes, properties and functionality of nanomaterials. As such, there are considerable overlaps between nanotechnology and biomolecular engineering, in that both are concerned with the structure and behavior of materials on the nanometer scale or smaller. Therefore, in combination with nanotechnology, biomolecular engineering is expected to open up new fields of nanobio/bionanotechnology and to contribute to the development of novel nanobiomaterials, nanobiodevices and nanobiosystems. This review highlights recent studies using engineered biological molecules (e.g., oligonucleotides, peptides, proteins, enzymes, polysaccharides, lipids, biological cofactors and ligands) combined with functional nanomaterials in nanobio/bionanotechnology applications, including therapeutics, diagnostics, biosensing, bioanalysis and biocatalysts. Furthermore, this review focuses on five areas of recent advances in biomolecular engineering: (a) nucleic acid engineering, (b) gene engineering, (c) protein engineering, (d) chemical and enzymatic conjugation technologies, and (e) linker engineering. Precisely engineered nanobiomaterials, nanobiodevices and nanobiosystems are anticipated to emerge as next-generation platforms for bioelectronics, biosensors, biocatalysts, molecular imaging modalities, biological actuators, and biomedical applications.

  5. Biomolecular electrostatics and solvation: a computational perspective.

    Science.gov (United States)

    Ren, Pengyu; Chun, Jaehun; Thomas, Dennis G; Schnieders, Michael J; Marucho, Marcelo; Zhang, Jiajing; Baker, Nathan A

    2012-11-01

    An understanding of molecular interactions is essential for insight into biological systems at the molecular scale. Among the various components of molecular interactions, electrostatics are of special importance because of their long-range nature and their influence on polar or charged molecules, including water, aqueous ions, proteins, nucleic acids, carbohydrates, and membrane lipids. In particular, robust models of electrostatic interactions are essential for understanding the solvation properties of biomolecules and the effects of solvation upon biomolecular folding, binding, enzyme catalysis, and dynamics. Electrostatics, therefore, are of central importance to understanding biomolecular structure and modeling interactions within and among biological molecules. This review discusses the solvation of biomolecules with a computational biophysics view toward describing the phenomenon. While our main focus lies on the computational aspect of the models, we provide an overview of the basic elements of biomolecular solvation (e.g. solvent structure, polarization, ion binding, and non-polar behavior) in order to provide a background to understand the different types of solvation models.
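
    The simplest continuum picture of the ion screening discussed above is the Debye-Hueckel potential; a sketch in reduced units.

```python
import math

# Screened Coulomb (Debye-Hueckel) potential in reduced units:
# phi(r) = q * exp(-kappa * r) / (4 * pi * eps * r), where 1/kappa is
# the Debye screening length set by the ionic strength of the solvent.
def screened_potential(q, r, kappa, eps):
    return q * math.exp(-kappa * r) / (4.0 * math.pi * eps * r)

# Screening always reduces the bare Coulomb potential (kappa = 0)
bare = screened_potential(1.0, 2.0, 0.0, 1.0)
screened = screened_potential(1.0, 2.0, 1.0, 1.0)
```
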

  6. H++ 3.0: automating pK prediction and the preparation of biomolecular structures for atomistic molecular modeling and simulations.

    Science.gov (United States)

    Anandakrishnan, Ramu; Aguilar, Boris; Onufriev, Alexey V

    2012-07-01

    The accuracy of atomistic biomolecular modeling and simulation studies depends on the accuracy of the input structures. Preparing these structures for an atomistic modeling task, such as a molecular dynamics (MD) simulation, can involve the use of a variety of different tools for correcting errors, adding missing atoms, filling valences with hydrogens, predicting pK values for titratable amino acids, assigning predefined partial charges and radii to all atoms, and generating force field parameter/topology files for MD. Identifying, installing and effectively using the appropriate tools for each of these tasks can be difficult for novice users and time-consuming for experienced ones. H++ (http://biophysics.cs.vt.edu/) is a free open-source web server that automates the above key steps in the preparation of biomolecular structures for molecular modeling and simulations. H++ also performs extensive error and consistency checking, providing error/warning messages together with suggested corrections. In addition to numerous minor improvements, the latest version of H++ includes several new capabilities and options: fixing erroneous (flipped) side-chain conformations for HIS, GLN and ASN, including a ligand in the input structure, processing nucleic acid structures, and generating a solvent box with a specified number of common ions for explicit-solvent MD.
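
    The pK values H++ predicts determine protonation states through the Henderson-Hasselbalch relation; a minimal sketch.

```python
# Henderson-Hasselbalch relation behind pK-based protonation assignment:
# fraction of a titratable site that is protonated at a given pH.
def protonated_fraction(pK, pH):
    return 1.0 / (1.0 + 10.0 ** (pH - pK))

# An aspartate-like site (pK ~ 4) is mostly deprotonated at pH 7
frac = protonated_fraction(4.0, 7.0)
```
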

  7. Laser photodissociation and spectroscopy of mass-separated biomolecular ions

    CERN Document Server

    Polfer, Nicolas C

    2014-01-01

    This lecture notes book presents how enhanced structural information of biomolecular ions can be obtained from interaction with photons of specific frequency - laser light. The methods described in the book "Laser photodissociation and spectroscopy of mass-separated biomolecular ions" make use of the fact that the discrete energy and fast time scale of photoexcitation can provide more control in ion activation. This activation is the crucial process producing structure-informative product ions that cannot be generated with more conventional heating methods, such as collisional activation. Th

  8. Optimal use of data in parallel tempering simulations for the construction of discrete-state Markov models of biomolecular dynamics.

    Science.gov (United States)

    Prinz, Jan-Hendrik; Chodera, John D; Pande, Vijay S; Swope, William C; Smith, Jeremy C; Noé, Frank

    2011-06-28

    Parallel tempering (PT) molecular dynamics simulations have been extensively investigated as a means of efficient sampling of the configurations of biomolecular systems. Recent work has demonstrated how the short physical trajectories generated in PT simulations of biomolecules can be used to construct the Markov models describing biomolecular dynamics at each simulated temperature. While this approach describes the temperature-dependent kinetics, it does not make optimal use of all available PT data, instead estimating the rates at a given temperature using only data from that temperature. This can be problematic, as some relevant transitions or states may not be sufficiently sampled at the temperature of interest, but might be readily sampled at nearby temperatures. Further, the comparison of temperature-dependent properties can suffer from the false assumption that data collected from different temperatures are uncorrelated. We propose here a strategy in which, by a simple modification of the PT protocol, the harvested trajectories can be reweighted, permitting data from all temperatures to contribute to the estimated kinetic model. The method reduces the statistical uncertainty in the kinetic model relative to the single temperature approach and provides estimates of transition probabilities even for transitions not observed at the temperature of interest. Further, the method allows the kinetics to be estimated at temperatures other than those at which simulations were run. We illustrate this method by applying it to the generation of a Markov model of the conformational dynamics of the solvated terminally blocked alanine peptide.
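    The single-temperature baseline that the authors improve on is easy to state: discretize a trajectory into states, count transitions at a fixed lag time, and row-normalize the count matrix. The sketch below shows that naive estimator only; it is not the reweighting scheme proposed in the paper.

```python
import numpy as np

def count_transitions(dtraj, n_states, lag=1):
    """Count observed transitions i -> j at a fixed lag time."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        C[i, j] += 1.0
    return C

def transition_matrix(C):
    """Row-normalize counts into transition probabilities (naive estimator)."""
    rows = C.sum(axis=1, keepdims=True)
    rows[rows == 0.0] = 1.0  # leave unvisited states as all-zero rows
    return C / rows
```

The paper's point is precisely that rows of this matrix are poorly estimated when a state is rarely visited at the temperature of interest, which reweighted multi-temperature data can remedy.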

  9. Biomolecular Sciences: uniting Biology and Chemistry

    NARCIS (Netherlands)

    Vrieling, Engel

    2017-01-01

    Biomolecular Sciences: uniting Biology and Chemistry www.rug.nl/research/gbb The scientific discoveries in biomolecular sciences have benefitted enormously from technological innovations. At the Groningen Biomolecular Science and Biotechnology Institute (GBB) we now sequence a genome in days,

  10. SPATKIN: a simulator for rule-based modeling of biomolecular site dynamics on surfaces.

    Science.gov (United States)

    Kochanczyk, Marek; Hlavacek, William S; Lipniacki, Tomasz

    2017-11-15

    Rule-based modeling is a powerful approach for studying biomolecular site dynamics. Here, we present SPATKIN, a general-purpose simulator for rule-based modeling in two spatial dimensions. The simulation algorithm is a lattice-based method that tracks Brownian motion of individual molecules and the stochastic firing of rule-defined reaction events. Because rules are used as event generators, the algorithm is network-free, meaning that it does not require generating the complete reaction network implied by the rules prior to simulation. In a simulation, each molecule (or complex of molecules) is taken to occupy a single lattice site that cannot be shared with another molecule (or complex). SPATKIN is capable of simulating a wide array of membrane-associated processes, including adsorption, desorption and crowding. Models are specified using an extension of the BioNetGen language, which makes it possible to account for spatial features of the simulated process. The C++ source code for SPATKIN is distributed freely under the terms of the GNU GPLv3 license. The source code can be compiled for execution on popular platforms (Windows, Mac and Linux). An installer for 64-bit Windows and a macOS app are available. The source code and precompiled binaries are available at the SPATKIN Web site (http://pmbm.ippt.pan.pl/software/spatkin). spatkin.simulator@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
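    The core lattice rule described above - one molecule per site, with Brownian moves rejected when the target site is occupied - can be sketched in a few lines. This is a toy illustration of the single-occupancy principle, not SPATKIN's actual algorithm or data structures.

```python
import random

def sweep(positions, L, rng=random):
    """One Brownian sweep on an L x L periodic lattice with single occupancy:
    each molecule attempts a nearest-neighbor move, rejected if the site is taken."""
    occupied = set(positions)
    for idx, (x, y) in enumerate(positions):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        target = ((x + dx) % L, (y + dy) % L)
        if target not in occupied:
            occupied.discard((x, y))
            occupied.add(target)
            positions[idx] = target
    return positions
```

Rejected moves onto occupied sites are exactly what produces crowding effects as the lattice fills up.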

  11. Perspective: Watching low-frequency vibrations of water in biomolecular recognition by THz spectroscopy

    Science.gov (United States)

    Xu, Yao; Havenith, Martina

    2015-11-01

    Terahertz (THz) spectroscopy has turned out to be a powerful tool which is able to shed new light on the role of water in biomolecular processes. The low frequency spectrum of the solvated biomolecule in combination with MD simulations provides deep insights into the collective hydrogen bond dynamics on the sub-ps time scale. The absorption spectrum between 1 THz and 10 THz of solvated biomolecules is sensitive to changes in the fast fluctuations of the water network. Systematic studies on mutants of antifreeze proteins indicate a direct correlation between biological activity and a retardation of the (sub)-ps hydration dynamics at the protein binding site, i.e., a "hydration funnel." Kinetic THz absorption studies probe the temporal changes of THz absorption during a biological process, and give access to the kinetics of the coupled protein-hydration dynamics. When combined with simulations, the observed results can be explained in terms of a two-tier model involving a local binding and a long range influence on the hydration bond dynamics of the water around the binding site, which highlights the significance of the changes in the hydration dynamics at the recognition site for biomolecular recognition. Water is shown to assist molecular recognition processes.

  12. Bookshelf: a simple curation system for the storage of biomolecular simulation data.

    Science.gov (United States)

    Vohra, Shabana; Hall, Benjamin A; Holdbrook, Daniel A; Khalid, Syma; Biggin, Philip C

    2010-01-01

    Molecular dynamics simulations can now routinely generate data sets of several hundreds of gigabytes in size. The ability to generate this data has become easier over recent years and the rate of data production is likely to increase rapidly in the near future. One major problem associated with this vast amount of data is how to store it in a way that it can be easily retrieved at a later date. The obvious answer to this problem is a database. However, a key issue in the development and maintenance of such a database is its sustainability, which in turn depends on the ease of the deposition and retrieval process. Encouraging users to care about metadata is difficult, and thus the success of any storage system will ultimately depend on how readily end-users adopt it. In this respect we suggest that even a minimal amount of metadata, if stored in a sensible fashion, is useful, if only at the level of individual research groups. We discuss here a simple database system, which we call 'Bookshelf', that uses Python in conjunction with a MySQL database to provide an extremely simple system for curating and keeping track of molecular simulation data. It provides a user-friendly, scriptable solution to a common problem among biomolecular simulation laboratories: the storage, logging and subsequent retrieval of large numbers of simulations. Download URL: http://sbcb.bioch.ox.ac.uk/bookshelf/
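    A minimal version of such a curation store takes only a few lines. The sketch below uses Python's built-in sqlite3 module instead of MySQL so it is self-contained; the schema fields are illustrative assumptions, not Bookshelf's actual schema.

```python
import sqlite3

def open_store(path=":memory:"):
    """Create (or open) a minimal simulation-metadata store."""
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS simulations (
                       id INTEGER PRIMARY KEY,
                       name TEXT NOT NULL,
                       force_field TEXT,
                       length_ns REAL,
                       data_path TEXT)""")
    return con

def deposit(con, name, force_field, length_ns, data_path):
    """Log one simulation; the trajectory itself stays on disk at data_path."""
    con.execute("INSERT INTO simulations (name, force_field, length_ns, data_path) "
                "VALUES (?, ?, ?, ?)", (name, force_field, length_ns, data_path))
    con.commit()

def by_force_field(con, force_field):
    """Retrieve simulations matching one metadata field."""
    cur = con.execute("SELECT name, length_ns FROM simulations WHERE force_field = ?",
                      (force_field,))
    return cur.fetchall()
```

Storing only paths plus a handful of searchable metadata fields, rather than the trajectories themselves, is what keeps such a system sustainable.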

  13. A coarse-grained model for the simulations of biomolecular interactions in cellular environments

    International Nuclear Information System (INIS)

    Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao

    2014-01-01

    The interactions of biomolecules constitute the key steps of cellular functions. However, in vivo binding properties differ significantly from their in vitro measurements due to the heterogeneity of cellular environments. Here we introduce a coarse-grained model based on rigid-body representation to study how factors such as cellular crowding and membrane confinement affect molecular binding. The macroscopic parameters such as the equilibrium constant and the kinetic rate constant are calibrated by adjusting the microscopic coefficients used in the numerical simulations. By changing these model parameters, which are experimentally accessible, we are able to study the kinetic and thermodynamic properties of molecular binding, as well as the effects caused by specific cellular environments. We investigate the volumetric effects of crowded intracellular space on biomolecular diffusion and diffusion-limited reactions. Furthermore, the binding constants of membrane proteins are currently difficult to measure. We provide quantitative estimations of how the binding of membrane proteins deviates from that of soluble proteins under different degrees of membrane confinement. The simulation results provide biological insights into the functions of membrane receptors on cell surfaces. Overall, our studies establish a connection between the details of molecular interactions and the heterogeneity of cellular environments.

  14. GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations.

    Science.gov (United States)

    Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji

    2015-07-01

    GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.
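    At the heart of every REMD variant listed above is a Metropolis exchange test between neighboring replicas. For temperature REMD it reduces to the generic criterion sketched below (a textbook formula, not GENESIS code).

```python
import math, random

def swap_accepted(beta_i, beta_j, E_i, E_j, rng=random):
    """Metropolis test for exchanging configurations between two T-REMD replicas
    with inverse temperatures beta_i, beta_j and potential energies E_i, E_j:
    accept with probability min(1, exp[(beta_i - beta_j) * (E_i - E_j)])."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0.0 or rng.random() < math.exp(delta)
```

When the colder replica (larger beta) currently holds the higher energy, delta is positive and the swap is always accepted, which is how replicas diffuse in temperature space.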

  15. From dynamics to structure and function of model biomolecular systems

    NARCIS (Netherlands)

    Fontaine-Vive-Curtaz, F.

    2007-01-01

    The purpose of this thesis was to extend recent works on the structure and dynamics of hydrogen-bonded crystals to model biomolecular systems and biological processes. The tools that we have used are neutron scattering (NS) and density functional theory (DFT) and force field (FF) based simulations.

  16. Scalable Molecular Dynamics for Large Biomolecular Systems

    Directory of Open Access Journals (Sweden)

    Robert K. Brunner

    2000-01-01

    We present an optimized parallelization scheme for molecular dynamics simulations of large biomolecular systems, implemented in the production-quality molecular dynamics program NAMD. With an object-based hybrid force and spatial decomposition scheme, and an aggressive measurement-based predictive load balancing framework, we have attained speeds and speedups that are much higher than any reported in the literature so far. The paper first summarizes the broad methodology we are pursuing, and the basic parallelization scheme we used. It then describes the optimizations that were instrumental in increasing performance, and presents performance results on benchmark simulations.

  17. Prediction of Biomolecular Complexes

    KAUST Repository

    Vangone, Anna; Oliva, Romina; Cavallo, Luigi; Bonvin, Alexandre M. J. J.

    2017-01-01

    Almost all processes in living organisms occur through specific interactions between biomolecules. Any dysfunction of those interactions can lead to pathological events. Understanding such interactions is therefore a crucial step in the investigation of biological systems and a starting point for drug design. In recent years, experimental studies have been devoted to unravel the principles of biomolecular interactions; however, due to experimental difficulties in solving the three-dimensional (3D) structure of biomolecular complexes, the number of available, high-resolution experimental 3D structures does not fulfill the current needs. Therefore, complementary computational approaches to model such interactions are necessary to assist experimentalists since a full understanding of how biomolecules interact (and consequently how they perform their function) only comes from 3D structures which provide crucial atomic details about binding and recognition processes. In this chapter we review approaches to predict biomolecular complexes, introducing the concept of molecular docking, a technique which uses a combination of geometric, steric and energetics considerations to predict the 3D structure of a biological complex starting from the individual structures of its constituent parts. We provide a mini-guide about docking concepts, its potential and challenges, along with post-docking analysis and a list of related software.

  19. Biomolecular Science (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2012-04-01

    A brief fact sheet about NREL Photobiology and Biomolecular Science. The research goal of NREL's Biomolecular Science is to enable cost-competitive advanced lignocellulosic biofuels production by understanding the science critical for overcoming biomass recalcitrance and developing new product and product intermediate pathways. NREL's Photobiology focuses on understanding the capture of solar energy in photosynthetic systems and its use in converting carbon dioxide and water directly into hydrogen and advanced biofuels.

  20. Photochirogenesis: Photochemical Models on the Origin of Biomolecular Homochirality

    Directory of Open Access Journals (Sweden)

    Cornelia Meinert

    2010-05-01

    Current research focuses on a better understanding of the origin of biomolecular asymmetry by the identification and detection of the possibly first chiral molecules that were involved in the appearance and evolution of life on Earth. We have reasons to assume that these molecules were specific chiral amino acids. Chiral amino acids have been identified in both chondritic meteorites and simulated interstellar ices. Circularly polarized electromagnetic radiation has been identified in interstellar environments, and an asymmetric interstellar photon-molecule interaction might have triggered biomolecular symmetry breaking. We review the possible prebiotic interaction of 'chiral photons', in the form of circularly polarized light, with early chiral organic molecules. We will highlight recent studies on the enantioselective photolysis of racemic amino acids by circularly polarized light and experiments on the asymmetric photochemical synthesis of amino acids from molecules containing only one C and one N atom, under simulated interstellar environments. Both approaches are based on circular dichroic transitions of amino acids, which will be presented as well.

  1. Simulation of FRET dyes allows quantitative comparison against experimental data

    Science.gov (United States)

    Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander

    2018-03-01

    Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies which quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other leading to new insights into biomolecular dynamics and function.
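    The quantity compared against experiment in studies like this follows the standard Förster relation, E = 1/(1 + (r/R0)^6), averaged over the simulated dye-dye distances. The sketch below is that generic formula, not the authors' code; the dynamic-averaging assumption is ours.

```python
import numpy as np

def fret_efficiency(r, R0):
    """Instantaneous FRET efficiency for donor-acceptor distance r,
    given the Foerster radius R0 (same units as r)."""
    r = np.asarray(r, dtype=float)
    return 1.0 / (1.0 + (r / R0) ** 6)

def mean_efficiency(distances, R0):
    """Trajectory-averaged efficiency (fast dynamic-averaging regime assumed)."""
    return float(np.mean(fret_efficiency(distances, R0)))
```

By construction the efficiency is 0.5 exactly at r = R0, which is why FRET is most sensitive to distance changes around the Förster radius.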

  2. Sop-GPU: accelerating biomolecular simulations in the centisecond timescale using graphics processors.

    Science.gov (United States)

    Zhmurov, A; Dima, R I; Kholodov, Y; Barsegov, V

    2010-11-01

    Theoretical exploration of fundamental biological processes involving the forced unraveling of multimeric proteins, the sliding motion in protein fibers and the mechanical deformation of biomolecular assemblies under physiological force loads is challenging even for distributed computing systems. Using a Cα-based coarse-grained self-organized polymer (SOP) model, we implemented the Langevin simulations of proteins on graphics processing units (SOP-GPU program). We assessed the computational performance of an end-to-end application of the program, where all the steps of the algorithm are running on a GPU, by profiling the simulation time and memory usage for a number of test systems. The ∼90-fold computational speedup on a GPU, compared with an optimized central processing unit program, enabled us to follow the dynamics in the centisecond timescale, and to obtain the force-extension profiles using experimental pulling speeds (v_f = 1-10 μm/s) employed in atomic force microscopy and in optical tweezers-based dynamic force spectroscopy. We found that the mechanical molecular response critically depends on the conditions of force application and that the kinetics and pathways for unfolding change drastically even upon a modest 10-fold increase in v_f. This implies that, to resolve accurately the free energy landscape and to relate the results of single-molecule experiments in vitro and in silico, molecular simulations should be carried out under the experimentally relevant force loads. This can be accomplished in reasonable wall-clock time for biomolecules of size as large as 10^5 residues using the SOP-GPU package. © 2010 Wiley-Liss, Inc.
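    Langevin dynamics of a coarse-grained bead model reduces, in the overdamped limit commonly used for such simulations, to a one-line Euler-Maruyama update. The NumPy sketch below illustrates the generic integrator, not the SOP-GPU kernel; all parameter values are illustrative.

```python
import numpy as np

def brownian_step(x, force, gamma, kT, dt, rng):
    """One Euler-Maruyama step of overdamped Langevin (Brownian) dynamics:
    x_new = x + F(x)/gamma * dt + sqrt(2 kT dt / gamma) * xi, xi ~ N(0, 1)."""
    drift = force(x) * dt / gamma
    noise = rng.normal(0.0, np.sqrt(2.0 * kT * dt / gamma), size=x.shape)
    return x + drift + noise
```

A quick sanity check: for a harmonic force F = -k x the stationary variance of x should approach kT/k, which a few thousand steps of this update reproduce.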

  3. Electron-correlated fragment-molecular-orbital calculations for biomolecular and nano systems.

    Science.gov (United States)

    Tanaka, Shigenori; Mochizuki, Yuji; Komeiji, Yuto; Okiyama, Yoshio; Fukuzawa, Kaori

    2014-06-14

    Recent developments in the fragment molecular orbital (FMO) method for theoretical formulation, implementation, and application to nano and biomolecular systems are reviewed. The FMO method has enabled ab initio quantum-mechanical calculations for large molecular systems such as protein-ligand complexes at a reasonable computational cost in a parallelized way. There have been a wealth of application outcomes from the FMO method in the fields of biochemistry, medicinal chemistry and nanotechnology, in which the electron correlation effects play vital roles. With the aid of the advances in high-performance computing, the FMO method promises larger, faster, and more accurate simulations of biomolecular and related systems, including the descriptions of dynamical behaviors in solvent environments. The current status and future prospects of the FMO scheme are addressed in these contexts.
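    The fragment-based energy decomposition at the core of the method is simple to state: at the two-body (FMO2) level, the total energy is the sum of monomer energies plus pairwise dimer corrections, E = Σ_I E_I + Σ_{I<J} (E_IJ − E_I − E_J). A sketch with hypothetical fragment energies (in practice each term comes from a separate quantum-mechanical calculation):

```python
def fmo2_energy(E_mono, E_dimer):
    """FMO2 total energy from monomer energies E_mono (dict: fragment -> energy)
    and dimer energies E_dimer (dict: (I, J) -> energy):
    E = sum_I E_I + sum_{I<J} (E_IJ - E_I - E_J)."""
    total = sum(E_mono.values())
    for (I, J), E_IJ in E_dimer.items():
        total += E_IJ - E_mono[I] - E_mono[J]
    return total
```

If every dimer energy equals the sum of its monomers (no interaction), the corrections vanish and the total reduces to the monomer sum, as it should.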

  4. Unique temporal and spatial biomolecular emission profile on individual zinc oxide nanorods

    Science.gov (United States)

    Singh, Manpreet; Song, Sheng; Hahm, Jong-In

    2013-12-01

    Zinc oxide nanorods (ZnO NRs) have emerged in recent years as extremely useful, optical signal-enhancing platforms in DNA and protein detection. Although the use of ZnO NRs in biodetection has been demonstrated so far in systems involving many ZnO NRs per detection element, their future applications will likely take place in a miniaturized setting while exploiting single ZnO NRs in a low-volume, high-throughput bioanalysis. In this paper, we investigate temporal and spatial characteristics of the biomolecular fluorescence on individual ZnO NR systems. Quantitative and qualitative examinations of the biomolecular intensity and photostability are carried out as a function of two important criteria, the time and position along the long axis (length) of NRs. Photostability profiles are also measured with respect to the position on NRs and compared to those characteristics of biomolecules on polymeric control platforms. Unlike the uniformly distributed signal observed on the control platforms, both the fluorescence intensity and photostability are position-dependent on individual ZnO NRs. We have identified a unique phenomenon of highly localized, fluorescence intensification on the nanorod ends (FINE) of well-characterized, individual ZnO nanostructures. When compared to the polymeric controls, the biomolecular fluorescence intensity and photostability are determined to be higher on individual ZnO NRs regardless of the position on NRs. We have also carried out finite-difference time-domain simulations, the results of which are in good agreement with the observed FINE. The outcomes of our investigation will offer a much needed basis for signal interpretation for biodetection devices and platforms consisting of single ZnO NRs and, at the same time, contribute significantly to provide insight in understanding the biomolecular fluorescence observed from ZnO NR ensemble-based systems.

  5. A statistical nanomechanism of biomolecular patterning actuated by surface potential

    Science.gov (United States)

    Lin, Chih-Ting; Lin, Chih-Hao

    2011-02-01

    Biomolecular patterning at the nanoscale/microscale on chip surfaces is one of the most important techniques used in in vitro biochip technologies. Here, we report upon a stochastic mechanics model we have developed for biomolecular patterning controlled by surface potential. The probabilistic biomolecular surface adsorption behavior can be modeled by considering the potential difference between the binding and nonbinding states. To verify our model, we experimentally implemented a method of electroactivated biomolecular patterning technology, and the resulting fluorescence intensity matched the prediction of the developed model quite well. Based on this result, we also experimentally demonstrated the creation of a bovine serum albumin pattern with a width of 200 nm within a 5 min operation. This submicron noncovalent-binding biomolecular pattern can be maintained for hours after removing the applied electrical voltage. These stochastic understandings and experimental results not only prove the feasibility of submicron biomolecular patterns on chips but also pave the way for nanoscale interfacial-bioelectrical engineering.
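    The two-state picture described above - a binding versus a nonbinding state weighted by their potential difference - implies a Boltzmann occupancy for the adsorbed state. The sketch below is our simplified reading of that statistical argument, not the authors' exact expression.

```python
import math

def p_bound(delta_E, kT=1.0):
    """Two-state Boltzmann occupancy of the bound (adsorbed) state;
    delta_E = E_bound - E_unbound, in the same units as kT."""
    return 1.0 / (1.0 + math.exp(delta_E / kT))
```

Making the surface potential more attractive (delta_E more negative) drives the occupancy toward 1, which is the handle an applied voltage provides for writing a pattern.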

  6. Synergy of Two Highly Specific Biomolecular Recognition Events

    DEFF Research Database (Denmark)

    Ejlersen, Maria; Christensen, Niels Johan; Sørensen, Kasper K

    2018-01-01

    Two highly specific biomolecular recognition events, nucleic acid duplex hybridization and DNA-peptide recognition in the minor groove, were coalesced in a miniature ensemble for the first time by covalently attaching a natural AT-hook peptide motif to nucleic acid duplexes via a 2'-amino-LNA scaffold. A combination of molecular dynamics simulations and ultraviolet thermal denaturation studies revealed high sequence-specific affinity of the peptide-oligonucleotide conjugates (POCs) when binding to complementary DNA strands, leveraging the bioinformation encrypted in the minor groove of DNA...

  7. Physics at the biomolecular interface fundamentals for molecular targeted therapy

    CERN Document Server

    Fernández, Ariel

    2016-01-01

    This book focuses primarily on the role of interfacial forces in understanding biological phenomena at the molecular scale. By providing a suitable statistical mechanical apparatus to handle the biomolecular interface, the book becomes uniquely positioned to address core problems in molecular biophysics. It highlights the importance of interfacial tension in delineating a solution to the protein folding problem, in unravelling the physico-chemical basis of enzyme catalysis and protein associations, and in rationally designing molecular targeted therapies. Thus grounded in fundamental science, the book develops a powerful technological platform for drug discovery, while it is set to inspire scientists at any level in their careers determined to address the major challenges in molecular biophysics. The acknowledgment of how exquisitely the structure and dynamics of proteins and their aqueous environment are related attests to the overdue recognition that biomolecular phenomena cannot be effectively understood ...

  8. Ensembler: Enabling High-Throughput Molecular Simulations at the Superfamily Scale.

    Directory of Open Access Journals (Sweden)

    Daniel L Parton

    2016-06-01

    The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences - from a single sequence to an entire superfamily - and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics - such as Markov state models (MSMs) - which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human

  9. Biomolecular EPR spectroscopy

    CERN Document Server

    Hagen, Wilfred Raymond

    2008-01-01

    Comprehensive, Up-to-Date Coverage of Spectroscopy Theory and its Applications to Biological Systems. Although a multitude of books have been published about spectroscopy, most of them only occasionally refer to biological systems and the specific problems of biomolecular EPR (bioEPR). Biomolecular EPR Spectroscopy provides a practical introduction to bioEPR and demonstrates how this remarkable tool allows researchers to delve into the structural, functional, and analytical analysis of paramagnetic molecules found in the biochemistry of all species on the planet. A Must-Have Reference in an Intrinsically Multidisciplinary Field. This authoritative reference seamlessly covers all important bioEPR applications, including low-spin and high-spin metalloproteins, spin traps and spin labels, interaction between active sites, and redox systems. It is loaded with practical tricks as well as do's and don'ts that are based on the author's 30 years of experience in the field. The book also comes with an unprecedented set of...

  10. Programming in biomolecular computation

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2011-01-01

    Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We identify a number of common features in programming that seem conspicuously absent from the literature on biomolecular computing; to partially redress this absence, we introduce a model of computation that is evidently programmable, by programs reminiscent of low-level computer machine code; and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only executable, but are also compilable and interpretable. It is universal: all computable functions can be computed (in natural ways...

  11. Improvements to the APBS biomolecular solvation software suite.

    Science.gov (United States)

    Jurrus, Elizabeth; Engel, Dave; Star, Keith; Monson, Kyle; Brandi, Juan; Felberg, Lisa E; Brookes, David H; Wilson, Leighton; Chen, Jiahui; Liles, Karina; Chun, Minju; Li, Peter; Gohara, David W; Dolinsky, Todd; Konecny, Robert; Koes, David R; Nielsen, Jens Erik; Head-Gordon, Teresa; Geng, Weihua; Krasny, Robert; Wei, Guo-Wei; Holst, Michael J; McCammon, J Andrew; Baker, Nathan A

    2018-01-01

    The Adaptive Poisson-Boltzmann Solver (APBS) software was developed to solve the equations of continuum electrostatics for large biomolecular assemblages, with impact on the study of a broad range of chemical, biological, and biomedical applications. APBS addresses the three key technology challenges for understanding solvation and electrostatics in biomedical applications: accurate and efficient models for biomolecular solvation and electrostatics, robust and scalable software for applying those theories to biomolecular systems, and mechanisms for sharing and analyzing biomolecular electrostatics data in the scientific community. To address new research applications and advances in computational capabilities, we have continually updated APBS and its suite of accompanying software since its release in 2001. In this article, we discuss the models and capabilities that have recently been implemented within the APBS software package, including analytical and semi-analytical Poisson-Boltzmann solvers, an optimized boundary element solver, a geometry-based geometric flow solvation model, a graph theory-based algorithm for determining pKa values, and an improved web-based visualization tool for viewing electrostatics. © 2017 The Protein Society.
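The continuum electrostatics that solvers like APBS implement can be illustrated with a toy relaxation scheme. The sketch below is not APBS code; the grid size, charge, and screening parameter are invented and dimensionless. It relaxes the linearized Poisson-Boltzmann equation d²φ/dx² = κ²φ − ρ on a 1-D grid with Jacobi iteration, showing the screened (decaying) potential around a point charge.

```python
import numpy as np

# Illustrative sketch (not APBS itself): solve the linearized
# Poisson-Boltzmann equation  d2phi/dx2 = kappa^2 * phi - rho
# on a 1-D grid with a single point charge, using Jacobi iteration.
# All units and parameter values here are arbitrary/illustrative.

n, h = 201, 0.1          # grid points and spacing
kappa2 = 1.0             # squared inverse Debye length (illustrative)
rho = np.zeros(n)
rho[n // 2] = 1.0 / h    # point charge at the domain center
phi = np.zeros(n)        # potential, zero Dirichlet boundaries

for _ in range(20000):   # Jacobi relaxation to steady state
    phi_new = phi.copy()
    phi_new[1:-1] = (phi[:-2] + phi[2:] + h * h * rho[1:-1]) / (2.0 + h * h * kappa2)
    if np.max(np.abs(phi_new - phi)) < 1e-10:
        phi = phi_new
        break
    phi = phi_new

# The screened potential peaks at the charge and decays away from it
print(phi[n // 2] > phi[n // 2 + 20] > phi[n // 2 + 60] > 0)
```

Real biomolecular solvers work on 3-D adaptive meshes with position-dependent dielectric coefficients, but the screened-Laplacian structure is the same.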

  12. Molecular Dynamics Simulations of Kinetic Models for Chiral Dominance in Soft Condensed Matter

    DEFF Research Database (Denmark)

    Toxvaerd, Søren

    2001-01-01

    Molecular dynamics simulation, models for isomerization kinetics, origin of biomolecular chirality...

  13. Adaptive resolution simulation of a biomolecule and its hydration shell: Structural and dynamical properties

    International Nuclear Information System (INIS)

    Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt

    2015-01-01

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.

  14. Adaptive resolution simulation of a biomolecule and its hydration shell: Structural and dynamical properties

    Energy Technology Data Exchange (ETDEWEB)

    Fogarty, Aoife C., E-mail: fogarty@mpip-mainz.mpg.de; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de; Kremer, Kurt, E-mail: kremer@mpip-mainz.mpg.de [Max Planck Institute for Polymer Research, Ackermannweg 10, 55128 Mainz (Germany)

    2015-05-21

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
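The on-the-fly resolution change in AdResS is governed by a smooth weighting function that is 1 in the atomistic region, 0 in the coarse-grained reservoir, and interpolates across a hybrid layer. Below is a minimal sketch of the commonly used cos² form; the region sizes are invented for illustration.

```python
import math

# Sketch of an AdResS-style resolution weighting: w = 1 in the
# atomistic region, w = 0 in the coarse-grained region, with a smooth
# cos^2 crossover in the hybrid layer (a commonly used functional form;
# region sizes here are illustrative).

D_AT = 1.0   # half-width of the atomistic region around the center
D_HY = 0.5   # width of the hybrid transition layer

def resolution_weight(x, center=0.0):
    """Return w in [0, 1]: 1 = fully atomistic, 0 = fully coarse-grained."""
    d = abs(x - center)
    if d <= D_AT:
        return 1.0
    if d >= D_AT + D_HY:
        return 0.0
    return math.cos(math.pi * (d - D_AT) / (2.0 * D_HY)) ** 2

# Pairwise forces are then interpolated as
#   F_ij = w_i * w_j * F_atomistic + (1 - w_i * w_j) * F_coarse
print(resolution_weight(0.0), resolution_weight(1.25), resolution_weight(3.0))
```

In the papers above the atomistic region is centered on the protein and its hydration shell, so water molecules smoothly gain or lose atomistic detail as they diffuse across the hybrid layer.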

  15. The Adaptive Multi-scale Simulation Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, William R. [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2015-09-01

    The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation meta-data, AMSI allows existing single-scale simulations to be adapted for use in multi-scale simulations with minimal intrusion. Support for dynamic runtime operations, such as single- and multi-scale adaptive properties, is a key focus of AMSI. Particular attention has been paid to the development of scale-sensitive load-balancing operations, which allow single-scale simulations incorporated into a multi-scale simulation through AMSI to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.

  16. Membrane-based biomolecular smart materials

    International Nuclear Information System (INIS)

    Sarles, Stephen A; Leo, Donald J

    2011-01-01

    Membrane-based biomolecular materials are a new class of smart material that feature networks of artificial lipid bilayers contained within durable synthetic substrates. Bilayers contained within this modular material platform provide an environment that can be tailored to host an enormous diversity of functional biomolecules, where the functionality of the global material system depends on the type(s) and organization(s) of the biomolecules that are chosen. In this paper, we review a series of biomolecular material platforms developed recently within the Leo Group at Virginia Tech and we discuss several novel coupling mechanisms provided by these hybrid material systems. The platforms developed demonstrate that the functions of biomolecules and the properties of synthetic materials can be combined to operate in concert, and the examples provided demonstrate how the formation and properties of a lipid bilayer can respond to a variety of stimuli including mechanical forces and electric fields

  17. Conducting polymer based biomolecular electronic devices

    Indian Academy of Sciences (India)

    Keywords: conducting polymers; LB films; biosensor microactuators; monolayers. Conducting polymers have been projected for applications in a wide range of biomolecular electronic devices, such as optical, electronic, drug-delivery, memory and biosensing devices.

  18. Aligning Biomolecular Networks Using Modular Graph Kernels

    Science.gov (United States)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

    Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pairwise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
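As a toy illustration of the decompose-and-compare strategy described above (this is not the BiNA implementation), the sketch below represents each node by the degree histogram of its 1-hop neighborhood subgraph, scores cross-network node pairs with a simple histogram-intersection kernel, and aligns greedily. The networks and node names are invented.

```python
from collections import Counter

# Toy kernel-based network alignment: decompose each network into
# per-node neighborhood subgraphs, summarize each subgraph by a degree
# histogram, and align nodes across networks by kernel similarity.

def neighborhood_signature(adj, v):
    """Degree histogram of the subgraph induced by v and its neighbors."""
    nodes = {v} | adj[v]
    degs = [sum(1 for u in adj[n] if u in nodes) for n in sorted(nodes)]
    return Counter(degs)

def kernel(sig_a, sig_b):
    """Histogram-intersection kernel between two signatures."""
    return sum(min(sig_a[k], sig_b[k]) for k in set(sig_a) | set(sig_b))

def align(adj1, adj2):
    """Greedy one-to-one alignment by descending kernel score."""
    scores = sorted(
        ((kernel(neighborhood_signature(adj1, a), neighborhood_signature(adj2, b)), a, b)
         for a in adj1 for b in adj2),
        reverse=True)
    used1, used2, mapping = set(), set(), {}
    for s, a, b in scores:
        if a not in used1 and b not in used2:
            mapping[a] = b
            used1.add(a)
            used2.add(b)
    return mapping

# Two small "interaction networks" with the same topology (a triangle
# plus a pendant node) and different node names.
net1 = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
net2 = {"x": {"y", "z"}, "y": {"x", "z"}, "z": {"x", "y", "w"}, "w": {"z"}}
print(align(net1, net2))
```

Production tools use far richer kernels (e.g., shortest-path or random-walk kernels) and larger subgraph decompositions, but the score-then-match structure is the same.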

  19. Spin valve sensor for biomolecular identification: Design, fabrication, and characterization

    Science.gov (United States)

    Li, Guanxiong

    Biomolecular identification, e.g., DNA recognition, has broad applications in biology and medicine, such as gene expression analysis, disease diagnosis, and DNA fingerprinting. We have therefore been developing a magnetic biodetection technology based on giant magnetoresistive spin valve sensors and magnetic nanoparticle labels. An analytical model was developed for the magnetic nanoparticle detection, assuming the equivalent average field of magnetic nanoparticles and the coherent rotation of the spin valve free layer magnetization. Micromagnetic simulations have also been performed for the spin valve sensors. The analytical model and micromagnetic simulations are found consistent with each other and are in good agreement with experiments. The prototype spin valve sensors have been fabricated at both micron and submicron scales. We demonstrated the detection of a single 2.8-μm magnetic microbead by micron-sized spin valve sensors. Based on polymer-mediated self-assembly and fine lithography, a bilayer lift-off process was developed to deposit magnetic nanoparticles onto the sensor surface in a controlled manner. With the lift-off deposition method, we have successfully demonstrated the room-temperature detection of monodisperse 16-nm Fe3O4 nanoparticles in quantities from a few tens to several hundreds by submicron spin valve sensors, proving the feasibility of nanoparticle detection. As desired for quantitative biodetection, a fairly linear dependence of the sensor signal on the number of nanoparticles has been confirmed. The initial detection of DNA hybridization events labeled by magnetic nanoparticles further proved the magnetic biodetection concept.

  20. Stereochemical errors and their implications for molecular dynamics simulations

    Directory of Open Access Journals (Sweden)

    Freddolino Peter L

    2011-05-01

    Background: Biological molecules are often asymmetric with respect to stereochemistry, and correct stereochemistry is essential to their function. Molecular dynamics simulations of biomolecules have increasingly become an integral part of biophysical research. However, stereochemical errors in biomolecular structures can have a dramatic impact on the results of simulations. Results: Here we illustrate the effects that chirality and peptide bond configuration flips may have on the secondary structure of proteins throughout a simulation. We also analyze the most common sources of stereochemical errors in biomolecular structures and present software tools to identify, correct, and prevent stereochemical errors in molecular dynamics simulations of biomolecules. Conclusions: Use of the tools presented here should become a standard step in the preparation of biomolecular simulations and in the generation of predicted structural models for proteins and nucleic acids.
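The kind of chirality error discussed above can be detected with a simple geometric test: the sign of the triple product of the bond vectors around a tetrahedral center flips between mirror-image configurations. A minimal sketch with idealized (not experimental) coordinates for a C-alpha center:

```python
import numpy as np

# Minimal sketch of a stereochemistry check: the handedness of an
# amino-acid C-alpha center can be tested via the sign of the triple
# product of the CA->N, CA->C, CA->CB vectors. Coordinates below are
# idealized/illustrative, not taken from a real structure file.

def ca_chirality(n, ca, c, cb):
    """Signed volume; the sign flips between mirror-image configurations."""
    v1, v2, v3 = n - ca, c - ca, cb - ca
    return float(np.dot(np.cross(v1, v2), v3))

ca = np.array([0.0, 0.0, 0.0])
n  = np.array([1.0, 0.0, 0.0])
c  = np.array([0.0, 1.0, 0.0])
cb = np.array([0.0, 0.0, 1.0])

vol = ca_chirality(n, ca, c, cb)
mirrored = ca_chirality(n, ca, c, np.array([0.0, 0.0, -1.0]))
print(vol > 0, mirrored < 0, vol == -mirrored)
```

A validation pass over a structure would apply such a test to every chiral center (and an analogous dihedral test to peptide bonds) before the simulation starts.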

  1. Poisson-Nernst-Planck Equations for Simulating Biomolecular Diffusion-Reaction Processes I: Finite Element Solutions.

    Science.gov (United States)

    Lu, Benzhuo; Holst, Michael J; McCammon, J Andrew; Zhou, Y C

    2010-09-20

    In this paper we developed accurate finite element methods for solving 3-D Poisson-Nernst-Planck (PNP) equations with singular permanent charges for electrodiffusion in solvated biomolecular systems. The electrostatic Poisson equation was defined in the biomolecules and in the solvent, while the Nernst-Planck equation was defined only in the solvent. We applied a stable regularization scheme to remove the singular component of the electrostatic potential induced by the permanent charges inside biomolecules, and formulated regular, well-posed PNP equations. An inexact-Newton method was used to solve the coupled nonlinear elliptic equations for the steady-state problems, while an Adams-Bashforth-Crank-Nicolson method was devised for time integration of the unsteady electrodiffusion. We numerically investigated the conditioning of the stiffness matrices for the finite element approximations of the two formulations of the Nernst-Planck equation, and theoretically proved that the transformed formulation is always associated with an ill-conditioned stiffness matrix. We also studied the electroneutrality of the solution and its relation with the boundary conditions on the molecular surface, and concluded that a large net charge concentration is always present near the molecular surface due to the presence of multiple species of charged particles in the solution. The numerical methods are shown to be accurate and stable by various test problems, and are applicable to real large-scale biophysical electrodiffusion problems.
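The physics behind the Nernst-Planck equation can be illustrated in one dimension. The sketch below is not the finite element scheme of the paper; it is a dimensionless explicit finite-volume time stepping of a single ionic species drifting and diffusing in a fixed linear potential, with zero-flux boundaries so the total concentration is conserved. All parameters are invented.

```python
import numpy as np

# Dimensionless 1-D Nernst-Planck sketch: explicit time stepping of
#   dc/dt = d/dx [ D (dc/dx + z c dphi/dx) ]
# for one species in a fixed linear potential, with zero-flux
# boundaries (total concentration is conserved).

n, h, dt = 100, 1.0 / 100, 2e-5
D, z = 1.0, 1.0
x = (np.arange(n) + 0.5) * h
c = np.exp(-((x - 0.5) ** 2) / 0.005)      # initial Gaussian pulse
dphi_dx = -2.0                             # constant applied field

total0 = c.sum()
for _ in range(500):
    grad_c = (c[1:] - c[:-1]) / h                  # gradient at interior faces
    c_face = 0.5 * (c[1:] + c[:-1])                # concentration at faces
    flux = -D * (grad_c + z * c_face * dphi_dx)    # drift-diffusion flux
    F = np.concatenate(([0.0], flux, [0.0]))       # zero-flux boundaries
    c = c - dt / h * (F[1:] - F[:-1])              # conservative update

# The pulse spreads by diffusion and drifts down the potential gradient
print(abs(c.sum() - total0) < 1e-8, (x * c).sum() / c.sum() > 0.5)
```

The full PNP problem couples this transport equation to a Poisson equation for φ and to the other ionic species, which is what makes implicit, well-conditioned discretizations important.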

  2. Application of Nanodiamonds in Biomolecular Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Ping Cheng

    2010-03-01

    The combination of nanodiamonds (NDs) with biomolecular mass spectrometry (MS) makes rapid, sensitive detection of biopolymers from complex biosamples feasible. Owing to their chemical inertness, optical transparency and biocompatibility, NDs offer unique advantages for MS studies. Furthermore, functionalization of ND surfaces greatly expands their applications in the fields of proteomics and genomics for specific requirements. This review presents methods of MS analysis based on solid-phase extraction and elution on NDs, along with application examples including peptides, proteins, DNA, glycans and others. Given the rapid development of nanotechnology and surface chemistry, new MS methods, and the intense interest in proteomics and genomics, a substantial increase in their applications to biomolecular MS analysis can be anticipated in the near future.

  3. Simulating movement of tRNA through the ribosome during hybrid-state formation.

    Science.gov (United States)

    Whitford, Paul C; Sanbonmatsu, Karissa Y

    2013-09-28

    Biomolecular simulations provide a means for exploring the relationship between flexibility, energetics, structure, and function. With the availability of atomic models from X-ray crystallography and cryoelectron microscopy (cryo-EM), and rapid increases in computing capacity, it is now possible to apply molecular dynamics (MD) simulations to large biomolecular machines, and systematically partition the factors that contribute to function. A large biomolecular complex for which atomic models are available is the ribosome. In the cell, the ribosome reads messenger RNA (mRNA) in order to synthesize proteins. During this essential process, the ribosome undergoes a wide range of conformational rearrangements. One of the most poorly understood transitions is translocation: the process by which transfer RNA (tRNA) molecules move between binding sites inside of the ribosome. The first step of translocation is the adoption of a "hybrid" configuration by the tRNAs, which is accompanied by large-scale rotations in the ribosomal subunits. To illuminate the relationship between these rearrangements, we apply MD simulations using a multi-basin structure-based (SMOG) model, together with targeted molecular dynamics protocols. From 120 simulated transitions, we demonstrate the viability of a particular route during P/E hybrid-state formation, where there is asynchronous movement along rotation and tRNA coordinates. These simulations not only suggest an ordering of events, but they highlight atomic interactions that may influence the kinetics of hybrid-state formation. From these simulations, we also identify steric features (H74 and surrounding residues) encountered during the hybrid transition, and observe that flexibility of the single-stranded 3'-CCA tail is essential for it to reach the endpoint. Together, these simulations provide a set of structural and energetic signatures that suggest strategies for modulating the physical-chemical properties of protein synthesis by the...

  4. Tailoring the Variational Implicit Solvent Method for New Challenges: Biomolecular Recognition and Assembly

    Directory of Open Access Journals (Sweden)

    Clarisse Gravina Ricci

    2018-02-01

    Predicting solvation free energies and describing the complex water behavior that plays an important role in essentially all biological processes is a major challenge from the computational standpoint. While an atomistic, explicit description of the solvent can turn out to be too expensive in large biomolecular systems, most implicit solvent methods fail to capture “dewetting” effects and heterogeneous hydration by relying on a pre-established (i.e., guessed) solvation interface. Here we focus on the Variational Implicit Solvent Method (VISM), an implicit solvent method that adds water “plasticity” back to the picture by formulating the solvation free energy as a functional of all possible solvation interfaces. We survey VISM's applications to the problem of molecular recognition and report some of the most recent efforts to tailor VISM for more challenging scenarios, with the ultimate goal of including thermal fluctuations into the framework. The advances reported herein pave the way to make VISM a uniquely successful approach to characterize complex solvation properties in the recognition and binding of large-scale biomolecular complexes.

  5. Tailoring the Variational Implicit Solvent Method for New Challenges: Biomolecular Recognition and Assembly

    Science.gov (United States)

    Ricci, Clarisse Gravina; Li, Bo; Cheng, Li-Tien; Dzubiella, Joachim; McCammon, J. Andrew

    2018-01-01

    Predicting solvation free energies and describing the complex water behavior that plays an important role in essentially all biological processes is a major challenge from the computational standpoint. While an atomistic, explicit description of the solvent can turn out to be too expensive in large biomolecular systems, most implicit solvent methods fail to capture “dewetting” effects and heterogeneous hydration by relying on a pre-established (i.e., guessed) solvation interface. Here we focus on the Variational Implicit Solvent Method, an implicit solvent method that adds water “plasticity” back to the picture by formulating the solvation free energy as a functional of all possible solvation interfaces. We survey VISM's applications to the problem of molecular recognition and report some of the most recent efforts to tailor VISM for more challenging scenarios, with the ultimate goal of including thermal fluctuations into the framework. The advances reported herein pave the way to make VISM a uniquely successful approach to characterize complex solvation properties in the recognition and binding of large-scale biomolecular complexes. PMID:29484300

  6. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D.

    Science.gov (United States)

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H

    2017-08-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with the concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.

  7. NMRbox: A Resource for Biomolecular NMR Computation.

    Science.gov (United States)

    Maciejewski, Mark W; Schuyler, Adam D; Gryk, Michael R; Moraru, Ion I; Romero, Pedro R; Ulrich, Eldon L; Eghbalnia, Hamid R; Livny, Miron; Delaglio, Frank; Hoch, Jeffrey C

    2017-04-25

    Advances in computation have been enabling many recent advances in biomolecular applications of NMR. Due to the wide diversity of applications of NMR, the number and variety of software packages for processing and analyzing NMR data is quite large, with labs relying on dozens, if not hundreds of software packages. Discovery, acquisition, installation, and maintenance of all these packages is a burdensome task. Because the majority of software packages originate in academic labs, persistence of the software is compromised when developers graduate, funding ceases, or investigators turn to other projects. To simplify access to and use of biomolecular NMR software, foster persistence, and enhance reproducibility of computational workflows, we have developed NMRbox, a shared resource for NMR software and computation. NMRbox employs virtualization to provide a comprehensive software environment preconfigured with hundreds of software packages, available as a downloadable virtual machine or as a Platform-as-a-Service supported by a dedicated compute cloud. Ongoing development includes a metadata harvester to regularize, annotate, and preserve workflows and facilitate and enhance data depositions to BioMagResBank, and tools for Bayesian inference to enhance the robustness and extensibility of computational analyses. In addition to facilitating use and preservation of the rich and dynamic software environment for biomolecular NMR, NMRbox fosters the development and deployment of a new class of metasoftware packages. NMRbox is freely available to not-for-profit users. Copyright © 2017 Biophysical Society. All rights reserved.

  8. Optimal number of coarse-grained sites in different components of large biomolecular complexes.

    Science.gov (United States)

    Sinitskiy, Anton V; Saunders, Marissa G; Voth, Gregory A

    2012-07-26

    The computational study of large biomolecular complexes (molecular machines, cytoskeletal filaments, etc.) is a formidable challenge facing computational biophysics and biology. To achieve biologically relevant length and time scales, coarse-grained (CG) models of such complexes usually must be built and employed. One of the important early stages in this approach is to determine an optimal number of CG sites in different constituents of a complex. This work presents a systematic approach to this problem. First, a universal scaling law is derived and numerically corroborated for the intensity of the intrasite (intradomain) thermal fluctuations as a function of the number of CG sites. Second, this result is used for derivation of the criterion for the optimal number of CG sites in different parts of a large multibiomolecule complex. In the zeroth-order approximation, this approach validates the empirical rule of taking one CG site per fixed number of atoms or residues in each biomolecule, previously widely used for smaller systems (e.g., individual biomolecules). The first-order corrections to this rule are derived and numerically checked by the case studies of the Escherichia coli ribosome and Arp2/3 actin filament junction. In different ribosomal proteins, the optimal number of amino acids per CG site is shown to differ by a factor of 3.5, and an even wider spread may exist in other large biomolecular complexes. Therefore, the method proposed in this paper is valuable for the optimal construction of CG models of such complexes.
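The zeroth-order rule discussed above (a fixed number of atoms or residues per CG site) can be sketched in a few lines: partition the residues of each component into consecutive groups and place one CG site at the mass-weighted centroid of each group. The coordinates and masses below are random placeholders, and in practice `residues_per_site` would differ between components (by up to the factor of ~3.5 noted in the abstract).

```python
import numpy as np

# Sketch of the zeroth-order CG mapping rule: a fixed number of
# residues per CG site, with each site placed at the mass-weighted
# centroid of its residues. Coordinates and masses are random
# placeholders, not a real biomolecule.

rng = np.random.default_rng(0)

def coarse_grain(coords, masses, residues_per_site):
    """Map per-residue coordinates to CG sites by center of mass."""
    sites = []
    for i in range(0, len(coords), residues_per_site):
        xyz = coords[i:i + residues_per_site]
        m = masses[i:i + residues_per_site]
        sites.append(np.average(xyz, axis=0, weights=m))
    return np.array(sites)

coords = rng.random((30, 3))        # 30 "residues" in 3-D
masses = rng.uniform(80, 200, 30)   # placeholder residue masses
cg = coarse_grain(coords, masses, residues_per_site=3)
print(cg.shape)   # 10 CG sites from 30 residues
```

The paper's contribution is precisely the first-order correction to this rule: choosing `residues_per_site` per component from the intrasite fluctuation scaling law rather than using one global value.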

  9. Orientation of biomolecular assemblies in a microfluidic jet

    International Nuclear Information System (INIS)

    Priebe, M; Kalbfleisch, S; Tolkiehn, M; Salditt, T; Koester, S; Abel, B; Davies, R J

    2010-01-01

    We have investigated multilamellar lipid assemblies in a microfluidic jet, operating at high shear rates of the order of 10⁷ s⁻¹. Compared to classical Couette cells or rheometers, the shear rate was increased by at least 2-3 orders of magnitude, and the sample volume was scaled down correspondingly. At the same time, the jet is characterized by high extensional stress due to elongational flow. A focused synchrotron x-ray beam was used to measure the structure and orientation of the lipid assemblies in the jet. The diffraction patterns indicate conventional multilamellar phases, aligned with the membrane normals oriented along the velocity gradient of the jet. The results indicate that the setup may be well suited for coherent diffractive imaging of oriented biomolecular assemblies and macromolecules at the future x-ray free electron laser (XFEL) sources.

  10. Quantifying the topography of the intrinsic energy landscape of flexible biomolecular recognition

    Science.gov (United States)

    Chu, Xiakun; Gan, Linfeng; Wang, Erkang; Wang, Jin

    2013-01-01

    Biomolecular functions are determined by their interactions with other molecules. Biomolecular recognition is often flexible and associated with large conformational changes involving both binding and folding. However, a global and physical understanding of the process remains challenging. Here, we quantified the intrinsic energy landscapes of flexible biomolecular recognition in terms of binding–folding dynamics for 15 homodimers by exploring the underlying density of states, using a structure-based model both with and without considering energetic roughness. By quantifying three individual effective intrinsic energy landscapes (one for interfacial binding, two for monomeric folding), the association mechanisms for flexible recognition of 15 homodimers can be classified into two-state cooperative “coupled binding–folding” and three-state noncooperative “folding prior to binding” scenarios. We found that the association mechanism of flexible biomolecular recognition relies on the interplay between the underlying effective intrinsic binding and folding energy landscapes. By quantifying the whole global intrinsic binding–folding energy landscapes, we found strong correlations between the landscape topography measure Λ (dimensionless ratio of energy gap versus roughness modulated by the configurational entropy) and the ratio of the thermodynamic stable temperature versus trapping temperature, as well as between Λ and binding kinetics. Therefore, the global energy landscape topography determines the binding–folding thermodynamics and kinetics, crucial for the feasibility and efficiency of realizing biomolecular function. We also found “U-shape” temperature-dependent kinetic behavior and a dynamical cross-over temperature for dividing exponential and nonexponential kinetics for two-state homodimers. Our study provides a unique way to bridge the gap between theory and experiments. PMID:23754431

  11. Computational methods to study the structure and dynamics of biomolecules and biomolecular processes from bioinformatics to molecular quantum mechanics

    CERN Document Server

    2014-01-01

    Since the second half of the 20th century, machine computations have played a critical role in science and engineering. Computer-based techniques have become especially important in molecular biology, since they often represent the only viable way to gain insights into the behavior of a biological system as a whole. The complexity of biological systems, which usually needs to be analyzed on different time- and size-scales and with different levels of accuracy, requires the application of different approaches, ranging from comparative analysis of sequences and structural databases, to the analysis of networks of interdependence between cell components and processes, through coarse-grained modeling to atomically detailed simulations, and finally to molecular quantum mechanics. This book provides a comprehensive overview of modern computer-based techniques for computing the structure, properties and dynamics of biomolecules and biomolecular processes. The twenty-two chapters, written by scientists from all over t...

  12. The universal statistical distributions of the affinity, equilibrium constants, kinetics and specificity in biomolecular recognition.

    Directory of Open Access Journals (Sweden)

    Xiliang Zheng

    2015-04-01

    We uncovered the universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring different ligands binding with a particular receptor. The results of the analytical studies are confirmed by microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of binding follow a log-normal distribution around the mean and a power-law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding; its optimization amounts to maximizing the ratio of the free energy gap between the native state and the average of non-native states versus the roughness measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power-law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function from different ligands binding with a specific receptor or, equivalently, a specific ligand binding with different receptors. The elucidation of the distributions of the kinetics and free energy has guiding roles in studying biomolecular recognition and function through small-molecule evolution and chemical genetics.
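One relationship implied by these results can be reproduced in a few lines: if binding free energies across a ligand ensemble are Gaussian near the mean, the corresponding equilibrium constants K = exp(−ΔG/kT) are log-normal there. The parameter values below are arbitrary illustrations, not fits to any dataset.

```python
import numpy as np

# Illustrative sketch: Gaussian-distributed binding free energies imply
# log-normally distributed equilibrium constants, since
# K = exp(-dG / kT) and hence log K is an affine function of dG.
# All parameter values are arbitrary.

rng = np.random.default_rng(1)
kT = 0.6                                    # ~kcal/mol at room temperature
dG = rng.normal(-7.0, 1.5, 100_000)         # binding free energies (Gaussian)
K = np.exp(-dG / kT)                        # equilibrium constants

# log K is then Gaussian with mean -mu/kT and standard deviation sigma/kT
logK = np.log(K)
print(round(logK.mean(), 1), round(logK.std(), 1))
```

The exponential and power-law tails reported in the abstract arise beyond this Gaussian regime, where the extreme-value statistics of the energy landscape take over.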

  13. A compact hard X-ray source for medical imaging and biomolecular studies

    International Nuclear Information System (INIS)

    Cline, D.B.; Green, M.A.; Kolonko, J.

    1995-01-01

    There are a large number of synchrotron light sources in the world. However, these sources are designed for physics, chemistry, and engineering studies. To our knowledge, none have been optimized for either medical imaging or biomolecular studies. There are special needs for these applications. We present here a preliminary design of a very compact source, small enough for a hospital or a biomolecular laboratory, that is suitable for these applications. (orig.)

  14. High-speed AFM for Studying Dynamic Biomolecular Processes

    Science.gov (United States)

    Ando, Toshio

    2008-03-01

    Biological molecules show their vital activities only in aqueous solutions. It has long been a dream in the biological sciences to directly observe biological macromolecules (protein, DNA) at work under physiological conditions, because such observation is the most direct route to understanding their dynamic behaviors and functional mechanisms. Optical microscopy lacks sufficient spatial resolution, and electron microscopy is not applicable to in-liquid samples. Atomic force microscopy (AFM) can visualize molecules in liquids at high resolution, but its imaging rate was too low to capture dynamic biological processes. This slow imaging rate is because AFM employs mechanical probes (cantilevers) and mechanical scanners to detect the sample height at each pixel. It is quite difficult to quickly move a mechanical device of macroscopic size with sub-nanometer accuracy without producing unwanted vibrations. It is also difficult to maintain the delicate contact between a probe tip and fragile samples. Two key techniques are required to realize high-speed AFM for biological research: fast feedback control to maintain a weak tip-sample interaction force, and a technique to suppress mechanical vibrations of the scanner. Various efforts have been carried out in the past decade to materialize high-speed AFM. The current high-speed AFM can capture video images at 30-60 frames/s for a scan range of 250 nm and 100 scan lines, without significantly disturbing weak biomolecular interactions. Our recent studies demonstrated that this new microscope can reveal biomolecular processes such as myosin V walking along actin tracks and the association/dissociation dynamics of chaperonin GroEL-GroES, which occur in a negatively cooperative manner. The capacity for nanometer-scale visualization of dynamic processes in liquids will transform biological research. In addition, it will open a new way to study dynamic chemical/physical processes of various phenomena that occur at liquid-solid interfaces.

  15. Modeling and simulation with operator scaling

    OpenAIRE

    Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan

    2010-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...
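
    Operator scaling with a diagonal exponent can be illustrated with independent α-stable coordinates, each carrying its own index. The sketch below is a simplification using the Chambers-Mallows-Stuck sampler, not the series representation the paper develops; it checks the coordinate-wise scaling law X(ct) =d c^E X(t) numerically:

```python
import numpy as np

def sym_stable(alpha, size, rng):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck method."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(1)
alphas = np.array([1.8, 1.2])    # one stability index per coordinate
c = 16                           # time-scaling factor

# Increment of each coordinate over time c = sum of c unit-time increments.
n = 50_000
incr_c = np.stack([sym_stable(a, (n, c), rng).sum(axis=1) for a in alphas])

# Operator scaling X(ct) =d c^E X(t) with E = diag(1/alpha_i): dividing
# coordinate i by c**(1/alpha_i) should recover a unit-time increment.
scale = c ** (1.0 / alphas)
rescaled = incr_c / scale[:, None]
unit = np.stack([sym_stable(a, n, rng) for a in alphas])
for i in range(2):
    q_r = np.quantile(rescaled[i], [0.25, 0.5, 0.75])
    q_u = np.quantile(unit[i], [0.25, 0.5, 0.75])
    print(i, np.round(q_r, 2), np.round(q_u, 2))  # quartiles agree per coordinate
```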

  16. The interplay of intrinsic and extrinsic bounded noises in biomolecular networks.

    Directory of Open Access Journals (Sweden)

    Giulio Caravagna

    Full Text Available After being considered a nuisance to be filtered out, it has recently become clear that biochemical noise plays a complex, often fully functional, role in a biomolecular network. The influence of intrinsic and extrinsic noises on biomolecular networks has been intensively investigated in the last ten years, though contributions on the co-presence of both are sparse. Extrinsic noise is usually modeled as an unbounded white or colored Gaussian stochastic process, even though realistic stochastic perturbations are clearly bounded. In this paper we consider Gillespie-like stochastic models of nonlinear networks, i.e. the intrinsic noise, where the model jump rates are affected by colored bounded extrinsic noises synthesized by a suitable biochemical state-dependent Langevin system. These systems are described by a master equation, and a simulation algorithm to analyze them is derived. This new modeling paradigm should enlarge the class of systems amenable to modeling. We investigated the influence of both the amplitude and the autocorrelation time of an extrinsic Sine-Wiener noise on: (i) the Michaelis-Menten approximation of noisy enzymatic reactions, which we show to be applicable also in the co-presence of both intrinsic and extrinsic noise, (ii) a model of an enzymatic futile cycle and (iii) a genetic toggle switch. In (ii) and (iii) we show that the presence of a bounded extrinsic noise induces qualitative modifications in the probability densities of the involved chemicals, where new modes emerge, thus suggesting a possible functional role of bounded noises.

  17. The interplay of intrinsic and extrinsic bounded noises in biomolecular networks.

    Science.gov (United States)

    Caravagna, Giulio; Mauri, Giancarlo; d'Onofrio, Alberto

    2013-01-01

    After being considered a nuisance to be filtered out, it has recently become clear that biochemical noise plays a complex, often fully functional, role in a biomolecular network. The influence of intrinsic and extrinsic noises on biomolecular networks has been intensively investigated in the last ten years, though contributions on the co-presence of both are sparse. Extrinsic noise is usually modeled as an unbounded white or colored Gaussian stochastic process, even though realistic stochastic perturbations are clearly bounded. In this paper we consider Gillespie-like stochastic models of nonlinear networks, i.e. the intrinsic noise, where the model jump rates are affected by colored bounded extrinsic noises synthesized by a suitable biochemical state-dependent Langevin system. These systems are described by a master equation, and a simulation algorithm to analyze them is derived. This new modeling paradigm should enlarge the class of systems amenable to modeling. We investigated the influence of both the amplitude and the autocorrelation time of an extrinsic Sine-Wiener noise on: (i) the Michaelis-Menten approximation of noisy enzymatic reactions, which we show to be applicable also in the co-presence of both intrinsic and extrinsic noise, (ii) a model of an enzymatic futile cycle and (iii) a genetic toggle switch. In (ii) and (iii) we show that the presence of a bounded extrinsic noise induces qualitative modifications in the probability densities of the involved chemicals, where new modes emerge, thus suggesting a possible functional role of bounded noises.
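
    A heavily simplified sketch of the ingredients named above: a Gillespie-type birth-death process whose production propensity is modulated by a bounded Sine-Wiener noise ξ(t) = B·sin(√(2/τ)·W(t)). Propensities are held fixed between reaction events, which is an approximation of the exact algorithm the paper derives, and the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Birth-death network 0 -> X -> 0 with the production propensity modulated
# by a bounded Sine-Wiener extrinsic noise xi(t) = B*sin(sqrt(2/tau)*W(t)),
# which stays in [-B, B] with autocorrelation time tau.
k, g, B, tau = 10.0, 0.1, 0.5, 5.0
t, n, w = 0.0, 0, 0.0
samples = []
while t < 2000.0:
    xi = B * np.sin(np.sqrt(2.0 / tau) * w)
    a1, a2 = k * (1.0 + xi), g * n       # propensities, held fixed per step
    a0 = a1 + a2
    dt = rng.exponential(1.0 / a0)
    w += rng.normal(0.0, np.sqrt(dt))    # advance the driving Wiener path
    n += 1 if rng.uniform() * a0 < a1 else -1
    t += dt
    if t > 200.0:                        # discard the initial transient
        samples.append(n)

print(f"stationary mean copy number ~ {np.mean(samples):.1f} (k/g = {k/g:.0f})")
```

Since the noise has zero mean, the stationary copy number stays near k/g while its distribution broadens and can reshape, which is the kind of effect the paper quantifies.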

  18. Improvements to the APBS biomolecular solvation software suite: Improvements to the APBS Software Suite

    Energy Technology Data Exchange (ETDEWEB)

    Jurrus, Elizabeth [Pacific Northwest National Laboratory, Richland Washington; Engel, Dave [Pacific Northwest National Laboratory, Richland Washington; Star, Keith [Pacific Northwest National Laboratory, Richland Washington; Monson, Kyle [Pacific Northwest National Laboratory, Richland Washington; Brandi, Juan [Pacific Northwest National Laboratory, Richland Washington; Felberg, Lisa E. [University of California, Berkeley California; Brookes, David H. [University of California, Berkeley California; Wilson, Leighton [University of Michigan, Ann Arbor Michigan; Chen, Jiahui [Southern Methodist University, Dallas Texas; Liles, Karina [Pacific Northwest National Laboratory, Richland Washington; Chun, Minju [Pacific Northwest National Laboratory, Richland Washington; Li, Peter [Pacific Northwest National Laboratory, Richland Washington; Gohara, David W. [St. Louis University, St. Louis Missouri; Dolinsky, Todd [FoodLogiQ, Durham North Carolina; Konecny, Robert [University of California San Diego, San Diego California; Koes, David R. [University of Pittsburgh, Pittsburgh Pennsylvania; Nielsen, Jens Erik [Protein Engineering, Novozymes A/S, Copenhagen Denmark; Head-Gordon, Teresa [University of California, Berkeley California; Geng, Weihua [Southern Methodist University, Dallas Texas; Krasny, Robert [University of Michigan, Ann Arbor Michigan; Wei, Guo-Wei [Michigan State University, East Lansing Michigan; Holst, Michael J. [University of California San Diego, San Diego California; McCammon, J. Andrew [University of California San Diego, San Diego California; Baker, Nathan A. [Pacific Northwest National Laboratory, Richland Washington; Brown University, Providence Rhode Island

    2017-10-24

    The Adaptive Poisson-Boltzmann Solver (APBS) software was developed to solve the equations of continuum electrostatics for large biomolecular assemblages and has had an impact on the study of a broad range of chemical, biological, and biomedical applications. APBS addresses three key technology challenges for understanding solvation and electrostatics in biomedical applications: accurate and efficient models for biomolecular solvation and electrostatics, robust and scalable software for applying those theories to biomolecular systems, and mechanisms for sharing and analyzing biomolecular electrostatics data in the scientific community. To address new research applications and advancing computational capabilities, we have continually updated APBS and its suite of accompanying software since its release in 2001. In this manuscript, we discuss the models and capabilities that have recently been implemented within the APBS software package, including: an analytical and a semi-analytical Poisson-Boltzmann solver, an optimized boundary element solver, a geometric flow solvation model, a graph theory-based algorithm for determining pKa values, and an improved web-based visualization tool for viewing electrostatics.
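
    For a single charged sphere in salt solution, the linearized Poisson-Boltzmann equation that numerical solvers like APBS handle for arbitrary geometries reduces to the closed-form Debye-Hückel solution. The sketch below (not APBS code; plain constants and the textbook formula) evaluates that analytic limit:

```python
import math

# Debye-Hückel (linearized Poisson-Boltzmann) potential around a charged
# sphere of radius a in a 1:1 salt solution -- the analytic limit that
# numerical PB solvers reproduce for this simple geometry.
eps0 = 8.8541878128e-12          # vacuum permittivity, F/m
e    = 1.602176634e-19           # elementary charge, C
kB_T = 1.380649e-23 * 298.15     # thermal energy, J at 298.15 K
NA   = 6.02214076e23

def debye_length(ionic_strength_molar, eps_r=78.5):
    """Debye screening length (m) for a 1:1 salt at the given molarity."""
    I = ionic_strength_molar * 1e3 * NA        # ions per m^3, per species
    return math.sqrt(eps0 * eps_r * kB_T / (2 * I * e * e))

def dh_potential(r, q=e, a=1e-9, I=0.15, eps_r=78.5):
    """Screened potential (V) at distance r (m) from a sphere of charge q."""
    kappa = 1.0 / debye_length(I, eps_r)
    return (q * math.exp(kappa * (a - r))
            / (4 * math.pi * eps0 * eps_r * r * (1 + kappa * a)))

lam = debye_length(0.15)
print(f"Debye length at 150 mM: {lam * 1e9:.2f} nm")   # ~0.78 nm
print(f"potential at 2 nm: {dh_potential(2e-9) * 1e3:.2f} mV")
```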

  19. THz time domain spectroscopy of biomolecular conformational modes

    International Nuclear Information System (INIS)

    Markelz, Andrea; Whitmire, Scott; Hillebrecht, Jay; Birge, Robert

    2002-01-01

    We discuss the use of terahertz time domain spectroscopy for studies of conformational flexibility and conformational change in biomolecules. Protein structural dynamics are vital to biological function, with protein flexibility affecting enzymatic reaction rates and sensory transduction cycling times. Conformational mode dynamics occur on the picosecond timescale, with the collective vibrational modes associated with these large-scale structural motions lying in the 1-100 cm⁻¹ range. We have performed THz time domain spectroscopy (TTDS) of several biomolecular systems to explore the sensitivity of TTDS in distinguishing different molecular species, different mutations within a single species and different conformations of a given biomolecule. We compare the measured absorbances to normal mode calculations and find that the TTDS absorbance reflects the density of normal modes determined by molecular mechanics calculations, and is sensitive to both conformation and mutation. These early studies demonstrate some of the advantages and limitations of using TTDS for the study of biomolecules.

  20. Nanogap biosensors for electrical and label-free detection of biomolecular interactions

    International Nuclear Information System (INIS)

    Kyu Kim, Sang; Cho, Hyunmin; Park, Hye-Jung; Kwon, Dohyoung; Min Lee, Jeong; Hyun Chung, Bong

    2009-01-01

    We demonstrate nanogap biosensors for electrical and label-free detection of biomolecular interactions. Parallel fabrication of nanometer-scale gaps has been achieved using a silicon anisotropic wet etching technique on a silicon-on-insulator (SOI) wafer with a finely controllable silicon device layer. Since silicon anisotropic wet etching resulted in a trapezoid-shaped structure whose end became narrower during the etching, the nanogap structure was simply fabricated on the device layer of an SOI wafer. The nanogap devices were individually addressable and a gap size of less than 60 nm was obtained. We demonstrate that the nanogap biosensors can electrically detect biomolecular interactions such as biotin/streptavidin and antigen/antibody pairs. The nanogap devices show a current increase when the proteins are bound to the surface. The current increases in proportion to the concentration of the molecules in the range of 100 fg ml⁻¹ to 100 ng ml⁻¹ at a 1 V bias. It is expected that the nanogap developed here could be a highly sensitive biosensor platform for label-free detection of biomolecular interactions.

  1. Selected topics in solution-phase biomolecular NMR spectroscopy

    Science.gov (United States)

    Kay, Lewis E.; Frydman, Lucio

    2017-05-01

    Solution bio-NMR spectroscopy continues to enjoy a preeminent role as an important tool in elucidating the structure and dynamics of a range of important biomolecules and in relating these to function. Equally impressive is how NMR continues to 'reinvent' itself through the efforts of many brilliant practitioners who ask increasingly demanding and increasingly biologically relevant questions. The ability to manipulate spin Hamiltonians - almost at will - to dissect the information of interest contributes to the success of the endeavor and ensures that the NMR technology will be well poised to contribute to as yet unknown frontiers in the future. As a tribute to the versatility of solution NMR in biomolecular studies and to the continued rapid advances in the field we present a Virtual Special Issue (VSI) that includes over 40 articles on various aspects of solution-state biomolecular NMR that have been published in the Journal of Magnetic Resonance in the past 7 years. These, in total, help celebrate the achievements of this vibrant field.

  2. Development of porous structure simulator for multi-scale simulation of irregular porous catalysts

    International Nuclear Information System (INIS)

    Koyama, Michihisa; Suzuki, Ai; Sahnoun, Riadh; Tsuboi, Hideyuki; Hatakeyama, Nozomu; Endou, Akira; Takaba, Hiromitsu; Kubo, Momoji; Del Carpio, Carlos A.; Miyamoto, Akira

    2008-01-01

    Efficient development of highly functional porous materials, used as catalysts in the automobile industry, demands meticulous knowledge of the nano-scale interface at the electronic and atomistic scale. However, it is often difficult to correlate the microscopic interfacial interactions with macroscopic characteristics of the materials; for instance, the interaction between a precious metal and its support oxide with the long-term sintering properties of the catalyst. Multi-scale computational chemistry approaches can contribute to bridging the gap between micro- and macroscopic characteristics of these materials; however, this type of multi-scale simulation has been difficult to apply, especially to porous materials. To overcome this problem, we have developed a novel mesoscopic approach based on a porous structure simulator. This simulator can automatically construct irregular porous structures on a computer, enabling simulations with complex meso-scale structures. Moreover, in this work we have developed a new method to simulate the long-term sintering properties of metal particles on porous catalysts. Finally, we have applied the method to the simulation of the sintering properties of Pt on an alumina support. This newly developed method has enabled us to propose a multi-scale simulation approach for porous catalysts.

  3. Sequence co-evolutionary information is a natural partner to minimally-frustrated models of biomolecular dynamics [version 1; referees: 3 approved

    Directory of Open Access Journals (Sweden)

    Jeffrey K Noel

    2016-01-01

    Full Text Available Experimentally derived structural constraints have been crucial to the implementation of computational models of biomolecular dynamics. For example, not only does crystallography provide essential starting points for molecular simulations, but high-resolution structures also permit the parameterization of simplified models. Since the energy landscapes for proteins and other biomolecules have been shown to be minimally frustrated and therefore funneled, these structure-based models have played a major role in understanding the mechanisms governing folding and many functions of these systems. Structural information, however, may be limited in many interesting cases. Recently, the statistical analysis of residue co-evolution in families of protein sequences has provided a complementary method of discovering residue-residue contact interactions involved in functional configurations. These functional configurations are often transient and difficult to capture experimentally. Thus, co-evolutionary information can be merged with that available for experimentally characterized low free-energy structures, in order to more fully capture the true underlying biomolecular energy landscape.

  4. An effective hierarchical model for the biomolecular covalent bond: an approach integrating artificial chemistry and an actual terrestrial life system.

    Science.gov (United States)

    Oohashi, Tsutomu; Ueno, Osamu; Maekawa, Tadao; Kawai, Norie; Nishina, Emi; Honda, Manabu

    2009-01-01

    Under the AChem paradigm and the programmed self-decomposition (PSD) model, we propose a hierarchical model for the biomolecular covalent bond (HBCB model). This model assumes that terrestrial organisms arrange their biomolecules in a hierarchical structure according to the energy strength of their covalent bonds. It also assumes that they have evolutionarily selected the PSD mechanism of turning biological polymers (BPs) into biological monomers (BMs) as an efficient biomolecular recycling strategy. We have examined the validity and effectiveness of the HBCB model by coordinating two complementary approaches: biological experiments using existing terrestrial life, and simulation experiments using an AChem system. Biological experiments have shown that terrestrial life possesses a PSD mechanism as an endergonic, genetically regulated process and that hydrolysis, which decomposes a BP into BMs, is one of the main processes of such a mechanism. In simulation experiments, we compared different virtual self-decomposition processes. The virtual species in which the self-decomposition process mainly involved covalent bond cleavage from a BP to BMs showed evolutionary superiority over other species in which the self-decomposition process involved cleavage from BP to classes lower than BM. These converging findings strongly support the existence of PSD and the validity and effectiveness of the HBCB model.

  5. DNA algorithms of implementing biomolecular databases on a biological computer.

    Science.gov (United States)

    Chang, Weng-Long; Vasilakos, Athanasios V

    2015-01-01

    In this paper, DNA algorithms are proposed to perform eight operations of relational algebra (calculus), which include Cartesian product, union, set difference, selection, projection, intersection, join, and division, on biomolecular relational databases.
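
    The eight operations listed above can be sketched on ordinary sets of tuples, which fixes the semantics the DNA algorithms must reproduce (illustrative relations, not the paper's DNA encoding):

```python
# The eight relational-algebra operations on Python sets of tuples,
# an in-silico stand-in for the paper's DNA-strand relations.
R = {("ada", 1), ("bob", 2)}
S = {("bob", 2), ("eve", 3)}

union        = R | S
difference   = R - S
intersection = R & S
cartesian    = {(r, s) for r in R for s in S}
selection    = {t for t in R if t[1] > 1}        # sigma_{id > 1}(R)
projection   = {t[0] for t in R}                 # pi_{name}(R)

# Natural join on the second attribute, and division R / T: names paired
# with every id appearing in T.
T = {(2,)}
join     = {(r[0], r[1], s[0]) for r in R for s in S if r[1] == s[1]}
division = {r[0] for r in R if all((r[0], t[0]) in R for t in T)}

print(sorted(projection))
```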

  6. Smartphones for cell and biomolecular detection.

    Science.gov (United States)

    Liu, Xiyuan; Lin, Tung-Yi; Lillehoj, Peter B

    2014-11-01

    Recent advances in biomedical science and technology have played a significant role in the development of new sensors and assays for cell and biomolecular detection. Generally, these efforts are aimed at reducing the complexity and costs associated with diagnostic testing so that it can be performed outside of a laboratory or hospital setting, requiring minimal equipment and user involvement. In particular, point-of-care (POC) testing offers immense potential for many important applications including medical diagnosis, environmental monitoring, food safety, and biosecurity. When coupled with smartphones, POC systems can offer portability, ease of use and enhanced functionality while maintaining performance. This review article focuses on recent advancements and developments in smartphone-based POC systems within the last 6 years with an emphasis on cell and biomolecular detection. These devices typically comprise multiple components, such as detectors, sample processors, disposable chips, batteries, and software, which are integrated with a commercial smartphone. One of the most important aspects of developing these systems is the integration of these components onto a compact and lightweight platform that requires minimal power. Researchers have demonstrated several promising approaches employing various detection schemes and device configurations, and it is expected that further developments in biosensors, battery technology and miniaturized electronics will enable smartphone-based POC technologies to become more mainstream tools in the scientific and biomedical communities.

  7. DNA-assisted swarm control in a biomolecular motor system.

    Science.gov (United States)

    Keya, Jakia Jannat; Suzuki, Ryuhei; Kabir, Arif Md Rashedul; Inoue, Daisuke; Asanuma, Hiroyuki; Sada, Kazuki; Hess, Henry; Kuzuya, Akinori; Kakugo, Akira

    2018-01-31

    In nature, swarming behavior has evolved repeatedly among motile organisms because it confers a variety of beneficial emergent properties. These include improved information gathering, protection from predators, and resource utilization. Some organisms, e.g., locusts, switch between solitary and swarm behavior in response to external stimuli. Aspects of swarming behavior have been demonstrated for motile supramolecular systems composed of biomolecular motors and cytoskeletal filaments, where cross-linkers induce large scale organization. The capabilities of such supramolecular systems may be further extended if the swarming behavior can be programmed and controlled. Here, we demonstrate that the swarming of DNA-functionalized microtubules (MTs) propelled by surface-adhered kinesin motors can be programmed and reversibly regulated by DNA signals. Emergent swarm behavior, such as translational and circular motion, can be selected by tuning the MT stiffness. Photoresponsive DNA containing azobenzene groups enables switching between solitary and swarm behavior in response to stimulation with visible or ultraviolet light.

  8. Biomolecular Structure Information from High-Speed Quantum Mechanical Electronic Spectra Calculation.

    Science.gov (United States)

    Seibert, Jakob; Bannwarth, Christoph; Grimme, Stefan

    2017-08-30

    A fully quantum mechanical (QM) treatment to calculate electronic absorption (UV-vis) and circular dichroism (CD) spectra of typical biomolecules with thousands of atoms is presented. With our highly efficient sTDA-xTB method, spectra averaged along structures from molecular dynamics (MD) simulations can be computed in a reasonable time frame on standard desktop computers. This way, nonequilibrium structure and conformational, as well as purely quantum mechanical effects like charge-transfer or exciton-coupling, are included. Different from other contemporary approaches, the entire system is treated quantum mechanically and neither fragmentation nor system-specific adjustment is necessary. Among the systems considered are a large DNA fragment, oligopeptides, and even entire proteins in an implicit solvent. We propose the method in tandem with experimental spectroscopy or X-ray studies for the elucidation of complex (bio)molecular structures including metallo-proteins like myoglobin.

  9. Molecular simulation study to examine the possibility of detecting collective motion in protein by inelastic neutron scattering

    International Nuclear Information System (INIS)

    Yasumasa, Joti; Nobuhiro, Go; Akio, Kitao

    2003-01-01

    Inelastic and quasielastic neutron scattering gives information on the dynamics of biological macromolecules. The combination of computer simulation with neutron scattering experiments allows us to characterize a wide range of dynamical phenomena in condensed-phase bio-molecular systems. In this work, the dynamic structure factors in (Q,ω)-space were calculated using the results of bio-molecular simulations. From the simulated inelastic neutron scattering spectra, we discuss the (Q,ω)-range and the detector resolution needed to observe function-related protein dynamics. (authors)

  10. Improving the Performance of the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2014-01-01

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation-based toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation management overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement, such as reducing the simulation overhead for running the NAS Parallel Benchmark suite inside the simulator from 1,020% to 238% for the conjugate gradient (CG) benchmark and from 102% to 0% for the embarrassingly parallel (EP) benchmark, as well as from 37,511% to 13,808% for CG and from 3,332% to 204% for EP with accurate process failure simulation.

  11. The HADDOCK web server for data-driven biomolecular docking

    NARCIS (Netherlands)

    de Vries, S.J.|info:eu-repo/dai/nl/304837717; van Dijk, M.|info:eu-repo/dai/nl/325811113; Bonvin, A.M.J.J.|info:eu-repo/dai/nl/113691238

    2010-01-01

    Computational docking is the prediction or modeling of the three-dimensional structure of a biomolecular complex, starting from the structures of the individual molecules in their free, unbound form. HADDOCK is a popular docking program that takes a data-driven approach to docking, with support for

  12. Scanning number and brightness yields absolute protein concentrations in live cells: a crucial parameter controlling functional bio-molecular interaction networks.

    Science.gov (United States)

    Papini, Christina; Royer, Catherine A

    2018-02-01

    Biological function results from properly timed bio-molecular interactions that transduce external or internal signals, resulting in any number of cellular fates, including the triggering of cell-state transitions (division, differentiation, transformation, apoptosis), metabolic homeostasis and adjustment to changing physical or nutritional environments, amongst many more. These bio-molecular interactions can be modulated by chemical modifications of proteins, nucleic acids, lipids and other small molecules. They can result in bio-molecular transport from one cellular compartment to another and often trigger specific enzyme activities involved in bio-molecular synthesis, modification or degradation. Clearly, a mechanistic understanding of any given high-level biological function requires a quantitative characterization of the principal bio-molecular interactions involved and how these may change dynamically. Such information can be obtained using fluctuation analysis, in particular scanning number and brightness, and used to build and test mechanistic models of the functional network to define which characteristics are most important for its regulation.
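
    The moment relations underlying number-and-brightness analysis, apparent brightness B = σ²/⟨I⟩ and apparent number n = ⟨I⟩²/σ², can be checked on a simulated intensity trace. This is a sketch that omits detector shot noise and the raster scanning itself; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# One pixel's intensity trace: the occupation number of the focal volume
# fluctuates as Poisson(N_true), each molecule contributing a molecular
# brightness of eps counts per dwell time (detector shot noise omitted).
N_true, eps = 8.0, 4.0
I = eps * rng.poisson(N_true, size=200_000)

# Moment analysis, as in scanning number and brightness:
#   apparent brightness B = var(I)/mean(I)     -> recovers eps
#   apparent number     n = mean(I)**2/var(I)  -> recovers N_true
B = I.var() / I.mean()
n = I.mean() ** 2 / I.var()
print(f"recovered brightness = {B:.2f}, number = {n:.2f}")
```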

  13. Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams

    Science.gov (United States)

    Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping

    2018-06-01

    A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but nonhomogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach using volumetric radiative properties in the equivalent participating medium and to the direct discrete-scale approach employing the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used for computations of the absorbed radiative fluxes and the apparent radiative behaviors of metal foams. The results obtained by the three approaches are in tenable agreement. The scale-coupled approach is fully validated in calculating the apparent radiative behaviors of metal foams composed of very absorbing to very reflective struts and that composed of very rough to very smooth struts. This new approach leads to a reduction in computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on the local geometry-dependent feature and at the same time the equivalent feature in an integrated simulation. This new approach is promising to combine the advantages of the continuous-scale approach (rapid calculations) and direct discrete-scale approach (accurate prediction of local radiative quantities).
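
    The core Monte Carlo ray-tracing step used above, sampling a ray's free path from the extinction coefficient, can be checked against Beer's law in the simplest continuous-scale case: a purely absorbing homogeneous slab. This sketch is independent of the foam geometry itself:

```python
import math
import random

# One Monte Carlo ray-tracing step: a ray's free path in a homogeneous
# absorbing medium is sampled as s = -ln(u)/kappa. For a purely absorbing
# slab, the transmitted fraction must match Beer's law T = exp(-kappa*L).
random.seed(4)
kappa, L, n_rays = 200.0, 0.01, 200_000   # 1/m and m: optical depth = 2

transmitted = sum(
    1 for _ in range(n_rays)
    if -math.log(1.0 - random.random()) / kappa > L   # free path beyond slab
)

mc = transmitted / n_rays
print(f"MC transmittance {mc:.4f} vs Beer-Lambert {math.exp(-kappa * L):.4f}")
```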

  14. Biomolecular logic systems: applications to biosensors and bioactuators

    Science.gov (United States)

    Katz, Evgeny

    2014-05-01

    The paper presents an overview of recent advances in biosensors and bioactuators based on the biocomputing concept. Novel biosensors digitally process multiple biochemical signals through Boolean logic networks of coupled biomolecular reactions and produce output in the form of a YES/NO response. Compared to traditional single-analyte sensing devices, the biocomputing approach enables high-fidelity multi-analyte biosensing, particularly beneficial for biomedical applications. Multi-signal digital biosensors thus promise advances in the rapid diagnosis and treatment of diseases by processing complex patterns of physiological biomarkers. Specifically, they can provide timely detection of and alerts to medical emergencies, along with an immediate therapeutic intervention. Application of the biocomputing concept has been successfully demonstrated for systems performing logic analysis of biomarkers corresponding to different injuries, particularly exemplified for liver injury. Wide-ranging applications of multi-analyte digital biosensors in medicine, environmental monitoring and homeland security are anticipated. "Smart" bioactuators, for example for signal-triggered drug release, were designed by interfacing switchable electrodes and biocomputing systems. Integration of novel biosensing and bioactuating systems with biomolecular information processing systems holds promise for further scientific advances and numerous practical applications.

  15. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    The recent trend toward large-scale simulations of fusion plasmas and processing plasmas is briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to analyses of processing plasmas. (author)

  16. hPDB – Haskell library for processing atomic biomolecular structures in protein data bank format

    OpenAIRE

    Gajda, Michał Jan

    2013-01-01

    Background: The Protein Data Bank file format is used for the majority of biomolecular data available today. Haskell is a lazy functional language that enjoys a high-level class-based type system, a growing collection of useful libraries and a reputation for efficiency. Findings: I present a fast library for processing biomolecular data in the Protein Data Bank format. I present benchmarks indicating that this library is faster than other frequently used Protein Data Bank parsing programs. The proposed library also features a convenient iterator mechanism and a simple API modeled after BioPython.

  17. Large-scale simulations with distributed computing: Asymptotic scaling of ballistic deposition

    International Nuclear Information System (INIS)

    Farnudi, Bahman; Vvedensky, Dimitri D

    2011-01-01

    Extensive kinetic Monte Carlo simulations are reported for ballistic deposition (BD) in (1 + 1) dimensions. The large system size observed for the onset of asymptotic scaling (L ≅ 2^12) explains the widespread discrepancies in previous reports for exponents of BD in one, and likely in higher, dimensions. The exponents obtained directly from our simulations, α = 0.499 ± 0.004 and β = 0.336 ± 0.004, capture the exact values α = 1/2 and β = 1/3 for the one-dimensional Kardar-Parisi-Zhang equation. An analysis of our simulations suggests a criterion for identifying the onset of true asymptotic scaling, which enables a more informed evaluation of exponents for BD in higher dimensions. These simulations were made possible by the Simulation through Social Networking project at the Institute for Advanced Studies in Basic Sciences in 2007, which was re-launched in November 2010.
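
    The BD growth rule underlying such simulations (a particle dropped onto a random column sticks at its first contact, so it can lodge against a taller neighbour) is compact enough to sketch. The lattice size, drop counts and the `width` helper below are illustrative choices, not the authors' code:

    ```python
    import random

    def ballistic_deposition(L, n_drops, seed=0):
        """(1+1)-dimensional ballistic deposition on L sites with periodic boundaries."""
        rng = random.Random(seed)
        h = [0] * L
        for _ in range(n_drops):
            i = rng.randrange(L)
            # The particle sticks on top of column i or to the side of a taller
            # neighbour, whichever contact comes first (the BD growth rule).
            h[i] = max(h[(i - 1) % L], h[i] + 1, h[(i + 1) % L])
        return h

    def width(h):
        """RMS interface width w = sqrt(<h^2> - <h>^2), whose growth w ~ t^beta
        and saturation w ~ L^alpha define the scaling exponents."""
        m = sum(h) / len(h)
        return (sum((x - m) ** 2 for x in h) / len(h)) ** 0.5
    ```

    On small lattices the measured β drifts well below 1/3, which is precisely the finite-size effect the paper quantifies (onset of asymptotic scaling near L ≅ 2^12).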

  18. Multiscale Persistent Functions for Biomolecular Structure Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Kelin [Nanyang Technological University (Singapore). Division of Mathematical Sciences, School of Physical, Mathematical Sciences and School of Biological Sciences; Li, Zhiming [Central China Normal University, Wuhan (China). Key Laboratory of Quark and Lepton Physics (MOE) and Institute of Particle Physics; Mu, Lin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Computer Science and Mathematics Division

    2017-11-02

    In this paper, we introduce multiscale persistent functions for biomolecular structure characterization. The essential idea is to combine our multiscale rigidity functions (MRFs) with persistent homology analysis, so as to construct a series of multiscale persistent functions, particularly multiscale persistent entropies, for structure characterization. To clarify the fundamental idea of our method, the multiscale persistent entropy (MPE) model is discussed in great detail. Mathematically, unlike the previous persistent entropy (Chintakunta et al. in Pattern Recognit 48(2):391–401, 2015; Merelli et al. in Entropy 17(10):6872–6892, 2015; Rucco et al. in: Proceedings of ECCS 2014, Springer, pp 117–128, 2016), a special resolution parameter is incorporated into our model. Various scales can be achieved by tuning its value. Physically, our MPE can be used in conformational entropy evaluation. More specifically, it is found that our method incorporates a natural classification scheme. This is achieved through a density filtration of an MRF built from angular distributions. To further validate our model, a systematic comparison with the traditional entropy evaluation model is done. Additionally, it is found that our model is able to preserve the intrinsic topological features of biomolecular data much better than traditional approaches, particularly for resolutions in the intermediate range. Moreover, by comparing with traditional entropies from various grid sizes, bond angle-based methods and a persistent homology-based support vector machine method (Cang et al. in Mol Based Math Biol 3:140–162, 2015), we find that our MPE method gives the best results in terms of average true positive rate in a classic protein structure classification test. More interestingly, all-alpha and all-beta protein classes can be clearly separated from each other with zero error only in our model. Finally, a special protein structure index (PSI) is proposed, for the first
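
    The persistent-entropy ingredient of the MPE model follows the cited Chintakunta/Merelli definition: normalize the bar lengths of a persistence barcode into a probability distribution and take its Shannon entropy. A minimal sketch of that base quantity (the paper's multiscale resolution parameter is not modeled here):

    ```python
    import math

    def persistent_entropy(barcode):
        """Shannon entropy of a persistence barcode.

        Each interval (birth, death) contributes p_i = l_i / L, where
        l_i = death - birth and L is the total bar length."""
        lengths = [d - b for b, d in barcode if d > b]
        total = sum(lengths)
        return -sum(l / total * math.log(l / total) for l in lengths)
    ```

    Four equal bars give log 4, while a barcode dominated by one long bar has entropy near zero; this sensitivity to the distribution of topological features is what makes the quantity useful as a structure descriptor.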

  19. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  20. An Overview of Biomolecular Event Extraction from Scientific Documents.

    Science.gov (United States)

    Vanegas, Jorge A; Matos, Sérgio; González, Fabio; Oliveira, José L

    2015-01-01

    This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed.

  1. An Overview of Biomolecular Event Extraction from Scientific Documents

    Directory of Open Access Journals (Sweden)

    Jorge A. Vanegas

    2015-01-01

    Full Text Available This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed.

  2. Role of biomolecular logic systems in biosensors and bioactuators

    Science.gov (United States)

    Mailloux, Shay; Katz, Evgeny

    2014-09-01

    An overview of recent advances in biosensors and bioactuators based on biocomputing systems is presented. Biosensors digitally process multiple biochemical signals through Boolean logic networks of coupled biomolecular reactions and produce an output in the form of a YES/NO response. Compared to traditional single-analyte sensing devices, the biocomputing approach enables high-fidelity multianalyte biosensing, which is particularly beneficial for biomedical applications. Multisignal digital biosensors thus promise advances in rapid diagnosis and treatment of diseases by processing complex patterns of physiological biomarkers. Specifically, they can provide timely detection and alert medical personnel of medical emergencies together with immediate therapeutic intervention. Application of the biocomputing concept has been successfully demonstrated for systems performing logic analysis of biomarkers corresponding to different injuries, particularly as exemplified for liver injury. Wide-ranging applications of multianalyte digital biosensors in medicine, environmental monitoring, and homeland security are anticipated. "Smart" bioactuators, for signal-triggered drug release, for example, were designed by interfacing switchable electrodes with biocomputing systems. Integration of biosensing and bioactuating systems with biomolecular information processing systems advances the potential for further scientific innovations and various practical applications.
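
    The Boolean processing described here reduces, at its core, to thresholding each biomarker into a digital signal and feeding the bits through logic gates. A toy sketch (the marker names and cut-off values are hypothetical, chosen only to illustrate the AND-gate logic, not taken from the reviewed systems):

    ```python
    def digitize(concentration, threshold):
        """Map an analogue biomarker level to a Boolean (digital) signal."""
        return concentration >= threshold

    def liver_injury_alert(alt, ldh, alt_cut=100.0, ldh_cut=300.0):
        """AND gate over two hypothetical liver-injury markers: the biosensor
        answers YES only when both inputs exceed their cut-offs."""
        return digitize(alt, alt_cut) and digitize(ldh, ldh_cut)
    ```

    A single elevated marker (e.g. from a non-hepatic cause) no longer triggers the alarm, which is the fidelity gain of multi-analyte logic over single-analyte sensing.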

  3. Micro and Nanotechnologies Enhanced Biomolecular Sensing

    Directory of Open Access Journals (Sweden)

    Tza-Huei Wang

    2013-07-01

    Full Text Available This editorial summarizes some of the recent advances of micro- and nanotechnology-based tools and devices for biomolecular detection. These include the incorporation of nanomaterials into a sensor surface or directly interfacing them with molecular probes to enhance target detection via more rapid and sensitive responses, and the use of self-assembled organic/inorganic nanocomposites that exhibit exceptional spectroscopic properties to enable facile homogeneous assays with efficient binding kinetics. Discussions also include some insight into the microfluidic principles behind the development of an integrated sample preparation and biosensor platform toward a miniaturized and fully functional system for point-of-care applications.

  4. Single-molecule imaging and manipulation of biomolecular machines and systems.

    Science.gov (United States)

    Iino, Ryota; Iida, Tatsuya; Nakamura, Akihiko; Saita, Ei-Ichiro; You, Huijuan; Sako, Yasushi

    2018-02-01

    Biological molecular machines support various activities and behaviors of cells, such as energy production, signal transduction, growth, differentiation, and migration. We provide an overview of single-molecule imaging methods involving both small and large probes used to monitor the dynamic motions of molecular machines in vitro (purified proteins) and in living cells, and single-molecule manipulation methods used to measure the forces, mechanical properties and responses of biomolecules. We also introduce several examples of single-molecule analysis, focusing primarily on motor proteins and signal transduction systems. Single-molecule analysis is a powerful approach to unveil the operational mechanisms both of individual molecular machines and of systems consisting of many molecular machines. Quantitative, high-resolution single-molecule analyses of biomolecular systems at the various hierarchies of life will help to answer our fundamental question: "What is life?" This article is part of a Special Issue entitled "Biophysical Exploration of Dynamical Ordering of Biomolecular Systems" edited by Dr. Koichi Kato. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Changes in biomolecular profile in a single nucleolus during cell fixation.

    Science.gov (United States)

    Kuzmin, Andrey N; Pliss, Artem; Prasad, Paras N

    2014-11-04

    Fixation of biological samples is an essential technique applied in order to "freeze" the intracellular molecular content in time. However, fixation induces changes in cellular molecular structure, which mask the physiological distribution of biomolecules and bias interpretation of results. Accurate, sensitive, and comprehensive characterization of the changes in biomolecular composition occurring during fixation is crucial for proper analysis of experimental data. Here we apply biomolecular component analysis to Raman spectra measured in the same nucleoli of HeLa cells before and after fixation by either formaldehyde solution or chilled ethanol. It is found that fixation in formaldehyde does not strongly affect the Raman spectra of nucleolar biomolecular components, but may significantly decrease the nucleolar RNA concentration. At the same time, ethanol fixation leads to a proportional increase (up to 40%) in the concentrations of nucleolar proteins and RNA, most likely due to cell shrinkage occurring in the presence of the coagulant fixative. Ethanol fixation also triggers changes in the composition of the nucleolar proteome, as indicated by an overall reduction of the α-helical structure of proteins and an increase in the concentration of proteins containing the β-sheet conformation. We conclude that cross-linking fixation is the more appropriate protocol for mapping of proteins in situ, while ethanol fixation is preferable for studies of RNA-containing macromolecules. We supplemented our quantitative Raman spectroscopic measurements with mapping of the protein and lipid macromolecular groups in live and fixed cells using coherent anti-Stokes Raman scattering nonlinear optical imaging.

  6. Electrochemical sensor for multiplex screening of genetically modified DNA: identification of biotech crops by logic-based biomolecular analysis.

    Science.gov (United States)

    Liao, Wei-Ching; Chuang, Min-Chieh; Ho, Ja-An Annie

    2013-12-15

    Genetic modification (GM), one of the modern biomolecular engineering technologies, has been deemed a profitable strategy in the fight against global starvation. Yet rapid and reliable analytical methods for evaluating the quality and potential risk of the resulting GM products are lacking. We herein present a biomolecular analytical system constructed from distinct biochemical activities to expedite the computational detection of genetically modified organisms (GMOs). The computational mechanism provides an alternative to the complex procedures commonly involved in the screening of GMOs. Because the bioanalytical system is capable of processing promoter, coding, and species genes, affirmative interpretations succeed in identifying a specified GM event in both electrochemical and optical fashions. The biomolecular computational assay detects genetically modified DNA below the sub-nanomolar level and is found to be interference-free in the abundant presence of non-GM DNA. This bioanalytical system, furthermore, operates in an array format for multiplex screening against variable GM events. Such a biomolecular computational assay and biosensor holds great promise for rapid, cost-effective, and high-fidelity screening of GMOs. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Simulations of atomic-scale sliding friction

    DEFF Research Database (Denmark)

    Sørensen, Mads Reinholdt; Jacobsen, Karsten Wedel; Stoltze, Per

    1996-01-01

    Simulation studies of atomic-scale sliding friction have been performed for a number of tip-surface and surface-surface contacts consisting of copper atoms. Both geometrically very simple tip-surface structures and more realistic interface necks formed by simulated annealing have been studied. … Kinetic friction is observed to be caused by atomic-scale stick and slip, which occurs by nucleation and subsequent motion of dislocations, preferably between close-packed {111} planes. Stick and slip seems to occur in different situations. For single-crystalline contacts without grain boundaries … pinning of atoms near the boundary of the interface and is therefore more easily observed for smaller contacts. Depending on crystal orientation and load, frictional wear can also be seen in the simulations. In particular, for the annealed interface necks which model contacts created by scanning tunneling…

  8. Affinity Capillary Electrophoresis – A Powerful Tool to Investigate Biomolecular Interactions

    Czech Academy of Sciences Publication Activity Database

    Kašička, Václav

    2017-01-01

    Vol. 30, No. 5 (2017), p. 248 ISSN 1471-6577 Institutional support: RVO:61388963 Keywords: capillary affinity electrophoresis * biomolecular interactions * binding constants Subject RIV: CB - Analytical Chemistry, Separation OECD field: Analytical chemistry Impact factor: 0.663, year: 2016

  9. Multi-scale simulation for homogenization of cement media

    International Nuclear Information System (INIS)

    Abballe, T.

    2011-01-01

    To solve diffusion problems on cement media, two scales must be taken into account: a fine scale, which describes the micrometer-wide microstructures present in the media, and a work scale, which is usually a few meters long. Direct numerical simulations are almost impossible because of the huge computational resources (memory, CPU time) required to resolve both scales at the same time. To overcome this problem, we present in this thesis multi-scale resolution methods using both Finite Volumes and Finite Elements, along with their efficient implementations. More precisely, we developed a multi-scale simulation tool which uses the SALOME platform to mesh domains and post-process data, and the parallel calculation code MPCube to solve problems. This SALOME/MPCube tool can run multi-scale simulations automatically and efficiently. The parallel structure of computer clusters can be used to dispatch the more time-consuming tasks. We optimized most functions to account for the specificities of cement media. We present numerical experiments on various cement media samples, e.g. mortar and cement paste. From these results, we manage to compute a numerical effective diffusivity of our cement media and to reconstruct a fine-scale solution. (author) [fr

  10. Nuclear magnetic resonance provides a quantitative description of protein conformational flexibility on physiologically important time scales.

    Science.gov (United States)

    Salmon, Loïc; Bouvignies, Guillaume; Markwick, Phineus; Blackledge, Martin

    2011-04-12

    A complete description of biomolecular activity requires an understanding of the nature and the role of protein conformational dynamics. In recent years, novel nuclear magnetic resonance-based techniques that provide hitherto inaccessible detail concerning biomolecular motions occurring on physiologically important time scales have emerged. Residual dipolar couplings (RDCs) provide precise information about time- and ensemble-averaged structural and dynamic processes with correlation times up to the millisecond and thereby encode key information for understanding biological activity. In this review, we present the application of two very different approaches to the quantitative description of protein motion using RDCs. The first is purely analytical, describing backbone dynamics in terms of diffusive motions of each peptide plane, using extensive statistical analysis to validate the proposed dynamic modes. The second is based on restraint-free accelerated molecular dynamics simulation, providing statistically sampled free energy-weighted ensembles that describe conformational fluctuations occurring on time scales from pico- to milliseconds, at atomic resolution. Remarkably, the results from these two approaches converge closely in terms of distribution and absolute amplitude of motions, suggesting that this kind of combination of analytical and numerical models is now capable of providing a unified description of protein conformational dynamics in solution.

  11. Probabilistic Simulation of Multi-Scale Composite Behavior

    Science.gov (United States)

    Chamis, Christos C.

    2012-01-01

    A methodology is developed to computationally assess the non-deterministic composite response at all composite scales (from micro to structural) due to the uncertainties in the constituent (fiber and matrix) properties, in the fabrication process and in structural variables (primitive variables). The methodology is computationally efficient for simulating the probability distributions of composite behavior, such as material properties, laminate and structural responses. By-products of the methodology are probabilistic sensitivities of the composite primitive variables. The methodology has been implemented in the computer codes PICAN (Probabilistic Integrated Composite ANalyzer) and IPACS (Integrated Probabilistic Assessment of Composite Structures). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in typical composite laminates and comparing the results with the Monte Carlo simulation method. Available experimental data on composite laminate behavior at all scales fall within the scatters predicted by PICAN. Multi-scaling is extended to simulate probabilistic thermo-mechanical fatigue and the probabilistic design of a composite radome in order to illustrate its versatility. Results show that probabilistic fatigue can be simulated for different temperature amplitudes and for different cyclic stress magnitudes. Results also show that laminate configurations can be selected to increase the radome reliability by several orders of magnitude without increasing the laminate thickness--a unique feature of structural composites. The old reference denotes that nothing fundamental has been done since that time.
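
    The core of such a probabilistic assessment is ordinary Monte Carlo propagation: sample the primitive variables, push each sample through the micromechanics, and read off the distribution of the response. A sketch for the simplest case, the longitudinal ply modulus by the rule of mixtures (the constituent statistics below are invented for illustration; PICAN/IPACS use faster and more elaborate methods than brute-force sampling):

    ```python
    import random
    import statistics

    def rule_of_mixtures(Ef, Em, Vf):
        """Longitudinal modulus of a unidirectional ply: E1 = Vf*Ef + (1-Vf)*Em."""
        return Vf * Ef + (1.0 - Vf) * Em

    def mc_ply_modulus(n=20000, seed=1):
        """Propagate normal scatter in fiber/matrix moduli and fiber volume
        fraction into a distribution of the ply modulus (Pa)."""
        rng = random.Random(seed)
        samples = [
            rule_of_mixtures(
                rng.gauss(230e9, 10e9),   # fiber modulus (hypothetical scatter)
                rng.gauss(3.5e9, 0.2e9),  # matrix modulus
                rng.gauss(0.60, 0.02),    # fiber volume fraction
            )
            for _ in range(n)
        ]
        return statistics.mean(samples), statistics.stdev(samples)
    ```

    The resulting standard deviation is the probabilistic "scatter" against which methods like PICAN are benchmarked in the abstract.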

  12. An optics-based variable-temperature assay system for characterizing thermodynamics of biomolecular reactions on solid support

    Energy Technology Data Exchange (ETDEWEB)

    Fei, Yiyan; Landry, James P.; Zhu, X. D., E-mail: xdzhu@physics.ucdavis.edu [Department of Physics, University of California, One Shields Avenue, Davis, California 95616 (United States); Li, Yanhong; Yu, Hai; Lau, Kam; Huang, Shengshu; Chokhawala, Harshal A.; Chen, Xi [Department of Chemistry, University of California, One Shields Avenue, Davis, California 95616 (United States)

    2013-11-15

    A biological state is equilibrium of multiple concurrent biomolecular reactions. The relative importance of these reactions depends on physiological temperature typically between 10 °C and 50 °C. Experimentally the temperature dependence of binding reaction constants reveals thermodynamics and thus details of these biomolecular processes. We developed a variable-temperature opto-fluidic system for real-time measurement of multiple (400–10 000) biomolecular binding reactions on solid supports from 10 °C to 60 °C within ±0.1 °C. We illustrate the performance of this system with investigation of binding reactions of plant lectins (carbohydrate-binding proteins) with 24 synthetic glycans (i.e., carbohydrates). We found that the lectin-glycan reactions in general can be enthalpy-driven, entropy-driven, or both, and water molecules play critical roles in the thermodynamics of these reactions.
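
    The thermodynamic read-out the abstract refers to is the classical van't Hoff analysis: measure the binding constant K at several temperatures and fit ln K against 1/T; the slope gives ΔH and the intercept gives ΔS, which is what distinguishes enthalpy-driven from entropy-driven binding. A sketch with a plain least-squares fit (function name and data are illustrative):

    ```python
    import math

    R = 8.314  # gas constant, J mol^-1 K^-1

    def vant_hoff(temps_K, Ks):
        """Linear van't Hoff fit: ln K = -dH/R * (1/T) + dS/R.

        Returns (dH, dS) in J/mol and J/(mol K) via ordinary least squares
        on the points (1/T, ln K)."""
        xs = [1.0 / t for t in temps_K]
        ys = [math.log(k) for k in Ks]
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                sum((x - mx) ** 2 for x in xs)
        intercept = my - slope * mx
        return -slope * R, intercept * R
    ```

    With hundreds of binding curves measured in parallel from 10 °C to 60 °C, this fit can be applied per spot to classify each lectin-glycan pair as enthalpy-driven (ΔH < 0 dominant) or entropy-driven (TΔS dominant).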

  13. PREFACE: Radiation Damage in Biomolecular Systems (RADAM07)

    Science.gov (United States)

    McGuigan, Kevin G.

    2008-03-01

    The annual meeting of the COST P9 Action `Radiation damage in biomolecular systems' took place from 19-22 June 2007 in the Royal College of Surgeons in Ireland, in Dublin. The conference was structured into 5 Working Group sessions: electrons and biomolecular interactions; ions and biomolecular interactions; radiation in physiological environments; theoretical developments for radiation damage; and track structure in cells. Each of the five working groups presented two sessions of invited talks. Professor Ron Chesser of Texas Tech University, USA gave a riveting plenary talk on `Mechanisms of Adaptive Radiation Responses in Mammals at Chernobyl' and the implications his work has for the Linear-No-Threshold model of radiation damage. In addition, this was the first RADAM meeting to take place after the Alexander Litvinenko affair, and we were fortunate to have one of the leading scientists involved in the European response, Professor Herwig Paretzke of the GSF-Institut für Strahlenschutz, Neuherberg, Germany, available to speak. The remaining contributions were presented in the poster session. A total of 72 scientific contributions (32 oral, 40 poster), presented by 97 participants from 22 different countries, gave an overview of current progress in the 5 different subfields. A 1-day pre-conference `Early Researcher Tutorial Workshop' on the same topic took place on 19 June, attended by more than 40 postgrads, postdocs and senior researchers. Twenty papers, based on these reports, are included in this volume of Journal of Physics: Conference Series. All the contributions in this volume were fully refereed, and they represent a sample of the courses, invited talks and contributed talks presented during RADAM07. The interdisciplinary RADAM07 conference brought together researchers from a variety of different fields with a common interest in biomolecular radiation damage. This is reflected by the disparate backgrounds of the authors of the papers presented in these proceedings.

  14. HPDB-Haskell library for processing atomic biomolecular structures in Protein Data Bank format.

    Science.gov (United States)

    Gajda, Michał Jan

    2013-11-23

    Protein DataBank file format is used for the majority of biomolecular data available today. Haskell is a lazy functional language that enjoys a high-level class-based type system, a growing collection of useful libraries and a reputation for efficiency. I present a fast library for processing biomolecular data in the Protein Data Bank format. I present benchmarks indicating that this library is faster than other frequently used Protein Data Bank parsing programs. The proposed library also features a convenient iterator mechanism, and a simple API modeled after BioPython. I set a new standard for convenience and efficiency of Protein Data Bank processing in a Haskell library, and release it to open source.
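
    PDB ATOM/HETATM records are fixed-column, which is why parsers like hPDB can be both simple and fast. A minimal Python sketch of the column layout (an illustration of the file format only, not the hPDB API):

    ```python
    def parse_atom(line):
        """Parse one fixed-column ATOM/HETATM record from a PDB file.

        Column ranges follow the PDB format specification: serial in
        columns 7-11, atom name 13-16, residue name 18-20, chain 22,
        residue number 23-26, and x/y/z in 31-38, 39-46, 47-54."""
        if not line.startswith(("ATOM", "HETATM")):
            return None
        return {
            "serial": int(line[6:11]),
            "name": line[12:16].strip(),
            "res_name": line[17:20].strip(),
            "chain": line[21],
            "res_seq": int(line[22:26]),
            "x": float(line[30:38]),
            "y": float(line[38:46]),
            "z": float(line[46:54]),
        }
    ```

    Because the fields are positional rather than whitespace-delimited, slicing by column index is both the correct and the fastest approach; a lazy language like Haskell can additionally stream records without holding the whole file in memory.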

  15. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, real-time simulation of large-scale floods is very important for flood-prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.

  16. A Framework for Parallel Numerical Simulations on Multi-Scale Geometries

    KAUST Repository

    Varduhn, Vasco

    2012-06-01

    In this paper, an approach to performing numerical multi-scale simulations on finely detailed geometries is presented. In particular, the focus lies on the generation of sufficiently fine mesh representations, where a resolution of tens of millions of voxels is inevitable in order to represent the geometry adequately. Furthermore, the propagation of boundary conditions is investigated by using simulation results on the coarser simulation scale as input boundary conditions on the next finer scale. Finally, the applicability of our approach is shown on a two-phase simulation of flooding scenarios in urban structures, running from a city-wide scale to a finely detailed indoor scale on feature-rich building geometries. © 2012 IEEE.

  17. The latest full-scale PWR simulator in Japan

    International Nuclear Information System (INIS)

    Nishimuru, Y.; Tagi, H.; Nakabayashi, T.

    2004-01-01

    The latest MHI full-scale simulator has an excellent system configuration, in both flexibility and extensibility, and achieves highly sophisticated PWR simulation performance through the adoption of the CANAC-II and PRETTY codes. It also provides instructive animated displays of the plant's internal status, such as RCS conditions. Further, the simulation has been verified, after evaluation of its accuracy, against a functional examination at a model plant and against scale-model test results for a two-phase flow event. Thus, the simulator can support sophisticated and broad training courses on PWR operation. (author)

  18. Computer Programming and Biomolecular Structure Studies: A Step beyond Internet Bioinformatics

    Science.gov (United States)

    Likic, Vladimir A.

    2006-01-01

    This article describes the experience of teaching structural bioinformatics to third year undergraduate students in a subject titled "Biomolecular Structure and Bioinformatics." Students were introduced to computer programming and used this knowledge in a practical application as an alternative to the well established Internet bioinformatics…

  19. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah; Carns, Philip; Ross, Robert; Li, Jianping Kelvin; Ma, Kwan-Liu

    2016-11-13

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and improve simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a

  20. Dynamic and label-free high-throughput detection of biomolecular interactions based on phase-shift interferometry

    Science.gov (United States)

    Li, Qiang; Huang, Guoliang; Gan, Wupeng; Chen, Shengyi

    2009-08-01

    Biomolecular interactions can be detected by many established technologies such as fluorescence imaging, surface plasmon resonance (SPR) [1-4], interferometry, and radioactive labeling of the analyte. In this study, we have designed and constructed a label-free, real-time sensing platform and its operating imaging instrument that detects interactions using optical phase differences from the accumulation of biological material on solid substrates. This system allows us to monitor biomolecular interactions in real time and quantify concentration changes during micro-mixing processes by measuring changes in the optical path difference (OPD). This simple interferometric technology monitors the optical phase difference resulting from accumulated biomolecular mass. A label-free protein chip forming a 4×4 probe array was designed and fabricated using a commercial microarray robot spotter on solid substrates. Two positive control probe lines of BSA (Bovine Serum Albumin) and two experimental lines of human IgG and goat IgG were used. The binding of multiple protein targets was performed and continuously detected using this label-free, real-time sensing platform.

  1. Scanning probe and optical tweezer investigations of biomolecular interactions

    International Nuclear Information System (INIS)

    Rigby-Singleton, Shellie

    2002-01-01

    A complex array of intermolecular forces controls the interactions between and within biological molecules. The desire to empirically explore these fundamental forces has led to the development of several biophysical techniques. Of these, the atomic force microscope (AFM) and optical tweezers have been employed throughout this thesis to monitor the intermolecular forces involved in biomolecular interactions. The AFM is a well-established force-sensing technique capable of measuring biomolecular interactions at the single-molecule level. However, its versatility had not previously been extended to the investigation of a drug-enzyme complex. The energy landscape for the force-induced dissociation of the DHFR-methotrexate complex was studied, revealing an energy barrier to dissociation located ∼0.3 nm from the bound state. Unfortunately, the AFM has a limited range of accessible loading rates, and in order to profile the complete energy landscape alternative force-sensing instrumentation must be considered, for example the BFP and optical tweezers. Thus, this thesis outlines the development and construction of an optical trap capable of measuring intermolecular forces between biomolecules at the single-molecule level. To demonstrate the force-sensing abilities of the optical setup, proof-of-principle measurements were performed investigating the interactions between proteins and polymer surfaces subjected to varying degrees of argon plasma treatment. Complementary data were obtained from measurements performed independently with the AFM. Changes in polymer resistance to proteins in response to changes in polymer surface chemistry were detected utilising both AFM and optical tweezers measurements. Finally, the AFM and optical tweezers were employed as ultrasensitive biosensors. Single-molecule investigations of the antibody-antigen interaction between the cardiac troponin I marker and its complementary antibody reveal the impact therapeutic concentrations of heparin have
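    The quoted barrier distance (∼0.3 nm) fits the standard Bell-Evans picture of force-induced dissociation, in which the most probable rupture force grows logarithmically with loading rate. A rough sketch, assuming an illustrative zero-force off-rate (not a value from the thesis):

```python
import math

KBT = 4.114  # thermal energy at room temperature, pN*nm

def most_probable_rupture_force(loading_rate_pN_per_s, k_off=1.0, x_beta=0.3):
    """Bell-Evans estimate of the most probable rupture force (pN) for a
    single bond pulled at constant loading rate r:
        F* = (kBT / x_beta) * ln(r * x_beta / (k_off * kBT))
    x_beta is the bound-state-to-barrier distance (nm); 0.3 nm echoes the
    DHFR-methotrexate value quoted above. k_off (1/s) is illustrative."""
    return (KBT / x_beta) * math.log(
        loading_rate_pN_per_s * x_beta / (k_off * KBT))

# Rupture force grows logarithmically with loading rate (dynamic force spectrum)
for r in (1e2, 1e3, 1e4):   # pN/s
    print(r, round(most_probable_rupture_force(r), 1))
```

    This logarithmic dependence is why a single instrument's limited loading-rate range samples only part of the energy landscape, motivating the combination of AFM with optical tweezers or the BFP.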

  2. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Alan [The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); Harlen, Oliver G. [University of Leeds, Leeds LS2 9JT (United Kingdom); Harris, Sarah A., E-mail: s.a.harris@leeds.ac.uk [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Leeds, Leeds LS2 9JT (United Kingdom); Khalid, Syma; Leung, Yuk Ming [University of Southampton, Southampton SO17 1BJ (United Kingdom); Lonsdale, Richard [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Philipps-Universität Marburg, Hans-Meerwein Strasse, 35032 Marburg (Germany); Mulholland, Adrian J. [University of Bristol, Bristol BS8 1TS (United Kingdom); Pearson, Arwen R. [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Hamburg, Hamburg (Germany); Read, Daniel J.; Richardson, Robin A. [University of Leeds, Leeds LS2 9JT (United Kingdom); The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom)

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  3. Visualizing functional motions of membrane transporters with molecular dynamics simulations.

    Science.gov (United States)

    Shaikh, Saher A; Li, Jing; Enkavi, Giray; Wen, Po-Chao; Huang, Zhijian; Tajkhorshid, Emad

    2013-01-29

    Computational modeling and molecular simulation techniques have become an integral part of modern molecular research. Various areas of molecular sciences continue to benefit from, indeed rely on, the unparalleled spatial and temporal resolutions offered by these technologies, to provide a more complete picture of the molecular problems at hand. Because of the continuous development of more efficient algorithms harvesting ever-expanding computational resources, and the emergence of more advanced and novel theories and methodologies, the scope of computational studies has expanded significantly over the past decade, now including much larger molecular systems and far more complex molecular phenomena. Among the various computer modeling techniques, the application of molecular dynamics (MD) simulation and related techniques has particularly drawn attention in biomolecular research, because of the ability of the method to describe the dynamical nature of the molecular systems and thereby to provide a more realistic representation, which is often needed for understanding fundamental molecular properties. The method has proven to be remarkably successful in capturing molecular events and structural transitions highly relevant to the function and/or physicochemical properties of biomolecular systems. Herein, after a brief introduction to the method of MD, we use a number of membrane transport proteins studied in our laboratory as examples to showcase the scope and applicability of the method and its power in characterizing molecular motions of various magnitudes and time scales that are involved in the function of this important class of membrane proteins.

  4. A new scaling approach for the mesoscale simulation of magnetic domain structures using Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Radhakrishnan, B., E-mail: radhakrishnb@ornl.gov; Eisenbach, M.; Burress, T.A.

    2017-06-15

    Highlights: • Developed new scaling technique for dipole–dipole interaction energy. • Developed new scaling technique for exchange interaction energy. • Used scaling laws to extend atomistic simulations to micrometer length scale. • Demonstrated transition from mono-domain to vortex magnetic structure. • Simulated domain wall width and transition length scale agree with experiments. - Abstract: A new scaling approach has been proposed for the spin exchange and the dipole–dipole interaction energy as a function of the system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimentally measured values for an Fe–Si alloy. The transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.
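    The Monte Carlo evolution of magnetic moments referred to in the abstract is, at its core, Metropolis sampling. A minimal 2D Ising sketch for illustration only: the paper uses scaled exchange and dipole–dipole energies for continuous moments, whereas this toy keeps just a nearest-neighbour exchange term.

```python
import math, random

def metropolis_sweep(spins, J, T, rng):
    """One Metropolis sweep of a 2D Ising lattice with periodic boundaries
    and nearest-neighbour exchange J (units where k_B = 1)."""
    n = len(spins)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j] +
              spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2.0 * J * spins[i][j] * nb   # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] *= -1

rng = random.Random(0)
n = 16
spins = [[1] * n for _ in range(n)]       # ordered (single-domain) start
for _ in range(200):
    metropolis_sweep(spins, J=1.0, T=1.5, rng=rng)   # T below Tc ~ 2.27
m = abs(sum(sum(row) for row in spins)) / n**2
print(round(m, 2))   # magnetisation stays close to 1 below Tc
```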

  5. Application of biomolecular recognition via magnetic nanoparticle in nanobiotechnology

    Science.gov (United States)

    Shen, Wei-Zheng; Cetinel, Sibel; Montemagno, Carlo

    2018-05-01

    The marriage of biomolecular recognition and magnetic nanoparticle creates tremendous opportunities in the development of advanced technology both in academic research and in industrial sectors. In this paper, we review current progress on the magnetic nanoparticle-biomolecule hybrid systems, particularly employing the recognition pairs of DNA-DNA, DNA-protein, protein-protein, and protein-inorganics in several nanobiotechnology application areas, including molecular biology, diagnostics, medical treatment, industrial biocatalysts, and environmental separations.

  6. Multi-scaled explorations of binding-induced folding of intrinsically disordered protein inhibitor IA3 to its target enzyme.

    Directory of Open Access Journals (Sweden)

    Jin Wang

    2011-04-01

    Full Text Available Biomolecular function is realized by recognition, and increasing evidence shows that recognition is determined not only by structure but also by flexibility and dynamics. We explored a biomolecular recognition process that involves a major conformational change - protein folding. In particular, we explored the binding-induced folding of IA3, an intrinsically disordered protein that blocks the active site cleft of the yeast aspartic proteinase saccharopepsin (YPrA) by folding its own N-terminal residues into an amphipathic alpha helix. We developed a multi-scaled approach that explores the underlying mechanism by combining structure-based molecular dynamics simulations at the residue level with a stochastic path method at the atomic level. Both the free energy profile and the associated kinetic paths reveal a common scheme whereby IA3 binds to its target enzyme prior to folding itself into a helix. This theoretical result is consistent with recent time-resolved experiments. Furthermore, exploration of the detailed trajectories reveals the important roles of non-native interactions in the initial binding that occurs prior to IA3 folding. In contrast to the common view that non-native interactions contribute only to the roughness of landscapes and impede binding, the non-native interactions here facilitate binding by significantly reducing the entropic search space in the landscape. The information gained from multi-scaled simulations of the folding of this intrinsically disordered protein in the presence of its binding target may prove useful in the design of novel inhibitors of aspartic proteinases.

  7. Synthetic Approach to biomolecular science by cyborg supramolecular chemistry.

    Science.gov (United States)

    Kurihara, Kensuke; Matsuo, Muneyuki; Yamaguchi, Takumi; Sato, Sota

    2018-02-01

    Imitating the essence of living systems via synthetic chemistry approaches has long been attempted. With progress in supramolecular chemistry, it has become possible to synthesize molecules of a size and complexity close to those of biomacromolecules. Recently, the combination of precisely designed supramolecules with biomolecules has generated structural platforms for designing and creating unique molecular systems. Bridging synthetic chemistry and biomolecular science is also producing methodologies for the creation of artificial cellular systems. This paper provides an overview of the recently expanding interdisciplinary research that fuses artificial molecules with biomolecules and can thereby deepen our understanding of the dynamical ordering of biomolecules. Bottom-up approaches based on precise chemical design, synthesis and hybridization of artificial molecules with biological materials have enabled the construction of sophisticated platforms with the fundamental functions of living systems. These effective hybrid, "molecular cyborg", approaches enable not only the establishment of dynamic systems mimicking nature, and thus well-defined models for biophysical understanding, but also the creation of systems with highly advanced, integrated functions. This article is part of a Special Issue entitled "Biophysical Exploration of Dynamical Ordering of Biomolecular Systems" edited by Dr. Koichi Kato. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Formalizing Knowledge in Multi-Scale Agent-Based Simulations.

    Science.gov (United States)

    Somogyi, Endre; Sluka, James P; Glazier, James A

    2016-10-01

    Multi-scale, agent-based simulations of cellular and tissue biology are increasingly common. These simulations combine and integrate a range of components from different domains. Simulations continuously create, destroy and reorganize constituent elements causing their interactions to dynamically change. For example, the multi-cellular tissue development process coordinates molecular, cellular and tissue scale objects with biochemical, biomechanical, spatial and behavioral processes to form a dynamic network. Different domain specific languages can describe these components in isolation, but cannot describe their interactions. No current programming language is designed to represent in human readable and reusable form the domain specific knowledge contained in these components and interactions. We present a new hybrid programming language paradigm that naturally expresses the complex multi-scale objects and dynamic interactions in a unified way and allows domain knowledge to be captured, searched, formalized, extracted and reused.

  9. Long-range interactions and parallel scalability in molecular simulations

    NARCIS (Netherlands)

    Patra, M.; Hyvönen, M.T.; Falck, E.; Sabouri-Ghomi, M.; Vattulainen, I.; Karttunen, M.E.J.

    2007-01-01

    Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform

  10. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    OpenAIRE

    Qiang Liu; Yi Qin; Guodong Li

    2018-01-01

    Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...

  11. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations.

    Science.gov (United States)

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

    One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.
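    The closed-form limit that PB solvers generalise to arbitrary molecular geometries is the linearized (Debye-Hückel) solution for a point charge. A small sketch of how salt screens a biomolecular charge; all parameter values are illustrative, and the function names are invented for this example:

```python
import math

def debye_length_nm(ionic_strength_M):
    """Debye screening length in water (nm), using the common closed form
    lambda_D ~ 0.304 / sqrt(I) for a 1:1 electrolyte at 25 C."""
    return 0.304 / math.sqrt(ionic_strength_M)

def linearized_pb_potential(q_e, r_nm, ionic_strength_M, eps_r=78.5):
    """Potential (mV) of a point charge q (elementary charges) in an
    electrolyte from the linearized PB (Debye-Hueckel) solution:
        phi(r) = q / (4 pi eps0 eps_r r) * exp(-r / lambda_D)"""
    e = 1.602176634e-19        # elementary charge, C
    eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
    r_m = r_nm * 1e-9
    lam = debye_length_nm(ionic_strength_M)
    phi_V = q_e * e / (4 * math.pi * eps0 * eps_r * r_m) * math.exp(-r_nm / lam)
    return phi_V * 1e3         # V -> mV

# Screening: the same unit charge seen at 1 nm in 10 mM vs 150 mM salt
print(round(linearized_pb_potential(1, 1.0, 0.010), 2),
      round(linearized_pb_potential(1, 1.0, 0.150), 2))
```

    The higher-salt potential is substantially smaller at the same distance, which is the screening effect a full PB solver captures numerically for real molecular surfaces.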

  12. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations

    Science.gov (United States)

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

    One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.

  13. Simulation in full-scale mock-ups: an ergonomics evaluation method?

    DEFF Research Database (Denmark)

    Andersen, Simone Nyholm; Broberg, Ole

    2014-01-01

    This paper presents an exploratory study of four simulation sessions in full-scale mock-ups of future hospital facilities.

  14. Biomolecular solid state NMR with magic-angle spinning at 25K.

    Science.gov (United States)

    Thurber, Kent R; Tycko, Robert

    2008-12-01

    A magic-angle spinning (MAS) probe has been constructed which allows the sample to be cooled with helium, while the MAS bearing and drive gases are nitrogen. The sample can be cooled to 25K using roughly 3 L/h of liquid helium, while the 4-mm diameter rotor spins at 6.7 kHz with good stability (+/-5 Hz) for many hours. Proton decoupling fields up to at least 130 kHz can be applied. This helium-cooled MAS probe enables a variety of one-dimensional and two-dimensional NMR experiments on biomolecular solids and other materials at low temperatures, with signal-to-noise proportional to 1/T. We show examples of low-temperature (13)C NMR data for two biomolecular samples, namely the peptide Abeta(14-23) in the form of amyloid fibrils and the protein HP35 in frozen glycerol/water solution. Issues related to temperature calibration, spin-lattice relaxation at low temperatures, paramagnetic doping of frozen solutions, and (13)C MAS NMR linewidths are discussed.
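    The 1/T signal scaling mentioned above follows from the Curie-law temperature dependence of the equilibrium nuclear polarisation. A one-line estimate of the gain from cooling, neglecting any temperature dependence of probe and electronics noise:

```python
def snr_gain(T_low, T_high=298.0):
    """In the high-temperature limit the equilibrium nuclear polarisation,
    and hence the NMR signal, scales as 1/T (Curie law), so cooling from
    T_high to T_low gains roughly a factor T_high / T_low. This is a rough
    estimate: noise changes in the probe circuit are neglected."""
    return T_high / T_low

print(round(snr_gain(25.0), 1))   # roughly 12x more signal at 25 K than at 298 K
```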

  15. Investigating biomolecular recognition at the cell surface using atomic force microscopy.

    Science.gov (United States)

    Wang, Congzhou; Yadavalli, Vamsi K

    2014-05-01

    Probing the interaction forces that drive biomolecular recognition on cell surfaces is essential for understanding diverse biological processes. Force spectroscopy has been a widely used dynamic analytical technique, allowing measurement of such interactions at the molecular and cellular level. The capabilities of working under near physiological environments, combined with excellent force and lateral resolution make atomic force microscopy (AFM)-based force spectroscopy a powerful approach to measure biomolecular interaction forces not only on non-biological substrates, but also on soft, dynamic cell surfaces. Over the last few years, AFM-based force spectroscopy has provided biophysical insight into how biomolecules on cell surfaces interact with each other and induce relevant biological processes. In this review, we focus on describing the technique of force spectroscopy using the AFM, specifically in the context of probing cell surfaces. We summarize recent progress in understanding the recognition and interactions between macromolecules that may be found at cell surfaces from a force spectroscopy perspective. We further discuss the challenges and future prospects of the application of this versatile technique. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Integrative NMR for biomolecular research

    International Nuclear Information System (INIS)

    Lee, Woonghee; Cornilescu, Gabriel; Dashti, Hesam; Eghbalnia, Hamid R.; Tonelli, Marco; Westler, William M.; Butcher, Samuel E.; Henzler-Wildman, Katherine A.; Markley, John L.

    2016-01-01

    NMR spectroscopy is a powerful technique for determining structural and functional features of biomolecules in physiological solution as well as for observing their intermolecular interactions in real-time. However, complex steps associated with its practice have made the approach daunting for non-specialists. We introduce an NMR platform that makes biomolecular NMR spectroscopy much more accessible by integrating tools, databases, web services, and video tutorials that can be launched by simple installation of NMRFAM software packages or using a cross-platform virtual machine that can be run on any standard laptop or desktop computer. The software package can be downloaded freely from the NMRFAM software download page ( http://pine.nmrfam.wisc.edu/download-packages.html ), and detailed instructions are available from the Integrative NMR Video Tutorial page ( http://pine.nmrfam.wisc.edu/integrative.html ).

  17. Integrative NMR for biomolecular research

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Woonghee, E-mail: whlee@nmrfam.wisc.edu; Cornilescu, Gabriel; Dashti, Hesam; Eghbalnia, Hamid R.; Tonelli, Marco; Westler, William M.; Butcher, Samuel E.; Henzler-Wildman, Katherine A.; Markley, John L., E-mail: markley@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison and Biochemistry Department (United States)

    2016-04-15

    NMR spectroscopy is a powerful technique for determining structural and functional features of biomolecules in physiological solution as well as for observing their intermolecular interactions in real-time. However, complex steps associated with its practice have made the approach daunting for non-specialists. We introduce an NMR platform that makes biomolecular NMR spectroscopy much more accessible by integrating tools, databases, web services, and video tutorials that can be launched by simple installation of NMRFAM software packages or using a cross-platform virtual machine that can be run on any standard laptop or desktop computer. The software package can be downloaded freely from the NMRFAM software download page ( http://pine.nmrfam.wisc.edu/download-packages.html ), and detailed instructions are available from the Integrative NMR Video Tutorial page ( http://pine.nmrfam.wisc.edu/integrative.html ).

  18. From micro-scale 3D simulations to macro-scale model of periodic porous media

    Science.gov (United States)

    Crevacore, Eleonora; Tosco, Tiziana; Marchisio, Daniele; Sethi, Rajandrea; Messina, Francesca

    2015-04-01

    In environmental engineering, the transport of colloidal suspensions in porous media is studied to understand the fate of potentially harmful nano-particles and to design new remediation technologies. In this perspective, averaging techniques applied to micro-scale numerical simulations are a powerful tool to extrapolate accurate macro-scale models. Choosing two simplified packing configurations of soil grains and starting from a single elementary cell (module), it is possible to take advantage of the periodicity of the structures to reduce the computational cost of full 3D simulations. Steady-state flow simulations for an incompressible fluid in the laminar regime are implemented. Transport simulations are based on the pore-scale advection-diffusion equation, which can be enriched by introducing the Stokes velocity (to account for gravity) and the interception mechanism. Simulations are carried out on a domain composed of several elementary modules, which serve as control volumes in a finite volume method for the macro-scale model. The periodicity of the medium implies the periodicity of the flow field, which is of great importance during the up-scaling procedure, allowing relevant simplifications. Micro-scale numerical data are treated in order to compute the mean concentration (volume and area averages) and fluxes on each module. The simulation results are used to compare the micro-scale averaged equation to the integral form of the macroscopic one, making a distinction between those terms that can be computed exactly and those for which a closure is needed. Of particular interest is the investigation of the origin of macro-scale terms such as dispersion and tortuosity, trying to describe them with known micro-scale quantities. Traditionally, the study of colloidal transport introduces many simplifications, such as ultra-simplified geometries that usually account for a single collector. Gradual removal of such hypotheses leads to a

  19. GROMACS 4.5: A high-throughput and highly parallel open source molecular simulation toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Pronk, Sander [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Pall, Szilard [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Schulz, Roland [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Larsson, Per [Univ. of Virginia, Charlottesville, VA (United States); Bjelkmar, Par [Science for Life Lab., Stockholm (Sweden); Stockholm Univ., Stockholm (Sweden); Apostolov, Rossen [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Shirts, Michael R. [Univ. of Virginia, Charlottesville, VA (United States); Smith, Jeremy C. [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kasson, Peter M. [Univ. of Virginia, Charlottesville, VA (United States); van der Spoel, David [Science for Life Lab., Stockholm (Sweden); Uppsala Univ., Uppsala (Sweden); Hess, Berk [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Lindahl, Erik [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Stockholm Univ., Stockholm (Sweden)

    2013-02-13

    Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on massive scale in clusters, web servers, distributed computing or cloud resources. Here we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built-in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations.

  20. Versatile single-molecule multi-color excitation and detection fluorescence setup for studying biomolecular dynamics

    KAUST Repository

    Sobhy, M. A.; Elshenawy, M. M.; Takahashi, Masateru; Whitman, B. H.; Walter, N. G.; Hamdan, S. M.

    2011-01-01

    Single-molecule fluorescence imaging is at the forefront of tools applied to study biomolecular dynamics both in vitro and in vivo. The ability of the single-molecule fluorescence microscope to conduct simultaneous multi-color excitation

  1. Computer simulations for the nano-scale

    International Nuclear Information System (INIS)

    Stich, I.

    2007-01-01

    A review of methods for computations at the nano-scale is presented. The paper should provide a convenient starting point into computations for the nano-scale as well as a more in-depth presentation for those already working in the field of atomic/molecular-scale modeling. The argument is divided into chapters covering the methods for the description of (i) electrons and (ii) ions, and (iii) techniques for efficiently solving the underlying equations. A fairly broad view is taken, covering the Hartree-Fock approximation, density functional techniques and quantum Monte-Carlo techniques for electrons. The customary quantum chemistry methods, such as post-Hartree-Fock techniques, are only briefly mentioned. Descriptions of both classical and quantum ions are presented. The techniques cover Ehrenfest, Born-Oppenheimer, and Car-Parrinello dynamics. The strong and weak points, of both principal and technical nature, are analyzed. In the second part we introduce a number of applications to demonstrate the different approximations and techniques introduced in the first part. They cover a wide range of applications such as non-simple liquids, surfaces, molecule-surface interactions, applications in nanotechnology, etc. These more in-depth presentations, while certainly not exhaustive, should provide information on technical aspects of the simulations, typical parameters used, and ways of analyzing the huge amounts of data generated in these large-scale supercomputer simulations. (author)

  2. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    Full Text Available A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing are involved in this framework: multiple local distributed computing environments connected by a local network to form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models using various scales. These correlated multi-scale structural system tasks are distributed among clusters and connected together in a multi-level hierarchy and then coordinated over the internet. The software framework for supporting the multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to perform the proposed concept. The simulation results show that the software framework can increase the speedup of the structural analysis. Based on this result, the proposed grid-computing framework is suitable for performing multi-scale structural analysis simulations.
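    The speedup behaviour of such a decomposition can be framed with Amdahl's law: the detailed component-model simulations parallelise across cluster nodes while the global coordination step stays serial. A sketch with illustrative numbers, not the paper's measurements:

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's-law estimate of speedup when a fraction of the work (the
    component-model simulations) is spread over n workers while the rest
    (global model coordination) remains serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# 90% of the work in detailed component models, spread over 2..16 cluster nodes
for n in (2, 4, 8, 16):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

    The diminishing returns visible as n grows illustrate why the serial coordination overhead, and the internet-level communication between clusters, bound the achievable speedup of a cluster-to-cluster grid.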

  3. Ion induced fragmentation of biomolecular systems at low collision energies

    International Nuclear Information System (INIS)

    Bernigaud, V; Adoui, L; Chesnel, J Y; Rangama, J; Huber, B A; Manil, B; Alvarado, F; Bari, S; Hoekstra, R; Postma, J; Schlathoelter, T

    2009-01-01

    In this paper, we present results of different collision experiments between multiply charged ions at low collision energies (in the keV region) and biomolecular systems. This kind of interaction makes it possible to remove electrons from the biomolecule without transferring a large amount of vibrational excitation energy. Nevertheless, following the ionization of the target, fragmentation of biomolecular species may occur. The main objective of this work is to study the physical processes involved in the dissociation of highly electronically excited systems. In order to elucidate the intrinsic properties of certain biomolecules (porphyrins and amino acids) we have performed experiments in the gas phase with isolated systems. The obtained results demonstrate the high stability of porphyrins after electron removal. Furthermore, a dependence of the fragmentation pattern produced by multiply charged ions on the isomeric structure of the alanine molecule has been shown. By considering the presence of other surrounding biomolecules (clusters of nucleobases), a strong influence of the environment of the biomolecule on the fragmentation channels has been clearly demonstrated. In the thymine and uracil case, this result is explained by the formation of hydrogen bonds between O and H atoms, which is known to favor planar cluster geometries.

  4. Architecture of transcriptional regulatory circuits is knitted over the topology of bio-molecular interaction networks

    DEFF Research Database (Denmark)

    Soberano de Oliveira, Ana Paula; Patil, Kiran Raosaheb; Nielsen, Jens

    2008-01-01

    Background: Uncovering the operating principles underlying cellular processes by using 'omics' data is often a difficult task due to the high-dimensionality of the solution space that spans all interactions among the bio-molecules under consideration. A rational way to overcome this problem is to use the topology of bio-molecular interaction networks in order to constrain the solution space. Such approaches systematically integrate the existing biological knowledge with the 'omics' data. Results: Here we introduce a hypothesis-driven method that integrates bio-molecular network topology with transcriptome data, thereby allowing the identification of key biological features (Reporter Features) around which transcriptional changes are significantly concentrated. We have combined transcriptome data with different biological networks in order to identify Reporter Gene Ontologies, Reporter Transcription...

  5. Evaluation of the airway of the SimMan full-scale patient simulator

    DEFF Research Database (Denmark)

    Hesselfeldt, R; Kristensen, M S; Rasmussen, L S

    2005-01-01

    SimMan is a full-scale patient simulator, capable of simulating normal and pathological airways. The performance of SimMan has never been critically evaluated.

  6. A compact imaging spectroscopic system for biomolecular detections on plasmonic chips.

    Science.gov (United States)

    Lo, Shu-Cheng; Lin, En-Hung; Wei, Pei-Kuen; Tsai, Wan-Shao

    2016-10-17

    In this study, we demonstrate a compact imaging spectroscopic system for high-throughput detection of biomolecular interactions on plasmonic chips, based on a curved grating as the key element of light diffraction and light focusing. Both the curved grating and the plasmonic chips are fabricated on flexible plastic substrates using a gas-assisted thermal-embossing method. A fiber-coupled broadband light source and a camera are included in the system. Spectral resolution within 1 nm is achieved in sensing environmental refractive-index solutions and protein binding. The detected sensitivities of the plasmonic chip are comparable with those of a commercial spectrometer. An extra one-dimensional scanning stage enables high-throughput detection of protein binding on a designed plasmonic chip consisting of several nanoslit arrays with different periods. The detected resonance wavelengths match well with the grating equation under an air environment. Wavelength shifts between 1 and 9 nm are detected for antigens of various concentrations binding with antibodies. A simple, mass-producible and cost-effective method has been demonstrated on the imaging spectroscopic system for real-time, label-free, highly sensitive and high-throughput screening of biomolecular interactions.
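
    The grating-equation matching mentioned above can be made concrete with a short sketch. Assuming first-order coupling at normal incidence (and a metal permittivity much larger in magnitude than that of the dielectric), the plasmonic resonance of a periodic nanoslit array is roughly the period times the refractive index of the environment; the function name and period values below are illustrative, not taken from the paper.

```python
# Hedged first-order grating-coupling estimate: at normal incidence with
# |eps_metal| >> eps_dielectric, lambda_res ~ period * n_environment / order.

def resonance_wavelength_nm(period_nm, n_environment=1.0, order=1):
    """Approximate order-th plasmonic resonance of a nanoslit array (nm)."""
    return period_nm * n_environment / order

# Arrays with different periods give distinct resonances in air (n ~ 1.0):
for period_nm in (500, 550, 600):
    print(period_nm, "->", resonance_wavelength_nm(period_nm), "nm")
```

    A refractive-index change of the environment shifts the resonance proportionally, which is the sensing mechanism the chip exploits.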

  7. Characteristics of Tornado-Like Vortices Simulated in a Large-Scale Ward-Type Simulator

    Science.gov (United States)

    Tang, Zhuo; Feng, Changda; Wu, Liang; Zuo, Delong; James, Darryl L.

    2018-02-01

    Tornado-like vortices are simulated in a large-scale Ward-type simulator to further advance the understanding of such flows, and to facilitate future studies of tornado wind loading on structures. Measurements of the velocity fields near the simulator floor and the resulting floor surface pressures are interpreted to reveal the mean and fluctuating characteristics of the flow as well as the characteristics of the static-pressure deficit. We focus on the manner in which the swirl ratio and the radial Reynolds number affect these characteristics. The transition of the tornado-like flow from a single-celled vortex to a dual-celled vortex with increasing swirl ratio and the impact of this transition on the flow field and the surface-pressure deficit are closely examined. The mean characteristics of the surface-pressure deficit caused by tornado-like vortices simulated at a number of swirl ratios compare well with the corresponding characteristics recorded during full-scale tornadoes.

  8. Rapid prototyping of nanofluidic systems using size-reduced electrospun nanofibers for biomolecular analysis.

    Science.gov (United States)

    Park, Seung-Min; Huh, Yun Suk; Szeto, Kylan; Joe, Daniel J; Kameoka, Jun; Coates, Geoffrey W; Edel, Joshua B; Erickson, David; Craighead, Harold G

    2010-11-05

    Biomolecular transport in nanofluidic confinement offers various means to investigate the behavior of biomolecules in their native aqueous environments, and to develop tools for diverse single-molecule manipulations. Recently, a number of simple nanofluidic fabrication techniques have been demonstrated that utilize electrospun nanofibers as a backbone structure. These techniques are limited by the arbitrary dimension of the resulting nanochannels due to the random nature of electrospinning. Here, a new method for fabricating nanofluidic systems from size-reduced electrospun nanofibers is reported and demonstrated. This method uses the scanned electrospinning technique to generate oriented sacrificial nanofibers and exposes these nanofibers to harsh but isotropic etching/heating environments to reduce their cross-sectional dimension. The creation of various nanofluidic systems as small as 20 nm is demonstrated, and practical examples of single biomolecular handling, such as DNA elongation in nanochannels and fluorescence correlation spectroscopic analysis of biomolecules passing through nanochannels, are provided.

  9. Design of an embedded inverse-feedforward biomolecular tracking controller for enzymatic reaction processes.

    Science.gov (United States)

    Foo, Mathias; Kim, Jongrae; Sawlekar, Rucha; Bates, Declan G

    2017-04-06

    Feedback control is widely used in chemical engineering to improve the performance and robustness of chemical processes. Feedback controllers require a 'subtractor' that is able to compute the error between the process output and the reference signal. In the case of embedded biomolecular control circuits, subtractors designed using standard chemical reaction network theory can only realise one-sided subtraction, rendering standard controller design approaches inadequate. Here, we show how a biomolecular controller that allows tracking of required changes in the outputs of enzymatic reaction processes can be designed and implemented within the framework of chemical reaction network theory. The controller architecture employs an inversion-based feedforward controller that compensates for the limitations of the one-sided subtractor that generates the error signals for a feedback controller. The proposed approach requires significantly fewer chemical reactions to implement than alternative designs, and should have wide applicability throughout the fields of synthetic biology and biological engineering.
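
    The one-sided subtraction constraint described above is easy to see in a toy simulation. The sketch below (hypothetical species names and rate constant, not the paper's actual reaction design) models the annihilation reaction R + Y → ∅ with forward Euler integration: the leftover reference species approximates max(0, r0 − y0), so a negative error can never be represented — exactly the limitation the inverse-feedforward term is designed to compensate for.

```python
# Toy one-sided CRN subtractor: species R (reference) and Y (output)
# annihilate at rate k*R*Y, so the residual R approaches max(0, r0 - y0).
# Rate constant, step size, and duration are illustrative assumptions.

def one_sided_subtract(r0, y0, k=10.0, dt=1e-4, steps=200000):
    r, y = float(r0), float(y0)
    for _ in range(steps):
        flux = k * r * y * dt   # annihilation reaction R + Y -> 0
        r -= flux
        y -= flux
    return r                    # ~ max(0, r0 - y0); never negative

print(one_sided_subtract(3.0, 1.0))   # ~ 2.0
print(one_sided_subtract(1.0, 3.0))   # ~ 0.0, not -2.0
```

    Because the difference r − y is conserved by the annihilation reaction, the species in excess converges to the absolute difference while the other is depleted to zero.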

  10. Microfluidic Devices for Studying Biomolecular Interactions

    Science.gov (United States)

    Wilson, Wilbur W.; Garcia, Carlos d.; Henry, Charles S.

    2006-01-01

    Microfluidic devices for monitoring biomolecular interactions have been invented. These devices are basically highly miniaturized liquid-chromatography columns. They are intended to be prototypes of miniature analytical devices of the laboratory-on-a-chip type that could be fabricated rapidly and inexpensively and that, because of their small sizes, would yield analytical results from very small amounts of expensive analytes (typically, proteins). Other advantages to be gained by this scaling down of liquid-chromatography columns may include increases in resolution and speed, decreases in the consumption of reagents, and the possibility of performing multiple simultaneous and highly integrated analyses by use of multiple devices of this type, each possibly containing multiple parallel analytical microchannels. The principle of operation is the same as that of a macroscopic liquid-chromatography column: The column is a channel packed with particles, upon which are immobilized molecules of the protein of interest (or one of the proteins of interest if there are more than one). Starting at a known time, a solution or suspension containing molecules of the protein or other substance of interest is pumped into the channel at its inlet. The liquid emerging from the outlet of the channel is monitored to detect the molecules of the dissolved or suspended substance(s). The time that it takes these molecules to flow from the inlet to the outlet is a measure of the degree of interaction between the immobilized and the dissolved or suspended molecules. Depending on the precise natures of the molecules, this measure can be used for diverse purposes: examples include screening for solution conditions that favor crystallization of proteins, screening for interactions between drugs and proteins, and determining the functions of biomolecules.

  11. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads, while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim, eliminating the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.

  12. Impacts of different characterizations of large-scale background on simulated regional-scale ozone over the continental United States

    Science.gov (United States)

    Hogrefe, Christian; Liu, Peng; Pouliot, George; Mathur, Rohit; Roselle, Shawn; Flemming, Johannes; Lin, Meiyun; Park, Rokjin J.

    2018-03-01

    This study analyzes simulated regional-scale ozone burdens both near the surface and aloft, estimates process contributions to these burdens, and calculates the sensitivity of the simulated regional-scale ozone burden to several key model inputs with a particular emphasis on boundary conditions derived from hemispheric or global-scale models. The Community Multiscale Air Quality (CMAQ) model simulations supporting this analysis were performed over the continental US for the year 2010 within the context of the Air Quality Model Evaluation International Initiative (AQMEII) and Task Force on Hemispheric Transport of Air Pollution (TF-HTAP) activities. CMAQ process analysis (PA) results highlight the dominant role of horizontal and vertical advection on the ozone burden in the mid-to-upper troposphere and lower stratosphere. Vertical mixing, including mixing by convective clouds, couples fluctuations in free-tropospheric ozone to ozone in lower layers. Hypothetical bounding scenarios were performed to quantify the effects of emissions, boundary conditions, and ozone dry deposition on the simulated ozone burden. Analysis of these simulations confirms that the characterization of ozone outside the regional-scale modeling domain can have a profound impact on simulated regional-scale ozone. This was further investigated by using data from four hemispheric or global modeling systems (Chemistry - Integrated Forecasting Model (C-IFS), CMAQ extended for hemispheric applications (H-CMAQ), the Goddard Earth Observing System model coupled to chemistry (GEOS-Chem), and AM3) to derive alternate boundary conditions for the regional-scale CMAQ simulations. The regional-scale CMAQ simulations using these four different boundary conditions showed that the largest ozone abundance in the upper layers was simulated when using boundary conditions from GEOS-Chem, followed by the simulations using C-IFS, AM3, and H-CMAQ boundary conditions, consistent with the analysis of the ozone fields

  13. Groundwater flow simulation on local scale. Setting boundary conditions of groundwater flow simulation on site scale model in the step 4

    International Nuclear Information System (INIS)

    Onoe, Hironori; Saegusa, Hiromitsu; Ohyama, Takuya

    2007-03-01

    Japan Atomic Energy Agency has been conducting a wide range of geoscientific research in order to build a foundation for multidisciplinary studies of the deep geological environment as a basis of research and development for geological disposal of nuclear wastes. Ongoing geoscientific research programs include the Regional Hydrogeological Study (RHS) project and Mizunami Underground Research Laboratory (MIU) project in the Tono region, Gifu Prefecture. The main goal of these projects is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment at several spatial scales. The RHS project is a Local scale study for understanding the groundwater flow system from the recharge area to the discharge area. The Surface-based Investigation Phase of the MIU project is a Site scale study for understanding the deep geological environment immediately surrounding the MIU construction site using a multiphase, iterative approach. In this study, the hydrogeological modeling and groundwater flow simulation on the Local scale were carried out in order to set boundary conditions for the Site scale model, based on the data obtained from surface-based investigations in Step 4 of the Site scale investigations of the MIU project. As a result of the study, boundary conditions for groundwater flow simulation on the Step 4 Site scale model could be obtained. (author)

  14. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    Energy Technology Data Exchange (ETDEWEB)

    Wilke, Jeremiah J [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Kenny, Joseph P. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
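
    To make the discrete-event idea concrete, here is a minimal event-queue core — an illustrative sketch of the general technique, not the SST API: events sit on a priority heap keyed by timestamp, and the simulated clock jumps directly from one event to the next, which is what lets enormous simulated scales run cheaply.

```python
# Minimal discrete-event simulation core (illustrative, not SST code).
import heapq
import itertools

class Simulator:
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = itertools.count()   # tie-breaker for equal timestamps

    def schedule(self, delay, callback):
        """Schedule callback to fire at simulated time now + delay."""
        heapq.heappush(self._queue, (self.now + delay, next(self._seq), callback))

    def run(self):
        """Pop events in timestamp order; the clock jumps event to event."""
        while self._queue:
            self.now, _, callback = heapq.heappop(self._queue)
            callback()

# Two "ranks" exchanging a message with a modeled network latency of 1.5:
sim = Simulator()
log = []
sim.schedule(0.0, lambda: sim.schedule(1.5, lambda: log.append(("recv", sim.now))))
sim.run()
print(log)   # [('recv', 1.5)]
```

    Note that no wall-clock time is spent waiting out the 1.5 units of simulated latency; the cost of a simulated run scales with the number of events, not with the simulated duration.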

  15. Instrumental biosensors: new perspectives for the analysis of biomolecular interactions.

    Science.gov (United States)

    Nice, E C; Catimel, B

    1999-04-01

    The use of instrumental biosensors in basic research to measure biomolecular interactions in real time is increasing exponentially. Applications include protein-protein, protein-peptide, DNA-protein, DNA-DNA, and lipid-protein interactions. Such techniques have been applied to, for example, antibody-antigen, receptor-ligand, signal transduction, and nuclear receptor studies. This review outlines the principles of two of the most commonly used instruments and highlights specific operating parameters that will assist in optimising experimental design, data generation, and analysis.
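
    For instruments of this kind (e.g. SPR-based biosensors), the workhorse data-analysis model is 1:1 Langmuir binding, where the association-phase response obeys dR/dt = ka·C·(Rmax − R) − kd·R. The sketch below uses the standard closed-form solution of that equation with hypothetical rate constants; it is an illustration of the analysis principle, not a result from the review.

```python
# 1:1 Langmuir binding model for a biosensor association phase.
# Closed-form solution of dR/dt = ka*C*(Rmax - R) - kd*R with R(0) = 0.
# All numeric values below are hypothetical, for illustration only.
import math

def association_response(t, C, ka, kd, Rmax):
    """Response R(t) during analyte injection at concentration C."""
    kobs = ka * C + kd                   # observed rate constant
    Req = ka * C * Rmax / kobs           # steady-state response level
    return Req * (1.0 - math.exp(-kobs * t))

# Hypothetical kinetics: ka = 1e5 /M/s, kd = 1e-3 /s, 100 nM analyte.
R60 = association_response(t=60.0, C=100e-9, ka=1e5, kd=1e-3, Rmax=200.0)
print(R60)
```

    Fitting kobs at several analyte concentrations yields ka and kd separately, and hence the affinity KD = kd/ka — the quantities these instruments are typically used to extract.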

  16. Large Scale Simulation Platform for NODES Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Sotorrio, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Qin, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Min, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-27

    This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator, and includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate scalability under a scenario of 33% RPS in California with an additional 17% of DERs coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating more than 10k individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/Var control.

  17. NMR paves the way for atomic level descriptions of sparsely populated, transiently formed biomolecular conformers.

    Science.gov (United States)

    Sekhar, Ashok; Kay, Lewis E

    2013-08-06

    The importance of dynamics to biomolecular function is becoming increasingly clear. A description of the structure-function relationship must, therefore, include the role of motion, requiring a shift in paradigm from focus on a single static 3D picture to one where a given biomolecule is considered in terms of an ensemble of interconverting conformers, each with potentially diverse activities. In this Perspective, we describe how recent developments in solution NMR spectroscopy facilitate atomic resolution studies of sparsely populated, transiently formed biomolecular conformations that exchange with the native state. Examples of how this methodology is applied to protein folding and misfolding, ligand binding, and molecular recognition are provided as a means of illustrating both the power of the new techniques and the significant roles that conformationally excited protein states play in biology.

  18. Insights Into the Bifunctional Aphidicolan-16-ß-ol Synthase Through Rapid Biomolecular Modeling Approaches

    Directory of Open Access Journals (Sweden)

    Max Hirte

    2018-04-01

    Full Text Available Diterpene synthases catalyze complex, multi-step C-C coupling reactions thereby converting the universal, aliphatic precursor geranylgeranyl diphosphate into diverse olefinic macrocycles that form the basis for the structural diversity of the diterpene natural product family. Since catalytically relevant crystal structures of diterpene synthases are scarce, homology based biomolecular modeling techniques offer an alternative route to study the enzyme's reaction mechanism. However, precise identification of catalytically relevant amino acids is challenging since these models require careful preparation and refinement techniques prior to substrate docking studies. Targeted amino acid substitutions in this protein class can initiate premature quenching of the carbocation centered reaction cascade. The structural characterization of those alternative cyclization products allows for elucidation of the cyclization reaction cascade and provides a new source for complex macrocyclic synthons. In this study, new insights into structure and function of the fungal, bifunctional Aphidicolan-16-ß-ol synthase were achieved using a simplified biomolecular modeling strategy. The applied refinement methodologies could rapidly generate a reliable protein-ligand complex, which provides for an accurate in silico identification of catalytically relevant amino acids. Guided by our modeling data, ACS mutations led to the identification of the catalytically relevant ACS amino acid network I626, T657, Y658, A786, F789, and Y923. Moreover, the ACS amino acid substitutions Y658L and D661A resulted in a premature termination of the cyclization reaction cascade en-route from syn-copalyl diphosphate to Aphidicolan-16-ß-ol. Both ACS mutants generated the diterpene macrocycle syn-copalol and a minor, non-hydroxylated labdane related diterpene, respectively. Our biomolecular modeling and mutational studies suggest that the ACS substrate cyclization occurs in a spatially

  19. Insights Into the Bifunctional Aphidicolan-16-ß-ol Synthase Through Rapid Biomolecular Modeling Approaches.

    Science.gov (United States)

    Hirte, Max; Meese, Nicolas; Mertz, Michael; Fuchs, Monika; Brück, Thomas B

    2018-01-01

    Diterpene synthases catalyze complex, multi-step C-C coupling reactions thereby converting the universal, aliphatic precursor geranylgeranyl diphosphate into diverse olefinic macrocycles that form the basis for the structural diversity of the diterpene natural product family. Since catalytically relevant crystal structures of diterpene synthases are scarce, homology based biomolecular modeling techniques offer an alternative route to study the enzyme's reaction mechanism. However, precise identification of catalytically relevant amino acids is challenging since these models require careful preparation and refinement techniques prior to substrate docking studies. Targeted amino acid substitutions in this protein class can initiate premature quenching of the carbocation centered reaction cascade. The structural characterization of those alternative cyclization products allows for elucidation of the cyclization reaction cascade and provides a new source for complex macrocyclic synthons. In this study, new insights into structure and function of the fungal, bifunctional Aphidicolan-16-ß-ol synthase were achieved using a simplified biomolecular modeling strategy. The applied refinement methodologies could rapidly generate a reliable protein-ligand complex, which provides for an accurate in silico identification of catalytically relevant amino acids. Guided by our modeling data, ACS mutations led to the identification of the catalytically relevant ACS amino acid network I626, T657, Y658, A786, F789, and Y923. Moreover, the ACS amino acid substitutions Y658L and D661A resulted in a premature termination of the cyclization reaction cascade en-route from syn-copalyl diphosphate to Aphidicolan-16-ß-ol. Both ACS mutants generated the diterpene macrocycle syn-copalol and a minor, non-hydroxylated labdane related diterpene, respectively. Our biomolecular modeling and mutational studies suggest that the ACS substrate cyclization occurs in a spatially restricted location of

  20. Insights into the bifunctional Aphidicolan-16-ß-ol synthase through rapid biomolecular modelling approaches

    Science.gov (United States)

    Hirte, Max; Meese, Nicolas; Mertz, Michael; Fuchs, Monika; Brück, Thomas B.

    2018-04-01

    Diterpene synthases catalyze complex, multi-step C-C coupling reactions thereby converting the universal, aliphatic precursor geranylgeranyl diphosphate into diverse olefinic macrocycles that form the basis for the structural diversity of the diterpene natural product family. Since catalytically relevant crystal structures of diterpene synthases are scarce, homology based biomolecular modelling techniques offer an alternative route to study the enzyme’s reaction mechanism. However, precise identification of catalytically relevant amino acids is challenging since these models require careful preparation and refinement techniques prior to substrate docking studies. Targeted amino acid substitutions in this protein class can initiate premature quenching of the carbocation centered reaction cascade. The structural characterization of those alternative cyclization products allows for elucidation of the cyclization reaction cascade and provides a new source for complex macrocyclic synthons. In this study, new insights into structure and function of the fungal, bifunctional Aphidicolan-16-ß-ol synthase were achieved using a simplified biomolecular modelling strategy. The applied refinement methodologies could rapidly generate a reliable protein-ligand complex, which provides for an accurate in silico identification of catalytically relevant amino acids. Guided by our modelling data, ACS mutations led to the identification of the catalytically relevant ACS amino acid network I626, T657, Y658, A786, F789 and Y923. Moreover, the ACS amino acid substitutions Y658L and D661A resulted in a premature termination of the cyclization reaction cascade en-route from syn-copalyl diphosphate to Aphidicolan-16-ß-ol. Both ACS mutants generated the diterpene macrocycle syn-copalol and a minor, non-hydroxylated labdane related diterpene, respectively. Our biomolecular modelling and mutational studies suggest that the ACS substrate cyclization occurs in a spatially restricted location

  1. EXTENDED SCALING LAWS IN NUMERICAL SIMULATIONS OF MAGNETOHYDRODYNAMIC TURBULENCE

    International Nuclear Information System (INIS)

    Mason, Joanne; Cattaneo, Fausto; Perez, Jean Carlos; Boldyrev, Stanislav

    2011-01-01

    Magnetized turbulence is ubiquitous in astrophysical systems, where it notoriously spans a broad range of spatial scales. Phenomenological theories of MHD turbulence describe the self-similar dynamics of turbulent fluctuations in the inertial range of scales. Numerical simulations serve to guide and test these theories. However, the computational power that is currently available restricts the simulations to Reynolds numbers that are significantly smaller than those in astrophysical settings. In order to increase computational efficiency and, therefore, probe a larger range of scales, one often takes into account the fundamental anisotropy of field-guided MHD turbulence, with gradients being much slower in the field-parallel direction. The simulations are then optimized by employing the reduced MHD equations and relaxing the field-parallel numerical resolution. In this work we explore a different possibility. We propose that there exist certain quantities that are remarkably stable with respect to the Reynolds number. As an illustration, we study the alignment angle between the magnetic and velocity fluctuations in MHD turbulence, measured as the ratio of two specially constructed structure functions. We find that the scaling of this ratio can be extended surprisingly well into the regime of relatively low Reynolds number. However, the extended scaling easily becomes spoiled when the dissipation range in the simulations is underresolved. Thus, taking the numerical optimization methods too far can lead to spurious numerical effects and erroneous representation of the physics of MHD turbulence, which in turn can affect our ability to identify correctly the physical mechanisms that are operating in astrophysical systems.
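
    The ratio of structure functions mentioned above can be written down in a few lines. The sketch below estimates the scale-dependent alignment as ⟨|δv × δb|⟩ / ⟨|δv||δb|⟩, i.e. the sine of an effective alignment angle between velocity and magnetic-field fluctuations at separation `lag`; the synthetic random-walk fields are purely illustrative stand-ins for simulation data.

```python
# Alignment-angle ratio of structure functions, sketched on synthetic data.
import numpy as np

def alignment_angle_ratio(v, b, lag):
    """v, b: arrays of shape (N, 3). Returns <|dv x db|> / <|dv||db|>."""
    dv = v[lag:] - v[:-lag]                       # increments at scale `lag`
    db = b[lag:] - b[:-lag]
    cross = np.linalg.norm(np.cross(dv, db), axis=1)
    norms = np.linalg.norm(dv, axis=1) * np.linalg.norm(db, axis=1)
    return cross.mean() / norms.mean()            # ~ sin(theta) at this scale

rng = np.random.default_rng(0)
v = rng.standard_normal((4096, 3)).cumsum(axis=0)   # toy "turbulent" signals
b = rng.standard_normal((4096, 3)).cumsum(axis=0)
ratio = alignment_angle_ratio(v, b, lag=8)
print(ratio)
```

    In actual MHD simulation data the interesting signal is how this ratio scales with the separation; the paper's point is that such ratios remain well behaved even at modest Reynolds numbers.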

  2. A simple analytical scaling method for a scaled-down test facility simulating SB-LOCAs in a passive PWR

    International Nuclear Information System (INIS)

    Lee, Sang Il

    1992-02-01

    A simple analytical scaling method is developed for a scaled-down test facility simulating SB-LOCAs in a passive PWR. The whole scenario of a SB-LOCA is divided into two phases on the basis of the pressure trend: a depressurization phase and a pot-boiling phase. The pressure and the core mixture level are selected as the most critical parameters to be preserved between the prototype and the scaled-down model. In each phase the important phenomena influencing the critical parameters are identified, and the scaling parameters governing these phenomena are generated by the present method. To validate the models used, the Marviken CFT and a 336-rod-bundle experiment are simulated. The models overpredict both the pressure and the two-phase mixture level, but they show at least qualitative agreement with the experimental results. In order to validate whether the scaled-down model represents the important phenomena well, we simulate the nondimensional pressure response of a cold-leg 4-inch break transient for AP-600 and the scaled-down model. The results of the present method are in excellent agreement with those of AP-600. It can be concluded that the present method is suitable for scaling a test facility simulating SB-LOCAs in a passive PWR.

  3. Biomolecular surface construction by PDE transform.

    Science.gov (United States)

    Zheng, Qiong; Yang, Siyang; Wei, Guo-Wei

    2012-03-01

    This work proposes a new framework for the surface generation based on the partial differential equation (PDE) transform. The PDE transform has recently been introduced as a general approach for the mode decomposition of images, signals, and data. It relies on the use of arbitrarily high-order PDEs to achieve the time-frequency localization, control the spectral distribution, and regulate the spatial resolution. The present work provides a new variational derivation of high-order PDE transforms. The fast Fourier transform is utilized to accomplish the PDE transform so as to avoid stringent stability constraints in solving high-order PDEs. As a consequence, the time integration of high-order PDEs can be done efficiently with the fast Fourier transform. The present approach is validated with a variety of test examples in two-dimensional and three-dimensional settings. We explore the impact of the PDE transform parameters, such as the PDE order and propagation time, on the quality of resulting surfaces. Additionally, we utilize a set of 10 proteins to compare the computational efficiency of the present surface generation method and a standard approach in Cartesian meshes. Moreover, we analyze the present method by examining some benchmark indicators of biomolecular surface, that is, surface area, surface-enclosed volume, solvation free energy, and surface electrostatic potential. A test set of 13 protein molecules is used in the present investigation. The electrostatic analysis is carried out via the Poisson-Boltzmann equation model. To further demonstrate the utility of the present PDE transform-based surface method, we solve the Poisson-Nernst-Planck equations with a PDE transform surface of a protein. Second-order convergence is observed for the electrostatic potential and concentrations. 
Finally, to test the capability and efficiency of the present PDE transform-based surface generation method, we apply it to the construction of an excessively large biomolecule, a
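The abstract's key numerical point, that linear high-order PDEs can be integrated in Fourier space without stiff stability constraints, can be sketched in a few lines. The example below is a minimal illustration, not the authors' code; the function name and parameter values are invented. It evolves u_t = -(-Δ)^m u exactly via an integrating factor, which acts as the kind of sharp, order-controlled low-pass filter the PDE transform exploits:

```python
import numpy as np

def pde_transform_lowpass(u, dx, order, t):
    """Evolve u_t = -(-Laplacian)^order u exactly in Fourier space.

    High-order diffusion acts as a sharp low-pass filter; a larger `order`
    gives a flatter passband and a steeper cutoff (the frequency
    localization that PDE transforms exploit)."""
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)      # angular wavenumbers
    u_hat = np.fft.fft(u)
    # exact integrating factor: no CFL-type stability constraint on t
    u_hat *= np.exp(-(k**2)**order * t)
    return np.real(np.fft.ifft(u_hat))

# smooth signal contaminated by high-frequency noise
x = np.linspace(0, 2*np.pi, 256, endpoint=False)
signal = np.sin(x)
noisy = signal + 0.3*np.sin(40*x)
filtered = pde_transform_lowpass(noisy, x[1]-x[0], order=4, t=1e-4)
```

Raising `order` sharpens the frequency cutoff, while the exact exponential update sidesteps any time-step restriction that an explicit scheme for an eighth-order PDE would face.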

  4. Multi-Scale Coupling Between Monte Carlo Molecular Simulation and Darcy-Scale Flow in Porous Media

    KAUST Repository

    Saad, Ahmed Mohamed

    2016-06-01

    In this work, an efficient coupling between Monte Carlo (MC) molecular simulation and Darcy-scale flow in porous media is presented. The cell-centered finite difference method with a non-uniform rectangular mesh was used to discretize the simulation domain and solve the governing equations. To speed up the MC simulations, we implemented a recently developed scheme that quickly generates MC Markov chains out of pre-computed ones, based on the reweighting and reconstruction algorithm. This method reduces the computational time required by the MC simulations from hours to seconds. To demonstrate the strength of the proposed coupling in terms of computational efficiency and numerical accuracy of the fluid properties, various numerical experiments covering different compressible single-phase flow scenarios were conducted. The novelty of the introduced scheme lies in allowing an efficient coupling of the molecular scale and the Darcy scale in reservoir simulators. This leads to an accurate description of the thermodynamic behavior of the simulated reservoir fluids and consequently enhances confidence in the flow predictions in porous media.
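The reweighting idea invoked above can be illustrated with a toy system. The sketch below is a generic Boltzmann-reweighting example under assumed conditions, not the authors' reweighting-and-reconstruction algorithm: it estimates the mean energy of a harmonic degree of freedom at a new temperature from samples pre-computed at a reference temperature.

```python
import numpy as np

rng = np.random.default_rng(0)

# pre-computed "chain": energies sampled at reference inverse temperature
# (a toy harmonic system, E = x^2/2, sampled exactly for simplicity)
beta_ref = 1.0
x = rng.normal(0.0, np.sqrt(1.0/beta_ref), size=200_000)
E = 0.5 * x**2

def reweight_mean_E(E, beta_ref, beta_new):
    """Estimate <E> at beta_new from samples generated at beta_ref."""
    logw = -(beta_new - beta_ref) * E
    w = np.exp(logw - logw.max())          # numerically stabilized weights
    return np.sum(w * E) / np.sum(w)

est = reweight_mean_E(E, beta_ref, beta_new=1.25)
exact = 0.5 / 1.25   # <E> = 1/(2*beta) for one harmonic degree of freedom
```

The same samples answer questions at a range of nearby state points, which is the source of the dramatic speedup: no new chain has to be generated.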

  5. Multi-Scale Coupling Between Monte Carlo Molecular Simulation and Darcy-Scale Flow in Porous Media

    KAUST Repository

    Saad, Ahmed Mohamed; Kadoura, Ahmad Salim; Sun, Shuyu

    2016-01-01

    In this work, an efficient coupling between Monte Carlo (MC) molecular simulation and Darcy-scale flow in porous media is presented. The cell-centered finite difference method with a non-uniform rectangular mesh was used to discretize the simulation

  6. GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit.

    Science.gov (United States)

    Pronk, Sander; Páll, Szilárd; Schulz, Roland; Larsson, Per; Bjelkmar, Pär; Apostolov, Rossen; Shirts, Michael R; Smith, Jeremy C; Kasson, Peter M; van der Spoel, David; Hess, Berk; Lindahl, Erik

    2013-04-01

    Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on massive scale in clusters, web servers, distributed computing or cloud resources. Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built-in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. GROMACS is an open source and free software available from http://www.gromacs.org. Supplementary data are available at Bioinformatics online.

  7. A new approach to implement absorbing boundary condition in biomolecular electrostatics.

    Science.gov (United States)

    Goni, Md Osman

    2013-01-01

    This paper discusses a novel approach to employing the absorbing boundary condition in conjunction with the finite-element method (FEM) in biomolecular electrostatics. Bayliss-Turkel absorbing boundary operators have been applied to electromagnetic scattering problems by a few researchers; in the area of biomolecular electrostatics, however, this boundary condition has not been investigated yet. The objective of this paper is twofold: first, to solve the nonlinear Poisson-Boltzmann equation using Newton's method, and second, to find an efficient and acceptable solution with a minimum number of unknowns. In this work, a Galerkin finite-element formulation is used along with a Bayliss-Turkel absorbing boundary operator that explicitly accounts for the open-field problem by mapping the Sommerfeld radiation condition from the far field to the near field. While the Bayliss-Turkel condition works best when the artificial boundary is far from the scatterer, an acceptable error tolerance can be achieved with the second-order operator. Numerical results on a test case with a simple sphere show that the treatment is able to reach the same level of accuracy achieved by the analytical method while using a lower grid density. The Bayliss-Turkel absorbing boundary condition (BTABC) combined with the FEM converges to the exact solution of scattering problems to within the discretization error.
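For reference, the Bayliss-Turkel operator hierarchy the abstract invokes is, in the form commonly given in the scattering literature for three-dimensional outgoing Helmholtz solutions (time convention \(e^{-i\omega t}\)):

```latex
B_m \;=\; \prod_{l=1}^{m}\left(\frac{\partial}{\partial r} - ik + \frac{2l-1}{r}\right),
\qquad B_m u = O\!\left(r^{-(2m+1)}\right) \quad \text{as } r \to \infty,
```

so the second-order operator corresponds to \(B_2 = \left(\partial_r - ik + 3/r\right)\left(\partial_r - ik + 1/r\right)\) applied at the artificial boundary. For the linearized Poisson-Boltzmann equation \((\nabla^2 - \kappa^2)u = 0\), whose exterior solutions decay like \(e^{-\kappa r}/r\), the analogous absorbing operator follows from the substitution \(ik \to -\kappa\). The abstract does not state its operators explicitly, so this is the standard textbook form rather than a quotation from the paper.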

  8. Towards an integrated multiscale simulation of turbulent clouds on PetaScale computers

    International Nuclear Information System (INIS)

    Wang Lianping; Ayala, Orlando; Parishani, Hossein; Gao, Guang R; Kambhamettu, Chandra; Li Xiaoming; Rossi, Louis; Orozco, Daniel; Torres, Claudio; Grabowski, Wojciech W; Wyszogrodzki, Andrzej A; Piotrowski, Zbigniew

    2011-01-01

    The development of precipitating warm clouds is affected by several effects of small-scale air turbulence including enhancement of droplet-droplet collision rate by turbulence, entrainment and mixing at the cloud edges, and coupling of mechanical and thermal energies at various scales. Large-scale computation is a viable research tool for quantifying these multiscale processes. Specifically, top-down large-eddy simulations (LES) of shallow convective clouds typically resolve scales of turbulent energy-containing eddies while the effects of turbulent cascade toward viscous dissipation are parameterized. Bottom-up hybrid direct numerical simulations (HDNS) of cloud microphysical processes resolve fully the dissipation-range flow scales but only partially the inertial subrange scales. It is desirable to systematically decrease the grid length in LES and increase the domain size in HDNS so that they can be better integrated to address the full range of scales and their coupling. In this paper, we discuss computational issues and physical modeling questions in expanding the ranges of scales realizable in LES and HDNS, and in bridging LES and HDNS. We review our on-going efforts in transforming our simulation codes towards PetaScale computing, in improving physical representations in LES and HDNS, and in developing better methods to analyze and interpret the simulation results.

  9. Development and validation of the Simulation Learning Effectiveness Scale for nursing students.

    Science.gov (United States)

    Pai, Hsiang-Chu

    2016-11-01

    To develop and validate the Simulation Learning Effectiveness Scale, which is based on Bandura's social cognitive theory. A simulation programme is a significant teaching strategy for nursing students. Nevertheless, there are few evidence-based instruments that validate the effectiveness of simulation learning in Taiwan. This is a quantitative descriptive design. In Study 1, a nonprobability convenience sample of 151 student nurses completed the Simulation Learning Effectiveness Scale. Exploratory factor analysis was used to examine the factor structure of the instrument. In Study 2, which involved 365 student nurses, confirmatory factor analysis and structural equation modelling were used to analyse the construct validity of the Simulation Learning Effectiveness Scale. In Study 1, exploratory factor analysis yielded three components: self-regulation, self-efficacy and self-motivation. The three factors explained 29·09, 27·74 and 19·32% of the variance, respectively. The final 12-item instrument with the three factors explained 76·15% of variance. Cronbach's alpha was 0·94. In Study 2, confirmatory factor analysis identified a second-order factor termed Simulation Learning Effectiveness Scale. Goodness-of-fit indices showed an acceptable fit overall with the full model (χ 2 /df (51) = 3·54, comparative fit index = 0·96, Tucker-Lewis index = 0·95 and standardised root-mean-square residual = 0·035). In addition, teacher's competence was found to encourage learning, and self-reflection and insight were significantly and positively associated with the Simulation Learning Effectiveness Scale. Teacher's competence in encouraging learning also was significantly and positively associated with self-reflection and insight. Overall, these variables explained 21·9% of the variance in the students' learning effectiveness. The Simulation Learning Effectiveness Scale is a reliable and valid means to assess simulation learning effectiveness for nursing students.

  10. Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame

    Science.gov (United States)

    Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank

    2017-10-01

    This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using a six degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, to draw general conclusions additional test data are required.

  11. Constructing Markov State Models to elucidate the functional conformational changes of complex biomolecules

    KAUST Repository

    Wang, Wei; Cao, Siqin; Zhu, Lizhe; Huang, Xuhui

    2017-01-01

    bioengineering applications and rational drug design. Constructing Markov State Models (MSMs) based on large-scale molecular dynamics simulations has emerged as a powerful approach to model functional conformational changes of the biomolecular system

  12. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  13. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting to summarize the collaboration activities for FY2003 on Large Scale Computer Simulation Research was held on January 15-16, 2004 at the Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  14. Large-scale derived flood frequency analysis based on continuous simulation

    Science.gov (United States)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into the catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only Germany completely but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large scale approach overcomes the several
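The core of the multisite weather generator described above, drawing daily fields that respect an observed spatial covariance, can be sketched with a Cholesky factorization. This is a minimal illustration with an invented three-station covariance matrix; a real generator must also reproduce temporal autocorrelation (e.g. via an AR(1) term) and the non-Gaussian margins of precipitation.

```python
import numpy as np

rng = np.random.default_rng(42)

# toy "observed" covariance of one variable (e.g. temperature anomaly)
# across three stations
target_cov = np.array([[4.0, 2.4, 1.2],
                       [2.4, 3.0, 1.5],
                       [1.2, 1.5, 2.0]])
L = np.linalg.cholesky(target_cov)

# synthesize 10,000 "days": correlated anomalies = L @ iid standard normals
z = rng.standard_normal((3, 10_000))
fields = L @ z

# the sample covariance of the synthetic fields matches the target
sample_cov = np.cov(fields)
```

Feeding such spatially correlated fields into the catchment models is what makes the derived flood estimates consistent across basins, rather than treating each catchment's meteorology independently.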

  15. Dislocations and elementary processes of plasticity in FCC metals: atomic scale simulations

    International Nuclear Information System (INIS)

    Rodney, D.

    2000-01-01

    We present atomic-scale simulations of two elementary processes of FCC crystal plasticity. The first study is a molecular dynamics simulation, in a nickel crystal, of the interactions between an edge dislocation and glissile interstitial loops of the type that forms under irradiation in displacement cascades. The simulations show various atomic-scale interaction processes leading to the absorption and drag of the loops by the dislocation. These reactions certainly contribute to the formation of the 'clear bands' observed in deformed irradiated materials. The simulations also allow a quantitative study of the role of the glissile loops in irradiation hardening. In particular, dislocation unpinning stresses for certain pinning mechanisms are evaluated from the simulations. The second study begins with the generalization to three dimensions of the quasi-continuum method (QCM), a multi-scale simulation method which couples atomistic techniques and the finite element method. In the QCM, regions close to dislocation cores are simulated at the atomic scale while the rest of the crystal is simulated at lower resolution by means of a discretization of the displacement fields using the finite element method. The QCM is then tested on the simulation of the formation and breaking of dislocation junctions in an aluminum crystal. Comparison of the simulations with an elastic model of dislocation junctions shows that the structure and strength of the junctions are dominated by elastic line-tension effects, as is assumed in classical theories. (author)

  16. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    A new package for accelerating large-scale phase-field simulations was developed using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using a specific algorithm in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interactions were each solved with the GPU-based algorithm to test the performance of the package. Comparison of the calculation results between the solver executed on a single CPU and the one on the GPU showed that execution on the GPU is up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
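The semi-implicit Fourier method named above can be sketched on the CPU with numpy (an illustrative sketch, not the paper's CUDA package): the stiff gradient term of the Allen-Cahn equation is treated implicitly in Fourier space while the local nonlinearity stays explicit, so time steps far beyond the explicit diffusion limit remain stable.

```python
import numpy as np

def allen_cahn_step(phi, dt, M=1.0, kappa=1.0, dx=1.0):
    """One semi-implicit Fourier step of dphi/dt = -M*(phi^3 - phi - kappa*lap(phi)).

    The nonlinear bulk term is explicit; the stiff gradient term is
    implicit in Fourier space, so large time steps remain stable."""
    n = phi.shape[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=dx)
    k2 = k[:, None]**2 + k[None, :]**2        # squared wavenumber grid
    rhs_hat = np.fft.fft2(phi) - dt*M*np.fft.fft2(phi**3 - phi)
    return np.real(np.fft.ifft2(rhs_hat / (1.0 + dt*M*kappa*k2)))

# small random initial condition relaxing toward phi = +/-1 domains
rng = np.random.default_rng(1)
phi = 0.1*rng.standard_normal((64, 64))
for _ in range(200):
    phi = allen_cahn_step(phi, dt=0.5)        # dt far above the explicit limit
```

The per-step cost is dominated by FFTs, which is exactly the kernel that maps well onto a GPU and explains the reported speedups.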

  17. Large-eddy simulation with accurate implicit subgrid-scale diffusion

    NARCIS (Netherlands)

    B. Koren (Barry); C. Beets

    1996-01-01

    A method for large-eddy simulation is presented that does not use an explicit subgrid-scale diffusion term. Subgrid-scale effects are modelled implicitly through an appropriate monotone (in the sense of Spekreijse 1987) discretization method for the advective terms. Special attention is

  18. Simulating Biomass Fast Pyrolysis at the Single Particle Scale

    Energy Technology Data Exchange (ETDEWEB)

    Ciesielski, Peter [National Renewable Energy Laboratory (NREL); Wiggins, Gavin [ORNL; Daw, C Stuart [ORNL; Jakes, Joseph E. [U.S. Forest Service, Forest Products Laboratory, Madison, Wisconsin, USA

    2017-07-01

    Simulating fast pyrolysis at the scale of single particles allows for the investigation of the impacts of feedstock-specific parameters such as particle size, shape, and species of origin. For this reason, particle-scale modeling has emerged as an important tool for understanding how variations in feedstock properties affect the outcomes of pyrolysis processes. The origins of feedstock properties are largely dictated by the composition and hierarchical structure of biomass, from the microstructural porosity to the external morphology of milled particles. These properties may be accounted for in simulations of fast pyrolysis by several different computational approaches, depending on the level of structural and chemical complexity included in the model. The predictive utility of particle-scale simulations of fast pyrolysis can still be enhanced substantially by advancements in several areas. Most notably, considerable progress would be facilitated by the development of pyrolysis kinetic schemes that are decoupled from transport phenomena, predict product evolution from whole biomass with increased chemical speciation, and are still tractable with present-day computational resources.
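A minimal example of the kind of lumped kinetic scheme discussed above is the classic competitive first-order decomposition of biomass into gas, tar, and char with Arrhenius rate constants. The sketch below uses illustrative placeholder kinetic parameters, not values from the article:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def arrhenius(A, Ea, T):
    """First-order rate constant k = A * exp(-Ea / (R*T))."""
    return A * np.exp(-Ea / (R * T))

def pyrolyze(T, t_end, dt=1e-4):
    """Integrate competitive first-order lumping: biomass -> gas | tar | char.

    Pre-exponential factors and activation energies are illustrative
    placeholders, not fitted values."""
    k = {"gas":  arrhenius(1.4e4, 8.86e4, T),
         "tar":  arrhenius(4.1e6, 1.127e5, T),
         "char": arrhenius(7.4e0, 6.42e4, T)}
    ktot = sum(k.values())
    y = {"biomass": 1.0, "gas": 0.0, "tar": 0.0, "char": 0.0}
    t = 0.0
    while t < t_end:                     # explicit Euler, mass-conserving
        dB = -ktot * y["biomass"] * dt
        for sp in ("gas", "tar", "char"):
            y[sp] += k[sp] * y["biomass"] * dt
        y["biomass"] += dB
        t += dt
    return y

yields = pyrolyze(T=773.0, t_end=5.0)    # product split at 500 C after 5 s
```

Because the competing channels have different activation energies, the gas/tar/char split shifts with temperature, which is the lever particle-scale models use when coupling such kinetics to intraparticle heat transport.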

  19. Interacting with the biomolecular solvent accessible surface via a haptic feedback device

    Directory of Open Access Journals (Sweden)

    Hayward Steven

    2009-10-01

    Background: From the 1950s, computer-based renderings of molecules have been produced to aid researchers in their understanding of biomolecular structure and function. A major consideration for any molecular graphics software is the ability to visualise the three-dimensional structure of the molecule. Traditionally, this was accomplished via stereoscopic pairs of images and later realised with three-dimensional display technologies. Using a haptic feedback device in combination with molecular graphics has the potential to enhance three-dimensional visualisation. Although haptic feedback devices have been used to feel the interaction forces during molecular docking, they have not been used explicitly as an aid to visualisation. Results: A haptic rendering application for biomolecular visualisation has been developed that allows the user to gain three-dimensional awareness of the shape of a biomolecule. By using a water molecule as the probe, modelled as an oxygen atom having hard-sphere interactions with the biomolecule, the process of exploration has the further benefit of being able to determine regions on the molecular surface that are accessible to the solvent. This gives insight into how awkward it is for a water molecule to gain access to or escape from channels and cavities, indicating possible entropic bottlenecks. In the case of liver alcohol dehydrogenase bound to the inhibitor SAD, it was found that there is a channel just wide enough for a single water molecule to pass through. Placing the probe coincident with crystallographic water molecules suggests that they are sometimes located within small pockets that provide a sterically stable environment irrespective of hydrogen bonding considerations. Conclusion: By using the software, named HaptiMol ISAS (available from http://www.haptimol.co.uk), one can explore the accessible surface of biomolecules using a three-dimensional input device to gain insights into the shape and water
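The hard-sphere probe test described above reduces to a distance check: a point is accessible to the water probe if placing the probe's sphere there overlaps no atom's van der Waals sphere. A minimal sketch with illustrative radii and coordinates (not HaptiMol's implementation):

```python
import numpy as np

PROBE_RADIUS = 1.4  # water-oxygen probe radius in angstroms (common convention)

def probe_clashes(probe_center, atom_centers, atom_radii):
    """True if a hard-sphere probe at probe_center overlaps any atom's vdW sphere."""
    d = np.linalg.norm(atom_centers - probe_center, axis=1)
    return bool(np.any(d < atom_radii + PROBE_RADIUS))

# two carbon-like atoms (vdW radius ~1.7 A) whose gap is too narrow for water
atoms = np.array([[0.0, 0.0, 0.0],
                  [6.0, 0.0, 0.0]])
radii = np.array([1.7, 1.7])

midpoint_blocked = probe_clashes(np.array([3.0, 0.0, 0.0]), atoms, radii)
far_point_free = probe_clashes(np.array([3.0, 6.0, 0.0]), atoms, radii)
```

In a haptic setting this same test, evaluated continuously along the device's cursor path, is what turns "the probe will not fit through this channel" into a force the user can feel.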

  20. Studies of the charge instabilities in the complex nano-objects: clusters and bio-molecular systems

    International Nuclear Information System (INIS)

    Manil, B.

    2007-11-01

    For the last 6 years, my main research work has focused on i) the Coulomb instabilities and fragmentation processes of fullerenes and clusters of fullerenes and ii) the stability and reactivity of complex bio-molecular systems. Concerning the clusters of fullerenes, which are van der Waals-type clusters, we have shown that the multiply charged species obtained in collisions with slow highly charged ions keep their structural properties but become very good electric conductors. On the other hand, with the aim of understanding the role of the biological environment at the molecular scale in the irradiation damage of complex biomolecules, we have studied the charge stabilities of clusters of small biomolecules and the dissociation processes of larger nano-hydrated biomolecules. These studies have shown, first, that specific molecular recognition mechanisms continue to exist in the gas phase and, second, that a small and very simple biochemical environment is enough to change the dynamics of the instabilities. (author)

  1. A new strategy for imaging biomolecular events through interactions between liquid crystals and oil-in-water emulsions.

    Science.gov (United States)

    Hu, Qiong-Zheng; Jang, Chang-Hyun

    2012-11-21

    In this study, we demonstrate a new strategy to image biomolecular events through interactions between liquid crystals (LCs) and oil-in-water emulsions. The optical response has a dark appearance when a nematic LC, 4-cyano-4'-pentylbiphenyl (5CB), is in contact with emulsion droplets of glyceryl trioleate (GT). In contrast, the optical response has a bright appearance when 5CB is in contact with GT emulsions decorated with surfactants such as sodium oleate. Since lipase can hydrolyze GT and produce oleic acid, the optical response also displays a bright appearance after 5CB has been in contact with a mixture of lipase and GT emulsions. These results indicate the feasibility of monitoring biomolecular events through interactions between LCs and oil-in-water emulsions.

  2. Toward multi-scale simulation of reconnection phenomena in space plasma

    Science.gov (United States)

    Den, M.; Horiuchi, R.; Usami, S.; Tanaka, T.; Ogawa, T.; Ohtani, H.

    2013-12-01

    Magnetic reconnection is considered to play an important role in space phenomena such as substorms in the Earth's magnetosphere. It is well known that magnetic reconnection is controlled by microscopic kinetic mechanisms. The frozen-in condition is broken due to particle kinetic effects, and collisionless reconnection is triggered when the current sheet is compressed down to ion kinetic scales under the influence of an external driving flow. On the other hand, the configuration of the magnetic field leading to the formation of the diffusion region is determined at the macroscopic scale, and the topological change after reconnection is also expressed at the macroscopic scale. Magnetic reconnection is thus a typical multi-scale phenomenon in which microscopic and macroscopic physics are strongly coupled. Recently, Horiuchi et al. developed an effective resistivity model based on particle-in-cell (PIC) simulation results obtained in the study of collisionless driven reconnection and applied it to a global magnetohydrodynamics (MHD) simulation of a substorm in the Earth's magnetosphere. They reproduced global substorm behavior, such as dipolarization and flux rope formation, with a global three-dimensional MHD simulation. Usami et al. developed a multi-hierarchy simulation model in which macroscopic and microscopic physics are solved self-consistently and simultaneously. Based on the domain decomposition method, this model consists of three parts: an MHD algorithm for macroscopic global dynamics, a PIC algorithm for microscopic kinetic physics, and an interface algorithm to interlock the macro and micro hierarchies. They verified the interface algorithm by simulating a plasma injection flow. In their latest work, this model was applied to collisionless reconnection in an open system, and magnetic reconnection was successfully observed. In this paper, we describe our approach to clarifying multi-scale phenomena and report its current status. Our recent study on extending the MHD domain to the global system is also presented. 
We

  3. Atomistic Simulations of Small-scale Materials Tests of Nuclear Materials

    International Nuclear Information System (INIS)

    Shin, Chan Sun; Jin, Hyung Ha; Kwon, Jun Hyun

    2012-01-01

    Degradation of materials properties under neutron irradiation is one of the key issues affecting the lifetime of nuclear reactors. Evaluating the property changes of materials due to irradiation and understanding the role of microstructural changes in mechanical properties are required for ensuring reliable and safe operation of a nuclear reactor. However, high-dose neutron irradiation capabilities are rather limited, and it is difficult to discriminate among the various factors affecting the property changes of materials. Ion beam irradiation can be used to investigate radiation damage to materials in a controlled way, but has the main limitation of a small penetration depth, on the length scale of micrometers. Over the past decade, interest in the investigation of size-dependent mechanical properties has promoted the development of various small-scale materials tests, e.g. nanoindentation and micro/nano-pillar compression tests. Small-scale materials tests can address the limitation of the small penetration depth of ion irradiation. In this paper, we present small-scale materials tests (experiments and simulation) which are applied to study size and irradiation effects on mechanical properties. We have performed molecular dynamics simulations of nanoindentation and nanopillar compression tests. These atomistic simulations are expected to contribute significantly to the investigation of the fundamental deformation mechanisms of small-scale irradiated materials.

  4. Multiscale simulations of anisotropic particles combining molecular dynamics and Green's function reaction dynamics

    Science.gov (United States)

    Vijaykumar, Adithya; Ouldridge, Thomas E.; ten Wolde, Pieter Rein; Bolhuis, Peter G.

    2017-03-01

    The modeling of complex reaction-diffusion processes in, for instance, cellular biochemical networks or self-assembling soft matter can be tremendously sped up by employing a multiscale algorithm which combines the mesoscopic Green's Function Reaction Dynamics (GFRD) method with explicit stochastic Brownian, Langevin, or deterministic molecular dynamics to treat reactants at the microscopic scale [A. Vijaykumar, P. G. Bolhuis, and P. R. ten Wolde, J. Chem. Phys. 143, 214102 (2015)]. Here we extend this multiscale MD-GFRD approach to include the orientational dynamics that is crucial to describe the anisotropic interactions often prevalent in biomolecular systems. We present the novel algorithm focusing on Brownian dynamics only, although the methodology is generic. We illustrate the novel algorithm using a simple patchy particle model. After validation of the algorithm, we discuss its performance. The rotational Brownian dynamics MD-GFRD multiscale method will open up the possibility for large-scale simulations of protein signalling networks.
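A minimal sketch of the microscopic-scale propagator discussed above, overdamped translational plus rotational Brownian dynamics for one rigid particle carrying a patch direction, is given below. This is an illustrative fragment under assumed diffusion constants, not the authors' MD-GFRD code; rotations are applied via Rodrigues' formula.

```python
import numpy as np

rng = np.random.default_rng(7)

def rotation_matrix(rot_vec):
    """Rodrigues' formula for a rotation vector (axis * angle)."""
    theta = np.linalg.norm(rot_vec)
    if theta < 1e-12:
        return np.eye(3)
    k = rot_vec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta)*K + (1.0 - np.cos(theta))*(K @ K)

def bd_step(pos, patch, Dt, Dr, dt):
    """One overdamped Brownian step: translate the center, rotate the patch.

    Displacement variance 2*Dt*dt per axis; rotation-vector variance
    2*Dr*dt per axis (small-angle rotational diffusion)."""
    pos = pos + np.sqrt(2.0*Dt*dt) * rng.standard_normal(3)
    dphi = np.sqrt(2.0*Dr*dt) * rng.standard_normal(3)
    patch = rotation_matrix(dphi) @ patch
    return pos, patch

pos, patch = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    pos, patch = bd_step(pos, patch, Dt=1.0, Dr=0.5, dt=1e-3)
```

Tracking the patch orientation is exactly what lets anisotropic (patchy) pair interactions be evaluated when two particles come within the microscopic-scale domain of the multiscale scheme.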

  5. Micro- and nanodevices integrated with biomolecular probes.

    Science.gov (United States)

    Alapan, Yunus; Icoz, Kutay; Gurkan, Umut A

    2015-12-01

    Understanding how biomolecules, proteins and cells interact with their surroundings and other biological entities has become the fundamental design criterion for most biomedical micro- and nanodevices. Advances in biology, medicine, and nanofabrication technologies complement each other and allow us to engineer new tools based on biomolecules utilized as probes. Engineered micro/nanosystems and biomolecules in nature have remarkably robust compatibility in terms of function, size, and physical properties. This article presents the state of the art in micro- and nanoscale devices designed and fabricated with biomolecular probes as their vital constituents. General design and fabrication concepts are presented and three major platform technologies are highlighted: microcantilevers, micro/nanopillars, and microfluidics. Overview of each technology, typical fabrication details, and application areas are presented by emphasizing significant achievements, current challenges, and future opportunities.

  6. Comparison of scale analysis and numerical simulation for saturated zone convective mixing processes

    International Nuclear Information System (INIS)

    Oldenburg, C.M.

    1998-01-01

    Scale analysis can be used to predict a variety of quantities arising from natural systems where processes are described by partial differential equations. For example, scale analysis can be applied to estimate the effectiveness of convective mixing on the dilution of contaminants in groundwater. Scale analysis involves substituting simple quotients for partial derivatives and identifying and equating the dominant terms in an order-of-magnitude sense. For free convection due to sidewall heating of saturated porous media, scale analysis shows that vertical convective velocity in the thermal boundary layer region is proportional to the Rayleigh number, horizontal convective velocity is proportional to the square root of the Rayleigh number, and thermal boundary layer thickness is proportional to the inverse square root of the Rayleigh number. These scale analysis estimates are corroborated by numerical simulations of an idealized system. A scale analysis estimate of mixing time for a tracer mixing by hydrodynamic dispersion in a convection cell also agrees well with numerical simulation for two different Rayleigh numbers. Scale analysis for the heating-from-below scenario produces estimates of maximum velocity one-half as large as the sidewall case. At small values of the Rayleigh number, this estimate is confirmed by numerical simulation. For larger Rayleigh numbers, simulation results suggest maximum velocities are similar to the sidewall heating scenario. In general, agreement between scale analysis estimates and numerical simulation results serves to validate the method of scale analysis. The application is to radioactive repositories.
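The quoted scalings can be written down directly. The sketch below (illustrative, with all order-one prefactors set to 1) returns the order-of-magnitude estimates for vertical velocity (proportional to Ra), horizontal velocity (proportional to Ra^(1/2)), and thermal boundary layer thickness (proportional to Ra^(-1/2)):

```python
def boundary_layer_scales(Ra, H=1.0, alpha=1.0):
    """Order-of-magnitude estimates from scale analysis (prefactors taken as 1).

    Velocities are in units of alpha/H (thermal diffusivity over domain
    height); the boundary layer thickness is in units of H."""
    return {"v_vertical":   (alpha/H) * Ra,        # ~ Ra
            "v_horizontal": (alpha/H) * Ra**0.5,   # ~ sqrt(Ra)
            "delta_T":      H * Ra**-0.5}          # ~ 1/sqrt(Ra)

# a hundredfold increase in Ra: vertical velocity up 100x,
# horizontal velocity up 10x, boundary layer 10x thinner
lo = boundary_layer_scales(1e2)
hi = boundary_layer_scales(1e4)
```

Checking ratios like these against two simulations at different Rayleigh numbers is precisely the corroboration strategy the abstract describes.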

  7. Proceedings of the international advisory committee on 'biomolecular dynamics instrument DNA' and the workshop on 'biomolecular dynamics backscattering spectrometers'

    International Nuclear Information System (INIS)

    Arai, Masatoshi; Aizawa, Kazuya; Nakajima, Kenji; Shibata, Kaoru; Takahashi, Nobuaki

    2008-08-01

    A workshop entitled 'Biomolecular Dynamics Backscattering Spectrometers' was held on February 27th - 29th, 2008 at the J-PARC Center, Japan Atomic Energy Agency. The workshop was organized to help realize an innovative neutron backscattering instrument, DNA, in the MLF. Four leading scientists in the field of neutron backscattering instruments, from institutes in the United States, France and Switzerland where backscattering instruments are in service, were invited to form the International Advisory Committee (IAC) for DNA: Dr. Dan Neumann (Chair), Prof. Ferenc Mezei, Dr. Hannu Mutka and Dr. Philip Tregenna-Piggott. The meeting therefore began with lectures and concluded with the committee session. This report includes the executive summary of the IAC and the materials presented in the IAC and the workshop. (author)

  8. Transport simulations TFTR: Theoretically-based transport models and current scaling

    International Nuclear Information System (INIS)

    Redi, M.H.; Cummings, J.C.; Bush, C.E.; Fredrickson, E.; Grek, B.; Hahm, T.S.; Hill, K.W.; Johnson, D.W.; Mansfield, D.K.; Park, H.; Scott, S.D.; Stratton, B.C.; Synakowski, E.J.; Tang, W.M.; Taylor, G.

    1991-12-01

    In order to study the microscopic physics underlying the observed L-mode current scaling, the 1-1/2-d BALDUR code has been used to simulate density and temperature profiles for high- and low-current, neutral-beam-heated discharges on TFTR with several semi-empirical, theoretically based models previously compared for TFTR, including several versions of trapped-electron drift-wave-driven transport. Experiments at TFTR, JET and DIII-D show that the I_p scaling of τ_E does not arise from edge modes as previously thought, and is most likely to arise from nonlocal processes or from the I_p-dependence of local plasma core transport. Consistent with this, it is found that strong current scaling does not arise from any of several edge models of resistive ballooning. Simulations with the profile-consistent drift-wave model and with a new model for toroidal collisionless trapped-electron-mode core transport in a multimode formalism lead to strong current scaling of τ_E for the L-mode cases on TFTR. None of the theoretically based models succeeded in simulating the measured temperature and density profiles for both high- and low-current experiments.

  9. An Improved Scale-Adaptive Simulation Model for Massively Separated Flows

    Directory of Open Access Journals (Sweden)

    Yue Liu

    2018-01-01

    Full Text Available A new hybrid modelling method termed improved scale-adaptive simulation (ISAS) is proposed by introducing the von Karman operator into the dissipation term of the turbulence scale equation; its derivation and the calibration of its constants are presented, and the canonical circular-cylinder flow at Re = 3900 is selected for validation. As expected, the proposed ISAS approach, with its scale-adaptive concept, reaches a converged resolution more efficiently than the original SAS method, while remaining comparable with DES in visually capturing fine-scale unsteadiness. Furthermore, the grid-sensitivity issue of DES is remedied thanks to the locally adjusted limiter. The ISAS simulation represents the development of the shear layers and the flow profiles of the recirculation region well, and thus the statistical quantities of interest, such as the recirculation length and drag coefficient, are closer to the available measurements than the DES and SAS outputs. In general, the new modelling method, combining the features of the DES and SAS concepts, is capable of simulating turbulent structures down to the grid limit in a simple and effective way, which is practically valuable for engineering flows.
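
    The von Karman operator central to SAS-type models is built from the ratio of the first to the second velocity derivative, L_vK = kappa |dU/dy| / |d2U/dy2|. A minimal finite-difference sketch (the function name and the sine test profile are illustrative assumptions, not from the paper):

    ```python
    import numpy as np

    KAPPA = 0.41  # von Karman constant

    def von_karman_length(U, y):
        """L_vK = kappa * |dU/dy| / |d2U/dy2|, via finite differences;
        the second derivative is floored to avoid division by zero."""
        dU = np.gradient(U, y)
        d2U = np.gradient(dU, y)
        return KAPPA * np.abs(dU) / np.maximum(np.abs(d2U), 1e-12)

    # For U = sin(y), |U'/U''| = |cot(y)|, so L_vK at y = pi/4 is ~kappa.
    y = np.linspace(0.1, 1.5, 2001)
    L = von_karman_length(np.sin(y), y)
    i = int(np.argmin(np.abs(y - np.pi / 4)))
    print(L[i])
    ```

    Because L_vK shrinks wherever the resolved field develops fine structure, a dissipation term scaled by it adapts the modelled length scale to the local flow, which is the mechanism the abstract's limiter builds on.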

  10. A Decade-Long European-Scale Convection-Resolving Climate Simulation on GPUs

    Science.gov (United States)

    Leutwyler, D.; Fuhrer, O.; Ban, N.; Lapillonne, X.; Lüthi, D.; Schar, C.

    2016-12-01

    Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that combine conventional multi-core CPUs with accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the size of the simulation domain to areas spanning continents and the time period up to one decade. We present results from a decade-long, convection-resolving climate simulation over Europe using the GPU-enabled COSMO version on a computational domain with 1536x1536x60 gridpoints. The simulation is driven by the ERA-Interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss some of the advantages and prospects of using GPUs, and focus on the performance of the convection-resolving modeling approach on the European scale. Specifically, we investigate the organization of convective clouds and validate hourly rainfall distributions against various high-resolution data sets.

  11. Raman spectroscopy detects biomolecular changes associated with nanoencapsulated hesperetin treatment in experimental oral carcinogenesis

    International Nuclear Information System (INIS)

    Gurushankar, K; Gohulkumar, M; Krishnakumar, N; Kumar, Piyush; Murali Krishna, C

    2016-01-01

    Recently it has been shown that Raman spectroscopy possesses great potential for the investigation of biomolecular changes of tumor tissues in response to therapeutic drugs in a non-invasive and label-free manner. The present study is designed to investigate the antitumor effect of hesperetin-loaded nanoparticles (HETNPs) relative to the efficacy of native hesperetin (HET) in modifying the biomolecular changes during 7,12-dimethylbenz(a)anthracene (DMBA)-induced oral carcinogenesis using a Raman spectroscopic technique. Significant differences in the intensity and shape of the Raman spectra between the control and the experimental tissues at 1800-500 cm^-1 were observed. Tumor tissues are characterized by an increase in the relative amount of proteins, nucleic acids, tryptophan and phenylalanine and a decrease in the percentage of lipids when compared to the control tissues. Further, oral administration of HET and its nanoparticulates restored the status of the lipids and significantly decreased the levels of protein and nucleic acid content. Treatment with HETNPs showed a more potent antitumor effect than treatment with native HET, resulting in an overall reduction in the intensity of several biochemical Raman bands in DMBA-induced oral carcinogenesis. Principal component and linear discriminant analysis (PC-LDA), together with leave-one-out cross-validation (LOOCV), on the Raman spectra yielded diagnostic sensitivities of 100%, 80%, 91.6% and 65% and specificities of 100%, 65%, 60% and 55% for the classification of control versus DMBA, DMBA versus DMBA + HET, DMBA versus DMBA + HETNPs and DMBA + HET versus DMBA + HETNPs treated tissue groups, respectively. These results further demonstrate that Raman spectroscopy associated with multivariate statistical algorithms could be a valuable tool for developing a comprehensive understanding of the process of biomolecular changes, and could reveal the signatures of the
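
    The dimensionality-reduction-plus-classifier pipeline with leave-one-out cross-validation described above can be sketched with NumPy alone. This is a hedged illustration on synthetic spectra, not the paper's analysis: the data are random stand-ins, and a nearest-centroid rule in PC space stands in for the full LDA step.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic stand-in for baseline-corrected Raman spectra (NOT real
    # data): two tissue classes differing in a few band intensities.
    n, bands = 20, 300
    control = rng.normal(0.0, 1.0, (n, bands))
    tumor = rng.normal(0.0, 1.0, (n, bands))
    tumor[:, 50:60] += 2.0                     # e.g. elevated protein bands
    X = np.vstack([control, tumor])
    y = np.array([0] * n + [1] * n)

    def loocv_accuracy(X, y, n_pc=10):
        """Leave-one-out CV: PCA (via SVD) fitted on the training fold,
        then nearest-centroid classification in the reduced space."""
        hits = 0
        for i in range(len(y)):
            keep = np.arange(len(y)) != i
            Xtr, ytr = X[keep], y[keep]
            mu = Xtr.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xtr - mu, full_matrices=False)
            P = Vt[:n_pc].T                    # top principal directions
            Ztr, z = (Xtr - mu) @ P, (X[i] - mu) @ P
            c0 = Ztr[ytr == 0].mean(axis=0)
            c1 = Ztr[ytr == 1].mean(axis=0)
            pred = int(np.linalg.norm(z - c1) < np.linalg.norm(z - c0))
            hits += pred == y[i]
        return hits / len(y)

    print(f"LOOCV accuracy: {loocv_accuracy(X, y):.2f}")
    ```

    The key discipline, mirrored here, is that the PCA projection is refitted inside each leave-one-out fold; fitting it once on all spectra would leak information from the held-out sample.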

  12. The HADDOCK2.2 Web Server: User-Friendly Integrative Modeling of Biomolecular Complexes.

    Science.gov (United States)

    van Zundert, G C P; Rodrigues, J P G L M; Trellet, M; Schmitz, C; Kastritis, P L; Karaca, E; Melquiond, A S J; van Dijk, M; de Vries, S J; Bonvin, A M J J

    2016-02-22

    The prediction of the quaternary structure of biomolecular macromolecules is of paramount importance for the fundamental understanding of cellular processes and for drug design. In the era of integrative structural biology, one way of increasing the accuracy of modeling methods used to predict the structure of biomolecular complexes is to include as much experimental or predictive information as possible in the process. This has been at the core of our information-driven docking approach HADDOCK. We present here the updated version 2.2 of the HADDOCK portal, which offers new features such as support for mixed molecule types, additional experimental restraints and improved protocols, all of this in a user-friendly interface. With well over 6000 registered users and 108,000 jobs served, an increasing fraction of them run on grid resources, we hope that this timely upgrade will help the community to solve important biological questions and further advance the field. The HADDOCK2.2 Web server is freely accessible to non-profit users at http://haddock.science.uu.nl/services/HADDOCK2.2. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. Coupling switches and oscillators as a means to shape cellular signals in biomolecular systems

    International Nuclear Information System (INIS)

    Zhou, Peipei; Cai, Shuiming; Liu, Zengrong; Chen, Luonan; Wang, Ruiqi

    2013-01-01

    To understand how a complex biomolecular network functions, a decomposition or reconstruction process of the network is often needed, both to provide new insights into the regulatory mechanisms underlying various dynamical behaviors and to gain qualitative knowledge of the network. Unfortunately, there seem to be no general rules on how to decompose a complex network into simple modules. An alternative approach is to decompose a complex network into small modules or subsystems with specified functions, such as switches and oscillators, and then integrate them by analyzing the interactions between them. The main idea of this approach is illustrated in this paper by considering a bidirectionally coupled network, i.e., a coupled Toggle switch and Repressilator, and analyzing the occurrence of various dynamics, although the theoretical principle may hold for a general class of networks. We show that various biomolecular signals can be shaped by regulating the coupling between the subsystems. The approach presented here can be expected to simplify the analysis of even more complex biological networks.
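
    The Repressilator subsystem mentioned above is a ring of three mutually repressing genes. A minimal protein-only sketch (the parameters, the steep Hill coefficient, and the forward-Euler stepping are illustrative assumptions; the paper's coupled Toggle-switch/Repressilator model is richer):

    ```python
    import numpy as np

    def step(p, dt, alpha=200.0, alpha0=1.0, n=4.0):
        """One forward-Euler step of a protein-only Repressilator:
        each protein is produced under Hill repression by the previous
        one in the ring and decays at unit rate."""
        a, b, c = p
        da = alpha / (1 + c**n) + alpha0 - a
        db = alpha / (1 + a**n) + alpha0 - b
        dc = alpha / (1 + b**n) + alpha0 - c
        return p + dt * np.array([da, db, dc])

    p = np.array([1.0, 1.5, 2.0])   # asymmetric start kicks off the cycle
    dt, traj = 0.01, []
    for i in range(20000):          # integrate t = 0..200
        p = step(p, dt)
        if i >= 10000:              # discard the transient (t < 100)
            traj.append(p[0])
    traj = np.array(traj)
    print("oscillation amplitude ~", traj.max() - traj.min())
    ```

    Sustained oscillation of one module is the raw signal that, per the abstract, the coupling to a bistable switch can then reshape.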

  14. Coupling switches and oscillators as a means to shape cellular signals in biomolecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Peipei [Institute of Systems Biology, Shanghai University, Shanghai 200444 (China); Faculty of Science, Jiangsu University, Zhenjiang, Jiangsu 212013 (China); Cai, Shuiming [Faculty of Science, Jiangsu University, Zhenjiang, Jiangsu 212013 (China); Liu, Zengrong [Institute of Systems Biology, Shanghai University, Shanghai 200444 (China); Chen, Luonan [Key Laboratory of Systems Biology, SIBS-Novo Nordisk Translational Research Center for PreDiabetes, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031 (China); Collaborative Research Center for Innovative Mathematical Modeling, Institute of Industrial Science, University of Tokyo, Tokyo 153-8505 (Japan); Wang, Ruiqi [Institute of Systems Biology, Shanghai University, Shanghai 200444 (China)

    2013-05-15

    To understand how a complex biomolecular network functions, a decomposition or reconstruction process of the network is often needed, both to provide new insights into the regulatory mechanisms underlying various dynamical behaviors and to gain qualitative knowledge of the network. Unfortunately, there seem to be no general rules on how to decompose a complex network into simple modules. An alternative approach is to decompose a complex network into small modules or subsystems with specified functions, such as switches and oscillators, and then integrate them by analyzing the interactions between them. The main idea of this approach is illustrated in this paper by considering a bidirectionally coupled network, i.e., a coupled Toggle switch and Repressilator, and analyzing the occurrence of various dynamics, although the theoretical principle may hold for a general class of networks. We show that various biomolecular signals can be shaped by regulating the coupling between the subsystems. The approach presented here can be expected to simplify the analysis of even more complex biological networks.

  15. Large-scale simulations of plastic neural networks on neuromorphic hardware

    Directory of Open Access Journals (Sweden)

    James Courtney Knight

    2016-04-01

    Full Text Available SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning, since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
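
    The event-based idea referred to above, i.e. using the closed-form solution of a trace's ODE so that synapse state is touched only when a spike arrives, can be sketched as follows. The class and parameter names are illustrative; a real BCPNN synapse couples several such traces, and this shows only the event-driven decay mechanism.

    ```python
    import math

    # Between spikes the trace obeys tau * dz/dt = -z, whose closed form
    # is z(t) = z(t0) * exp(-(t - t0)/tau). So instead of updating every
    # time-step, we store (z, t_last) and evaluate lazily at events.
    class Trace:
        def __init__(self, tau):
            self.tau, self.z, self.t = tau, 0.0, 0.0

        def value_at(self, t):
            """Analytically decayed value at time t >= last update."""
            return self.z * math.exp(-(t - self.t) / self.tau)

        def spike(self, t, weight=1.0):
            """Decay to time t, then increment for the incoming spike."""
            self.z = self.value_at(t) + weight
            self.t = t

    z = Trace(tau=20.0)
    z.spike(0.0)
    z.spike(10.0)
    print(z.value_at(30.0))
    ```

    The saving is exactly the one the abstract describes: per-synapse work scales with spike events rather than with simulation time-steps, at the cost of an exponential evaluation per event.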

  16. A study on a nano-scale materials simulation using a PC cluster

    International Nuclear Information System (INIS)

    Choi, Deok Kee; Ryu, Han Kyu

    2002-01-01

    Many scientists have paid attention to the application of molecular dynamics to chemistry, biology and physics. With the recent popularity of nanotechnology, nano-scale analysis has become a major subject in various engineering fields. The underlying nano-scale analysis is based on classical molecular theories representing molecular dynamics. Based on Newton's laws of motion, the movement of each particle is determined by numerical integration. As the size of the computation is closely related to the number of molecules, materials simulation takes up a huge amount of computer resources, so it is only recently that the application of molecular dynamics to materials simulations has drawn attention from many researchers. Thanks to high-performance computers, materials simulation via molecular dynamics looks promising. In this study, a PC cluster consisting of multiple commodity PCs is established and nano-scale materials simulations are carried out. Micro-sized crack propagation inside a nano material is displayed by the simulation.
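
    The numerical integration of Newton's equations mentioned above is typically done with a symplectic scheme such as velocity Verlet. A minimal one-particle sketch (the harmonic force and unit constants are illustrative assumptions; production MD codes apply the same update with real interatomic potentials across many particles):

    ```python
    import math

    def velocity_verlet(x, v, force, m, dt, steps):
        """Velocity-Verlet integration of Newton's equation m*x'' = F(x)."""
        f = force(x)
        for _ in range(steps):
            x = x + v * dt + 0.5 * (f / m) * dt ** 2   # position update
            f_new = force(x)
            v = v + 0.5 * (f + f_new) / m * dt         # velocity update
            f = f_new
        return x, v

    # Harmonic bond with k = m = 1: the period is 2*pi, so after one
    # period the oscillator should return close to its initial state.
    k, m, dt = 1.0, 1.0, 0.01
    x, v = velocity_verlet(1.0, 0.0, lambda q: -k * q, m, dt,
                           int(2 * math.pi / dt))
    print(x, v)
    ```

    Because only one force evaluation per step is needed and energy drift stays bounded, this scheme parallelizes naturally across the nodes of a commodity PC cluster like the one described.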

  17. Anomalous scaling of structure functions and dynamic constraints on turbulence simulations

    International Nuclear Information System (INIS)

    Yakhot, Victor; Sreenivasan, Katepalli R.

    2006-12-01

    The connection between anomalous scaling of structure functions (intermittency) and numerical methods for turbulence simulations is discussed. It is argued that the computational work for direct numerical simulations (DNS) of fully developed turbulence increases as Re^4, and not as the Re^3 expected from Kolmogorov's theory, where Re is a large-scale Reynolds number. Various relations for the moments of acceleration and velocity derivatives are derived. An infinite set of exact constraints on dynamically consistent subgrid models for Large Eddy Simulations (LES) is derived from the Navier-Stokes equations, and some problems of principle associated with existing LES models are highlighted. (author)
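
    The practical gap between the two cost estimates is easy to tabulate. A sketch using only the exponents from the abstract (the prefactor is arbitrary, so only the ratio between the two estimates is meaningful):

    ```python
    # DNS work estimates at large-scale Reynolds number Re: the classical
    # Kolmogorov-based estimate scales as Re^3, the intermittency-corrected
    # estimate argued for in the abstract as Re^4.
    def dns_work(Re, exponent):
        return Re ** exponent          # arbitrary units

    for Re in (1e3, 1e4, 1e5):
        ratio = dns_work(Re, 4) / dns_work(Re, 3)
        print(f"Re={Re:.0e}: extra cost factor from intermittency ~ {ratio:.0e}")
    ```

    At Re = 10^5 the corrected estimate is thus a factor of 10^5 more expensive than the Kolmogorov one, which is why the exponent matters for planning simulations.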

  18. Biomolecular Markers in Cancer of the Tongue

    Directory of Open Access Journals (Sweden)

    Daris Ferrari

    2009-01-01

    Full Text Available The incidence of tongue cancer is increasing worldwide, and its aggressiveness remains high regardless of treatment. Genetic changes and the expression of abnormal proteins have been frequently reported in the case of head and neck cancers, but the little information that has been published concerning tongue tumours is often contradictory. This review will concentrate on the immunohistochemical expression of biomolecular markers and their relationships with clinical behaviour and prognosis. Most of these proteins are associated with nodal stage, tumour progression and metastases, but there is still controversy concerning their impact on disease-free and overall survival, and treatment response. More extensive clinical studies are needed to identify the patterns of molecular alterations and the most reliable predictors in order to develop tailored anti-tumour strategies based on the targeting of hypoxia markers, vascular and lymphangiogenic factors, epidermal growth factor receptors, intracytoplasmatic signalling and apoptosis.

  19. Towards sensitive, high-throughput, biomolecular assays based on fluorescence lifetime

    Science.gov (United States)

    Ioanna Skilitsi, Anastasia; Turko, Timothé; Cianfarani, Damien; Barre, Sophie; Uhring, Wilfried; Hassiepen, Ulrich; Léonard, Jérémie

    2017-09-01

    Time-resolved fluorescence detection for robust sensing of biomolecular interactions is developed by implementing time-correlated single photon counting in high-throughput conditions. Droplet microfluidics is used as a promising platform for the very fast handling of low-volume samples. We illustrate the potential of this very sensitive and cost-effective technology in the context of an enzymatic activity assay based on fluorescently-labeled biomolecules. Fluorescence lifetime detection by time-correlated single photon counting is shown to enable reliable discrimination between positive and negative control samples at a throughput as high as several hundred samples per second.

  20. Small-angle X-ray scattering investigations of biomolecular confinement, loading, and release from liquid-crystalline nanochannel assemblies

    Czech Academy of Sciences Publication Activity Database

    Angelova, A.; Angelov, Borislav; Garamus, V. M.; Couvreur, P.; Lesieur, S.

    2012-01-01

    Vol. 3, No. 3 (2012), pp. 445-457. ISSN 1948-7185. Institutional research plan: CEZ:AV0Z40500505. Keywords: nanochannels * biomolecular nanostructures * SAXS. Subject RIV: CD - Macromolecular Chemistry. Impact factor: 6.585, year: 2012

  1. Integration of biomolecular logic gates with field-effect transducers

    Energy Technology Data Exchange (ETDEWEB)

    Poghossian, A., E-mail: a.poghossian@fz-juelich.de [Institute of Nano- and Biotechnologies, Aachen University of Applied Sciences, Campus Juelich, Heinrich-Mussmann-Str. 1, D-52428 Juelich (Germany); Institute of Bio- and Nanosystems, Research Centre Juelich GmbH, D-52425 Juelich (Germany); Malzahn, K. [Institute of Nano- and Biotechnologies, Aachen University of Applied Sciences, Campus Juelich, Heinrich-Mussmann-Str. 1, D-52428 Juelich (Germany); Abouzar, M.H. [Institute of Nano- and Biotechnologies, Aachen University of Applied Sciences, Campus Juelich, Heinrich-Mussmann-Str. 1, D-52428 Juelich (Germany); Institute of Bio- and Nanosystems, Research Centre Juelich GmbH, D-52425 Juelich (Germany); Mehndiratta, P. [Institute of Nano- and Biotechnologies, Aachen University of Applied Sciences, Campus Juelich, Heinrich-Mussmann-Str. 1, D-52428 Juelich (Germany); Katz, E. [Department of Chemistry and Biomolecular Science, NanoBio Laboratory (NABLAB), Clarkson University, Potsdam, NY 13699-5810 (United States); Schoening, M.J. [Institute of Nano- and Biotechnologies, Aachen University of Applied Sciences, Campus Juelich, Heinrich-Mussmann-Str. 1, D-52428 Juelich (Germany); Institute of Bio- and Nanosystems, Research Centre Juelich GmbH, D-52425 Juelich (Germany)

    2011-11-01

    Highlights: > Enzyme-based AND/OR logic gates are integrated with a capacitive field-effect sensor. > The AND/OR logic gates consist of multi-enzyme systems immobilised on the sensor surface. > The logic gates were activated by different combinations of chemical inputs (analytes). > The logic output (a pH change) produced by the enzymes was read out by the sensor. - Abstract: The integration of biomolecular logic gates with field-effect devices - the basic element of conventional electronic logic gates and computing - is one of the most attractive and promising approaches for the transformation of biomolecular logic principles into macroscopically useable electrical output signals. In this work, capacitive field-effect EIS (electrolyte-insulator-semiconductor) sensors based on a p-Si-SiO2-Ta2O5 structure modified with a multi-enzyme membrane have been used for electronic transduction of biochemical signals processed by enzyme-based OR and AND logic gates. The realised OR logic gate consists of two enzymes (glucose oxidase and esterase) and was activated by ethyl butyrate or/and glucose. The AND logic gate consists of three enzymes (invertase, mutarotase and glucose oxidase) and was activated by two chemical input signals: sucrose and dissolved oxygen. The developed integrated enzyme logic gates produce local pH changes at the EIS sensor surface as a result of biochemical reactions activated by different combinations of chemical input signals, while the pH value of the bulk solution remains unchanged. The pH-induced charge changes at the gate-insulator (Ta2O5) surface of the EIS transducer result in an electronic signal corresponding to the logic output produced by the immobilised enzymes. The logic output signals have been read out by means of a constant-capacitance method.
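
    The logical behaviour of the two enzyme cascades described above can be captured as truth tables. This is pure bookkeeping, not a chemical model: inputs are the chemical signals (1 = present), and an output of 1 stands for the local pH change the EIS sensor reads out.

    ```python
    def or_gate(ethyl_butyrate, glucose):
        """OR gate: esterase responds to ethyl butyrate, glucose oxidase
        to glucose; either reaction acidifies the membrane."""
        return int(bool(ethyl_butyrate) or bool(glucose))

    def and_gate(sucrose, oxygen):
        """AND gate: the invertase -> mutarotase -> glucose oxidase
        cascade needs both sucrose and dissolved oxygen to proceed."""
        return int(bool(sucrose) and bool(oxygen))

    print("in1 in2  OR  AND")
    for a in (0, 1):
        for b in (0, 1):
            print(f"{a:3d} {b:3d} {or_gate(a, b):3d} {and_gate(a, b):4d}")
    ```

    Mapping chemistry onto these Boolean abstractions is exactly what lets the field-effect transducer turn enzyme kinetics into a conventional digital output.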

  2. Integration of biomolecular logic gates with field-effect transducers

    International Nuclear Information System (INIS)

    Poghossian, A.; Malzahn, K.; Abouzar, M.H.; Mehndiratta, P.; Katz, E.; Schoening, M.J.

    2011-01-01

    Highlights: → Enzyme-based AND/OR logic gates are integrated with a capacitive field-effect sensor. → The AND/OR logic gates consist of multi-enzyme systems immobilised on the sensor surface. → The logic gates were activated by different combinations of chemical inputs (analytes). → The logic output (a pH change) produced by the enzymes was read out by the sensor. - Abstract: The integration of biomolecular logic gates with field-effect devices - the basic element of conventional electronic logic gates and computing - is one of the most attractive and promising approaches for the transformation of biomolecular logic principles into macroscopically useable electrical output signals. In this work, capacitive field-effect EIS (electrolyte-insulator-semiconductor) sensors based on a p-Si-SiO2-Ta2O5 structure modified with a multi-enzyme membrane have been used for electronic transduction of biochemical signals processed by enzyme-based OR and AND logic gates. The realised OR logic gate consists of two enzymes (glucose oxidase and esterase) and was activated by ethyl butyrate or/and glucose. The AND logic gate consists of three enzymes (invertase, mutarotase and glucose oxidase) and was activated by two chemical input signals: sucrose and dissolved oxygen. The developed integrated enzyme logic gates produce local pH changes at the EIS sensor surface as a result of biochemical reactions activated by different combinations of chemical input signals, while the pH value of the bulk solution remains unchanged. The pH-induced charge changes at the gate-insulator (Ta2O5) surface of the EIS transducer result in an electronic signal corresponding to the logic output produced by the immobilised enzymes. The logic output signals have been read out by means of a constant-capacitance method.

  3. Large-scale modeling of epileptic seizures: scaling properties of two parallel neuronal network simulation algorithms.

    Science.gov (United States)

    Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  4. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    Directory of Open Access Journals (Sweden)

    Lorenzo L. Pesce

    2013-01-01

    Full Text Available Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  5. Microsecond atomic-scale molecular dynamics simulations of polyimides

    NARCIS (Netherlands)

    Lyulin, S.V.; Gurtovenko, A.A.; Larin, S.V.; Nazarychev, V.M.; Lyulin, A.V.

    2013-01-01

    We employ microsecond atomic-scale molecular dynamics simulations to get insight into the structural and thermal properties of heat-resistant bulk polyimides. As electrostatic interactions are essential for the polyimides considered, we propose a two-step equilibration protocol that includes long

  6. Fully kinetic simulations of megajoule-scale dense plasma focus

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, A.; Link, A.; Tang, V.; Halvorson, C.; May, M. [Lawrence Livermore National Laboratory, Livermore California 94550 (United States); Welch, D. [Voss Scientific, LLC, Albuquerque, New Mexico 87108 (United States); Meehan, B. T.; Hagen, E. C. [National Security Technologies, LLC, Las Vegas, Nevada 89030 (United States)

    2014-10-15

    Dense plasma focus (DPF) Z-pinch devices are sources of copious high-energy electrons and ions, x-rays, and neutrons. Megajoule-scale DPFs can generate 10^12 neutrons per pulse in deuterium gas through a combination of thermonuclear and beam-target fusion. However, the details of the neutron production are not fully understood and past optimization efforts of these devices have been largely empirical. Previously, we reported on the first fully kinetic simulations of a kilojoule-scale DPF and demonstrated that both kinetic ions and kinetic electrons are needed to reproduce experimentally observed features, such as charged-particle beam formation and anomalous resistivity. Here, we present the first fully kinetic simulation of a megajoule DPF, with predicted ion and neutron spectra, neutron anisotropy, neutron spot size, and time history of neutron production. The total yield predicted by the simulation is in agreement with measured values, validating the kinetic model in a second energy regime.

  7. Techniques of biomolecular quantification through AMS detection of radiocarbon

    International Nuclear Information System (INIS)

    Vogel, S.J.; Turteltaub, K.W.; Frantz, C.; Felton, J.S.; Gledhill, B.L.

    1992-01-01

    Accelerator mass spectrometry offers a large gain in sensitivity over scintillation counting for detecting radiocarbon in biomolecular tracing. Application of this sensitivity requires new considerations of the procedures used to extract or isolate the carbon fraction to be quantified, to inventory all carbon in the sample, to prepare graphite from the sample for use in the spectrometer, and to derive a meaningful quantification from the measured isotope ratio. These procedures need to be accomplished without contaminating the sample with radiocarbon, which may be ubiquitous in laboratories and on equipment previously used for higher-dose scintillation experiments. Disposable equipment, materials and surfaces are used to control these contaminations. Quantification of attomole amounts of labeled substances is possible through these techniques.
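
    The final step quoted above, deriving a quantification from the measured isotope ratio, amounts to simple bookkeeping. An order-of-magnitude sketch (the ratio values, the sample size, and the one-label-per-molecule assumption are illustrative, not from the paper):

    ```python
    R_MODERN = 1.2e-12   # approximate 14C/C ratio of modern carbon

    def labeled_amount_mol(ratio_sample, ratio_background, carbon_mol):
        """Moles of 14C label carried by `carbon_mol` moles of sample
        carbon, given sample and background 14C/C ratios, assuming one
        14C label per labeled molecule."""
        excess_ratio = ratio_sample - ratio_background
        return excess_ratio * carbon_mol

    # 1 mg of carbon (~8.3e-5 mol) measured at 10x the background ratio:
    amount = labeled_amount_mol(10 * R_MODERN, R_MODERN, 8.3e-5)
    print(f"{amount / 1e-18:.1f} attomol of label")
    ```

    A ten-fold elevation over background in a milligram of carbon thus corresponds to only hundreds of attomoles of label, which is why avoiding stray radiocarbon contamination dominates the procedures described.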

  8. Classical molecular dynamics simulations of fusion and fragmentation in fullerene-fullerene collisions

    International Nuclear Information System (INIS)

    Verkhovtsev, A.; Korol, A.V.; Solovyov, A.V.

    2017-01-01

    We present the results of classical molecular dynamics simulations of collision-induced fusion and fragmentation of C60 fullerenes, performed by means of the MBN Explorer software package. The simulations provide information on structural differences of the fused compound depending on the kinematics of the collision process. The analysis of fragmentation dynamics at different initial conditions shows that the size distributions of produced molecular fragments are peaked for dimers, which is in agreement with the well-established mechanism of C60 fragmentation via preferential C2 emission. Atomic trajectories of the colliding particles are analyzed and different fragmentation patterns are observed and discussed. On the basis of the performed simulations, the characteristic time of C2 emission is estimated as a function of collision energy. The results are compared with experimental time-of-flight distributions of molecular fragments and with earlier theoretical studies. Considering the widely explored case study of C60-C60 collisions, we demonstrate the broad capabilities of the MBN Explorer software, which can be utilized for studying collisions of a broad variety of nano-scale and biomolecular systems by means of classical molecular dynamics. (authors)

  9. Believability in simplifications of large scale physically based simulation

    KAUST Repository

    Han, Donghui; Hsu, Shu-wei; McNamara, Ann; Keyser, John

    2013-01-01

    We verify two hypotheses that are commonly assumed to be true, but only on intuitive grounds, in many rigid body simulations. I: In large-scale rigid body simulation, viewers may not be able to perceive the distortion incurred by an approximated simulation method. II: Fixing objects under a pile of objects does not affect visual plausibility. The visual plausibility of scenarios simulated under these assumptions is measured using subjective ratings from viewers. As expected, analysis of the results supports the validity of the hypotheses under certain simulation environments. However, our analysis identified four factors that may affect their validity: the number of collisions simulated simultaneously, the homogeneity of colliding object pairs, the distance from the simulated scene to the camera position, and the simulation method used. We also attempted to derive an objective metric of visual plausibility from eye-tracking data collected from viewers. Analysis of these results indicates that eye-tracking does not provide a suitable proxy for measuring plausibility or for distinguishing between types of simulations. © 2013 ACM.

  10. Gyrokinetic Simulations of Solar Wind Turbulence from Ion to Electron Scales

    International Nuclear Information System (INIS)

    Howes, G. G.; TenBarge, J. M.; Dorland, W.; Numata, R.; Quataert, E.; Schekochihin, A. A.; Tatsuno, T.

    2011-01-01

    A three-dimensional, nonlinear gyrokinetic simulation of plasma turbulence resolving scales from the ion to electron gyroradius with a realistic mass ratio is presented, where all damping is provided by resolved physical mechanisms. The resulting energy spectra are quantitatively consistent with a magnetic power spectrum scaling of k^(-2.8), as observed in in situ spacecraft measurements of the 'dissipation range' of solar wind turbulence. Despite the strongly nonlinear nature of the turbulence, the linear kinetic Alfvén wave mode quantitatively describes the polarization of the turbulent fluctuations. The collisional ion heating is measured at sub-ion-Larmor-radius scales, which provides evidence of the ion entropy cascade in an electromagnetic turbulence simulation.

  11. Simulation test of PIUS-type reactor with large scale experimental apparatus

    International Nuclear Information System (INIS)

    Tamaki, M.; Tsuji, Y.; Ito, T.; Tasaka, K.; Kukita, Yutaka

    1995-01-01

    A large-scale experimental apparatus for simulating the PIUS-type reactor has been constructed, preserving the volumetric scaling ratio with respect to the realistic reactor model. Fundamental experiments such as steady-state operation and a pump-trip simulation were performed. Experimental results were compared with those obtained with the small-scale apparatus at JAERI. We have already reported the effectiveness of feedback control of the primary-loop pump speed (PI control) for stable operation. In this paper the feedback system is modified and PID control is introduced. The new system worked well for the operation of the PIUS-type reactor even under rapid transient conditions. (author)

  12. Mercury and methylmercury stream concentrations in a Coastal Plain watershed: a multi-scale simulation analysis.

    Science.gov (United States)

    Knightes, C D; Golden, H E; Journey, C A; Davis, G M; Conrads, P A; Marvin-DiPasquale, M; Brigham, M E; Bradley, P M

    2014-04-01

    Mercury is a ubiquitous global environmental toxicant responsible for most US fish advisories. Processes governing mercury concentrations in rivers and streams are not well understood, particularly at multiple spatial scales. We investigate how insights gained from reach-scale mercury data and model simulations can be applied at broader watershed scales using a spatially and temporally explicit watershed hydrology and biogeochemical cycling model, VELMA. We simulate fate and transport using reach-scale (0.1 km²) study data and evaluate applications to multiple watershed scales. Reach-scale VELMA parameterization was applied to two nested sub-watersheds (28 km² and 25 km²) and the encompassing watershed (79 km²). Results demonstrate that simulated flow and total mercury concentrations compare reasonably to observations at different scales, but simulated methylmercury concentrations are out-of-phase with observations. These findings suggest that intricacies of methylmercury biogeochemical cycling and transport are under-represented in VELMA and underscore the complexity of simulating mercury fate and transport. Published by Elsevier Ltd.

  13. Multiple time-scale methods in particle simulations of plasmas

    International Nuclear Information System (INIS)

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling.

  14. Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation

    International Nuclear Information System (INIS)

    Hoshi, T; Fujiwara, T

    2009-01-01

    An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.

  15. Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ; Tang, Y.; Liu, H.; Yoon, Hongkyu; Kang, Qinjun; Joekar Niasar, Vahid; Balhoff, Matthew; Dewers, T.; Tartakovsky, Guzel D.; Leist, Emily AE; Hess, Nancy J.; Perkins, William A.; Rakowski, Cynthia L.; Richmond, Marshall C.; Serkowski, John A.; Werth, Charles J.; Valocchi, Albert J.; Wietsma, Thomas W.; Zhang, Changyong

    2016-08-01

    Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow-focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two were based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The learning experiments were used by the PN models to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used these experiments to appropriately discretize the grid representations. The continuum model used published non-linear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.
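
The nonlinear dependence of the dispersion coefficient on the Peclet number noted above is commonly summarized as a power law, D/Dm ≈ 1 + a·Pe^β. A minimal sketch of how such an exponent is extracted by a log-log fit; the data points below are synthetic and purely illustrative, not the micromodel measurements.

```python
import numpy as np

# Synthetic (Pe, D/Dm) pairs generated from D/Dm = 1 + 0.5 * Pe**1.2;
# real micromodel data would replace these.
pe = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])
d_over_dm = 1.0 + 0.5 * pe**1.2

# Fit the exponent beta on log-log axes for the advection-dominated term
beta, log_a = np.polyfit(np.log(pe), np.log(d_over_dm - 1.0), 1)
```

With noise-free synthetic data the fit recovers beta = 1.2 and a = 0.5 exactly; with experimental data the same procedure yields the reported nonlinear Pe dependence.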

  16. ISAMBARD: an open-source computational environment for biomolecular analysis, modelling and design.

    Science.gov (United States)

    Wood, Christopher W; Heal, Jack W; Thomson, Andrew R; Bartlett, Gail J; Ibarra, Amaurys Á; Brady, R Leo; Sessions, Richard B; Woolfson, Derek N

    2017-10-01

    The rational design of biomolecules is becoming a reality. However, further computational tools are needed to facilitate and accelerate this, and to make it accessible to more users. Here we introduce ISAMBARD, a tool for structural analysis, model building and rational design of biomolecules. ISAMBARD is open-source, modular, computationally scalable and intuitive to use. These features allow non-experts to explore biomolecular design in silico. ISAMBARD addresses a standing issue in protein design, namely, how to introduce backbone variability in a controlled manner. This is achieved through the generalization of tools for parametric modelling, describing the overall shape of proteins geometrically, and without input from experimentally determined structures. This will allow backbone conformations for entire folds and assemblies not observed in nature to be generated de novo, that is, to access the 'dark matter of protein-fold space'. We anticipate that ISAMBARD will find broad applications in biomolecular design, biotechnology and synthetic biology. A current stable build can be downloaded from the python package index (https://pypi.python.org/pypi/isambard/) with development builds available on GitHub (https://github.com/woolfson-group/) along with documentation, tutorial material and all the scripts used to generate the data described in this paper. d.n.woolfson@bristol.ac.uk or chris.wood@bristol.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  17. Large-scale ground motion simulation using GPGPU

    Science.gov (United States)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumed source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation traditionally performed by the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in the two horizontal directions and each decomposed subdomain is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of the fastest Japanese supercomputers, operated by the Tokyo Institute of Technology. First we performed a strong-scaling test using a model with about 22 million grid points and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs, respectively. Next, we examined a weak-scaling test where the model sizes (number of grid points) are increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grid points using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number
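
The strong-scaling figures quoted above translate directly into parallel efficiency (speed-up divided by device count). A small helper makes the trade-off explicit; the two helper functions are generic, and only the numeric inputs come from the abstract:

```python
def speedup(t_serial, t_parallel):
    """Ratio of serial to parallel wall-clock time."""
    return t_serial / t_parallel

def parallel_efficiency(speedup_factor, n_devices):
    """Fraction of ideal linear scaling actually achieved."""
    return speedup_factor / n_devices

# Figures quoted in the abstract: 3.2x on 4 GPUs, 7.3x on 16 GPUs
eff_4 = parallel_efficiency(3.2, 4)    # 0.80
eff_16 = parallel_efficiency(7.3, 16)  # ~0.46
```

The drop from 80% to roughly 46% efficiency as the GPU count quadruples on a fixed-size problem is the usual strong-scaling penalty, which is why the weak-scaling test (problem size growing with GPU count) shows near-perfect linearity instead.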

  18. Integrated Spintronic Platforms for Biomolecular Recognition Detection

    Science.gov (United States)

    Martins, V. C.; Cardoso, F. A.; Loureiro, J.; Mercier, M.; Germano, J.; Cardoso, S.; Ferreira, R.; Fonseca, L. P.; Sousa, L.; Piedade, M. S.; Freitas, P. P.

    2008-06-01

    This paper covers recent developments in magnetoresistive-based biochip platforms fabricated at INESC-MN, and their application to the detection and quantification of pathogenic waterborne microorganisms in water samples for human consumption. Such platforms are intended to respond to the increasing concern over microbially contaminated water sources. The presented results concern the development of biologically active DNA chips and protein chips and the demonstration of the detection capability of the present platforms. Two platforms are described: one including spintronic sensors only (spin-valve based or magnetic tunnel junction based), and the other a fully scalable platform where each probe site consists of an MTJ in series with a thin film diode (TFD). Two microfluidic systems are described, for cell separation and concentration, and finally, the readout and control integrated electronics are described, allowing the realization of bioassays with a portable point-of-care unit. The present platforms already allow the detection of complementary biomolecular target recognition at 1 pM concentration.

  19. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2018-05-01

    Computing speed is a significant issue in large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most large-scale flood simulations are run on supercomputers due to the massive amounts of data and computation involved. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme is proposed for flood simulation. To realize fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based high-performance computing method using OpenACC was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transport between the GPU and the CPU (Central Processing Unit) with minimum overhead, and both computation and data were offloaded from the CPU to the GPU, exploiting the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards over large areas and thus has bright application prospects for dynamic inundation risk identification and disaster assessment.

  20. Multi-Scale Simulation of High Energy Density Ionic Liquids

    National Research Council Canada - National Science Library

    Voth, Gregory A

    2007-01-01

    The focus of this AFOSR project was the molecular dynamics (MD) simulation of ionic liquid structure, dynamics, and interfacial properties, as well as multi-scale descriptions of these novel liquids (e.g...

  1. Scaling for integral simulation of thermal-hydraulic phenomena in SBWR during LOCA

    Energy Technology Data Exchange (ETDEWEB)

    Ishii, M.; Revankar, S.T.; Dowlati, R. [Purdue Univ., West Lafayette, IN (United States)] [and others]

    1995-09-01

    A scaling study has been conducted for simulation of thermal-hydraulic phenomena in the Simplified Boiling Water Reactor (SBWR) during a loss-of-coolant accident. The scaling method consists of a three-level approach. The integral system scaling (global scaling, or top-down approach) comprises two levels: the integral response function scaling, which forms the first level, and the control volume and boundary flow scaling, which forms the second level. The bottom-up approach is carried out through local phenomena scaling, which forms the third level. Based on this scaling study, the design of the model facility, the Purdue University Multi-Dimensional Integral Test Assembly (PUMA), has been carried out. The PUMA facility has 1/4 height and 1/100 area ratio scaling, corresponding to a volume scaling of 1/400. The PUMA power scaling based on the integral scaling is 1/200. The present scaling method predicts that the PUMA time scale will be one-half that of the SBWR. The system pressure for PUMA is full scale; therefore, a prototypic pressure is maintained. PUMA is designed to operate at and below 1.03 MPa (150 psi), which allows it to simulate the prototypic SBWR accident conditions below 1.03 MPa (150 psi). The facility includes models for all components of importance.
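
The quoted PUMA ratios are mutually consistent under the usual reduced-height scaling laws. The specific laws assumed in the comments (time scaling as the square root of the height ratio, power scaling as volume divided by time) are standard in integral scaling analyses, though the abstract does not state them explicitly; a quick consistency check:

```python
from math import sqrt, isclose

height_ratio = 1 / 4   # PUMA height / SBWR height
area_ratio = 1 / 100   # flow-area ratio

# Volume scales as height x area
volume_ratio = height_ratio * area_ratio   # 1/400, as quoted

# In reduced-height integral scaling, time scales as sqrt(height)
time_ratio = sqrt(height_ratio)            # 1/2: PUMA runs twice as fast

# Preserving scaled power density requires power ~ volume / time
power_ratio = volume_ratio / time_ratio    # 1/200, as quoted
```

All three derived ratios match the values stated in the abstract (1/400 volume, 1/2 time, 1/200 power).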

  2. Dynamic subgrid scale model of large eddy simulation of cross bundle flows

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Barsamian, H.R.

    1996-01-01

    The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is that it requires no input model coefficient: the coefficient is evaluated dynamically at each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (which serves as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with the experimental data. Satisfactory turbulence characteristics are observed through flow visualization.

  3. Multi-scale properties of large eddy simulations: correlations between resolved-scale velocity-field increments and subgrid-scale quantities

    Science.gov (United States)

    Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca

    2018-06-01

    We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a priori filtered data generated from direct numerical simulations (DNS). We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there is room to improve the SGS modelling to further extend the inertial range properties at any fixed LES resolution.

  4. Design of an embedded inverse-feedforward biomolecular tracking controller for enzymatic reaction processes

    OpenAIRE

    Foo, Mathias; Kim, Jongrae; Sawlekar, Rucha; Bates, Declan G.

    2017-01-01

    Feedback control is widely used in chemical engineering to improve the performance and robustness of chemical processes. Feedback controllers require a ‘subtractor’ that is able to compute the error between the process output and the reference signal. In the case of embedded biomolecular control circuits, subtractors designed using standard chemical reaction network theory can only realise one-sided subtraction, rendering standard controller design approaches inadequate. Here, we show how a b...

  5. Introduction to a Protein Interaction System Used for Quantitative Evaluation of Biomolecular Interactions

    OpenAIRE

    Yamniuk, Aaron

    2013-01-01

    A central goal of molecular biology is the determination of biomolecular function. This comes largely from a knowledge of the non-covalent interactions that biological small and macro-molecules experience. The fundamental mission of the Molecular Interactions Research Group (MIRG) of the ABRF is to show how solution biophysical tools are used to quantitatively characterize molecular interactions, and to educate the ABRF members and scientific community on the utility and limitations of core t...

  6. Review of MEMS differential scanning calorimetry for biomolecular study

    Science.gov (United States)

    Yu, Shifeng; Wang, Shuyu; Lu, Ming; Zuo, Lei

    2017-12-01

    Differential scanning calorimetry (DSC) is one of the few techniques that allow direct determination of enthalpy values for binding reactions and conformational transitions in biomolecules. It provides thermodynamic information (Gibbs free energy, enthalpy and entropy) in a straightforward manner, enabling a deep understanding of structure-function relationships in biomolecules, such as the folding/unfolding of proteins and DNA, and ligand binding. This review provides an up-to-date overview of applications of DSC in biomolecular studies, such as bovine serum albumin denaturation and the dependence of the lysozyme melting point on scanning rate. We also introduce recent advances in the development of micro-electro-mechanical-system (MEMS) based DSCs.
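
The thermodynamic quantities DSC yields are linked by ΔG = ΔH − TΔS; at the melting midpoint Tm the folded and unfolded states are equally populated, so ΔG(Tm) = 0 and hence ΔS = ΔH/Tm. A minimal sketch, with illustrative (not measured) values and the simplifying assumption that ΔH and ΔS are temperature-independent:

```python
def unfolding_entropy(delta_h, t_m):
    """ΔS from the calorimetric enthalpy and melting midpoint,
    using ΔG(Tm) = ΔH - Tm*ΔS = 0."""
    return delta_h / t_m

def gibbs_free_energy(delta_h, delta_s, temperature):
    """Stability ΔG = ΔH - T*ΔS at a given temperature."""
    return delta_h - temperature * delta_s

# Illustrative lysozyme-like numbers: ΔH = 500 kJ/mol, Tm = 345 K
ds = unfolding_entropy(500.0, 345.0)           # kJ/(mol*K)
dg_25c = gibbs_free_energy(500.0, ds, 298.15)  # ~68 kJ/mol at 25 °C
```

A positive ΔG below Tm indicates the folded state is stable there; in practice the heat-capacity change ΔCp makes ΔH and ΔS temperature-dependent, which this sketch ignores.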

  7. A Group Simulation of the Development of the Geologic Time Scale.

    Science.gov (United States)

    Bennington, J. Bret

    2000-01-01

    Explains how to demonstrate to students that the relative dating of rock layers is redundant. Uses two column diagrams to simulate stratigraphic sequences from two different geological time scales and asks students to complete the time scale. (YDS)

  8. Modeling ramp compression experiments using large-scale molecular dynamics simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Mattsson, Thomas Kjell Rene; Desjarlais, Michael Paul; Grest, Gary Stephen; Templeton, Jeremy Alan; Thompson, Aidan Patrick; Jones, Reese E.; Zimmerman, Jonathan A.; Baskes, Michael I. (University of California, San Diego); Winey, J. Michael (Washington State University); Gupta, Yogendra Mohan (Washington State University); Lane, J. Matthew D.; Ditmire, Todd (University of Texas at Austin); Quevedo, Hernan J. (University of Texas at Austin)

    2011-10-01

    Molecular dynamics simulation (MD) is an invaluable tool for studying problems sensitive to atom-scale physics such as structural transitions, discontinuous interfaces, non-equilibrium dynamics, and elastic-plastic deformation. In order to apply this method to modeling of ramp-compression experiments, several challenges must be overcome: accuracy of interatomic potentials, length- and time-scales, and extraction of continuum quantities. We have completed a 3-year LDRD project with the goal of developing molecular dynamics simulation capabilities for modeling the response of materials to ramp compression. The techniques we have developed fall into three categories: (i) molecular dynamics methods, (ii) interatomic potentials, and (iii) calculation of continuum variables. Highlights include the development of an accurate interatomic potential describing shock-melting of beryllium, a scaling technique for modeling slow ramp-compression experiments using fast ramp MD simulations, and a technique for extracting plastic strain from MD simulations. All of these methods have been implemented in Sandia's LAMMPS MD code, ensuring their widespread availability to dynamic materials research at Sandia and elsewhere.

  9. Pore-Scale Simulation for Predicting Material Transport Through Porous Media

    International Nuclear Information System (INIS)

    Goichi Itoh; Jinya Nakamura; Koji Kono; Tadashi Watanabe; Hirotada Ohashi; Yu Chen; Shinya Nagasaki

    2002-01-01

    Microscopic models of the real-coded lattice gas automata (RLG) method with a special boundary condition and the lattice Boltzmann method (LBM) are developed for simulating three-dimensional fluid dynamics in complex geometry. Those models enable us to simulate pore-scale fluid dynamics, an essential ingredient for precisely predicting material transport in porous media. For large-scale simulation of porous media with high resolution, the RLG and LBM programs are designed for parallel computation. Simulation results of porous media flow by the LBM with different pressure-gradient conditions show quantitative agreement with the macroscopic relations of Darcy's law and the Kozeny-Carman equation. As for the efficiency of parallel computing, standard parallel computation using MPI (Message Passing Interface) is compared with hybrid parallel computation using an MPI-node-parallel technique. The benchmark tests conclude that when a large number of computing nodes is used, parallel performance declines due to the increased data communication between nodes, and the hybrid parallel computation shows better overall performance than the standard parallel computation. (authors)
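
The two macroscopic relations used for validation above can be written down compactly: Darcy's law gives the flux from the pressure gradient, and the Kozeny-Carman equation estimates permeability from porosity and grain size. A hedged sketch; the 180 prefactor is the common packed-bed form of Kozeny-Carman, and the example values are illustrative rather than taken from the paper:

```python
def kozeny_carman_permeability(porosity, grain_diameter):
    """Permeability (m^2) from the Kozeny-Carman relation,
    k = eps^3 * d^2 / (180 * (1 - eps)^2)."""
    return porosity**3 * grain_diameter**2 / (180.0 * (1.0 - porosity)**2)

def darcy_flux(permeability, viscosity, pressure_gradient):
    """Darcy flux q = -(k/mu) * dP/dx (m/s)."""
    return -(permeability / viscosity) * pressure_gradient

# Illustrative values: porosity 0.4, 100 um grains, water, 10 kPa/m drop
k = kozeny_carman_permeability(0.4, 1e-4)  # ~1e-11 m^2
q = darcy_flux(k, 1e-3, -1e4)              # positive flux down-gradient
```

A pore-scale simulator is validated by checking that its computed fluxes reproduce this linear q-vs-dP/dx relation with a permeability close to the Kozeny-Carman estimate.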

  10. Fully predictive simulation of real-scale cable tray fire based on small-scale laboratory experiments

    Energy Technology Data Exchange (ETDEWEB)

    Beji, Tarek; Merci, Bart [Ghent Univ. (Belgium). Dept. of Flow, Heat and Combustion Mechanics; Bonte, Frederick [Bel V, Brussels (Belgium)

    2015-12-15

    This paper presents a computational fluid dynamics (CFD)-based modelling strategy for real-scale cable tray fires. The challenge was to perform fully predictive simulations (which could be called 'blind' simulations) using solely information from laboratory-scale experiments, in addition to the geometrical arrangement of the cables. The results of the latter experiments were used (1) to construct the fuel molecule and the chemical reaction for combustion, and (2) to estimate the overall pyrolysis and burning behaviour. More particularly, the strategy regarding the second point consists of adopting a surface-based pyrolysis model. Since the burning behaviour of each cable could not be tracked individually (due to computational constraints), 'groups' of cables were modelled with an overall cable surface area equal to the actual value. The results obtained for one large-scale test (a stack of five horizontal trays) are quite encouraging, especially the peak heat release rate (HRR), which was predicted with a relative deviation of 3%. The time to reach the peak is, however, overestimated by 4.7 min (i.e. 94%), and the fire duration is overestimated by 5 min (i.e. 24%). These discrepancies are mainly attributed to differences in the HRRPUA (heat release rate per unit area) profiles between the small scale and the large scale; the latter was calculated by estimating the burning area of the cables using video fire analysis (VFA).

  11. SIMON: Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Sugawara, Akihiro; Kishimoto, Yasuaki

    2003-01-01

    Development of the SIMON (SImulation MONitoring) system is described. SIMON aims to investigate physical phenomena of tokamak-type nuclear fusion plasmas by simulation, and to exchange information and carry out joint research with scientists around the world via the internet. The characteristics of SIMON are as follows: (1) reduction of the simulation load by a trigger-sending method; (2) visualization of simulation results and a hierarchical structure of analysis; (3) reduction of the number of licenses by using the command line when software is used; (4) improved support for networked use of simulation data output through HTML (Hyper Text Markup Language); (5) avoidance of complex built-in work in the client part; and (6) small-sized and portable software. The visualization method for large-scale simulation, the remote collaboration system based on HTML, the trigger-sending method, the hierarchical analytical method, the introduction into a three-dimensional electromagnetic transport code, and the technologies of the SIMON system are explained. (S.Y.)

  12. Simulation of flow in dual-scale porous media

    Science.gov (United States)

    Tan, Hua

    Liquid composite molding (LCM) is one of the most effective processes for manufacturing near net-shaped parts from fiber-reinforced polymer composites. The quality of LCM products and the efficiency of the process depend strongly on the wetting of fiber preforms during the mold-filling stage of LCM. Mold-filling simulation is a very effective approach to optimizing the LCM process and mold design. Recent studies have shown that flow modeling for single-scale fiber preforms (made from random mats) has difficulty accurately predicting the wetting of dual-scale fiber preforms (made from woven and stitched fabrics); the latter are characterized by unsaturated flow created by two distinct length scales of pores (large pores outside the tows and small pores inside the tows) in the same medium. In this study, we first develop a method to evaluate the accuracy of permeability-measuring devices for LCM, and conduct a series of 1-D mold-filling experiments for different dual-scale fabrics. The volume averaging method is then applied to derive the averaged governing equations for modeling macroscopic flow through the dual-scale fabrics. The two sets of governing equations are coupled with each other through sink terms representing the absorption of mass, energy, and species (degree of resin cure) from the global flow by the local fiber tows. The finite element method (FEM) coupled with the control volume method, also known as the finite element/control volume (FE/CV) method, is employed to solve the governing equations and track the moving boundary signifying the moving liquid front. The numerical computations are conducted with the help of an in-house developed computer program called PORE-FLOW(c). We develop a flux-corrected transport (FCT)-based FEM to stabilize the convection-dominated energy and species equations. A fast methodology is proposed to simulate the dual-scale flow under isothermal conditions, where flow

  13. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation.

    Science.gov (United States)

    Gray, Alan; Harlen, Oliver G; Harris, Sarah A; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J; Pearson, Arwen R; Read, Daniel J; Richardson, Robin A

    2015-01-01

    Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  14. The development of an industrial-scale fed-batch fermentation simulation.

    Science.gov (United States)

    Goldrick, Stephen; Ştefan, Andrei; Lovett, David; Montague, Gary; Lennox, Barry

    2015-01-10

    This paper describes a simulation of an industrial-scale fed-batch fermentation that can be used as a benchmark in process systems analysis and control studies. The simulation was developed using a mechanistic model and validated using historical data collected from an industrial-scale penicillin fermentation process. Each batch was carried out in a 100,000 L bioreactor that used an industrial strain of Penicillium chrysogenum. The manipulated variables recorded during each batch were used as inputs to the simulator and the predicted outputs were then compared with the on-line and off-line measurements recorded in the real process. The simulator adapted a previously published structured model to describe the penicillin fermentation and extended it to include the main environmental effects of dissolved oxygen, viscosity, temperature, pH and dissolved carbon dioxide. In addition the effects of nitrogen and phenylacetic acid concentrations on the biomass and penicillin production rates were also included. The simulated model predictions of all the on-line and off-line process measurements, including the off-gas analysis, were in good agreement with the batch records. The simulator and industrial process data are available to download at www.industrialpenicillinsimulation.com and can be used to evaluate, study and improve on the current control strategy implemented on this facility. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
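    The structure of such a mechanistic fed-batch model can be sketched with a much-reduced toy version: Monod growth of biomass on a fed substrate plus non-growth-associated product formation, integrated with forward Euler. All parameter values below are illustrative placeholders, not the calibrated constants of the industrial simulator:

```python
# Toy fed-batch fermentation: Monod growth on a fed substrate plus
# non-growth-associated product formation, integrated with forward Euler.
mu_max, Ks, Yxs, qp = 0.11, 0.15, 0.45, 0.004  # 1/h, g/L, g/g, g/(g h)
F, Sf = 0.05, 400.0                            # feed rate (L/h), feed conc. (g/L)
dt = 0.01                                      # time step (h)

def step(X, S, P, V):
    """One Euler step for biomass X, substrate S, product P (g/L), volume V (L)."""
    mu = mu_max * S / (Ks + S)                 # Monod specific growth rate
    D = F / V                                  # dilution rate due to the feed
    dX = mu * X - D * X
    dS = -(mu / Yxs) * X + D * (Sf - S)
    dP = qp * X - D * P
    return X + dt * dX, S + dt * dS, P + dt * dP, V + dt * F

X, S, P, V = 1.0, 20.0, 0.0, 50.0              # initial state
for _ in range(int(200 / dt)):                 # a 200 h fed batch
    X, S, P, V = step(X, S, P, V)
```

    After the initial substrate is exhausted, growth becomes feed-limited and biomass accumulates roughly linearly, a qualitative behavior shared by industrial penicillin fermentations even though the real simulator adds viscosity, pH, temperature, dissolved-gas and side-substrate effects.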

  15. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
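    The dissipative part of such models is typically an eddy viscosity of Smagorinsky type, which vanishes for strain-free motions; a minimal sketch (using the commonly quoted constant Cs ≈ 0.17, not a value from this paper) makes that limitation explicit:

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, Cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs*delta)**2 * |S|, where S is
    the resolved rate-of-strain tensor and |S| = sqrt(2 S:S)."""
    S = 0.5 * (grad_u + grad_u.T)
    return (Cs * delta) ** 2 * np.sqrt(2.0 * np.sum(S * S))

# A purely rotational velocity gradient carries no strain, so a dissipative
# eddy-viscosity model returns exactly zero for it -- it is blind to
# rotation-driven transport, which motivates the extra nonlinear term.
rotation = np.array([[0.0, 1.0, 0.0],
                     [-1.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0]])
shear = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
```

    For the shear case |S| = 1, so nu_t = (Cs*delta)**2, while the rotation case yields zero eddy viscosity despite its nonzero velocity gradient.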

  16. Scaling of two-phase flow transients using reduced pressure system and simulant fluid

    International Nuclear Information System (INIS)

    Kocamustafaogullari, G.; Ishii, M.

    1987-01-01

    Scaling criteria for a natural circulation loop under single-phase flow conditions are derived. Based on these criteria, practical applications for designing a scaled-down model are considered. Particular emphasis is placed on scaling a test model at reduced pressure levels compared to a prototype and on fluid-to-fluid scaling. The large number of similarity groups that must be matched between model and prototype makes the design of a scale model a challenging task. The present study demonstrates a new approach to this classical problem using two-phase flow scaling parameters. It indicates that real-time scaling is not a practical solution and that a scaled-down model should have an accelerated (shortened) time scale. An important result is the proposed new scaling methodology for simulating pressure transients. It is obtained by considering the changes of the fluid property groups which appear within the two-phase similarity parameters and the single-phase to two-phase flow transition parameters. Sample calculations are performed for modeling two-phase flow transients of a high-pressure water system by a low-pressure water system or a Freon system. It is shown that modeling is possible in both cases for simulating pressure transients. However, simulation of phase change transitions is not possible with a reduced-pressure water system without distortion in either power or time. (orig.)

  17. Multiscale methods framework: self-consistent coupling of molecular theory of solvation with quantum chemistry, molecular simulations, and dissipative particle dynamics.

    Science.gov (United States)

    Kovalenko, Andriy; Gusarov, Sergey

    2018-01-31

    In this work, we will address different aspects of self-consistent field coupling of computational chemistry methods at different time and length scales in modern materials and biomolecular science. A multiscale methods framework yields dramatically improved accuracy, efficiency, and applicability by coupling models and methods on different scales. This field benefits many areas of research and applications by providing fundamental understanding and predictions. It could also play a particular role in commercialization by guiding new developments and by allowing quick evaluation of prospective research projects. We employ the molecular theory of solvation, which allows us to accurately introduce the effect of the environment on complex nano-, macro-, and biomolecular systems. The uniqueness of this method is that it can be naturally coupled with the whole range of computational chemistry approaches, including QM, MM, and coarse graining.

  18. Multi-Scale Initial Conditions For Cosmological Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, Oliver; /KIPAC, Menlo Park; Abel, Tom; /KIPAC, Menlo Park /ZAH, Heidelberg /HITS, Heidelberg

    2011-11-04

    We discuss a new algorithm to generate multi-scale initial conditions with multiple levels of refinements for cosmological 'zoom-in' simulations. The method uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). The new algorithm achieves rms relative errors of the order of 10⁻⁴ for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing. An optional hybrid multi-grid and Fast Fourier Transform (FFT) based scheme is introduced which has identical Fourier-space behaviour as traditional approaches. Using a suite of re-simulations of a galaxy cluster halo our real-space-based approach is found to reproduce correlation functions, density profiles, key halo properties and subhalo abundances with per cent level accuracy. Finally, we generalize our approach for two-component baryon and dark-matter simulations and demonstrate that the power spectrum evolution is in excellent agreement with linear perturbation theory. For initial baryon density fields, it is suggested to use the local Lagrangian approximation in order to generate a density field for mesh-based codes that is consistent with the Lagrangian perturbation theory instead of the current practice of using the Eulerian linearly scaled densities.
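    The conventional single-level Fourier-space pipeline that this adaptive real-space scheme improves on is compact enough to sketch in 1-D: shape white noise with a transfer function, then obtain 1LPT (Zel'dovich) displacements from the Poisson equation. The grid size and the toy transfer function below are illustrative choices, not those of the paper:

```python
import numpy as np

n = 256
rng = np.random.default_rng(0)
noise = rng.standard_normal(n)                 # Gaussian white noise

k = 2.0 * np.pi * np.fft.fftfreq(n)            # 1-D wavenumbers
kmag = np.abs(k)
kmag[0] = 1.0                                  # dummy value; the DC mode is zeroed below
T = kmag ** -0.5                               # toy transfer function ~ sqrt(P(k))
T[0] = 0.0                                     # remove the mean (k = 0) mode

delta_k = np.fft.fft(noise) * T                # shaped (linear) density modes
delta = np.fft.ifft(delta_k).real              # linear overdensity field

# 1LPT (Zel'dovich): grad^2 phi = delta gives phi_k = -delta_k/k^2, and the
# displacement psi = -grad phi becomes psi_k = i k delta_k / k^2 in Fourier space
psi_k = 1j * k * delta_k / kmag ** 2
psi = np.fft.ifft(psi_k).real                  # displacement field
```

    A quick consistency check is that the divergence of the displacement field equals minus the overdensity, which holds mode by mode by construction.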

  19. Laboratory-scale simulations with hydrated lime and organic ...

    African Journals Online (AJOL)

    Laboratory-scale simulations with hydrated lime and organic polymer to evaluate the effect of pre-chlorination on motile Ceratium hirundinella cells during ... When organic material is released from algal cells as a result of physical-chemical impacts on the cells, it may result in taste- and odour-related problems or the ...

  20. Numerical simulation of a small-scale biomass boiler

    International Nuclear Information System (INIS)

    Collazo, J.; Porteiro, J.; Míguez, J.L.; Granada, E.; Gómez, M.A.

    2012-01-01

    Highlights: ► Simplified model for biomass combustion was developed. ► Porous zone conditions are used in the bed. ► Model is fully integrated in a commercial CFD code to simulate a small scale pellet boiler. ► Pollutant emissions are well predicted. ► Simulation provides extensive information about the behaviour of the boiler. - Abstract: This paper presents a computational fluid dynamic simulation of a domestic pellet boiler. Combustion of the solid fuel in the burner is an important issue when discussing the simulation of this type of system. A simplified method based on a thermal balance was developed in this work to introduce the effects provoked by pellet combustion in the boiler simulation. The model predictions were compared with the experimental measurements, and a good agreement was found. The results of the boiler analysis show that the position of the water tubes, the distribution of the air inlets and the air infiltrations are the key factors leading to the high emission levels present in this type of system.

  1. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from intensive computational requirement for detailed modeling investigations of real-world reservoirs. This paper presents the application of a massive parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance

  2. Evaluation of convergence behavior of metamodeling techniques for bridging scales in multi-scale multimaterial simulation

    International Nuclear Information System (INIS)

    Sen, Oishik; Davis, Sean; Jacobs, Gustaaf; Udaykumar, H.S.

    2015-01-01

    The effectiveness of several metamodeling techniques, viz. the Polynomial Stochastic Collocation method, Adaptive Stochastic Collocation method, a Radial Basis Function Neural Network, a Kriging Method and a Dynamic Kriging (DKG) Method is evaluated. This is done with the express purpose of using metamodels to bridge scales between micro- and macro-scale models in a multi-scale multimaterial simulation. The rate of convergence of the error when used to reconstruct hypersurfaces of known functions is studied. For a sufficiently large number of training points, Stochastic Collocation methods generally converge faster than the other metamodeling techniques, while the DKG method converges faster when the number of input points is less than 100 in a two-dimensional parameter space. Because the input points correspond to computationally expensive micro/meso-scale computations, the DKG is favored for bridging scales in a multi-scale solver.
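    The metamodeling idea can be illustrated with a stand-in Gaussian-kernel RBF surrogate (not any of the specific implementations benchmarked in the paper): fit a cheap interpolant to samples of an "expensive" function and watch the reconstruction error fall as training points are added. The test function, kernel width and point counts are all illustrative:

```python
import numpy as np

def rbf_fit_predict(Xtr, ytr, Xte, eps=10.0):
    """Fit a Gaussian-kernel RBF interpolant to (Xtr, ytr) and evaluate it
    at Xte. A tiny ridge term keeps the linear system well posed."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    w = np.linalg.solve(kernel(Xtr, Xtr) + 1e-10 * np.eye(len(Xtr)), ytr)
    return kernel(Xte, Xtr) @ w

def f(X):
    """Smooth stand-in for a costly micro/meso-scale computation."""
    return np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])

rng = np.random.default_rng(1)
Xte = rng.uniform(0, 1, (500, 2))              # held-out evaluation points

errors = []
for n in (25, 100, 400):                       # growing training budgets
    Xtr = rng.uniform(0, 1, (n, 2))
    pred = rbf_fit_predict(Xtr, f(Xtr), Xte)
    errors.append(np.sqrt(np.mean((pred - f(Xte)) ** 2)))
```

    The RMS reconstruction error drops as the training set grows, which is exactly the convergence behavior the paper compares across metamodeling techniques.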

  3. Qualitative Assessment of a 3D Simulation Program: Faculty, Students, and Bio-Organic Reaction Animations

    Science.gov (United States)

    Günersel, Adalet B.; Fleming, Steven A.

    2013-01-01

    Research shows that computer-based simulations and animations are especially helpful in fields such as chemistry where concepts are abstract and cannot be directly observed. Bio-Organic Reaction Animations (BioORA) is a freely available 3D visualization software program developed to help students understand the chemistry of biomolecular events.…

  4. Native fluorescence detection of biomolecular and pharmaceutical compounds in capillary electrophoresis: detector designs, performance and applications: A review

    NARCIS (Netherlands)

    de Kort, B.J.; de Jong, G.J.; Somsen, G.W.

    2013-01-01

    This review treats the coupling of capillary electrophoresis (CE) with fluorescence detection (Flu) for the analysis of natively fluorescent biomolecular and pharmaceutical compounds. CE-Flu combines the excellent separation efficiency of CE with the high selectivity and sensitivity of Flu. In

  5. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in an L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
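    The core of such a parametric mapping is easy to sketch in 1-D: multiply the Fourier modes of a density field by a scale-dependent bias and transform back to obtain a spatially varying reionization-redshift field. The functional form and all parameter values below are illustrative placeholders, not the fitted values from the paper:

```python
import numpy as np

def zre_field(delta, b0=0.6, k0=0.2, alpha=0.6, z_mean=8.0):
    """Filter a periodic 1-D overdensity field with a scale-dependent
    linear bias b(k) = b0 / (1 + k/k0)**alpha, so that overdense regions
    end up reionizing earlier (higher redshift)."""
    k = np.abs(2.0 * np.pi * np.fft.fftfreq(len(delta)))
    bias = b0 / (1.0 + k / k0) ** alpha
    dz = np.fft.ifft(bias * np.fft.fft(delta)).real
    return z_mean * (1.0 + dz)

x = np.arange(64)
delta = 0.5 * np.cos(2 * np.pi * x / 64)       # toy large-scale overdensity
zre = zre_field(delta)                         # reionization-redshift field
```

    Because the bias is positive, the filtered field tracks the density field: the densest cell reionizes at the highest redshift, while the mean of the field stays at z_mean.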

  6. Multi-scale imaging and elastic simulation of carbonates

    Science.gov (United States)

    Faisal, Titly Farhana; Awedalkarim, Ahmed; Jouini, Mohamed Soufiane; Jouiad, Mustapha; Chevalier, Sylvie; Sassi, Mohamed

    2016-05-01

    for this current unresolved phase is important. In this work we take a multi-scale imaging approach by first extracting a smaller 0.5" core and scanning it at approximately 13 µm, then further extracting a 5 mm diameter core scanned at 5 μm. From this last scale, regions of interest (containing unresolved areas) are identified for scanning at higher resolutions using the Focused Ion Beam (FIB)/SEM technique, reaching 50 nm resolution. Numerical simulation is run on such a small unresolved section to obtain a better estimate of the effective moduli, which is then used as input for simulations performed using the CT images. Results are compared with experimental acoustic test moduli, also obtained at two scales: 1.5" and 0.5" diameter cores.

  7. Numerical simulation of small scale soft impact tests

    International Nuclear Information System (INIS)

    Varpasuo, Pentti

    2008-01-01

    This paper describes the small scale soft missile impact tests. The purpose of the test program is to provide data for the calibration of the numerical simulation models for impact simulation. In the experiments, both dry and fluid filled missiles are used. The tests with fluid filled missiles investigate the release speed and the droplet size of the fluid release. This data is important in quantifying the fire hazard of flammable liquid after the release. The spray release velocity and droplet size are also input data for analytical and numerical simulation of the liquid spread in the impact. The behaviour of the impact target is the second investigative goal of the test program. The response of reinforced and pre-stressed concrete walls is studied with the aid of displacement and strain monitoring. (authors)

  8. Tibialis anterior muscle needle biopsy and sensitive biomolecular methods: a useful tool in myotonic dystrophy type 1

    Directory of Open Access Journals (Sweden)

    S. Iachettini

    2015-10-01

    Full Text Available Myotonic dystrophy type 1 (DM1) is a neuromuscular disorder caused by a CTG repeat expansion in the 3’UTR of the DMPK gene. This mutation causes accumulation of toxic RNA in nuclear foci, leading to splicing misregulation of specific genes. In view of future clinical trials with antisense oligonucleotides in DM1 patients, it is important to set up sensitive and minimally-invasive tools to monitor the efficacy of treatments on skeletal muscle. A tibialis anterior (TA) muscle sample of about 60 mg was obtained from 5 DM1 patients and 5 healthy subjects through a needle biopsy. A fragment of about 40 mg was used for histological examination and a fragment of about 20 mg was used for biomolecular analysis. The TA fragments obtained with the minimally-invasive needle biopsy technique are sufficient to perform all the histopathological and biomolecular evaluations needed to monitor a clinical trial on DM1 patients.

  9. Large-scale event extraction from literature with multi-level gene normalization.

    Directory of Open Access Journals (Sweden)

    Sofie Van Landeghem

    Full Text Available Text mining for the life sciences aims to aid database curation, knowledge summarization and information retrieval through the automated processing of biomedical texts. To provide comprehensive coverage and enable full integration with existing biomolecular database records, it is crucial that text mining tools scale up to millions of articles and that their analyses can be unambiguously linked to information recorded in resources such as UniProt, KEGG, BioGRID and NCBI databases. In this study, we investigate how fully automated text mining of complex biomolecular events can be augmented with a normalization strategy that identifies biological concepts in text, mapping them to identifiers at varying levels of granularity, ranging from canonicalized symbols to unique genes and proteins and broad gene families. To this end, we have combined two state-of-the-art text mining components, previously evaluated on two community-wide challenges, and have extended and improved upon these methods by exploiting their complementary nature. Using these systems, we perform normalization and event extraction to create a large-scale resource that is publicly available, unique in semantic scope, and covers all 21.9 million PubMed abstracts and 460 thousand PubMed Central open access full-text articles. This dataset contains 40 million biomolecular events involving 76 million gene/protein mentions, linked to 122 thousand distinct genes from 5032 species across the full taxonomic tree. Detailed evaluations and analyses reveal promising results for application of this data in database and pathway curation efforts. The main software components used in this study are released under an open-source license. Further, the resulting dataset is freely accessible through a novel API, providing programmatic and customized access (http://www.evexdb.org/api/v001/). 
Finally, to allow for large-scale bioinformatic analyses, the entire resource is available for bulk download from

  10. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    Science.gov (United States)

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically-based Water Erosion Prediction Project (WEPP) model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 and 50 t ha⁻¹ yr⁻¹ depending on the spatial location on the hillslope and have only limited correspondence with the results of the ¹³⁷Cs technique. These differences in absolute soil loss values could be due either to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the ¹³⁷Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially-distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment rates of 0.6 t ha⁻¹ yr⁻¹ predicted by the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in ¹³⁷Cs concentrations. Sediment erosion rates averaged over the entire catchment of 1 to 2 t ha⁻¹ yr⁻¹ are significantly lower than results obtained at hillslope scale, confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model. The

  11. Hybrid organic semiconductor lasers for bio-molecular sensing.

    Science.gov (United States)

    Haughey, Anne-Marie; Foucher, Caroline; Guilhabert, Benoit; Kanibolotsky, Alexander L; Skabara, Peter J; Burley, Glenn; Dawson, Martin D; Laurand, Nicolas

    2014-01-01

    Bio-functionalised luminescent organic semiconductors are attractive for biophotonics because they can act as efficient laser materials while simultaneously interacting with molecules. In this paper, we present and discuss a laser biosensor platform that utilises a gain layer made of such an organic semiconductor material. The simple structure of the sensor and its operation principle are described. Nanolayer detection is shown experimentally and analysed theoretically in order to assess the potential and the limits of the biosensor. The advantage conferred by the organic semiconductor is explained, and comparisons to laser sensors using alternative dye-doped materials are made. Specific biomolecular sensing is demonstrated, and routes to functionalisation with nucleic acid probes, and future developments opened up by this achievement, are highlighted. Finally, attractive formats for sensing applications are mentioned, as well as colloidal quantum dots, which in the future could be used in conjunction with organic semiconductors.

  12. Large-scale simulation of ductile fracture process of microstructured materials

    International Nuclear Information System (INIS)

    Tian Rong; Wang Chaowei

    2011-01-01

    The promise of computational science in the extreme-scale computing era is to reduce and decompose macroscopic complexities into microscopic simplicities at the expense of high spatial and temporal resolution of computing. In materials science and engineering, the direct combination of 3D microstructure data sets and 3D large-scale simulations provides a unique opportunity for the development of a comprehensive understanding of nano/microstructure-property relationships in order to systematically design materials with specific desired properties. In the paper, we present a framework simulating the ductile fracture process zone in microstructural detail. The experimentally reconstructed microstructural data set is directly embedded into an FE mesh model to improve the simulation fidelity of microstructure effects on fracture toughness. To the best of our knowledge, this is the first time that fracture toughness has been directly linked to multiscale microstructures in a realistic 3D numerical model. (author)

  13. Kinetic turbulence simulations at extreme scale on leadership-class systems

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Bei [Princeton Univ., Princeton, NJ (United States); Ethier, Stephane [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Tang, William [Princeton Univ., Princeton, NJ (United States); Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Williams, Timothy [Argonne National Lab. (ANL), Argonne, IL (United States); Ibrahim, Khaled Z. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Madduri, Kamesh [The Pennsylvania State Univ., University Park, PA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-01-01

    Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20 billion dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q on 786,432 cores of Mira at ALCF and recently of the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low memory per core systems by enabling routine simulations at unprecedented size (130 million grid points, ITER scale) and resolution (65 billion particles).

  14. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    Science.gov (United States)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

    Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models owing to their computational efficiency while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, fidelity of semi-distributed model simulations is impacted by (1) formulation of hydrologic response units (HRUs), and (2) aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECS) representative of a hillslope in first order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations using spatially based model evaluation metrics.

  15. Adequacy of power-to-volume scaling philosophy to simulate natural circulation in Integral Test Facilities

    International Nuclear Information System (INIS)

    Nayak, A.K.; Vijayan, P.K.; Saha, D.; Venkat Raj, V.; Aritomi, Masanori

    1998-01-01

    Theoretical and experimental investigations were carried out to study the adequacy of power-to-volume scaling philosophy for the simulation of natural circulation and to establish the scaling philosophy applicable for the design of the Integral Test Facility (ITF-AHWR) for the Indian Advanced Heavy Water Reactor (AHWR). The results indicate that a reduction in the flow channel diameter of the scaled facility as required by the power-to-volume scaling philosophy may affect the simulation of natural circulation behaviour of the prototype plants. This is caused by the distortions due to the inability to simulate the frictional resistance of the scaled facility. Hence, it is recommended that the flow channel diameter of the scaled facility should be as close as possible to the prototype. This was verified by comparing the natural circulation behaviour of a prototype 220 MWe Indian PHWR and its scaled facility (FISBE-1) designed based on power-to-volume scaling philosophy. It is suggested from examinations using a mathematical model and a computer code that the FISBE-1 simulates the steady state and the general trend of transient natural circulation behaviour of the prototype reactor adequately. Finally the proposed scaling method was applied for the design of the ITF-AHWR. (author)
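    The diameter distortion discussed in this record can be quantified with a back-of-the-envelope calculation: under power-to-volume scaling with prototype height (and coolant velocity) preserved, the flow area shrinks with the volume, so the channel diameter drops by the square root of the volume ratio and the friction number f·L/D is inflated. The 1:200 ratio and the Blasius correlation below are illustrative choices, not values from the paper:

```python
import math

vol_ratio = 1.0 / 200.0                     # model-to-prototype volume (and power) ratio
area_ratio = vol_ratio                      # equal height => flow area scales with volume
d_ratio = math.sqrt(area_ratio)             # hydraulic diameter ratio
re_ratio = d_ratio                          # Re = V*D/nu with velocity and fluid preserved
f_ratio = re_ratio ** -0.25                 # Blasius: f = 0.316 * Re**-0.25
friction_number_ratio = f_ratio / d_ratio   # (f*L/D)_model / (f*L/D)_prototype
# the scaled loop sees a friction number roughly 27x the prototype's -- the
# kind of distortion that motivates keeping the channel diameter near prototype size
```

    Since the friction number ratio works out to 200**0.625 ≈ 27, a strictly power-to-volume-scaled loop grossly over-resists natural circulation, consistent with the study's recommendation to keep the flow channel diameter close to the prototype.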

  16. Design rules for biomolecular adhesion: lessons from force measurements.

    Science.gov (United States)

    Leckband, Deborah

    2010-01-01

    Cell adhesion to matrix, other cells, or pathogens plays a pivotal role in many processes in biomolecular engineering. Early macroscopic methods of quantifying adhesion led to the development of quantitative models of cell adhesion and migration. The more recent use of sensitive probes to quantify the forces that alter or manipulate adhesion proteins has revealed much greater functional diversity than was apparent from population average measurements of cell adhesion. This review highlights theoretical and experimental methods that identified force-dependent molecular properties that are central to the biological activity of adhesion proteins. Experimental and theoretical methods emphasized in this review include the surface force apparatus, atomic force microscopy, and vesicle-based probes. Specific examples given illustrate how these tools have revealed unique properties of adhesion proteins and their structural origins.

  17. A multiscale modeling approach for biomolecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Bowling, Alan, E-mail: bowling@uta.edu; Haghshenas-Jaryani, Mahdi, E-mail: mahdi.haghshenasjaryani@mavs.uta.edu [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)

    2015-04-15

    This paper presents a new multiscale molecular dynamics model for investigating the effects of external interactions, such as contact and impact, during stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties, ensuring that the result satisfies Newton’s second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; however, the particle model does provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that a motor protein’s movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the differences between the motion predicted by the old and new modeling approaches are compared using a simplified model of myosin V.

  18. Using a million cell simulation of the cerebellum: network scaling and task generality.

    Science.gov (United States)

    Li, Wen-Ke; Hausknecht, Matthew J; Stone, Peter; Mauk, Michael D

    2013-11-01

    Several factors combine to make it feasible to build computer simulations of the cerebellum and to test them in biologically realistic ways. These simulations can be used to help understand the computational contributions of various cerebellar components, including the relevance of the enormous number of neurons in the granule cell layer. In previous work we used a simulation containing 12,000 granule cells to develop new predictions and to account for various aspects of eyelid conditioning, a form of motor learning mediated by the cerebellum. Here we demonstrate the feasibility of scaling up this simulation to over one million granule cells using parallel graphics processing unit (GPU) technology. We observe that this increase in the number of granule cells requires only twice the execution time of the smaller simulation on the GPU. We demonstrate that this simulation, like its smaller predecessor, can emulate certain basic features of conditioned eyelid responses, with a slight improvement in performance in one measure. We also use this simulation to examine the generality of the computational properties that we have derived from studying eyelid conditioning. We demonstrate that this scaled-up simulation can learn a high level of performance in a classic machine learning task, the cart-pole balancing task. These results suggest that this parallel GPU technology can be used to build very large-scale simulations whose connectivity ratios match those of the real cerebellum, and that these simulations can be used to guide future studies on cerebellar-mediated tasks and on machine learning problems. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Stochastic simulations of the tetracycline operon

    Science.gov (United States)

    2011-01-01

    Background The tetracycline operon is a self-regulated system. It is found naturally in bacteria, where it confers resistance to the antibiotic tetracycline. Because of the performance of the molecular elements of the tetracycline operon, these elements are widely used as parts of synthetic gene networks where protein production can be efficiently turned on and off in response to the presence or absence of tetracycline. In this paper, we investigate the dynamics of the tetracycline operon. To this end, we develop a mathematical model guided by experimental findings. Our model consists of biochemical reactions that capture the biomolecular interactions of this intriguing system. Bearing in mind that small biological systems are subject to stochasticity, we use a stochastic algorithm to simulate the tetracycline operon behavior. A sensitivity analysis of two critical parameters embodied in this system is also performed, providing a useful understanding of the function of this system. Results Simulations generate a timeline of biomolecular events that confer resistance to bacteria against tetracycline. We monitor the amounts of intracellular TetR2 and TetA proteins, the two important regulatory and resistance molecules, as a function of intracellular tetracycline. We find that lack of one of the promoters of the tetracycline operon has no influence on the overall behavior of this system, inferring that this promoter is not essential for Escherichia coli. Sensitivity analysis with respect to the binding strength of tetracycline to repressor and of repressor to operators suggests that these two parameters play a predominant role in the behavior of the system. The results of the simulations agree well with experimental observations such as tight repression, fast gene expression, induction with tetracycline, and small intracellular TetR2 amounts. Conclusions Computer simulations of the tetracycline operon afford augmented insight into the interplay between its molecular
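
    The stochastic algorithm used for such biochemical reaction networks is typically Gillespie's Stochastic Simulation Algorithm (SSA). Below is a minimal, self-contained SSA sketch applied to a toy birth-death process standing in for TetR2 production and dilution; the rate constants are hypothetical placeholders and the model is far simpler than the operon model described above:

```python
import random

def gillespie(x, rates, stoich, t_end, seed=1):
    """Gillespie SSA: x holds species counts, rates(x) returns propensities,
    stoich[i] lists (species, change) pairs applied when reaction i fires."""
    random.seed(seed)
    t, traj = 0.0, [(0.0, list(x))]
    while t < t_end:
        a = rates(x)
        a0 = sum(a)
        if a0 == 0:
            break
        t += random.expovariate(a0)          # waiting time to next reaction
        r, cum = random.uniform(0, a0), 0.0  # choose which reaction fires
        for i, ai in enumerate(a):
            cum += ai
            if r <= cum:
                for species, change in stoich[i]:
                    x[species] += change
                break
        traj.append((t, list(x)))
    return traj

# Toy birth-death model standing in for TetR2 expression and dilution
# (hypothetical rate constants, not the paper's parameters):
k_prod, k_deg = 5.0, 0.1
traj = gillespie(
    x=[0],
    rates=lambda x: [k_prod, k_deg * x[0]],
    stoich=[[(0, +1)], [(0, -1)]],
    t_end=200.0,
)
# Counts fluctuate around the deterministic steady state k_prod/k_deg = 50.
final = traj[-1][1][0]
```

The same loop scales to the full operon model by enlarging the state vector, the propensity function, and the stoichiometry table.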

  20. Stochastic simulations of the tetracycline operon

    Directory of Open Access Journals (Sweden)

    Kaznessis Yiannis N

    2011-01-01

    Full Text Available Abstract Background The tetracycline operon is a self-regulated system. It is found naturally in bacteria, where it confers resistance to the antibiotic tetracycline. Because of the performance of the molecular elements of the tetracycline operon, these elements are widely used as parts of synthetic gene networks where protein production can be efficiently turned on and off in response to the presence or absence of tetracycline. In this paper, we investigate the dynamics of the tetracycline operon. To this end, we develop a mathematical model guided by experimental findings. Our model consists of biochemical reactions that capture the biomolecular interactions of this intriguing system. Bearing in mind that small biological systems are subject to stochasticity, we use a stochastic algorithm to simulate the tetracycline operon behavior. A sensitivity analysis of two critical parameters embodied in this system is also performed, providing a useful understanding of the function of this system. Results Simulations generate a timeline of biomolecular events that confer resistance to bacteria against tetracycline. We monitor the amounts of intracellular TetR2 and TetA proteins, the two important regulatory and resistance molecules, as a function of intracellular tetracycline. We find that lack of one of the promoters of the tetracycline operon has no influence on the overall behavior of this system, inferring that this promoter is not essential for Escherichia coli. Sensitivity analysis with respect to the binding strength of tetracycline to repressor and of repressor to operators suggests that these two parameters play a predominant role in the behavior of the system. The results of the simulations agree well with experimental observations such as tight repression, fast gene expression, induction with tetracycline, and small intracellular TetR2 amounts. Conclusions Computer simulations of the tetracycline operon afford augmented insight into the

  1. Multi-scale simulation of droplet-droplet interactions and coalescence

    CSIR Research Space (South Africa)

    Musehane, Ndivhuwo M

    2016-10-01

    Full Text Available Conference on Computational and Applied Mechanics, Potchefstroom, 3–5 October 2016: Multi-scale simulation of droplet-droplet interactions and coalescence. Ndivhuwo M. Musehane, Oliver F. Oxtoby and Daya B. Reddy (Aeronautic Systems, Council...) ... topology changes that result when droplets interact. This work endeavours to eliminate the need to use empirical correlations based on phenomenological models by developing a multi-scale model that predicts the outcome of a collision between droplets from...

  2. Meeting the memory challenges of brain-scale network simulation

    Directory of Open Access Journals (Sweden)

    Susanne eKunkel

    2012-01-01

    Full Text Available The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are one or two orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been studied in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Blue Gene/P architecture, where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of a neuronal simulator as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of the key components contributing to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place.
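
    A linear memory model of the kind described above can be sketched in a few lines. The per-object byte costs below are purely illustrative placeholders, not the coefficients fitted in the paper:

```python
# Minimal sketch of a linear per-core memory model for a distributed
# neuronal simulator. All byte costs are illustrative assumptions.

def memory_per_core(n_neurons, n_synapses, n_cores,
                    b_base=2.0e9, b_neuron=1.5e3, b_synapse=48.0):
    """Estimate bytes consumed on one core: a fixed base cost plus
    per-neuron and per-synapse costs for objects stored locally."""
    return (b_base
            + b_neuron * n_neurons / n_cores     # neurons distributed evenly
            + b_synapse * n_synapses / n_cores)  # synapses distributed evenly

# Local cortical network: ~10^5 neurons, ~10^9 synapses on 1024 cores
local = memory_per_core(1e5, 1e9, 1024)
# Brain-scale network: two orders of magnitude more neurons and synapses
brain = memory_per_core(1e7, 1e11, 1024)
# The per-synapse term dominates at scale and saturates per-core memory,
# which is the bottleneck the authors identify.
```

Plugging in measured coefficients for a real simulator turns this into the prediction tool the abstract describes: one can read off which term saturates first and how much a proposed data-structure change would help.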

  3. Practice-oriented optical thin film growth simulation via multiple scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Turowski, Marcus, E-mail: m.turowski@lzh.de [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); Jupé, Marco [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany); Melzig, Thomas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Moskovkin, Pavel [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Daniel, Alain [Centre for Research in Metallurgy, CRM, 21 Avenue du bois Saint Jean, Liège 4000 (Belgium); Pflug, Andreas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Lucas, Stéphane [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Ristau, Detlev [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany)

    2015-10-01

    Simulation of the coating process is a very promising approach to understanding thin film formation. Nevertheless, this complex matter cannot be covered by a single simulation technique. To consider all mechanisms and processes influencing the optical properties of the growing thin films, various common theoretical methods have been combined into a multi-scale model approach. The simulation techniques have been selected in order to describe all processes in the coating chamber, especially the various mechanisms of thin film growth, and to enable the analysis of the resulting structural as well as optical and electronic layer properties. All methods are merged with adapted communication interfaces to achieve optimum compatibility of the different approaches and to generate physically meaningful results. The present contribution offers an approach for the full simulation of an Ion Beam Sputtering (IBS) coating process combining direct simulation Monte Carlo, classical molecular dynamics, kinetic Monte Carlo, and density functional theory. The simulation is performed, as an example, for an existing IBS coating plant in order to validate the developed multi-scale approach. Finally, the modeled results are compared to experimental data. - Highlights: • A model approach for simulating an Ion Beam Sputtering (IBS) process is presented. • In order to combine the different techniques, optimized interfaces are developed. • The transport of atomic species in the coating chamber is calculated. • We modeled structural and optical film properties based on simulated IBS parameters. • The modeled and the experimental refractive index data fit very well.

  4. Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Kishimoto, Yasuaki; Sugahara, Akihiro; Li, J.Q.

    2008-01-01

    Large scale simulation using super-computers, which generally requires long CPU times and produces large amounts of data, has been extensively studied as a third pillar in various advanced science fields, in parallel to theory and experiment. Such simulations are expected to lead to new scientific discoveries through the elucidation of complex phenomena that can hardly be identified by conventional theoretical and experimental approaches alone. In order to assist such large simulation studies, in which many collaborators working at geographically distant places participate and contribute, we have developed a unique remote collaboration system, referred to as SIMON (simulation monitoring system), which is based on client-server control and introduces the idea of up-date processing, in contrast to the widely used post-processing. As a key ingredient, we have developed a trigger method, which transmits requests for up-date processing from the simulation (client) running on a super-computer to a workstation (server). Namely, the simulation running on the super-computer actively controls the timing of the up-date processing. The server, having received requests from the ongoing simulation for data transfer, data analysis, visualization, and so on, starts the corresponding operations during the simulation. The server makes the latest results available to web browsers, so that collaborators can monitor the results at any place and time in the world. By applying the system to a specific simulation project on laser-matter interaction, we have confirmed that the system works well and plays an important role as a collaboration platform on which many collaborators work with one another.

  5. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    Directory of Open Access Journals (Sweden)

    José Gaite

    2013-05-01

    Full Text Available Halo models of the large scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations, and by taking into account previous studies of the self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos, it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure, or to define their size in terms of small-scale baryonic physics.

  6. Atomic scale simulations of arsenic ion implantation and annealing in silicon

    International Nuclear Information System (INIS)

    Caturla, M.J.; Diaz de la Rubia, T.; Jaraiz, M.

    1995-01-01

    We present results of multiple-time-scale simulations of 5, 10 and 15 keV low temperature ion implantation of arsenic into silicon (100), followed by high temperature anneals. The simulations start with a molecular dynamics (MD) calculation of the primary state of damage after 10 ps. The results are then coupled to a kinetic Monte Carlo (MC) simulation of bulk defect diffusion and clustering. Dose accumulation is achieved by assuming that at low temperatures the damage produced in the lattice is stable. After the desired dose is accumulated, the system is annealed at 800 degrees C for several seconds. The results provide information on the evolution of the damage microstructure over macroscopic length and time scales and afford direct comparison with experimental results. We discuss the database of inputs to the MC model and how it affects the diffusion process.
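
    The kinetic Monte Carlo stage that bridges the 10 ps MD damage state to seconds-long anneals can be sketched with a minimal residence-time algorithm for a single hopping defect. The attempt frequency and migration barrier below are illustrative placeholders, not values from the paper's input database:

```python
import math, random

def kmc_diffusion(n_steps, temp_k, e_mig_ev=1.3, nu0=1e13, seed=2):
    """Residence-time kinetic Monte Carlo for one defect hopping on a cubic
    lattice: each of the 6 hop directions has an Arrhenius rate, and the
    clock advances by an exponentially distributed residence time.
    Barrier (1.3 eV) and attempt frequency (1e13 /s) are assumed values."""
    random.seed(seed)
    k_b = 8.617e-5                  # Boltzmann constant, eV/K
    rate = nu0 * math.exp(-e_mig_ev / (k_b * temp_k))  # per-direction rate
    pos, t = [0, 0, 0], 0.0
    for _ in range(n_steps):
        axis = random.randrange(3)           # all 6 hops equally likely
        pos[axis] += random.choice((-1, 1))
        t += random.expovariate(6 * rate)    # total escape rate is 6*rate
    return pos, t

# Anneal temperature of 800 C (1073 K) versus a low implantation temperature:
_, t_hot = kmc_diffusion(1000, 1073.0)
_, t_cold = kmc_diffusion(1000, 300.0)
# The same 1000 hops take enormously longer at 300 K; this rate separation
# is what lets kMC reach the seconds-long anneal times quoted above while
# treating the low-temperature damage as effectively stable.
```
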

  7. Verification of gyrokinetic particle simulation of device size scaling of turbulent transport

    Institute of Scientific and Technical Information of China (English)

    LIN Zhihong; S. ETHIER; T. S. HAHM; W. M. TANG

    2012-01-01

    Verification and historical perspective are presented on the gyrokinetic particle simulations that discovered the device size scaling of turbulent transport and identified the geometry model as the source of the long-standing disagreement between gyrokinetic particle and continuum simulations.

  8. Durango: Scalable Synthetic Workload Generation for Extreme-Scale Application Performance Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Carothers, Christopher D. [Rensselaer Polytechnic Institute (RPI); Meredith, Jeremy S. [ORNL; Blanco, Marc [Rensselaer Polytechnic Institute (RPI); Vetter, Jeffrey S. [ORNL; Mubarak, Misbah [Argonne National Laboratory; LaPre, Justin [Rensselaer Polytechnic Institute (RPI); Moore, Shirley V. [ORNL

    2017-05-01

    Performance modeling of extreme-scale applications on accurate representations of potential architectures is critical for designing next generation supercomputing systems, because it is impractical to construct prototype systems at scale with new network hardware in order to explore designs and policies. However, these simulations often rely on static application traces that can be difficult to work with because of their size and their lack of flexibility to extend or scale up without rerunning the original application. To address this problem, we have created a new technique for generating scalable, flexible workloads from real applications, and have implemented a prototype, called Durango, that combines a proven analytical performance modeling language, Aspen, with the massively parallel HPC network modeling capabilities of the CODES framework. Our models are compact, parameterized and representative of real applications with computation events. They are not resource intensive to create and are portable across simulator environments. We demonstrate the utility of Durango by simulating the LULESH application in the CODES simulation environment on several topologies and show that Durango is practical to use for simulation without loss of fidelity, as quantified by simulation metrics. During our validation of Durango's generated communication model of LULESH, we found that the original LULESH miniapp code had a latent bug where the MPI_Waitall operation was used incorrectly. This finding underscores the potential need for a tool such as Durango, beyond its benefits for flexible workload generation and modeling. Additionally, we demonstrate the efficacy of Durango's direct integration approach, which links Aspen into CODES as part of the running network simulation model. Here, Aspen generates the application-level computation timing events, which in turn drive the start of a network communication phase. Results show that Durango's performance scales well when

  9. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    Science.gov (United States)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-01

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  10. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. By large-scale simulations we mean those involving such a variety of scales and such physical complexity that corresponding experiments and/or theories do not exist. In the nuclear field, it is indispensable to estimate real phenomena from simulations in order to improve the safety and security of nuclear power plants. Here, analysis of the uncertainty included in simulations is needed to reveal the sensitivity to uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge, and to arrive at a degree of certainty through verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)' as a set of computing processes with deductive and inductive approaches, by reference to human reasoning processes. Our idea is to execute deductive and inductive simulations corresponding to these two approaches. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective in reducing the uncertainty and obtaining a degree of certainty. (author)

  11. Contact area of rough spheres: Large scale simulations and simple scaling laws

    Energy Technology Data Exchange (ETDEWEB)

    Pastewka, Lars, E-mail: lars.pastewka@kit.edu [Institute for Applied Materials & MicroTribology Center muTC, Karlsruhe Institute of Technology, Engelbert-Arnold-Straße 4, 76131 Karlsruhe (Germany); Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, Maryland 21218 (United States); Robbins, Mark O., E-mail: mr@pha.jhu.edu [Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, Maryland 21218 (United States)

    2016-05-30

    We use molecular simulations to study the nonadhesive and adhesive atomic-scale contact of rough spheres with radii ranging from nanometers to micrometers over more than ten orders of magnitude in applied normal load. At the lowest loads, the interfacial mechanics is governed by the contact mechanics of the first asperity that touches. The dependence of contact area on normal force becomes linear at intermediate loads and crosses over to Hertzian at the largest loads. By combining theories for the limiting cases of nominally flat rough surfaces and smooth spheres, we provide parameter-free analytical expressions for contact area over the whole range of loads. Our results establish a range of validity for common approximations that neglect curvature or roughness in modeling objects on scales from atomic force microscope tips to ball bearings.

  12. Contact area of rough spheres: Large scale simulations and simple scaling laws

    Science.gov (United States)

    Pastewka, Lars; Robbins, Mark O.

    2016-05-01

    We use molecular simulations to study the nonadhesive and adhesive atomic-scale contact of rough spheres with radii ranging from nanometers to micrometers over more than ten orders of magnitude in applied normal load. At the lowest loads, the interfacial mechanics is governed by the contact mechanics of the first asperity that touches. The dependence of contact area on normal force becomes linear at intermediate loads and crosses over to Hertzian at the largest loads. By combining theories for the limiting cases of nominally flat rough surfaces and smooth spheres, we provide parameter-free analytical expressions for contact area over the whole range of loads. Our results establish a range of validity for common approximations that neglect curvature or roughness in modeling objects on scales from atomic force microscope tips to ball bearings.
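
    The two limiting regimes combined by the authors can be written down directly: the linear relation A ≈ κF/(E*·h'rms) for nominally flat rough surfaces, and the Hertzian A = π(3FR/4E*)^(2/3) for smooth spheres. The sketch below evaluates both limits with illustrative material parameters (not the paper's) to show the crossover; it is not the authors' full parameter-free interpolation formula:

```python
import math

def area_rough_flat(force, e_star, hrms_slope, kappa=2.0):
    """Linear regime for nominally flat rough surfaces: A = kappa*F/(E*·h'rms).
    kappa ~ 2 is the commonly quoted proportionality constant."""
    return kappa * force / (e_star * hrms_slope)

def area_hertz(force, e_star, radius):
    """Hertzian contact area of a smooth sphere: A = pi*(3FR/4E*)^(2/3)."""
    return math.pi * (3.0 * force * radius / (4.0 * e_star)) ** (2.0 / 3.0)

# Illustrative values (assumed): E* = 50 GPa, rms surface slope 0.1, R = 1 um
e_star, slope, radius = 50e9, 0.1, 1e-6

# At intermediate loads the linear rough-surface area is the smaller of the
# two and governs the contact; at the largest loads Hertz takes over:
a_lin_mid = area_rough_flat(1e-6, e_star, slope)     # F = 1 uN
a_hertz_mid = area_hertz(1e-6, e_star, radius)
a_lin_high = area_rough_flat(1e-2, e_star, slope)    # F = 10 mN
a_hertz_high = area_hertz(1e-2, e_star, radius)
```

Because the linear law grows as F and the Hertzian law only as F^(2/3), whichever expression is smaller controls the true area, which is why the crossover to Hertzian behavior appears at the largest loads.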

  13. A biomolecular recognition approach for the functionalization of cellulose with gold nanoparticles.

    Science.gov (United States)

    Almeida, A; Rosa, A M M; Azevedo, A M; Prazeres, D M F

    2017-09-01

    Materials with new and improved functionalities can be obtained by modifying cellulose with gold nanoparticles (AuNPs) via the in situ reduction of a gold precursor or the deposition or covalent immobilization of pre-synthesized AuNPs. Here, we present an alternative biomolecular recognition approach to functionalize cellulose with biotin-AuNPs that relies on a complex of two recognition elements: a ZZ-CBM3 fusion that combines a carbohydrate-binding module (CBM) with the ZZ fragment of staphylococcal protein A, and an anti-biotin antibody. Paper and cellulose microparticles with AuNPs immobilized via the ZZ-CBM3:anti-biotin IgG supramolecular complex displayed an intense red color, whereas essentially no color was detected when AuNPs were deposited over the unmodified materials. Scanning electron microscopy analysis revealed a homogeneous distribution of AuNPs when immobilized via ZZ-CBM3:anti-biotin IgG complexes and aggregation of AuNPs when deposited over paper, suggesting that the color differences are due to interparticle plasmon coupling effects. The approach could be used to functionalize paper substrates and cellulose nanocrystals with AuNPs. More important, however, is the fact that the occurrence of a biomolecular recognition event between the CBM-immobilized antibody and its specific, AuNP-conjugated antigen is signaled by a red color. This opens up the way for the development of simple and straightforward paper/cellulose-based tests in which detection of a target analyte can be made by direct use of color signaling. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Multi-Subband Ensemble Monte Carlo simulations of scaled GAA MOSFETs

    Science.gov (United States)

    Donetti, L.; Sampedro, C.; Ruiz, F. G.; Godoy, A.; Gamiz, F.

    2018-05-01

    We developed a Multi-Subband Ensemble Monte Carlo simulator for non-planar devices that takes into account two-dimensional quantum confinement. It self-consistently couples the solution of the 3D Poisson equation, the 2D Schrödinger equation, and the 1D Boltzmann transport equation, the latter solved with the Ensemble Monte Carlo method. This simulator was employed to study MOS devices based on ultra-scaled Gate-All-Around Si nanowires with diameters in the range from 4 nm to 8 nm and gate lengths from 8 nm to 14 nm. We studied the output and transfer characteristics, interpreting the behavior in the sub-threshold region and in the ON state in terms of the spatial charge distribution and the mobility computed with the same simulator. We analyzed the results, highlighting the contribution of different valleys and subbands and the effect of the gate bias on the energy and velocity profiles. Finally, the scaling behavior was studied, showing that only the devices with D = 4 nm maintain good control of the short channel effects down to a gate length of 8 nm.
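
    One building block of such a Multi-Subband solver is the Schrödinger equation in the confinement direction, whose eigenvalues are the subband energies. The sketch below solves a hard-wall 1D stand-in for a 4 nm cross-section by finite differences and checks it against the analytic particle-in-a-box levels; the effective mass and the 1D simplification are assumptions for illustration, far simpler than the 2D confinement treated by the simulator:

```python
import numpy as np

# Finite-difference subband energies for a hard-wall 1D well of width 4 nm,
# a toy stand-in for the nanowire cross-section confinement problem.

HBAR = 1.0545718e-34          # J s
M_EFF = 0.19 * 9.109e-31      # assumed Si transverse effective mass, kg
W = 4e-9                      # confinement width: the 4 nm diameter above

n = 200                       # interior grid points
dz = W / (n + 1)
t = HBAR**2 / (2 * M_EFF * dz**2)   # hopping energy of the discrete Laplacian

# Tridiagonal Hamiltonian with hard-wall boundaries (V = 0 inside the well)
h = (np.diag(np.full(n, 2 * t))
     - np.diag(np.full(n - 1, t), 1)
     - np.diag(np.full(n - 1, t), -1))
energies = np.linalg.eigvalsh(h)[:3] / 1.602e-19   # lowest subbands, eV

# Analytic particle-in-a-box check: E_k = (hbar*pi*k)^2 / (2 m W^2)
analytic = [(HBAR * np.pi * k / W) ** 2 / (2 * M_EFF) / 1.602e-19
            for k in (1, 2, 3)]
```

In the full simulator this eigenvalue problem is solved on every slice along the transport direction and fed back into the Poisson equation until self-consistency, with the resulting subband ladder driving the 1D Monte Carlo transport.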

  15. Experiment-scale molecular simulation study of liquid crystal thin films

    Science.gov (United States)

    Nguyen, Trung Dac; Carrillo, Jan-Michael Y.; Matheson, Michael A.; Brown, W. Michael

    2014-03-01

    Supercomputers have now reached a performance level adequate for studying thin films with molecular detail at the relevant scales. By exploiting the power of GPU accelerators on Titan, we have been able to perform simulations of characteristic liquid crystal films that provide remarkable qualitative agreement with experimental images. We have demonstrated that key features of spinodal instability can only be observed with sufficiently large system sizes, which were not accessible with previous simulation studies. Our study emphasizes the capability and significance of petascale simulations in providing molecular-level insights in thin film systems as well as other interfacial phenomena.

  16. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component of the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform, which provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the parallel performance of the TS-AWP on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s Blue Gene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  17. Atomistic simulations of graphite etching at realistic time scales.

    Science.gov (United States)

    Aussems, D U B; Bal, K M; Morgan, T W; van de Sanden, M C M; Neyts, E C

    2017-10-01

    Hydrogen-graphite interactions are relevant to a wide variety of applications, ranging from astrophysics to fusion devices and nano-electronics. In order to shed light on these interactions, atomistic simulation using Molecular Dynamics (MD) has been shown to be an invaluable tool. It suffers, however, from severe time-scale limitations. In this work we apply the recently developed Collective Variable-Driven Hyperdynamics (CVHD) method to hydrogen etching of graphite for varying inter-impact times up to a realistic value of 1 ms, which corresponds to a flux of ∼10²⁰ m⁻² s⁻¹. The results show that the erosion yield, hydrogen surface coverage and species distribution are significantly affected by the time between impacts. This can be explained by the higher probability of C-C bond breaking due to the prolonged exposure to thermal stress and the subsequent transition from ion- to thermal-induced etching. This latter regime of thermal-induced etching - chemical erosion - is here accessed for the first time using atomistic simulations. In conclusion, this study demonstrates that accounting for long time-scales significantly affects ion bombardment simulations and should not be neglected in a wide range of conditions, in contrast to what is typically assumed.
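    The time-scale extension in hyperdynamics-type methods such as CVHD rests on a simple bookkeeping rule: each biased MD step advances the physical clock by a factor exp(V_bias/kT). A minimal sketch of this generic hyperdynamics time estimator (not the authors' CVHD implementation; the numbers below are illustrative):

```python
import math

def boosted_time(bias_energies, dt_md, kT):
    """Physical (boosted) time accumulated over a biased MD run: each
    step of length dt_md counts as dt_md * exp(V_bias / kT), where
    V_bias is the bias energy acting during that step."""
    return sum(dt_md * math.exp(v / kT) for v in bias_energies)

# With no bias the physical clock equals the MD clock; a 0.5 eV bias at
# kT = 0.025 eV boosts a femtosecond step by exp(20) ~ 5e8, which is how
# millisecond inter-impact times become reachable from short MD runs.
unbiased = boosted_time([0.0] * 1000, dt_md=1e-15, kT=0.025)
boosted = boosted_time([0.5] * 1000, dt_md=1e-15, kT=0.025)
```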

  18. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time. (orig.)

  19. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time
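    The sorting idea in the two records above is easy to sketch: periodically reorder the particle arrays by spatial cell so that neighbours in space become neighbours in memory (or, here, in virtual address space). A toy one-dimensional illustration (a stable sort on the cell index; production codes would sort all particle attributes together, and only partially, every few steps):

```python
def sort_by_cell(positions, cell_size):
    """Reorder particles so that particles in the same spatial cell end
    up at adjacent array indices, turning the random memory accesses of
    the charge-accumulation and push loops into mostly sequential ones."""
    return sorted(positions, key=lambda x: int(x // cell_size))

pts = [5.2, 0.1, 3.7, 0.4, 5.9, 3.1]
# After sorting, cell-0 particles come first, then cell 3, then cell 5.
assert sort_by_cell(pts, 1.0) == [0.1, 0.4, 3.7, 3.1, 5.2, 5.9]
```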

  20. Direct Numerical Simulation of Low Capillary Number Pore Scale Flows

    Science.gov (United States)

    Esmaeilzadeh, S.; Soulaine, C.; Tchelepi, H.

    2017-12-01

    The arrangement of void spaces and the granular structure of a porous medium determines multiple macroscopic properties of the rock such as porosity, capillary pressure, and relative permeability. Therefore, it is important to study the microscopic structure of the reservoir pores and understand the dynamics of fluid displacements through them. One approach for doing this is direct numerical simulation of pore-scale flow, which requires a robust numerical tool for predicting fluid dynamics and a detailed understanding of the physical processes occurring at the pore scale. In pore-scale flows with a low capillary number, Eulerian multiphase methods are well known to produce additional vorticity close to the interface. This is mainly due to discretization errors which lead to an imbalance of capillary pressure and surface tension forces that causes unphysical spurious currents. At the pore scale, these spurious currents can become significantly stronger than the average velocity in the phases and lead to unphysical displacement of the interface. In this work, we first investigate the capability of the algebraic Volume of Fluid (VOF) method in OpenFOAM for low capillary number pore-scale flow simulations. Afterward, we compare the VOF results with a Coupled Level-Set Volume of Fluid (CLSVOF) method and the Iso-Advector method. The former has been shown to reduce VOF's unphysical spurious currents in some cases, and both are known to capture interfaces more sharply than VOF. In conclusion, we investigate whether the use of CLSVOF or Iso-Advector leads to smaller spurious velocities and more accurate results for capillary-driven pore-scale multiphase flows. Keywords: Pore-scale multiphase flow, Capillary driven flows, Spurious currents, OpenFOAM
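    The low-capillary-number regime discussed above is easy to quantify; a one-line helper makes the condition concrete (the material values below are illustrative, not the paper's test cases):

```python
def capillary_number(mu, velocity, sigma):
    """Ca = mu * U / sigma: viscous forces relative to surface tension.
    Pore-scale displacements with Ca << 1 are capillary-dominated, the
    regime where spurious currents are most damaging."""
    return mu * velocity / sigma

# Water (mu ~ 1e-3 Pa.s) displacing at 1 mm/s with sigma = 0.05 N/m:
# deep in the capillary-dominated regime.
assert capillary_number(1e-3, 1e-3, 0.05) < 1e-4
```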

  1. Initial condition effects on large scale structure in numerical simulations of plane mixing layers

    Science.gov (United States)

    McMullan, W. A.; Garrett, S. J.

    2016-01-01

    In this paper, Large Eddy Simulations are performed on the spatially developing plane turbulent mixing layer. The simulated mixing layers originate from initially laminar conditions. The focus of this research is on the effect of the nature of the imposed fluctuations on the large-scale spanwise and streamwise structures in the flow. Two simulations are performed; one with low-level three-dimensional inflow fluctuations obtained from pseudo-random numbers, the other with physically correlated fluctuations of the same magnitude obtained from an inflow generation technique. Where white-noise fluctuations provide the inflow disturbances, no spatially stationary streamwise vortex structure is observed, and the large-scale spanwise turbulent vortical structures grow continuously and linearly. These structures are observed to have a three-dimensional internal geometry with branches and dislocations. Where physically correlated fluctuations provide the inflow disturbances, a "streaky" streamwise structure that is spatially stationary is observed, with the large-scale turbulent vortical structures growing with the square-root of time. These large-scale structures are quasi-two-dimensional, on top of which the secondary structure rides. The simulation results are discussed in the context of the varying interpretations of mixing layer growth that have been postulated. Recommendations are made concerning the data required from experiments in order to produce accurate numerical simulation recreations of real flows.

  2. Numerical Simulation on Hydromechanical Coupling in Porous Media Adopting Three-Dimensional Pore-Scale Model

    Science.gov (United States)

    Liu, Jianjun; Song, Rui; Cui, Mengmeng

    2014-01-01

    A novel approach of simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore pressure and confining pressure, are tested in laboratory scale. The micro-CT scanner is employed to scan the samples for three-dimensional images, as input to construct the model. Accordingly, four physical models possessing the same pore and rock matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, the three-dimensional finite element models of both rock matrix and pore space are established by MIMICS and ICEM software platform. Navier-Stokes equation and elastic constitutive equation are used as the mathematical model for simulation. A hydromechanical coupling analysis in pore-scale finite element model of porous media is simulated by ANSYS and CFX software. Hereby, permeability of sandstone samples under different pore pressure and confining pressure has been predicted. The simulation results agree well with the benchmark data. Through reproducing its stress state underground, the prediction accuracy of the porous rock permeability in pore-scale simulation is promoted. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view. PMID:24955384

  3. Bridging the scales in atmospheric composition simulations using a nudging technique

    Science.gov (United States)

    D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco

    2010-05-01

    Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where the impact of these sources is of interest. Describing all the processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial, though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5° European domain, run A) and fine (0.1° Central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as the boundary condition for the fine one. Then another coarse-resolution run (run C) is performed, in which the high-resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas for O3 and PM are computed. It is observed that although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general mean
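    The nudging step itself is a Newtonian relaxation of the coarse-grid concentrations toward the fine-grid field remapped onto the coarse grid, applied only over the target area. A minimal sketch of that generic form (the paper's exact operator and relaxation time are not given in the abstract):

```python
def nudge(coarse, fine_remapped, mask, tau, dt):
    """One nudging (Newtonian relaxation) step over the masked area
    (e.g. the Po Valley): c <- c - (dt/tau) * (c - c_ref), where c_ref
    is the fine-grid field remapped onto the coarse grid. Cells outside
    the mask are left untouched."""
    return [c - (dt / tau) * (c - cref) if m else c
            for c, cref, m in zip(coarse, fine_remapped, mask)]

# Only the first (masked) cell relaxes halfway toward the fine value.
assert nudge([1.0, 1.0], [2.0, 2.0], [True, False], 1.0, 0.5) == [1.5, 1.0]
```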

  4. Scaling of mesoscale simulations of polymer melts with the bare friction coefficient

    NARCIS (Netherlands)

    Kindt, P.; Kindt, P.; Briels, Willem J.

    2005-01-01

    Both the Rouse and reptation model predict that the dynamics of a polymer melt scale inversely with the Langevin friction coefficient ξ. Mesoscale Brownian dynamics simulations of polyethylene validate these scaling predictions, providing the reptational friction ξR = ξ + ξC is

  5. Investigation of the Human Disease Osteogenesis Imperfecta: A Research-Based Introduction to Concepts and Skills in Biomolecular Analysis

    Science.gov (United States)

    Mate, Karen; Sim, Alistair; Weidenhofer, Judith; Milward, Liz; Scott, Judith

    2013-01-01

    A blended approach encompassing problem-based learning (PBL) and structured inquiry was used in this laboratory exercise based on the congenital disease Osteogenesis imperfecta (OI), to introduce commonly used techniques in biomolecular analysis within a clinical context. During a series of PBL sessions students were presented with several…

  6. [The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].

    Science.gov (United States)

    Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang

    2009-08-01

    Computer simulation uses computer graphics to generate a realistic 3D structural scene of vegetation and to simulate the canopy radiation regime using the radiosity method. In the present paper, the authors extend the computer simulation model to simulate forest canopy bidirectional reflectance at pixel scale. Trees, however, are usually complex structures: they are tall and have many branches, so hundreds of thousands or even millions of facets are needed to build up a realistic structure scene for the forest, and it is difficult for the radiosity method to compute so many facets. To enable the radiosity method to simulate the forest scene at pixel scale, the authors propose simplifying the structure of the forest crowns by abstracting the crowns as ellipsoids. Based on the optical characteristics of the tree components and the internal photon energy transmission in real crowns, the authors assigned optical characteristics to the ellipsoid surface facets. In the computer simulation of the forest, following the idea of geometrical-optics models, a gap model is used to obtain the forest canopy bidirectional reflectance at pixel scale. Comparing the computer simulation results with the GOMS model and Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing data, the simulation results are in agreement with the GOMS simulation result and the MISR BRF, although some problems remain to be solved. The authors conclude that the study has important value for the application of multi-angle remote sensing and the inversion of vegetation canopy structure parameters.

  7. Molecular Dynamics Simulation Study of Parallel Telomeric DNA Quadruplexes at Different Ionic Strengths: Evaluation of Water and Ion Models

    Czech Academy of Sciences Publication Activity Database

    Rebic, M.; Laaksonen, A.; Šponer, Jiří; Uličný, J.; Mocci, F.

    2016-01-01

    Roč. 120, č. 30 (2016), s. 7380-7391 ISSN 1520-6106 R&D Projects: GA ČR(CZ) GA16-13721S Institutional support: RVO:68081707 Keywords : amber force-field * nucleic-acids * biomolecular simulations Subject RIV: BO - Biophysics OBOR OECD: Physical chemistry Impact factor: 3.177, year: 2016

  8. Modeling Group Perceptions Using Stochastic Simulation: Scaling Issues in the Multiplicative AHP

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; van den Honert, Robin; Salling, Kim Bang

    2016-01-01

    This paper proposes a new decision support approach for applying stochastic simulation to the multiplicative analytic hierarchy process (AHP) in order to deal with issues concerning the scale parameter. The paper suggests a new approach that captures the influence from the scale parameter by maki...

  9. Scale-up and optimization of biohydrogen production reactor from laboratory-scale to industrial-scale on the basis of computational fluid dynamics simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xu; Ding, Jie; Guo, Wan-Qian; Ren, Nan-Qi [State Key Laboratory of Urban Water Resource and Environment, Harbin Institute of Technology, 202 Haihe Road, Nangang District, Harbin, Heilongjiang 150090 (China)

    2010-10-15

    The objective of conducting experiments in a laboratory is to gain data that helps in designing and operating large-scale biological processes. However, the scale-up and design of industrial-scale biohydrogen production reactors is still uncertain. In this paper, an established and proven Eulerian-Eulerian computational fluid dynamics (CFD) model was employed to perform hydrodynamics assessments of an industrial-scale continuous stirred-tank reactor (CSTR) for biohydrogen production. The merits of the laboratory-scale CSTR and industrial-scale CSTR were compared and analyzed on the basis of CFD simulation. The outcomes demonstrated that there are many parameters that need to be optimized in the industrial-scale reactor, such as the velocity field and stagnation zone. According to the results of hydrodynamics evaluation, the structure of industrial-scale CSTR was optimized and the results are positive in terms of advancing the industrialization of biohydrogen production. (author)

  10. Simulation and scaling analysis of a spherical particle-laden blast wave

    Science.gov (United States)

    Ling, Y.; Balachandar, S.

    2018-05-01

    A spherical particle-laden blast wave, generated by a sudden release of a sphere of compressed gas-particle mixture, is investigated by numerical simulation. The present problem is a multiphase extension of the classic finite-source spherical blast-wave problem. The gas-particle flow can be fully determined by the initial radius of the spherical mixture and the properties of gas and particles. In many applications, the key dimensionless parameters, such as the initial pressure and density ratios between the compressed gas and the ambient air, can vary over a wide range. Parametric studies are thus performed to investigate the effects of these parameters on the characteristic time and spatial scales of the particle-laden blast wave, such as the maximum radius the contact discontinuity can reach and the time when the particle front crosses the contact discontinuity. A scaling analysis is conducted to establish a scaling relation between the characteristic scales and the controlling parameters. A length scale that incorporates the initial pressure ratio is proposed, which is able to approximately collapse the simulation results for the gas flow for a wide range of initial pressure ratios. This indicates that an approximate similarity solution for a spherical blast wave exists, which is independent of the initial pressure ratio. The approximate scaling is also valid for the particle front if the particles are small and closely follow the surrounding gas.

  11. Simulation and scaling analysis of a spherical particle-laden blast wave

    Science.gov (United States)

    Ling, Y.; Balachandar, S.

    2018-02-01

    A spherical particle-laden blast wave, generated by a sudden release of a sphere of compressed gas-particle mixture, is investigated by numerical simulation. The present problem is a multiphase extension of the classic finite-source spherical blast-wave problem. The gas-particle flow can be fully determined by the initial radius of the spherical mixture and the properties of gas and particles. In many applications, the key dimensionless parameters, such as the initial pressure and density ratios between the compressed gas and the ambient air, can vary over a wide range. Parametric studies are thus performed to investigate the effects of these parameters on the characteristic time and spatial scales of the particle-laden blast wave, such as the maximum radius the contact discontinuity can reach and the time when the particle front crosses the contact discontinuity. A scaling analysis is conducted to establish a scaling relation between the characteristic scales and the controlling parameters. A length scale that incorporates the initial pressure ratio is proposed, which is able to approximately collapse the simulation results for the gas flow for a wide range of initial pressure ratios. This indicates that an approximate similarity solution for a spherical blast wave exists, which is independent of the initial pressure ratio. The approximate scaling is also valid for the particle front if the particles are small and closely follow the surrounding gas.
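    The abstract does not give the proposed length scale explicitly; a dimensional-analysis candidate, offered purely as an illustration, is the classic energy-based blast scale built from the stored energy ~p0*r0^3 of the compressed sphere:

```python
def blast_length_scale(r0, p0, p_amb):
    """Candidate blast length scale from dimensional analysis: the
    energy of the compressed sphere scales as p0 * r0**3, and the
    point-blast scale is (E / p_amb)**(1/3), giving
        L = r0 * (p0 / p_amb)**(1/3).
    This is a guess at the form; the paper's actual definition is not
    given in the abstract."""
    return r0 * (p0 / p_amb) ** (1.0 / 3.0)
```

For an initial pressure ratio of 8000, this form predicts a characteristic radius 20 times the initial sphere radius, illustrating how a single length scale can absorb a wide range of pressure ratios.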

  12. Global-Scale Hydrology: Simple Characterization of Complex Simulation

    Science.gov (United States)

    Koster, Randal D.

    1999-01-01

    Atmospheric general circulation models (AGCMS) are unique and valuable tools for the analysis of large-scale hydrology. AGCM simulations of climate provide tremendous amounts of hydrological data with a spatial and temporal coverage unmatched by observation systems. To the extent that the AGCM behaves realistically, these data can shed light on the nature of the real world's hydrological cycle. In the first part of the seminar, I will describe the hydrological cycle in a typical AGCM, with some emphasis on the validation of simulated precipitation against observations. The second part of the seminar will focus on a key goal in large-scale hydrology studies, namely the identification of simple, overarching controls on hydrological behavior hidden amidst the tremendous amounts of data produced by the highly complex AGCM parameterizations. In particular, I will show that a simple 50-year-old climatological relation (and a recent extension we made to it) successfully predicts, to first order, both the annual mean and the interannual variability of simulated evaporation and runoff fluxes. The seminar will conclude with an example of a practical application of global hydrology studies. The accurate prediction of weather statistics several months in advance would have tremendous societal benefits, and conventional wisdom today points at the use of coupled ocean-atmosphere-land models for such seasonal-to-interannual prediction. Understanding the hydrological cycle in AGCMs is critical to establishing the potential for such prediction. Our own studies show, among other things, that soil moisture retention can lead to significant precipitation predictability in many midlatitude and tropical regions.
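    The "simple 50-year-old climatological relation" is presumably a Budyko-type curve; as an assumed illustration of how such a relation predicts annual-mean fluxes from a single aridity index (the exact relation and the extension mentioned in the seminar are not specified in the abstract):

```python
import math

def budyko_evaporation_ratio(aridity):
    """Budyko (1958) curve: the fraction of precipitation that
    evaporates, E/P, as a function of the aridity index phi = PET/P:
        E/P = sqrt(phi * tanh(1/phi) * (1 - exp(-phi)))
    Mean annual runoff then follows as R/P = 1 - E/P."""
    return math.sqrt(aridity * math.tanh(1.0 / aridity)
                     * (1.0 - math.exp(-aridity)))

# Humid (energy-limited) regions evaporate little of their rain;
# arid (water-limited) regions evaporate nearly all of it.
humid, arid = budyko_evaporation_ratio(0.1), budyko_evaporation_ratio(10.0)
```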

  13. Parity Violation in Chiral Molecules: From Theory towards Spectroscopic Experiment and the Evolution of Biomolecular Homochirality

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The observation of biomolecular homochirality can be considered as a quasi-fossil of the evolution of life [1], the interpretation of which has been an open question for more than a century, with numerous related hypotheses, but no definitive answers. We shall briefly discuss the current status and the relation to the other two questions. The discovery of parity violation led to important developm...

  14. Large-scale atomistic simulations of nanostructured materials based on divide-and-conquer density functional theory

    Directory of Open Access Journals (Sweden)

    Vashishta P.

    2011-05-01

    Full Text Available A linear-scaling algorithm based on a divide-and-conquer (DC scheme is designed to perform large-scale molecular-dynamics simulations, in which interatomic forces are computed quantum mechanically in the framework of the density functional theory (DFT. This scheme is applied to the thermite reaction at an Al/Fe2O3 interface. It is found that mass diffusion and reaction rate at the interface are enhanced by a concerted metal-oxygen flip mechanism. Preliminary simulations are carried out for an aluminum particle in water based on the conventional DFT, as a target system for large-scale DC-DFT simulations. A pair of Lewis acid and base sites on the aluminum surface preferentially catalyzes hydrogen production in a low activation-barrier mechanism found in the simulations.

  15. Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow

    Directory of Open Access Journals (Sweden)

    Sam Ali Al

    2015-01-01

    Full Text Available The performance of Sub Grid Scale models is studied by simulating a separated flow over a wavy channel. The first- and second-order statistical moments of the resolved velocities, obtained using Large-Eddy Simulations at different mesh resolutions, are compared with Direct Numerical Simulation data. The effectiveness of modeling the wall stresses using a local log-law is then tested on a relatively coarse grid. The results exhibit good agreement between highly resolved Large-Eddy Simulations and Direct Numerical Simulation data regardless of the Sub Grid Scale model. However, the agreement is less satisfactory on the relatively coarse grid without any wall model, and the differences between Sub Grid Scale models become distinguishable. Using a local wall model recovered the basic flow topology and significantly reduced the differences between the coarse-mesh Large-Eddy Simulations and the Direct Numerical Simulation data. The results show that the ability of a local wall model to predict the separation zone depends strongly on how it is implemented.

  16. A small-scale experimental reactor combined with a simulator for training purposes

    International Nuclear Information System (INIS)

    Destot, M.; Hagendorf, M.; Vanhumbeeck, D.; Lecocq-Bernard, J.

    1981-01-01

    The authors discuss how a small-scale reactor combined with a training simulator can be a valuable aid in all forms of training. They describe the CEN-based SILOETTE reactor in Grenoble and its combined simulator. They also take a look at prospects for the future of the system in the light of experience acquired with the ARIANE reactor and the trends in the development of simulators for training purposes [fr

  17. Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation

    Energy Technology Data Exchange (ETDEWEB)

    Hoshi, T [Department of Applied Mathematics and Physics, Tottori University, Tottori 680-8550 (Japan); Fujiwara, T [Core Research for Evolutional Science and Technology, Japan Science and Technology Agency (CREST-JST) (Japan)

    2009-02-11

    An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.

  18. Screening wells by multi-scale grids for multi-stage Markov Chain Monte Carlo simulation

    DEFF Research Database (Denmark)

    Akbari, Hani; Engsig-Karup, Allan Peter

    2018-01-01

    /production wells, aiming at accurate breakthrough capturing as well as the above-mentioned efficiency goals. However, this short-time simulation needs the fine-scale structure of the geological model around wells, and running a fine-scale model is not as cheap as necessary for screening steps. On the other hand, applying...... it on a coarse-scale model discards important data around wells and causes inaccurate results, particularly for accurate breakthrough capturing, which is important for prediction applications. Therefore we propose a multi-scale grid which preserves the fine-scale model around wells (as well as high-permeability regions...... and fractures) and coarsens the rest of the field, keeping efficiency and accuracy for the well-screening stage and the coarse-scale simulation as well. A discrete wavelet transform is used as a powerful tool to generate the desired unstructured multi-scale grid efficiently. Finally an accepted proposal on coarse...
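    The wavelet-based grid selection can be sketched with a one-level Haar transform: large detail coefficients flag sharp permeability contrasts (wells, fractures, high-permeability streaks), and fine cells are kept there. A toy 1-D version (the authors' transform, thresholding, and unstructured-grid construction are not detailed in the abstract):

```python
import math

def haar_details(values):
    """One level of the 1-D Haar transform: pairwise half-differences,
    which measure local contrast in the input field."""
    return [0.5 * (values[i] - values[i + 1])
            for i in range(0, len(values) - 1, 2)]

def refine_mask(permeability, threshold):
    """Keep fine cells (True) where the Haar detail of the
    log10-permeability exceeds the threshold; coarsen elsewhere."""
    logk = [math.log10(k) for k in permeability]
    return [abs(d) > threshold for d in haar_details(logk)]

# A uniform region coarsens; a sharp contrast keeps its fine cells.
assert refine_mask([1.0, 1.0, 1.0, 1000.0], threshold=0.5) == [False, True]
```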

  19. Dynamical properties of fractal networks: Scaling, numerical simulations, and physical realizations

    International Nuclear Information System (INIS)

    Nakayama, T.; Yakubo, K.; Orbach, R.L.

    1994-01-01

    This article describes the advances that have been made over the past ten years on the problem of fracton excitations in fractal structures. The relevant systems to this subject are so numerous that focus is limited to a specific structure, the percolating network. Recent progress has followed three directions: scaling, numerical simulations, and experiment. In a happy coincidence, large-scale computations, especially those involving array processors, have become possible in recent years. Experimental techniques such as light- and neutron-scattering experiments have also been developed. Together, they form the basis for a review article useful as a guide to understanding these developments and for charting future research directions. In addition, new numerical simulation results for the dynamical properties of diluted antiferromagnets are presented and interpreted in terms of scaling arguments. The authors hope this article will bring the major advances and future issues facing this field into clearer focus, and will stimulate further research on the dynamical properties of random systems

  20. Communication: Multiple atomistic force fields in a single enhanced sampling simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hoang Viet, Man [Department of Physics, North Carolina State University, Raleigh, North Carolina 27695-8202 (United States); Derreumaux, Philippe, E-mail: philippe.derreumaux@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS, Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Institut Universitaire de France, 103 Bvd Saint-Germain, 75005 Paris (France); Nguyen, Phuong H., E-mail: phuong.nguyen@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS, Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France)

    2015-07-14

    The main concerns of biomolecular dynamics simulations are the convergence of the conformational sampling and the dependence of the results on the force fields. While the first issue can be addressed by employing enhanced sampling techniques such as simulated tempering or replica exchange molecular dynamics, repeating these simulations with different force fields is very time consuming. Here, we propose an automatic method that includes different force fields in a single advanced sampling simulation. Conformational sampling using three all-atom force fields is enhanced by simulated tempering, and by formulating the weight parameters of the simulated tempering method in terms of the energy fluctuations, the system is able to perform a random walk in both temperature and force-field spaces. The method is first demonstrated on a 1D system and then validated by the folding of the 10-residue chignolin peptide in explicit water.
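    The random walk across temperatures (or, here, force fields) in simulated tempering is driven by a Metropolis test whose weight parameters must be tuned so that every rung is visited; a common initialisation derives them from average energies, in the spirit of the abstract's energy-fluctuation formulation. A sketch of generic simulated tempering, not the authors' exact scheme:

```python
import math
import random

def st_accept(E, beta_i, beta_j, w_i, w_j, rng=random.random):
    """Metropolis test for a simulated-tempering move from inverse
    temperature beta_i to beta_j at instantaneous energy E:
        accept with probability min(1, exp((beta_i - beta_j)*E + w_j - w_i)).
    The short-circuit avoids overflow when log_p is large and positive."""
    log_p = (beta_i - beta_j) * E + (w_j - w_i)
    return log_p >= 0 or rng() < math.exp(log_p)

def init_weights(betas, mean_energies):
    """Crude weight initialisation from average energies (trapezoidal
    integral of <E> over beta), one common way to flatten the walk."""
    w = [0.0]
    for k in range(1, len(betas)):
        w.append(w[-1] + (betas[k] - betas[k - 1])
                 * 0.5 * (mean_energies[k] + mean_energies[k - 1]))
    return w
```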

  1. Exploring Biomolecular Interactions Through Single-Molecule Force Spectroscopy and Computational Simulation

    OpenAIRE

    Yang, Darren

    2016-01-01

    Molecular interactions between cellular components such as proteins and nucleic acids govern the fundamental processes of living systems. Technological advancements in the past decade have allowed the characterization of these molecular interactions at the single-molecule level with high temporal and spatial resolution. Simultaneously, progress in computer simulation has enabled theoretical research at the atomistic level, assisting in the interpretation of experimental results. This thesi...

  2. Simple scaling for faster tracking simulation in accelerator multiparticle dynamics

    International Nuclear Information System (INIS)

    MacLachlan, J.A.

    2001-01-01

    Macroparticle tracking is a direct and attractive approach to following the evolution of a phase space distribution. When the particles interact through short range wake fields or when inter-particle force is included, calculations of this kind require a large number of macroparticles. It is possible to reduce both the number of macroparticles required and the number of tracking steps per unit simulated time by employing a simple scaling which can be inferred directly from the single-particle equations of motion. In many cases of practical importance the speed of calculation improves with the fourth power of the scaling constant. Scaling has been implemented in an existing longitudinal tracking code; early experience supports the concept and promises major time savings. Limitations on the scaling are discussed
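    One way the fourth-power speedup quoted above can arise is if the scaling lets both the macroparticle count and the number of tracking steps per unit simulated time drop quadratically with the scaling constant; that quadratic split is an assumption made here for illustration, since the abstract only states the combined fourth-power result:

```python
def tracking_cost(n_macro, steps_per_second, sim_seconds):
    """Work is proportional to (macroparticles) x (tracking steps)."""
    return n_macro * steps_per_second * sim_seconds

def scaled_cost(base_cost, scale):
    """If the macroparticle count and the step rate each fall by
    scale**2 (the assumed split), total work falls as scale**4."""
    return base_cost / scale ** 4

# Doubling the scaling constant cuts the work sixteen-fold.
assert scaled_cost(tracking_cost(1_000_000, 10_000, 1.0), 2.0) \
    == tracking_cost(250_000, 2_500, 1.0)
```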

  3. Model abstraction addressing long-term simulations of chemical degradation of large-scale concrete structures

    International Nuclear Information System (INIS)

    Jacques, D.; Perko, J.; Seetharam, S.; Mallants, D.

    2012-01-01

    This paper presents a methodology to assess the spatial-temporal evolution of chemical degradation fronts in real-size concrete structures typical of a near-surface radioactive waste disposal facility. The methodology consists of the abstraction of a so-called full (complicated) model accounting for the multicomponent - multi-scale nature of concrete to an abstracted (simplified) model which simulates chemical concrete degradation based on a single component in the aqueous and solid phase. The abstracted model is verified against chemical degradation fronts simulated with the full model under both diffusive and advective transport conditions. Implementation in the multi-physics simulation tool COMSOL allows simulation of the spatial-temporal evolution of chemical degradation fronts in large-scale concrete structures. (authors)

  4. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations

    Science.gov (United States)

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-01

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact, at realistic supersaturation conditions, condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated with nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while contextually correcting for finite size effects. We demonstrate our approach by studying the condensation of argon, and showing that characteristic nucleation times on the order of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.
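A common way to recover physical times from a biased run of this kind is to rescale simulation time by the exponential of the instantaneous bias (the acceleration-factor approach associated with well-tempered metadynamics). The abstract does not spell out the exact protocol, so treat the sketch below, with its invented bias values and units, as generic:

```python
import math

kT = 2.494   # kJ/mol at 300 K (assumed units)
dt = 2e-15   # 2 fs timestep (assumed)

# Hypothetical bias-potential values V(s(t)) recorded at each step
# along a well-tempered metadynamics run (kJ/mol).
bias = [0.0, 2.0, 4.0, 6.0, 8.0]

# Each biased step advances physical time by dt * exp(V/kT), so the
# mean of that factor is the acceleration of the biased clock.
alpha = sum(math.exp(v / kT) for v in bias) / len(bias)
t_physical = len(bias) * dt * alpha
```

Because the bias grows as the run proceeds, the acceleration factor can reach many orders of magnitude, which is how hour-scale nucleation times become reachable from nanosecond-scale trajectories.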

  5. Concept of scaled test facility for simulating the PWR thermalhydraulic behaviour

    International Nuclear Information System (INIS)

    Silva Filho, E.

    1990-01-01

    This work deals with the design of a scaled test facility of a typical pressurized water reactor plant for the simulation of a small-break Loss-of-Coolant Accident. The computer code RELAP 5/MOD1 has been utilized to simulate the accident and to compare the behaviour of the test facility with that of the reactor plant. The results demonstrate similar thermal-hydraulic behaviour of the two systems. (author)

  6. Validation of Simulation Model for Full Scale Wave Simulator and Discrete Fluid Power PTO System

    DEFF Research Database (Denmark)

    Hansen, Anders Hedegaard; Pedersen, Henrik C.; Hansen, Rico Hjerm

    2014-01-01

    In controller development for large scale machinery a good simulation model may serve as a time and money saving factor as well as a safety precaution. Having good models enables the developer to design and test control strategies in a safe and possibly less time consuming environment. For applic...

  7. Large Scale Monte Carlo Simulation of Neutrino Interactions Using the Open Science Grid and Commercial Clouds

    International Nuclear Information System (INIS)

    Norman, A.; Boyd, J.; Davies, G.; Flumerfelt, E.; Herner, K.; Mayer, N.; Mhashilhar, P.; Tamsett, M.; Timm, S.

    2015-01-01

    Modern long baseline neutrino experiments like the NOvA experiment at Fermilab, require large scale, compute intensive simulations of their neutrino beam fluxes and backgrounds induced by cosmic rays. The amount of simulation required to keep the systematic uncertainties in the simulation from dominating the final physics results is often 10x to 100x that of the actual detector exposure. For the first physics results from NOvA this has meant the simulation of more than 2 billion cosmic ray events in the far detector and more than 200 million NuMI beam spill simulations. Performing simulation at these high statistics levels has been made possible for NOvA through the use of the Open Science Grid and through large scale runs on commercial clouds like Amazon EC2. We detail the challenges in performing large scale simulation in these environments and how the computing infrastructure for the NOvA experiment has been adapted to seamlessly support the running of different simulation and data processing tasks on these resources. (paper)

  8. Large eddy simulation of new subgrid scale model for three-dimensional bundle flows

    International Nuclear Information System (INIS)

    Barsamian, H.R.; Hassan, Y.A.

    2004-01-01

    Fluid flow induced vibrations within heat exchangers, which have led to increased inefficiencies and power plant shutdowns, are of great concern due to tube fretting-wear or fatigue failures. Historically, scaling law and measurement accuracy problems were encountered in experimental analyses conducted at considerable effort and expense. However, supercomputers and accurate numerical methods have provided reliable results and a substantial decrease in cost. In this investigation Large Eddy Simulation has been successfully used to simulate turbulent flow by the numeric solution of the incompressible, isothermal, single phase Navier-Stokes equations. The eddy viscosity model and a new subgrid scale model have been utilized to model the smaller eddies in the flow domain. A triangular array flow field was considered and numerical simulations were performed in two- and three-dimensional fields, and were compared to experimental findings. Results show good agreement of the numerical findings to that of the experimental, and solutions obtained with the new subgrid scale model represent better energy dissipation for the smaller eddies. (author)

  9. Challenges for Super-Resolution Localization Microscopy and Biomolecular Fluorescent Nano-Probing in Cancer Research

    Science.gov (United States)

    Ilić, Nataša; Pilarczyk, Götz; Lee, Jin-Ho; Logeswaran, Abiramy; Borroni, Aurora Paola; Krufczik, Matthias; Theda, Franziska; Waltrich, Nadine; Bestvater, Felix; Hildenbrand, Georg; Cremer, Christoph; Blank, Michael

    2017-01-01

    Understanding molecular interactions and regulatory mechanisms in tumor initiation, progression, and treatment response are key requirements towards advanced cancer diagnosis and novel treatment procedures in personalized medicine. Beyond decoding the gene expression, malfunctioning and cancer-related epigenetic pathways, investigations of the spatial receptor arrangements in membranes and genome organization in cell nuclei, on the nano-scale, contribute to elucidating complex molecular mechanisms in cells and tissues. By these means, the correlation between cell function and spatial organization of molecules or molecular complexes can be studied, with respect to carcinogenesis, tumor sensitivity or tumor resistance to anticancer therapies, like radiation or antibody treatment. Here, we present several new applications for bio-molecular nano-probes and super-resolution, laser fluorescence localization microscopy and their potential in life sciences, especially in biomedical and cancer research. By means of a tool-box of fluorescent antibodies, green fluorescent protein (GFP) tagging, or specific oligonucleotides, we present tumor relevant re-arrangements of Erb-receptors in membranes, spatial organization of Smad specific ubiquitin protein ligase 2 (Smurf2) in the cytosol, tumor cell characteristic heterochromatin organization, and molecular re-arrangements induced by radiation or antibody treatment. The main purpose of this article is to demonstrate how nano-scaled distance measurements between bio-molecules, tagged by appropriate nano-probes, can be applied to elucidate structures and conformations of molecular complexes which are characteristic of tumorigenesis and treatment responses. These applications open new avenues towards a better interpretation of the spatial organization and treatment responses of functionally relevant molecules, at the single cell level, in normal and cancer cells, offering new potentials for individualized medicine. PMID:28956810

  10. Challenges for Super-Resolution Localization Microscopy and Biomolecular Fluorescent Nano-Probing in Cancer Research

    Directory of Open Access Journals (Sweden)

    Michael Hausmann

    2017-09-01

    Full Text Available Understanding molecular interactions and regulatory mechanisms in tumor initiation, progression, and treatment response are key requirements towards advanced cancer diagnosis and novel treatment procedures in personalized medicine. Beyond decoding the gene expression, malfunctioning and cancer-related epigenetic pathways, investigations of the spatial receptor arrangements in membranes and genome organization in cell nuclei, on the nano-scale, contribute to elucidating complex molecular mechanisms in cells and tissues. By these means, the correlation between cell function and spatial organization of molecules or molecular complexes can be studied, with respect to carcinogenesis, tumor sensitivity or tumor resistance to anticancer therapies, like radiation or antibody treatment. Here, we present several new applications for bio-molecular nano-probes and super-resolution, laser fluorescence localization microscopy and their potential in life sciences, especially in biomedical and cancer research. By means of a tool-box of fluorescent antibodies, green fluorescent protein (GFP) tagging, or specific oligonucleotides, we present tumor relevant re-arrangements of Erb-receptors in membranes, spatial organization of Smad specific ubiquitin protein ligase 2 (Smurf2) in the cytosol, tumor cell characteristic heterochromatin organization, and molecular re-arrangements induced by radiation or antibody treatment. The main purpose of this article is to demonstrate how nano-scaled distance measurements between bio-molecules, tagged by appropriate nano-probes, can be applied to elucidate structures and conformations of molecular complexes which are characteristic of tumorigenesis and treatment responses. 
These applications open new avenues towards a better interpretation of the spatial organization and treatment responses of functionally relevant molecules, at the single cell level, in normal and cancer cells, offering new potentials for individualized medicine.

  11. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred; Douglas, Craig C.; Haase, Gundolf; Horvá th, Zoltá n

    2010-01-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one

  12. Foreword [IJEGMBE 2015: India-Japan expert group meeting on biomolecular electronics and organic nanotechnology for environment preservation, Fukuoka (Japan), 23-26 December 2015

    International Nuclear Information System (INIS)

    2016-01-01

    There is increased interest in organic nanotechnology and biomolecular electronics for environmental preservation, and in their anticipated impact on the economics of both the developing and the developed world. Keeping this in mind, the Department of Biological Functions, Graduate School of Life Sciences and Systems Engineering, Kyushu Institute of Technology (KIT), Kitakyushu, Japan, and the Department of Science and Technology Centre on Biomolecular Electronics (DSTCBE), National Physical Laboratory (NPL) jointly organized the India-Japan Workshop on Biomolecular Electronics and Organic Nanotechnology for Environmental Preservation (IJWBME 2009) at NPL, New Delhi, from 17th-19th December 2009, IJWBME 2011 at EGRET Himeji, Himeji, Japan, from 7th-10th December, and IJWBME 2013 at Delhi Technological University, New Delhi, from 13th-15th December. The India-Japan Expert Group Meeting on Biomolecular Electronics and Organic Nanotechnology for Environment Preservation (IJEGMBE) will be held from 22nd-25th December 2015 at Nakamura Centenary Memorial Hall, Kyushu Institute of Technology, Kitakyushu, Japan in association with Delhi Technological University, Delhi, India. Recent years have seen rapid growth in the area of Biomolecular Electronics involving the association and expertise of physicists, biologists, chemists, electronics engineers and information technologists. There is increasing interest in the development of nanotechnology and biomolecular electronic devices for the preservation of our precious environment. In this context, the world of the electronics, which developed on Si semiconductors, is going to change drastically. A paradigm shift towards organic or printed electronics is more likely in the future. The field of organic electronics promises exciting new technologies based on inexpensive and mechanically flexible electronic devices, and is now starting to see commercial success. On the sidelines of this increasingly well

  13. GPU-Accelerated Sparse Matrix Solvers for Large-Scale Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Many large-scale numerical simulations can be broken down into common mathematical routines. While the applications may differ, the need to perform functions such as...

  14. A real scale simulator for high frequency LEMP

    Science.gov (United States)

    Gauthier, D.; Serafin, D.

    1991-01-01

    The real scale simulator designed by the Centre d'Etudes de Gramat (CEG) to study the coupling of fast rise time Lightning Electromagnetic Pulse into a fighter aircraft is described. The system's capability of generating the right electromagnetic environment was studied using a Finite Difference Time Domain (FDTD) computer program. First, data on inside stresses are shown. Then, a time domain and a frequency domain approach are presented and compared.

  15. Benchmarking and scaling studies of pseudospectral code Tarang for turbulence simulations

    KAUST Repository

    VERMA, MAHENDRA K

    2013-09-21

    Tarang is a general-purpose pseudospectral parallel code for simulating flows involving fluids, magnetohydrodynamics, and Rayleigh–Bénard convection in turbulence and instability regimes. In this paper we present code validation and benchmarking results of Tarang. We performed our simulations on 1024³, 2048³, and 4096³ grids using the HPC system of IIT Kanpur and Shaheen of KAUST. We observe good ‘weak’ and ‘strong’ scaling for Tarang on these systems.

  16. Benchmarking and scaling studies of pseudospectral code Tarang for turbulence simulations

    KAUST Repository

    VERMA, MAHENDRA K; CHATTERJEE, ANANDO; REDDY, K SANDEEP; YADAV, RAKESH K; PAUL, SUPRIYO; CHANDRA, MANI; Samtaney, Ravi

    2013-01-01

    Tarang is a general-purpose pseudospectral parallel code for simulating flows involving fluids, magnetohydrodynamics, and Rayleigh–Bénard convection in turbulence and instability regimes. In this paper we present code validation and benchmarking results of Tarang. We performed our simulations on 1024³, 2048³, and 4096³ grids using the HPC system of IIT Kanpur and Shaheen of KAUST. We observe good ‘weak’ and ‘strong’ scaling for Tarang on these systems.
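The 'weak' and 'strong' scaling the Tarang benchmarks refer to can be quantified as a parallel efficiency. The timings below are hypothetical, not taken from the paper; they only show how such an efficiency is computed:

```python
# Hypothetical strong-scaling timings (core count, wall-clock seconds)
# for a fixed problem size; none of these numbers are from the paper.
runs = [(1024, 800.0), (2048, 420.0), (4096, 230.0)]

base_p, base_t = runs[0]
# Parallel efficiency relative to the smallest run:
# eff(p) = (t_base * p_base) / (t_p * p); 1.0 is ideal strong scaling.
eff = {p: (base_t * base_p) / (t * p) for p, t in runs}
```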

  17. PREFACE: 1st Nano-IBCT Conference 2011 - Radiation Damage of Biomolecular Systems: Nanoscale Insights into Ion Beam Cancer Therapy

    Science.gov (United States)

    Huber, Bernd A.; Malot, Christiane; Domaracka, Alicja; Solov'yov, Andrey V.

    2012-07-01

    The 1st Nano-IBCT Conference entitled 'Radiation Damage in Biomolecular Systems: Nanoscale Insights into Ion Beam Cancer Therapy' was held in Caen, France, in October 2011. The Meeting was organised in the framework of the COST Action MP1002 (Nano-IBCT) which was launched in December 2010 (http://fias.uni-frankfurt.de/nano-ibct). This action aims to promote the understanding of mechanisms and processes underlying the radiation damage of biomolecular systems at the molecular and nanoscopic level and to use the findings to improve the strategy of Ion Beam Cancer Therapy. In the hope of achieving this, participants from different disciplines were invited to represent the fields of physics, biology, medicine and chemistry, and also included those from industry and the operators of hadron therapy centres. Ion beam therapy offers the possibility of excellent dose localization for treatment of malignant tumours, minimizing radiation damage in normal healthy tissue, while maximizing cell killing within the tumour. Several ion beam cancer therapy clinical centres are now operating in Europe and elsewhere. However, the full potential of such therapy can only be exploited by better understanding the physical, chemical and biological mechanisms that lead to cell death under ion irradiation. Considering a range of spatio-temporal scales, the proposed action therefore aims to combine the unique experimental and theoretical expertise available within Europe to acquire greater insight at the nanoscopic and molecular level into radiation damage induced by ion impact. Success in this endeavour will be both an important scientific breakthrough and give great impetus to the practical improvement of this innovative therapeutic technique. 
Ion therapy potentially provides an important advance in cancer therapy and the COST action MP1002 will be very significant in ensuring Europe's leadership in this field, providing the scientific background, required data and mechanistic insight which

  18. Large-scale Intelligent Transportation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  19. Understanding bulk behavior of particulate materials from particle scale simulations

    Science.gov (United States)

    Deng, Xiaoliang

    Particulate materials play an increasingly significant role in various industries, such as pharmaceutical manufacturing, food, mining, and civil engineering. The objective of this research is to better understand bulk behaviors of particulate materials from particle scale simulations. Packing properties of assembly of particles are investigated first, focusing on the effects of particle size, surface energy, and aspect ratio on the coordination number, porosity, and packing structures. The simulation results show that particle sizes, surface energy, and aspect ratio all influence the porosity of packing to various degrees. The heterogeneous force networks within particle assembly under external compressive loading are investigated as well. The results show that coarse-coarse contacts dominate the strong network and coarse-fine contacts dominate the total network. Next, DEM models are developed to simulate the particle dynamics inside a conical screen mill (comil) and magnetically assisted impaction mixer (MAIM), both important particle processing devices. For comil, the mean residence time (MRT), spatial distribution of particles, along with the collision dynamics between particles as well as particle and vessel geometries are examined as a function of the various operating parameters such as impeller speed, screen hole size, open area, and feed rate. The simulation results can help better understand dry coating experimental results using comil. For the MAIM system, the magnetic force is incorporated into the contact model, allowing it to describe the interactions between magnets. The simulation results reveal the connections between homogeneity of mixture and particle scale variables such as size of magnets and surface energy of non-magnets. In particular, at a fixed mass ratio of magnets to non-magnets and fixed surface energy, the smaller magnets lead to better homogeneity of mixing, which is in good agreement with previously published experimental results. 
Last but not

  20. Huge-scale molecular dynamics simulation of multibubble nuclei

    KAUST Repository

    Watanabe, Hiroshi

    2013-12-01

    We have developed molecular dynamics codes for a short-range interaction potential that adopt both the flat-MPI and MPI/OpenMP hybrid parallelizations on the basis of a full domain decomposition strategy. Benchmark simulations involving up to 38.4 billion Lennard-Jones particles were performed on Fujitsu PRIMEHPC FX10, consisting of 4800 SPARC64 IXfx 1.848 GHz processors, at the Information Technology Center of the University of Tokyo, and a performance of 193 teraflops was achieved, which corresponds to a 17.0% execution efficiency. Cavitation processes were also simulated on PRIMEHPC FX10 and SGI Altix ICE 8400EX at the Institute of Solid State Physics of the University of Tokyo, which involved 1.45 billion and 22.9 million particles, respectively. Ostwald-like ripening was observed after the multibubble nuclei. Our results demonstrate that direct simulations of multiscale phenomena involving phase transitions from the atomic scale are possible and that the molecular dynamics method is a promising method that can be applied to petascale computers. © 2013 Elsevier B.V. All rights reserved.
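For context on the workload these benchmarks parallelize, the sketch below gives the reduced-unit Lennard-Jones pair interaction (epsilon = sigma = 1), the short-range potential family such codes target; the cutoff and units are the usual conventions, not details taken from the paper:

```python
# Reduced-unit Lennard-Jones pair interaction (epsilon = sigma = 1).
def lj_energy(r):
    inv6 = r ** -6
    return 4.0 * (inv6 * inv6 - inv6)

def lj_force(r):
    # Magnitude of the radial force, -dU/dr.
    inv6 = r ** -6
    return 24.0 * (2.0 * inv6 * inv6 - inv6) / r

r_min = 2.0 ** (1.0 / 6.0)  # separation at the potential minimum
```

Because the interaction decays rapidly, a cutoff (commonly 2.5 sigma) plus spatial domain decomposition keeps the cost per particle constant, which is what makes runs with tens of billions of particles feasible.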

  1. Development of the Transport Class Model (TCM) Aircraft Simulation From a Sub-Scale Generic Transport Model (GTM) Simulation

    Science.gov (United States)

    Hueschen, Richard M.

    2011-01-01

    A six degree-of-freedom, flat-earth dynamics, non-linear, and non-proprietary aircraft simulation was developed that is representative of a generic mid-sized twin-jet transport aircraft. The simulation was developed from a non-proprietary, publicly available, subscale twin-jet transport aircraft simulation using scaling relationships and a modified aerodynamic database. The simulation has an extended aerodynamics database with aero data outside the normal transport-operating envelope (large angle-of-attack and sideslip values). The simulation has representative transport aircraft surface actuator models with variable rate-limits and generally fixed position limits. The simulation contains a generic 40,000 lb sea level thrust engine model. The engine model is a first order dynamic model with a variable time constant that changes according to simulation conditions. The simulation provides a means for interfacing a flight control system to use the simulation sensor variables and to command the surface actuators and throttle position of the engine model.

  2. Dose controlled low energy electron irradiator for biomolecular films.

    Science.gov (United States)

    Kumar, S V K; Tare, Satej T; Upalekar, Yogesh V; Tsering, Thupten

    2016-03-01

    We have developed a multi target, Low Energy Electron (LEE), precise dose controlled irradiator for biomolecular films. Up to seven samples can be irradiated one after another at any preset electron energy and dose under UHV conditions without venting the chamber. In addition, one more sample goes through all the steps except irradiation, which can be used as control for comparison with the irradiated samples. All the samples are protected against stray electron irradiation by biasing them at -20 V during the entire period, except during irradiation. Ethernet based communication electronics hardware, LEE beam control electronics and computer interface were developed in house. The user Graphical User Interface to control the irradiation and dose measurement was developed using National Instruments Lab Windows CVI. The working and reliability of the dose controlled irradiator has been fully tested over the electron energy range of 0.5 to 500 eV by studying LEE induced single strand breaks to ΦX174 RF1 dsDNA.

  3. Dose controlled low energy electron irradiator for biomolecular films

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, S. V. K., E-mail: svkk@tifr.res.in; Tare, Satej T.; Upalekar, Yogesh V.; Tsering, Thupten [Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400 005 (India)

    2016-03-15

    We have developed a multi target, Low Energy Electron (LEE), precise dose controlled irradiator for biomolecular films. Up to seven samples can be irradiated one after another at any preset electron energy and dose under UHV conditions without venting the chamber. In addition, one more sample goes through all the steps except irradiation, which can be used as control for comparison with the irradiated samples. All the samples are protected against stray electron irradiation by biasing them at −20 V during the entire period, except during irradiation. Ethernet based communication electronics hardware, LEE beam control electronics and computer interface were developed in house. The user Graphical User Interface to control the irradiation and dose measurement was developed using National Instruments Lab Windows CVI. The working and reliability of the dose controlled irradiator has been fully tested over the electron energy range of 0.5 to 500 eV by studying LEE induced single strand breaks to ΦX174 RF1 dsDNA.

  4. Multi-scale Modeling of Compressible Single-phase Flow in Porous Media using Molecular Simulation

    KAUST Repository

    Saad, Ahmed Mohamed

    2016-05-01

    In this study, an efficient coupling between Monte Carlo (MC) molecular simulation and Darcy-scale flow in porous media is presented. The cell-centered finite difference method with a non-uniform rectangular mesh was used to discretize the simulation domain and solve the governing equations. To speed up the MC simulations, we implemented a recently developed scheme that quickly generates MC Markov chains out of pre-computed ones, based on the reweighting and reconstruction algorithm. This method astonishingly reduces the required computational time by MC simulations from hours to seconds. In addition, the reweighting and reconstruction scheme, which was originally designed to work with the LJ potential model, is extended to work with a potential model that accounts for the molecular quadrupole moment of fluids with non-spherical molecules such as CO2. The potential model was used to simulate the thermodynamic equilibrium properties for single-phase and two-phase systems using the canonical ensemble and the Gibbs ensemble, respectively. Comparing the simulation results with the experimental data showed that the implemented model has an excellent fit outperforming the standard LJ model. To demonstrate the strength of the proposed coupling in terms of computational time efficiency and numerical accuracy in fluid properties, various numerical experiments covering different compressible single-phase flow scenarios were conducted. The novelty in the introduced scheme is in allowing an efficient coupling of the molecular scale and Darcy scale in reservoir simulators. This leads to an accurate description of the thermodynamic behavior of the simulated reservoir fluids; consequently enhancing the confidence in the flow predictions in porous media.

  5. pyPcazip: A PCA-based toolkit for compression and analysis of molecular simulation data

    Directory of Open Access Journals (Sweden)

    Ardita Shkurti

    2016-01-01

    Full Text Available The biomolecular simulation community is currently in need of novel and optimised software tools that can analyse and process, in reasonable timescales, the large amounts of data generated by molecular simulation. In light of this, we have developed and present here pyPcazip: a suite of software tools for compression and analysis of molecular dynamics (MD) simulation data. The software is compatible with trajectory file formats generated by most contemporary MD engines such as AMBER, CHARMM, GROMACS and NAMD, and is MPI parallelised to permit the efficient processing of very large datasets. pyPcazip is a Unix based open-source software (BSD licenced) written in Python.
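The PCA-based compression idea behind such tools can be sketched in a few lines of NumPy. This is a generic illustration on synthetic data, not the pyPcazip API: keep only the projections onto the top-k principal components, and reconstruct approximately from them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "trajectory": 200 frames of 30 coordinates that really live
# on a 3-dimensional subspace plus small noise (a stand-in for the
# correlated motions of an MD trajectory).
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 30))
traj = latent @ mixing + 0.01 * rng.normal(size=(200, 30))

# PCA compression: keep only the top-k principal components.
mean = traj.mean(axis=0)
centered = traj - mean
u, s, vt = np.linalg.svd(centered, full_matrices=False)
k = 3
scores = centered @ vt[:k].T      # compressed representation (200 x 3)
recon = scores @ vt[:k] + mean    # decompression

rel_err = np.linalg.norm(recon - traj) / np.linalg.norm(traj)
```

When most of the variance lies in a few collective modes, as it typically does for biomolecules, the scores array is an order of magnitude smaller than the trajectory while the reconstruction error stays small.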

  6. Power Take-Off Simulation for Scale Model Testing of Wave Energy Converters

    Directory of Open Access Journals (Sweden)

    Scott Beatty

    2017-07-01

    Full Text Available Small scale testing in controlled environments is a key stage in the development of potential wave energy conversion technology. Furthermore, it is well known that the physical design and operational quality of the power-take off (PTO) used on the small scale model can have vast effects on the tank testing results. Passive mechanical elements such as friction brakes and air dampers or oil filled dashpots are fraught with nonlinear behaviors such as static friction, temperature dependency, and backlash, the effects of which propagate into the wave energy converter (WEC) power production data, causing very high uncertainty in the extrapolation of the tank test results to the meaningful full ocean scale. The lack of quality in PTO simulators is an identified barrier to the development of WECs worldwide. A solution to this problem is to use actively controlled actuators for PTO simulation on small scale model wave energy converters. This can be done using force- (or torque-) controlled feedback systems with suitable instrumentation, enabling the PTO to exert any desired time and/or state dependent reaction force. In this paper, two working experimental PTO simulators on two different wave energy converters are described. The first implementation is on a 1:25 scale self-reacting point absorber wave energy converter with optimum reactive control. The real-time control system, described in detail, is implemented in LabVIEW. The second implementation is on a 1:20 scale single body point absorber under model-predictive control, implemented with a real-time controller in MATLAB/Simulink. Details on the physical hardware, software, and feedback control methods, as well as results, are described for each PTO. Lastly, both sets of real-time control code are to be web-hosted, free to download, modify and use by other researchers and WEC developers.
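The force-controlled PTO concept can be sketched as a simulation loop: the actuator is commanded to track a desired reaction force, here a linear damper law F_des = -B*v, through a first-order force-tracking loop. All masses, stiffnesses, and gains below are invented for illustration and are unrelated to the two devices in the paper.

```python
# Toy 1-DOF WEC body (mass + hydrostatic stiffness) with an actively
# controlled PTO tracking a linear damper law. Parameters are assumed.
m_body, k_hyd, B = 100.0, 500.0, 50.0   # mass, stiffness, PTO damping
dt = 0.001
x, v, F_act = 0.5, 0.0, 0.0             # initial displacement, velocity, force
gain = 0.2                              # force-tracking proportional gain
energy = 0.0                            # energy absorbed by the PTO

for _ in range(20000):                  # 20 s of simulated time
    F_des = -B * v                      # desired PTO reaction force
    F_act += gain * (F_des - F_act)     # first-order force-tracking loop
    a = (-k_hyd * x + F_act) / m_body
    v += a * dt                         # semi-implicit Euler integration
    x += v * dt
    energy += -F_act * v * dt           # power extracted by the PTO
```

Because the actuator can exert an arbitrary state-dependent force, the same loop structure accommodates reactive or model-predictive control laws by swapping the F_des expression.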

  7. Atomic-scale simulations of the mechanical deformation of nanocrystalline metals

    DEFF Research Database (Denmark)

    Schiøtz, Jakob; Vegge, Tejs; Di Tolla, Francesco

    1999-01-01

    Nanocrystalline metals, i.e., metals in which the grain size is in the nanometer range, have a range of technologically interesting properties including increased hardness and yield strength. We present atomic-scale simulations of the plastic behavior of nanocrystalline copper. The simulations show that the main deformation mode is sliding in the grain boundaries through a large number of uncorrelated events, where a few atoms (or a few tens of atoms) slide with respect to each other. Little dislocation activity is seen in the grain interiors. The localization of the deformation to the grain boundaries...

  8. Accelerating electrostatic surface potential calculation with multi-scale approximation on graphics processing units.

    Science.gov (United States)

    Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V

    2010-06-01

Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed-up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone.
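A quick arithmetic check of the speed-up figures quoted above shows that the two techniques compose sub-multiplicatively (934 is far less than 42 × 182 = 7644), yet the combination still beats either technique alone by about a factor of five:

```python
hcp_only = 42.0    # speed-up from the HCP approximation alone
gpu_only = 182.0   # speed-up from GPU parallelization alone
combined = 934.0   # measured combined speed-up reported in the abstract

# Composition is sub-multiplicative: approximation shrinks the work available
# for the GPU to parallelize, so the two gains do not simply multiply.
sub_multiplicative = combined < hcp_only * gpu_only
gain_over_best = combined / max(hcp_only, gpu_only)   # ≈ 5.1x beyond GPU alone
```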

  9. Stable isotope applications in biomolecular structure and mechanisms. A meeting to bring together producers and users of stable-isotope-labeled compounds to assess current and future needs

    International Nuclear Information System (INIS)

    Trewhella, J.; Cross, T.A.; Unkefer, C.J.

    1994-12-01

Knowledge of biomolecular structure is a prerequisite for understanding biomolecular function, and stable isotopes play an increasingly important role in structure determination of biological molecules. The first Conference on Stable Isotope Applications in Biomolecular Structure and Mechanisms was held in Santa Fe, New Mexico, March 27--31, 1994. More than 120 participants from 8 countries and 44 institutions reviewed significant developments, discussed the most promising applications for stable isotopes, and addressed future needs and challenges. Participants focused on applications of stable isotopes for studies of the structure and function of proteins, peptides, RNA, and DNA. Recent advances in NMR techniques, neutron scattering, EPR, and vibrational spectroscopy were highlighted in addition to the production and synthesis of labeled compounds. This volume includes invited speaker and poster presentations as well as a set of reports from discussion panels that focused on the needs of the scientific community and the potential roles of private industry, the National Stable Isotope Resource, and the National High Magnetic Field Laboratory in serving those needs. This is the leading abstract. Individual papers are processed separately for the database.

  10. Stable isotope applications in biomolecular structure and mechanisms. A meeting to bring together producers and users of stable-isotope-labeled compounds to assess current and future needs

    Energy Technology Data Exchange (ETDEWEB)

    Trewhella, J.; Cross, T.A.; Unkefer, C.J. [eds.

    1994-12-01

Knowledge of biomolecular structure is a prerequisite for understanding biomolecular function, and stable isotopes play an increasingly important role in structure determination of biological molecules. The first Conference on Stable Isotope Applications in Biomolecular Structure and Mechanisms was held in Santa Fe, New Mexico, March 27--31, 1994. More than 120 participants from 8 countries and 44 institutions reviewed significant developments, discussed the most promising applications for stable isotopes, and addressed future needs and challenges. Participants focused on applications of stable isotopes for studies of the structure and function of proteins, peptides, RNA, and DNA. Recent advances in NMR techniques, neutron scattering, EPR, and vibrational spectroscopy were highlighted in addition to the production and synthesis of labeled compounds. This volume includes invited speaker and poster presentations as well as a set of reports from discussion panels that focused on the needs of the scientific community and the potential roles of private industry, the National Stable Isotope Resource, and the National High Magnetic Field Laboratory in serving those needs. This is the leading abstract. Individual papers are processed separately for the database.

  11. Experimental simulation of microinteractions in large scale explosions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety

    1998-01-01

    This paper presents data and analysis of recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large scale explosions. Specifically, the fragmentation behavior of a high temperature molten steel drop under high pressure (beyond critical) conditions are investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea, and the ESPROSE.m prediction of fragmentation rate. (author)

  12. High-resolution, regional-scale crop yield simulations for the Southwestern United States

    Science.gov (United States)

    Stack, D. H.; Kafatos, M.; Medvigy, D.; El-Askary, H. M.; Hatzopoulos, N.; Kim, J.; Kim, S.; Prasad, A. K.; Tremback, C.; Walko, R. L.; Asrar, G. R.

    2012-12-01

Over the past few decades, many process-based crop models have been developed with the goal of better understanding the impacts of climate, soils, and management decisions on crop yields. These models simulate the growth and development of crops in response to environmental drivers. Traditionally, process-based crop models have been run at the individual farm level for yield optimization and management scenario testing. Few previous studies have used these models over broader geographic regions, largely due to the lack of gridded high-resolution meteorological and soil datasets required as inputs for these data-intensive process-based models. In particular, assessment of regional-scale yield variability due to climate change requires high-resolution, regional-scale climate projections, and such projections have been unavailable until recently. The goal of this study was to create a framework for extending the Agricultural Production Systems sIMulator (APSIM) crop model for use at regional scales and to analyze spatial and temporal yield changes in the Southwestern United States (CA, AZ, and NV). Using the scripting language Python, an automated pipeline was developed to link Regional Climate Model (RCM) output with the APSIM crop model, thus creating a one-way nested modeling framework. This framework was used to combine climate, soil, land use, and agricultural management datasets in order to better understand the relationship between climate variability and crop yield at the regional scale. Three different RCMs were used to drive APSIM: OLAM, RAMS, and WRF. Preliminary results suggest that, depending on the model inputs, there is some variability between simulated, RCM-driven maize yields and historical yields obtained from the United States Department of Agriculture (USDA). Furthermore, these simulations showed strong non-linear correlations between yield and meteorological drivers, with critical threshold values for some of the inputs (e.g. minimum and

  13. The use of gold nanoparticle aggregation for DNA computing and logic-based biomolecular detection

    International Nuclear Information System (INIS)

    Lee, In-Hee; Yang, Kyung-Ae; Zhang, Byoung-Tak; Lee, Ji-Hoon; Park, Ji-Yoon; Chai, Young Gyu; Lee, Jae-Hoon

    2008-01-01

The use of DNA molecules as a physical computational material has attracted much interest, especially in the area of DNA computing. DNAs are also useful for logical control and analysis of biological systems if efficient visualization methods are available. Here we present a quick and simple visualization technique that displays the results of the DNA computing process based on a colorimetric change induced by gold nanoparticle aggregation, and we apply it to the logic-based detection of biomolecules. Our results demonstrate its effectiveness in both DNA-based logical computation and logic-based biomolecular detection.

  14. Conformation of bovine submaxillary mucin layers on hydrophobic surface as studied by biomolecular probes

    DEFF Research Database (Denmark)

    Pakkanen, Kirsi I.; Madsen, Jan Busk; Lee, Seunghwan

    2015-01-01

In the present study, the conformational changes of bovine submaxillary mucin (BSM) adsorbed on a hydrophobic surface (polystyrene (PS)) as a function of concentration in bulk solution (up to 2 mg/mL) have been investigated with biomolecular probe-based approaches, including bicinchoninic acid (BCA), enzyme-linked immunosorbent assay (EIA)… solution. Adsorbed masses of BSM onto the hydrophobic surface, as probed by BCA, showed a continuously increasing trend up to 2 mg/mL. But the signals from EIA and ELLA, which probe the concentration of available unglycosylated C-terminals and the central glycosylated regions, respectively, showed complicated…

  15. Large-scale micromagnetics simulations with dipolar interaction using all-to-all communications

    Directory of Open Access Journals (Sweden)

    Hiroshi Tsukahara

    2016-05-01

Full Text Available We implement low-complexity parallel fast-Fourier-transform algorithms on our micromagnetics simulator, which reduce the number of all-to-all communications from six to two. Almost all of the computation time of a micromagnetics simulation is taken up by the calculation of the magnetostatic field, which can be computed using the fast Fourier transform method. The results show that the simulation time decreases with good scalability, even when the micromagnetics simulation is performed on 8192 physical cores. This high parallelization efficiency enables large-scale micromagnetics simulations with over one billion cells to be performed. Because massively parallel computing is needed to simulate the magnetization dynamics of real permanent magnets composed of many micron-sized grains, it is expected that our simulator will reveal how magnetization dynamics influences the coercivity of the permanent magnet.
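The convolution theorem underlying such FFT-based magnetostatic solves can be illustrated in miniature: the field is a (circular) convolution of a kernel with the magnetization, and computing it through the transform domain gives the same answer as the direct sum. The sketch below uses a naive O(N²) DFT and a toy 1-D kernel purely for clarity; a real code uses true FFTs and the demagnetizing tensor.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def circ_conv_direct(h, m):
    # Direct circular convolution: O(N^2) work per output sample.
    N = len(m)
    return [sum(h[(n - k) % N] * m[k] for k in range(N)) for n in range(N)]

def circ_conv_spectral(h, m):
    # Transform, multiply pointwise, transform back: identical result.
    H, M = dft(h), dft(m)
    return [v.real for v in idft([a * b for a, b in zip(H, M)])]

kernel = [1.0, -0.5, 0.25, -0.125]   # toy interaction kernel (not the real tensor)
mag = [0.0, 1.0, 0.0, -1.0]          # toy magnetization samples
direct = circ_conv_direct(kernel, mag)
spectral = circ_conv_spectral(kernel, mag)
```

With an FFT in place of the naive DFT, the spectral route drops from O(N²) to O(N log N), which is why the magnetostatic field dominates the cost budget and why reducing the all-to-all communications of the parallel transform matters.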

  16. A new synoptic scale resolving global climate simulation using the Community Earth System Model

    Science.gov (United States)

    Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana

    2014-12-01

    High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was at 0.25° grid spacing, and ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El-Niño Southern Oscillation variability were well simulated compared to standard resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and Tropical Cyclones. Associated single component runs and standard resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, costing 250 thousand processor-hours per simulated year and made about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."
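The cost figures quoted above are mutually consistent, as a quick check using only the numbers in the abstract shows: 250 thousand processor-hours per simulated year spread over 23,404 cores is about 10.7 wall-clock hours per simulated year, i.e. roughly 2.2 simulated years per day.

```python
cores = 23404                       # cores used on "Yellowstone"
core_hours_per_sim_year = 250_000   # quoted cost per simulated year

wall_hours_per_sim_year = core_hours_per_sim_year / cores   # ~10.7 h of wall clock
sim_years_per_day = 24.0 / wall_hours_per_sim_year          # ~2.2 simulated years/day
```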

  17. Scale-adaptive simulation of a hot jet in cross flow

    Energy Technology Data Exchange (ETDEWEB)

    Duda, B M; Esteve, M-J [AIRBUS Operations S.A.S., Toulouse (France); Menter, F R; Hansen, T, E-mail: benjamin.duda@airbus.com [ANSYS Germany GmbH, Otterfing (Germany)

    2011-12-22

The simulation of a hot jet in cross flow is of crucial interest for the aircraft industry as it directly impacts aircraft safety and global performance. Due to the highly transient and turbulent character of this flow, simulation strategies are necessary that resolve at least a part of the turbulence spectrum. The high Reynolds numbers for realistic aircraft applications do not permit the use of pure Large Eddy Simulations as the spatial and temporal resolution requirements for wall bounded flows are prohibitive in an industrial design process. For this reason, the hybrid approach of the Scale-Adaptive Simulation is employed, which retains attached boundary layers in the well-established RANS regime and allows the resolution of turbulent fluctuations in areas with sufficient flow instabilities and grid refinement. To evaluate the influence of the underlying numerical grid, three meshing strategies are investigated and the results are validated against experimental data.

  18. Scale-adaptive simulation of a hot jet in cross flow

    International Nuclear Information System (INIS)

    Duda, B M; Esteve, M-J; Menter, F R; Hansen, T

    2011-01-01

The simulation of a hot jet in cross flow is of crucial interest for the aircraft industry as it directly impacts aircraft safety and global performance. Due to the highly transient and turbulent character of this flow, simulation strategies are necessary that resolve at least a part of the turbulence spectrum. The high Reynolds numbers for realistic aircraft applications do not permit the use of pure Large Eddy Simulations as the spatial and temporal resolution requirements for wall bounded flows are prohibitive in an industrial design process. For this reason, the hybrid approach of the Scale-Adaptive Simulation is employed, which retains attached boundary layers in the well-established RANS regime and allows the resolution of turbulent fluctuations in areas with sufficient flow instabilities and grid refinement. To evaluate the influence of the underlying numerical grid, three meshing strategies are investigated and the results are validated against experimental data.

  19. Cosmological Simulations with Scale-Free Initial Conditions. I. Adiabatic Hydrodynamics

    International Nuclear Information System (INIS)

    Owen, J.M.; Weinberg, D.H.; Evrard, A.E.; Hernquist, L.; Katz, N.

    1998-01-01

We analyze hierarchical structure formation based on scale-free initial conditions in an Einstein–de Sitter universe, including a baryonic component with Ω_bary = 0.05. We present three independent, smoothed particle hydrodynamics (SPH) simulations, performed at two resolutions (32³ and 64³ dark matter and baryonic particles) and with two different SPH codes (TreeSPH and P3MSPH). Each simulation is based on identical initial conditions, which consist of Gaussian-distributed initial density fluctuations that have a power spectrum P(k) ∝ k⁻¹. The baryonic material is modeled as an ideal gas subject only to shock heating and adiabatic heating and cooling; radiative cooling and photoionization heating are not included. The evolution is expected to be self-similar in time, and under certain restrictions we identify the expected scalings for many properties of the distribution of collapsed objects in all three realizations. The distributions of dark matter masses, baryon masses, and mass- and emission-weighted temperatures scale quite reliably. However, the density estimates in the central regions of these structures are determined by the degree of numerical resolution. As a result, mean gas densities and Bremsstrahlung luminosities obey the expected scalings only when calculated within a limited dynamic range in density contrast. The temperatures and luminosities of the groups show tight correlations with the baryon masses, which we find can be well represented by power laws. The Press-Schechter (PS) approximation predicts the distribution of group dark matter and baryon masses fairly well, though it tends to overestimate the baryon masses. Combining the PS mass distribution with the measured relations for T(M) and L(M) predicts the temperature and luminosity distributions fairly accurately, though there are some discrepancies at high temperatures/luminosities. In general the three simulations agree well for the properties of resolved groups, where a group
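For readers unfamiliar with scale-free initial conditions, the sketch below draws a Gaussian random field with power spectrum P(k) ∝ kⁿ for n = -1. It is deliberately a 1-D toy with arbitrary normalization (the paper's fields are 3-D density fluctuations); each Fourier mode gets a Gaussian amplitude scaled by √P(k), and Hermitian symmetry guarantees a real field.

```python
import cmath
import math
import random

def scale_free_field(N=32, n=-1.0, seed=1):
    """Toy 1-D Gaussian random field with power spectrum P(k) ∝ k**n."""
    random.seed(seed)
    modes = [0j] * N
    for k in range(1, N // 2):
        sigma = math.sqrt(float(k) ** n)     # mode amplitude ~ sqrt(P(k))
        re = random.gauss(0.0, sigma)
        im = random.gauss(0.0, sigma)
        modes[k] = complex(re, im)
        modes[N - k] = complex(re, -im)      # Hermitian symmetry -> real field
    field = []
    for x in range(N):                       # naive inverse DFT, fine for a toy
        val = sum(modes[k] * cmath.exp(2j * cmath.pi * k * x / N) for k in range(N))
        field.append(val.real / N)
    return field

field = scale_free_field()
```

Because P(k) is a pure power law with no built-in scale, statistics at a later epoch are a rescaled copy of earlier ones, which is what makes the self-similar scaling tests in the paper possible.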

  20. Comparison of Large eddy dynamo simulation using dynamic sub-grid scale (SGS) model with a fully resolved direct simulation in a rotating spherical shell

    Science.gov (United States)

    Matsui, H.; Buffett, B. A.

    2017-12-01

The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layer. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to capture the effects of the unresolved fields on the large-scale fields. We model the effects of sub-grid scale flow and magnetic field using a dynamic scale-similarity model. Four terms are introduced for the momentum flux, heat flux, Lorentz force, and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale-similarity model. The scale-similarity model is implemented in Calypso, a numerical dynamo model using spherical harmonics expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) truncated at spherical harmonic degree L = 255 as a reference. We also perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results from comparisons among these simulations, and the role that the SGS terms in the LES play in coupling the small-scale fields to the large-scale fields.
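The degree-wise action of a Gaussian low-pass filter on a spherical harmonic expansion can be sketched as a simple attenuation of each coefficient by its degree ℓ. The snippet below uses a plain exp(-(ℓ/ℓ_c)²) weight for illustration only; it is not Jekeli's actual recursion, and the cutoff degree ℓ_c is an assumed value chosen to mirror the coarsest LES truncation:

```python
import math

def gaussian_filter_sh(coeffs, l_cut):
    """Attenuate per-degree spherical-harmonic coefficients (coeffs[l] for degree l)."""
    return [c * math.exp(-(l / l_cut) ** 2) for l, c in enumerate(coeffs)]

spectrum = [1.0] * 256                        # flat toy spectrum up to L = 255
filtered = gaussian_filter_sh(spectrum, l_cut=63.0)
```

The key property is that the weight decays smoothly with ℓ, so the filtered field retains the large scales while the degrees beyond the cutoff, whose interactions the SGS terms must model, are strongly damped.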

  1. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    Science.gov (United States)

    He, Qing; Li, Hong

    Belt conveyor is one of the most important devices to transport bulk-solid material for long distance. Dynamic analysis is the key to decide whether the design is rational in technique, safe and reliable in running, feasible in economy. It is very important to study dynamic properties, improve efficiency and productivity, guarantee conveyor safe, reliable and stable running. The dynamic researches and applications of large scale belt conveyor are discussed. The main research topics, the state-of-the-art of dynamic researches on belt conveyor are analyzed. The main future works focus on dynamic analysis, modeling and simulation of main components and whole system, nonlinear modeling, simulation and vibration analysis of large scale conveyor system.

  2. Simulating space-time uncertainty in continental-scale gridded precipitation fields for agrometeorological modelling

    NARCIS (Netherlands)

    Wit, de A.J.W.; Bruin, de S.

    2006-01-01

    Previous analyses of the effects of uncertainty in precipitation fields on the output of EU Crop Growth Monitoring System (CGMS) demonstrated that the influence on simulated crop yield was limited at national scale, but considerable at local and regional scales. We aim to propagate uncertainty due

  3. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    Directory of Open Access Journals (Sweden)

    C. M. R. Mateo

    2017-10-01

Full Text Available Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash–Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.
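The Nash–Sutcliffe efficiency used as the skill metric above compares squared simulation errors against the variance of the observations: NSE = 1 means a perfect match, NSE = 0 means the simulation is no better than predicting the observed mean, and negative values mean it is worse. A minimal implementation, with illustrative data:

```python
def nse(sim, obs):
    """Nash-Sutcliffe efficiency of simulated vs. observed series."""
    mean_obs = sum(obs) / len(obs)
    err = sum((s - o) ** 2 for s, o in zip(sim, obs))      # squared model error
    var = sum((o - mean_obs) ** 2 for o in obs)            # spread of observations
    return 1.0 - err / var

obs = [1.0, 2.0, 3.0, 4.0, 5.0]        # illustrative observed discharges
perfect = nse(obs, obs)                 # 1.0: perfect simulation
mean_only = nse([3.0] * 5, obs)         # 0.0: no better than the observed mean
```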

  4. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    Science.gov (United States)

    Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan

    2017-10-01

    Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash-Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist.These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.

  5. Large-Eddy Simulation of Waked Turbines in a Scaled Wind Farm Facility

    Science.gov (United States)

    Wang, J.; McLean, D.; Campagnolo, F.; Yu, T.; Bottasso, C. L.

    2017-05-01

    The aim of this paper is to present the numerical simulation of waked scaled wind turbines operating in a boundary layer wind tunnel. The simulation uses a LES-lifting-line numerical model. An immersed boundary method in conjunction with an adequate wall model is used to represent the effects of both the wind turbine nacelle and tower, which are shown to have a considerable effect on the wake behavior. Multi-airfoil data calibrated at different Reynolds numbers are used to account for the lift and drag characteristics at the low and varying Reynolds conditions encountered in the experiments. The present study focuses on low turbulence inflow conditions and inflow non-uniformity due to wind tunnel characteristics, while higher turbulence conditions are considered in a separate study. The numerical model is validated by using experimental data obtained during test campaigns conducted with the scaled wind farm facility. The simulation and experimental results are compared in terms of power capture, rotor thrust, downstream velocity profiles and turbulence intensity.

  6. Exploring a multi-scale method for molecular simulation in continuum solvent model: Explicit simulation of continuum solvent as an incompressible fluid.

    Science.gov (United States)

    Xiao, Li; Luo, Ray

    2017-12-07

    We explored a multi-scale algorithm for the Poisson-Boltzmann continuum solvent model for more robust simulations of biomolecules. In this method, the continuum solvent/solute interface is explicitly simulated with a numerical fluid dynamics procedure, which is tightly coupled to the solute molecular dynamics simulation. There are multiple benefits to adopt such a strategy as presented below. At this stage of the development, only nonelectrostatic interactions, i.e., van der Waals and hydrophobic interactions, are included in the algorithm to assess the quality of the solvent-solute interface generated by the new method. Nevertheless, numerical challenges exist in accurately interpolating the highly nonlinear van der Waals term when solving the finite-difference fluid dynamics equations. We were able to bypass the challenge rigorously by merging the van der Waals potential and pressure together when solving the fluid dynamics equations and by considering its contribution in the free-boundary condition analytically. The multi-scale simulation method was first validated by reproducing the solute-solvent interface of a single atom with analytical solution. Next, we performed the relaxation simulation of a restrained symmetrical monomer and observed a symmetrical solvent interface at equilibrium with detailed surface features resembling those found on the solvent excluded surface. Four typical small molecular complexes were then tested, both volume and force balancing analyses showing that these simple complexes can reach equilibrium within the simulation time window. Finally, we studied the quality of the multi-scale solute-solvent interfaces for the four tested dimer complexes and found that they agree well with the boundaries as sampled in the explicit water simulations.

  7. Large Scale Earth's Bow Shock with Northern IMF as Simulated by ...

    Indian Academy of Sciences (India)

results with the available MHD simulations under the same scaled solar wind (SW) and IMF conditions ... their effects in dissipating flow energy, in heating matter, in accelerating particles to high, presumably ... such as hybrid models (Omidi et al. 2013) ...

  8. The mechanical design and simulation of a scaled H⁻ Penning ion source.

    Science.gov (United States)

    Rutter, T; Faircloth, D; Turner, D; Lawrie, S

    2016-02-01

    The existing ISIS Penning H(-) source is unable to produce the beam parameters required for the front end test stand and so a new, high duty factor, high brightness scaled source is being developed. This paper details first the development of an electrically biased aperture plate for the existing ISIS source and second, the design, simulation, and development of a prototype scaled source.

  9. The mechanical design and simulation of a scaled H- Penning ion source

    Science.gov (United States)

    Rutter, T.; Faircloth, D.; Turner, D.; Lawrie, S.

    2016-02-01

    The existing ISIS Penning H- source is unable to produce the beam parameters required for the front end test stand and so a new, high duty factor, high brightness scaled source is being developed. This paper details first the development of an electrically biased aperture plate for the existing ISIS source and second, the design, simulation, and development of a prototype scaled source.

  10. Plasmonic resonances of nanoparticles from large-scale quantum mechanical simulations

    Science.gov (United States)

    Zhang, Xu; Xiang, Hongping; Zhang, Mingliang; Lu, Gang

    2017-09-01

    Plasmonic resonance of metallic nanoparticles results from coherent motion of its conduction electrons, driven by incident light. For the nanoparticles less than 10 nm in diameter, localized surface plasmonic resonances become sensitive to the quantum nature of the conduction electrons. Unfortunately, quantum mechanical simulations based on time-dependent Kohn-Sham density functional theory are computationally too expensive to tackle metal particles larger than 2 nm. Herein, we introduce the recently developed time-dependent orbital-free density functional theory (TD-OFDFT) approach which enables large-scale quantum mechanical simulations of plasmonic responses of metallic nanostructures. Using TD-OFDFT, we have performed quantum mechanical simulations to understand size-dependent plasmonic response of Na nanoparticles and plasmonic responses in Na nanoparticle dimers and trimers. An outlook of future development of the TD-OFDFT method is also presented.

  11. Lightweight computational steering of very large scale molecular dynamics simulations

    International Nuclear Information System (INIS)

    Beazley, D.M.

    1996-01-01

We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy-to-use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.
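The embedding pattern described, a scripting layer poking at live simulation state, can be sketched in miniature. The class and commands below are hypothetical stand-ins; the original system wrapped compiled C simulation code with Tcl/Perl/Python via an extension-building tool, whereas here a plain Python object plays the role of the running code.

```python
class Simulation:
    """Stand-in for the state of a large compiled MD code (illustrative only)."""
    def __init__(self):
        self.temperature = 300.0   # thermostat target [K]
        self.step_count = 0

    def advance(self, n):
        # In the real system this would run n MD timesteps in compiled code.
        self.step_count += n

def steer(sim, command):
    # Execute a steering command with the live simulation exposed as `sim`,
    # mimicking an embedded interpreter driving the compiled simulation.
    exec(command, {"sim": sim})

sim = Simulation()
sim.advance(100)
steer(sim, "sim.temperature = 350.0")   # adjust a parameter mid-run
steer(sim, "sim.advance(50)")           # continue the simulation from the script
```

The appeal is that the steering layer stays lightweight: the interpreter holds only references to simulation state, so the approach scales to runs with hundreds of millions of atoms.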

  12. Spatial-Scale Characteristics of Precipitation Simulated by Regional Climate Models and the Implications for Hydrological Modeling

    DEFF Research Database (Denmark)

    Rasmussen, S.H.; Christensen, J. H.; Drews, Martin

    2012-01-01

    Precipitation simulated by regional climate models (RCMs) is generally biased with respect to observations, especially at the local scale of a few tens of kilometers. This study investigates how well two different RCMs are able to reproduce the spatial correlation patterns of observed summer...... length scales on the order of 130 km are found in both observed data and RCM simulations. When simulations and observations are aggregated to different grid sizes, the pattern correlation significantly decreases when the aggregation length is less than roughly 100 km. Furthermore, the intermodel standard......, reflecting larger predictive certainty of the RCMs at larger scales. The findings on aggregated grid scales are shown to be largely independent of the underlying RCMs grid resolutions but not of the overall size of RCM domain. With regard to hydrological modeling applications, these findings indicate...

  13. Sensitivity technologies for large scale simulation

    International Nuclear Information System (INIS)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations.
The hybrid automatic differentiation method was applied to a first
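    The "derivative calculations at the root of sensitivity calculations" mentioned above are exactly what automatic differentiation supplies. As a minimal sketch (a textbook forward-mode dual-number implementation, not the hybrid AD tools the abstract refers to), a direct sensitivity dy/dp can be propagated alongside the value itself:

```python
class Dual:
    """Minimal forward-mode AD value: carries f and df/dp together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)  # product rule
    __rmul__ = __mul__

def model(p, x):
    # toy steady-state response: y = p*x + p^2
    return p * x + p * p

p = Dual(3.0, 1.0)   # seed dp/dp = 1 to get sensitivities w.r.t. p
y = model(p, 2.0)
print(y.val, y.dot)  # y = 3*2 + 3**2 = 15, dy/dp = x + 2p = 8
```

    Forward mode like this scales with the number of parameters, which is why adjoint (reverse) methods, as in the case studies above, are preferred when there are many parameters and few outputs.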

  14. Biomolecular ions in superfluid helium nanodroplets

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez Florez, Ana Isabel

    2016-07-01

    The function of a biological molecule is closely related to its structure. As a result, understanding and predicting biomolecular structure has become the focus of an extensive field of research. However, the investigation of molecular structure can be hampered by two main difficulties: the inherent complications that may arise from studying biological molecules in their native environment, and the potential congestion of the experimental results as a consequence of the large number of degrees of freedom present in these molecules. In this work, a new experimental setup has been developed and established in order to overcome the aforementioned limitations by combining structure-sensitive gas-phase methods with superfluid helium droplets. First, biological molecules are ionised and brought into the gas phase, often referred to as a clean-room environment, where the species of interest are isolated from their surroundings and, thus, intermolecular interactions are absent. The mass-to-charge selected biomolecules are then embedded inside clusters of superfluid helium with an equilibrium temperature of ≈0.37 K. As a result, the internal energy of the molecules is lowered, thereby reducing the number of populated quantum states. Finally, the local hydrogen bonding patterns of the molecules are investigated by probing specific vibrational modes using the Fritz Haber Institute's free electron laser as a source of infrared radiation. Although the structure of a wide variety of molecules has been studied making use of the sub-Kelvin environment provided by superfluid helium droplets, the suitability of this method for the investigation of biological molecular ions was still unclear. However, the experimental results presented in this thesis demonstrate the applicability of this experimental approach in order to study the structure of intact, large biomolecular ions and the first vibrational spectrum of the protonated pentapeptide leu-enkephalin embedded in helium

  15. Biomolecular ions in superfluid helium nanodroplets

    International Nuclear Information System (INIS)

    Gonzalez Florez, Ana Isabel

    2016-01-01

    The function of a biological molecule is closely related to its structure. As a result, understanding and predicting biomolecular structure has become the focus of an extensive field of research. However, the investigation of molecular structure can be hampered by two main difficulties: the inherent complications that may arise from studying biological molecules in their native environment, and the potential congestion of the experimental results as a consequence of the large number of degrees of freedom present in these molecules. In this work, a new experimental setup has been developed and established in order to overcome the aforementioned limitations by combining structure-sensitive gas-phase methods with superfluid helium droplets. First, biological molecules are ionised and brought into the gas phase, often referred to as a clean-room environment, where the species of interest are isolated from their surroundings and, thus, intermolecular interactions are absent. The mass-to-charge selected biomolecules are then embedded inside clusters of superfluid helium with an equilibrium temperature of ≈0.37 K. As a result, the internal energy of the molecules is lowered, thereby reducing the number of populated quantum states. Finally, the local hydrogen bonding patterns of the molecules are investigated by probing specific vibrational modes using the Fritz Haber Institute's free electron laser as a source of infrared radiation. Although the structure of a wide variety of molecules has been studied making use of the sub-Kelvin environment provided by superfluid helium droplets, the suitability of this method for the investigation of biological molecular ions was still unclear. However, the experimental results presented in this thesis demonstrate the applicability of this experimental approach in order to study the structure of intact, large biomolecular ions and the first vibrational spectrum of the protonated pentapeptide leu-enkephalin embedded in helium

  16. Anaerobic Digestion and Biogas Potential: Simulation of Lab and Industrial-Scale Processes

    Directory of Open Access Journals (Sweden)

    Ihsan Hamawand

    2015-01-01

    Full Text Available In this study, a simulation was carried out using BioWin 3.1 to test the capability of the software to predict the biogas potential for two different anaerobic systems. The two scenarios included: (1) a laboratory-scale batch reactor; and (2) an industrial-scale anaerobic continuous lagoon digester. The measured data related to the operating conditions, the reactor design parameters and the chemical properties of influent wastewater were entered into BioWin. A sensitivity analysis was carried out to identify the sensitivity of the most important default parameters in the software's models. BioWin was then calibrated by matching the predicted data with measured data and used to simulate other parameters that were unmeasured or deemed uncertain. In addition, statistical analyses were carried out using evaluation indices, such as the coefficient of determination (R-squared), the correlation coefficient (r) and its significance (p-value), the general standard deviation (SD) and the Willmott index of agreement, to evaluate the agreement between the software prediction and the measured data. The results have shown that after calibration, BioWin can be used reliably to simulate both small-scale batch reactors and industrial-scale digesters with a mean absolute percentage error (MAPE) of less than 10% and very good values of the indices. Furthermore, changing the default parameters in BioWin, which is itself a way of calibrating the software's models, may provide information about the performance of the digester. Finally, the results of this study showed there may be an overestimation of the biogas generated from industrial-scale digesters. More sophisticated analytical devices may be required for reliable measurements of biogas quality and quantity.
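    The agreement indices named in this abstract are standard and easy to compute. The sketch below uses the textbook definitions of MAPE, Pearson's r, and the Willmott index of agreement (d); these are assumed to match what the authors used, but the paper's exact variants may differ.

```python
import math

def mape(obs, sim):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((o - s) / o) for o, s in zip(obs, sim)) / len(obs)

def pearson_r(obs, sim):
    """Pearson correlation coefficient between observed and simulated series."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim))
    return cov / (so * ss)

def willmott_d(obs, sim):
    """Willmott index of agreement (d); 1 means perfect agreement."""
    mo = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for o, s in zip(obs, sim))
    den = sum((abs(s - mo) + abs(o - mo)) ** 2 for o, s in zip(obs, sim))
    return 1.0 - num / den

obs = [10.0, 12.0, 9.0, 11.0]   # illustrative measured biogas values
sim = [10.5, 11.0, 9.5, 11.5]   # illustrative model predictions
print(mape(obs, sim), pearson_r(obs, sim), willmott_d(obs, sim))
```

    A MAPE below 10%, as reported after calibration, would correspond here to predictions within roughly one part in ten of the measurements on average.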

  17. A dynamic global-coefficient mixed subgrid-scale model for large-eddy simulation of turbulent flows

    International Nuclear Information System (INIS)

    Singh, Satbir; You, Donghyun

    2013-01-01

    Highlights: ► A new SGS model is developed for LES of turbulent flows in complex geometries. ► A dynamic global-coefficient SGS model is coupled with a scale-similarity model. ► Overcomes some of the difficulties associated with eddy-viscosity closures. ► Does not require averaging or clipping of the model coefficient for stabilization. ► The predictive capability is demonstrated in a number of turbulent flow simulations. -- Abstract: A dynamic global-coefficient mixed subgrid-scale eddy-viscosity model for large-eddy simulation of turbulent flows in complex geometries is developed. In the present model, the subgrid-scale stress is decomposed into the modified Leonard stress, cross stress, and subgrid-scale Reynolds stress. The modified Leonard stress is explicitly computed assuming scale similarity, while the cross stress and the subgrid-scale Reynolds stress are modeled using the global-coefficient eddy-viscosity model. The model coefficient is determined by a dynamic procedure based on the global equilibrium between the subgrid-scale dissipation and the viscous dissipation. The new model relieves some of the difficulties associated with an eddy-viscosity closure, such as the nonalignment of the principal axes of the subgrid-scale stress tensor and the strain rate tensor and the anisotropy of turbulent flow fields, while, like other dynamic global-coefficient models, it does not require averaging or clipping of the model coefficient for numerical stabilization. The combination of the global-coefficient eddy-viscosity model and a scale-similarity model is demonstrated to produce improved predictions in a number of turbulent flow simulations.
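    In symbols, the decomposition described above can be written as follows. The notation here is the standard one for mixed scale-similarity models (bar = grid filter, hat = test filter) and is assumed rather than taken verbatim from the paper:

```latex
% Subgrid-scale stress split into resolved (Leonard) and modeled parts:
\tau_{ij}
  = \underbrace{L^{m}_{ij}}_{\text{modified Leonard stress}}
  + \underbrace{C_{ij} + R_{ij}}_{\text{cross + SGS Reynolds stress}},
\qquad
L^{m}_{ij} = \widehat{\bar{u}_i \bar{u}_j} - \hat{\bar{u}}_i \hat{\bar{u}}_j ,
% with the modeled part closed by a global-coefficient eddy-viscosity model:
C_{ij} + R_{ij} \;\approx\;
  -2\, C_g\, \bar{\Delta}^{2} \lvert \bar{S} \rvert\, \bar{S}_{ij}
  + \tfrac{1}{3}\,(C_{kk} + R_{kk})\,\delta_{ij},
```

    where the single global coefficient C_g is set dynamically from the balance between subgrid-scale dissipation and viscous dissipation, which is what removes the need for local averaging or clipping.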

  18. Simulations of ecosystem hydrological processes using a unified multi-scale model

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiaofan; Liu, Chongxuan; Fang, Yilin; Hinkle, Ross; Li, Hong-Yi; Bailey, Vanessa; Bond-Lamberty, Ben

    2015-01-01

    This paper presents a unified multi-scale model (UMSM) that we developed to simulate hydrological processes in an ecosystem containing both surface water and groundwater. The UMSM approach modifies the Navier–Stokes equation by adding a Darcy force term to formulate a single set of equations to describe fluid momentum and uses a generalized equation to describe fluid mass balance. The advantage of the approach is that the single set of equations can describe hydrological processes in both surface water and groundwater, where different models are traditionally required to simulate fluid flow. This feature of the UMSM significantly facilitates modelling of hydrological processes in ecosystems, especially at locations where soil/sediment may be frequently inundated and drained in response to precipitation, regional hydrological and climate changes. In this paper, the UMSM was benchmarked using WASH123D, a model commonly used for simulating coupled surface water and groundwater flow. The Disney Wilderness Preserve (DWP) site at Kissimmee, Florida, where active field monitoring and measurements are ongoing to understand hydrological and biogeochemical processes, was then used as an example to illustrate the UMSM modelling approach. The simulation results demonstrated that the DWP site is subject to frequent changes in soil saturation, the geometry and volume of surface water bodies, and groundwater and surface water exchange. All the hydrological phenomena in the surface water and groundwater components, including inundation and draining, river bank flow, groundwater table change, soil saturation, hydrological interactions between groundwater and surface water, and the migration of surface water and groundwater interfaces, can be simultaneously simulated using the UMSM. Overall, the UMSM offers a cross-scale approach that is particularly suitable to simulate coupled surface and ground water flow in ecosystems with strong surface water and groundwater interactions.
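    A plausible sketch of the unified momentum equation described above is given below. The exact formulation in the paper may differ; the symbols (permeability k, velocity u, pressure p, source/sink q, water content θ) are assumptions for illustration:

```latex
% Navier--Stokes momentum balance augmented with a Darcy drag term:
\rho\!\left(\frac{\partial \mathbf{u}}{\partial t}
    + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \rho\,\mathbf{g}
    \;-\; \frac{\mu}{k}\,\mathbf{u},
% generalized mass balance covering saturated and unsaturated storage:
\frac{\partial (\rho\,\theta)}{\partial t}
  + \nabla\cdot(\rho\,\mathbf{u}) = \rho\, q .
```

    In open water the permeability k is effectively infinite, so the drag term vanishes and the Navier–Stokes equations are recovered; in the subsurface the drag term dominates and the momentum balance reduces to Darcy flow. This is what lets one equation set span both regimes.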

  19. Efficient graph-based dynamic load-balancing for parallel large-scale agent-based traffic simulation

    NARCIS (Netherlands)

    Xu, Y.; Cai, W.; Aydt, H.; Lees, M.; Tolk, A.; Diallo, S.Y.; Ryzhov, I.O.; Yilmaz, L.; Buckley, S.; Miller, J.A.

    2014-01-01

    One of the issues of parallelizing large-scale agent-based traffic simulations is partitioning and load-balancing. Traffic simulations are dynamic applications where the distribution of workload in the spatial domain constantly changes. Dynamic load-balancing at run-time has shown better efficiency

  20. A hydrogel-based versatile screening platform for specific biomolecular recognition in a well plate format.

    Science.gov (United States)

    Beer, Meike V; Rech, Claudia; Diederichs, Sylvia; Hahn, Kathrin; Bruellhoff, Kristina; Möller, Martin; Elling, Lothar; Groll, Jürgen

    2012-04-01

    Precise determination of biomolecular interactions in high throughput crucially depends on a surface coating technique that allows immobilization of a variety of interaction partners in a non-interacting environment. We present a one-step hydrogel coating system based on isocyanate functional six-arm poly(ethylene oxide)-based star polymers for commercially available 96-well microtiter plates that combines a straightforward and robust coating application with versatile bio-functionalization. This system generates resistance to unspecific protein adsorption and cell adhesion, as demonstrated with fluorescently labeled bovine serum albumin and primary human dermal fibroblasts (HDF), and high specificity for the assessment of biomolecular recognition processes when ligands are immobilized on this surface. One particular advantage is the wide range of biomolecules that can be immobilized and convert the per se inert coating into a specifically interacting surface. We here demonstrate the immobilization and quantification of a broad range of biochemically important ligands, such as peptide sequences GRGDS and GRGDSK-biotin, the broadly applicable coupler molecule biocytin, the protein fibronectin, and the carbohydrates N-acetylglucosamine and N-acetyllactosamine. A simplified protocol for an enzyme-linked immunosorbent assay was established for the detection and quantification of ligands on the coating surface. Cell adhesion on the peptide and protein-modified surfaces was assessed using HDF. All coatings were applied using a one-step preparation technique, including bioactivation, which makes the system suitable for high-throughput screening in a format that is compatible with the most routinely used testing systems.

  1. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.

  2. Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations

    Science.gov (United States)

    Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara

    2018-05-01

    Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.
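    The "mean age" used above as a transport timescale is, formally, the first moment of the age spectrum (the boundary Green's function of the transport operator). A minimal numerical sketch is given below; the exponential spectrum is an idealization chosen for illustration, not a form taken from this paper:

```python
import math

def mean_age(times, spectrum):
    """First moment of a discretized age spectrum G(t) on a uniform grid."""
    dt = times[1] - times[0]
    norm = sum(spectrum) * dt                   # ~1 if G is normalized
    return sum(t * g for t, g in zip(times, spectrum)) * dt / norm

# Idealized exponential age spectrum with turnover time tau = 2.0 years:
tau = 2.0
times = [i * 0.01 for i in range(5000)]         # 0 to 50 years
G = [math.exp(-t / tau) / tau for t in times]
gamma = mean_age(times, G)                      # close to tau
```

    Intermodel spreads of 1.7 to 2.6 years in mean age, as reported above, therefore correspond directly to differences in the models' age spectra, i.e. in how quickly air from the NH midlatitude surface reaches the region in question.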

  3. Large-scale tropospheric transport in the Chemistry–Climate Model Initiative (CCMI) simulations

    Directory of Open Access Journals (Sweden)

    C. Orbe

    2018-05-01

    Full Text Available Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry–Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  4. Fire spread simulation of a full scale cable tunnel

    International Nuclear Information System (INIS)

    Huhtanen, R.

    1999-11-01

    A fire simulation of a full scale tunnel was performed by using the commercial code EFFLUENT as the simulation platform. Estimates were made for fire spread on the stacked cable trays, the possibility of fire spread to the cable trays on the opposite wall of the tunnel, the detection time of smoke detectors in the smouldering phase and the response of sprinkler heads in the flaming phase. According to the simulation, the rise of temperature in the smouldering phase is minimal, only of the order of 1 deg C. The estimates of optical density of smoke show that normal smoke detectors should give an alarm within 2-4 minutes from the beginning of the smouldering phase, depending on the distance to the detector (in this case it was assumed that the thermal source connected to the smoke source was 50 W). The flow conditions at the smoke detectors may be challenging, because the velocity magnitude is rather low in this phase. At 4 minutes the maximum velocity at the detectors is 0.12 m/s. During the flaming phase (beginning from 11 minutes) fire spreads on the stacked cable trays in the expected way, although the ignition criterion seems to perform poorly when ignition of new objects is considered. The upper cable trays are forced to ignite by boundary condition definitions according to the experience gained from the full scale experiment and an earlier simulation. After 30 minutes the hot layer in the room becomes so hot that it speeds up the fire spread and the rate of heat release of burning objects. Further, the hot layer ignites the cable trays on the opposite wall of the tunnel after 45 minutes. It is estimated that the sprinkler heads would be activated at 20-22 minutes near the fire source and at 24-28 minutes a little further from the fire source when fast sprinkler heads are used. The slow heads are activated between 26-32 minutes. (orig.)

  5. Evolução biomolecular homoquiral: a origem e a amplificação da quiralidade nas moléculas da vida Homochiral biomolecular evolution: the origin and the amplification of chirality in life molecules

    Directory of Open Access Journals (Sweden)

    José Augusto R. Rodrigues

    2010-01-01

    Full Text Available The fact that biologically relevant molecules exist only as one of the two enantiomers is a fascinating example of complete symmetry breaking of chirality and has long intrigued our curiosity. The origin of this selective chirality has remained a fundamental enigma with regard to the origin of life since the time of Pasteur, 160 years ago. The symmetry breaking processes, which include autocatalytic crystallization, asymmetric autocatalysis, spontaneous crystallization, adsorption and polymerization of amino acids on mineral surfaces, provide new insights into the origin of biomolecular homochirality.

  6. A detailed model for simulation of catchment scale subsurface hydrologic processes

    Science.gov (United States)

    Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    A catchment scale numerical model is developed based on the three-dimensional transient Richards equation describing fluid flow in variably saturated porous media. The model is designed to take advantage of digital elevation data bases and of information extracted from these data bases by topographic analysis. The practical application of the model is demonstrated in simulations of a small subcatchment of the Konza Prairie reserve near Manhattan, Kansas. In a preliminary investigation of computational issues related to model resolution, we obtain satisfactory numerical results using large aspect ratios, suggesting that horizontal grid dimensions may not be unreasonably constrained by the typically much smaller vertical length scale of a catchment and by vertical discretization requirements. Additional tests are needed to examine the effects of numerical constraints and parameter heterogeneity in determining acceptable grid aspect ratios. In other simulations we attempt to match the observed streamflow response of the catchment, and we point out the small contribution of the streamflow component to the overall water balance of the catchment.
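    The governing equation referred to above is the three-dimensional transient Richards equation for variably saturated flow. A standard form is sketched below with assumed notation (pressure head ψ, moisture content θ, hydraulic conductivity K, source/sink term q); the paper's exact formulation may differ in detail:

```latex
\frac{\partial \theta(\psi)}{\partial t}
  = \nabla \cdot \Bigl[\, K(\psi)\,\bigl(\nabla \psi + \nabla z \bigr) \Bigr] + q ,
```

    where the ∇z term is the gravitational contribution. The nonlinearity of θ(ψ) and K(ψ) is what makes vertical discretization demanding and motivates the grid aspect-ratio experiments discussed in the abstract.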

  7. An efficient non-hydrostatic dynamical core for high-resolution simulations down to the urban scale

    International Nuclear Information System (INIS)

    Bonaventura, L.; Cesari, D.

    2005-01-01

    Numerical simulations of idealized stratified flows over obstacles at different spatial scales demonstrate the very general applicability and the parallel efficiency of a new non-hydrostatic dynamical core for simulation of mesoscale flows over complex terrain

  8. Small Scale Mixing Demonstration Batch Transfer and Sampling Performance of Simulated HLW - 12307

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, Jesse; Townson, Paul; Vanatta, Matt [EnergySolutions, Engineering and Technology Group, Richland, WA, 99354 (United States)

    2012-07-01

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment Plant (WTP) has been recognized as a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. At the end of 2009 DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), awarded a contract to EnergySolutions to design, fabricate and operate a demonstration platform called the Small Scale Mixing Demonstration (SSMD) to establish pre-transfer sampling capacity and batch transfer performance data at two different scales. This data will be used to examine the baseline capacity for a tank mixed via rotational jet mixers to transfer consistent or bounding batches, and provide scale-up information to predict full scale operational performance. This information will then in turn be used to define the baseline capacity of such a system to transfer and sample batches sent to WTP. The Small Scale Mixing Demonstration (SSMD) platform consists of 43'' and 120'' diameter clear acrylic test vessels, each equipped with two scaled jet mixer pump assemblies, and all supporting vessels, controls, services, and simulant make-up facilities. All tank internals have been modeled including the air lift circulators (ALCs), the steam heating coil, and the radius between the wall and floor. The test vessels are set up to simulate the transfer of HLW out of a mixed tank, and collect a pre-transfer sample in a manner similar to the proposed baseline configuration. The collected material is submitted to an NQA-1 laboratory for chemical analysis. Previous work has been done to assess tank mixing performance at both scales. This work involved a combination of unique instruments to understand the three dimensional distribution of solids using a combination of Coriolis meter measurements, in situ chord length distribution

  9. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
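    Broyden's method, as described above, replaces the Jacobian with an approximation that is updated by rank-one secant corrections, so codes that cannot evaluate a Jacobian can still converge. A minimal dense sketch of the "good" Broyden update is given below; it is an illustration only, not the limited-memory, large-scale variant developed in the report (the finite-difference seeding of the initial Jacobian is also an assumption made here for robustness):

```python
import numpy as np

def fd_jacobian(F, x, eps=1e-7):
    """Forward-difference Jacobian, used only to seed the approximation."""
    Fx = F(x)
    J = np.empty((len(Fx), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - Fx) / eps
    return J

def broyden_good(F, x0, tol=1e-10, max_iter=50):
    """'Good' Broyden iteration: rank-one secant updates of an approximate
    Jacobian, so the true Jacobian is never re-evaluated during the solve."""
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)                    # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)          # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s)  # secant (Broyden) update
        x, Fx = x_new, F_new
    return x

# Toy system: x0^2 + x1^2 = 4 and x0 = x1, with root (sqrt(2), sqrt(2)).
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

root = broyden_good(F, np.array([1.0, 2.0]))
```

    The limited-memory variants referenced in the abstract avoid storing the dense B by keeping only the recent rank-one update vectors, the same idea that makes L-BFGS practical in optimization.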

  10. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    Energy Technology Data Exchange (ETDEWEB)

    Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  11. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balance

    Science.gov (United States)

    Tang, G.; Bartlein, P. J.

    2012-01-01

    Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the coterminous United States (US). Toward these ends, we first introduce the development of the "LPJ-Hydrology" (LH) model, which incorporates satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of dynamically simulating them. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variation of monthly actual ET (R2 = 0.61, p < 0.01), soil moisture (R2 > 0.46, p < 0.01) and surface runoff (R2 > 0.52, p < 0.01) with observed values over the years 1982-2006, respectively. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH simulates monthly stream flow in winter and early spring better by incorporating effects of solar radiation on snowmelt. Overall, this study proves the feasibility of incorporating satellite-based land covers into a DGVM for simulating large spatial scale land surface water balance. LH, developed in this study, should be a useful tool for studying effects of climate and land cover change on land surface hydrology at large spatial scales.
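    The appeal of "models of simple structure" claimed above can be made concrete with a toy single-bucket water balance. This sketch only illustrates the monthly bookkeeping among precipitation, ET, storage, and runoff; it is not the LPJ-Hydrology scheme, and the capacity value is an arbitrary assumption:

```python
def bucket_step(soil, precip, pet, capacity=150.0):
    """One monthly step of a toy single-bucket water balance (all in mm).
    Actual ET is limited by available water; storage above field
    capacity leaves immediately as surface runoff."""
    aet = min(pet, soil + precip)        # actual ET cannot exceed supply
    soil = soil + precip - aet
    runoff = max(0.0, soil - capacity)   # saturation excess
    soil -= runoff
    return soil, aet, runoff

# Two months: a moderate month, then a wet month that generates runoff.
soil = 100.0
soil, aet1, q1 = bucket_step(soil, precip=80.0, pet=50.0)    # -> 130, 50, 0
soil, aet2, q2 = bucket_step(soil, precip=100.0, pet=40.0)   # -> 150, 40, 40
```

    Because each month closes exactly (precipitation = AET + runoff + storage change), cause and effect are transparent, which is the property the abstract argues for at the continental scale.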

  12. Potential environmental impacts associated with large-scale herbicide-tolerant GM oilseed rape crops

    Directory of Open Access Journals (Sweden)

    Fellous Marc

    2004-07-01

    Full Text Available The Biomolecular Engineering Commission considers that the knowledge acquired in the last three years has provided significant information in reply to the points raised in its review dated 16 February 2001. The Commission has studied the potential environmental impacts associated with large-scale herbicide-tolerant GM oilseed rape crops, making a distinction between direct and indirect impacts. Direct impacts stem from the intrinsic properties of herbicide-tolerant GM oilseed rape crops whereas indirect impacts result from practices associated with the farming of these crops. The Commission considers that, in the absence of the use of the herbicide in question in and outside of farmed land, there is no direct environmental risk (development of invasive crops per se) associated with the presence of a herbicide-tolerance gene in oilseed rape (or related species). Nevertheless, since the interest of these tolerant crops lies in the use of the herbicide in question, indirect effects, to varying extents, have been identified and must be taken into account: the use of the herbicide in question, applied to agricultural fields containing the herbicide-tolerant crop could lead to an increase in oilseed rape volunteer populations in crop rotations; the selective pressure exerted by non-specific herbicides (to which the crops have been rendered tolerant) may be very high in cases of continuous and uncontrolled use of these herbicides, and may result in the persistence of rare events such as the reproduction of fertile interspecies hybrids; the change to the range of herbicides used should be conveyed by more effective weed control and, like any change in farming practices, induce indirect effects on the agri-ecosystem, particularly in terms of changes to weeds and the associated animal life. Accordingly, the Biomolecular Engineering Commission recommends a global approach in terms of the large-scale farming of herbicide-tolerant crops that: accounts for the

  13. Sensitivity of the scale partition for variational multiscale large-eddy simulation of channel flow

    NARCIS (Netherlands)

    Holmen, J.; Hughes, T.J.R.; Oberai, A.A.; Wells, G.N.

    2004-01-01

    The variational multiscale method has been shown to perform well for large-eddy simulation (LES) of turbulent flows. The method relies upon a partition of the resolved velocity field into large- and small-scale components. The subgrid model then acts only on the small scales of motion, unlike

  14. Proceedings of joint meeting of the 6th simulation science symposium and the NIFS collaboration research 'large scale computer simulation'

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-03-01

    Joint meeting of the 6th Simulation Science Symposium and the NIFS Collaboration Research 'Large Scale Computer Simulation' was held on December 12-13, 2002 at National Institute for Fusion Science, with the aim of promoting interdisciplinary collaborations in various fields of computer simulations. The present meeting attended by more than 40 people consists of the 11 invited and 22 contributed papers, of which topics were extended not only to fusion science but also to related fields such as astrophysics, earth science, fluid dynamics, molecular dynamics, computer science etc. (author)

  15. Convective aggregation in realistic convective-scale simulations

    Science.gov (United States)

    Holloway, Christopher E.

    2017-06-01

    To investigate the real-world relevance of idealized-model convective self-aggregation, five 15 day cases of real organized convection in the tropics are simulated. These include multiple simulations of each case to test sensitivities of the convective organization and mean states to interactive radiation, interactive surface fluxes, and evaporation of rain. These simulations are compared to self-aggregation seen in the same model configured to run in idealized radiative-convective equilibrium. Analysis of the budget of the spatial variance of column-integrated frozen moist static energy shows that control runs have significant positive contributions to organization from radiation and negative contributions from surface fluxes and transport, similar to idealized runs once they become aggregated. Despite identical lateral boundary conditions for all experiments in each case, systematic differences in mean column water vapor (CWV), CWV distribution shape, and CWV autocorrelation length scale are found between the different sensitivity runs, particularly for those without interactive radiation, showing that there are at least some similarities in sensitivities to these feedbacks in both idealized and realistic simulations (although the organization of precipitation shows less sensitivity to interactive radiation). The magnitudes and signs of these systematic differences are consistent with a rough equilibrium between (1) equalization due to advection from the lateral boundaries and (2) disaggregation due to the absence of interactive radiation, implying disaggregation rates comparable to those in idealized runs with aggregated initial conditions and noninteractive radiation. This points to a plausible similarity in the way that radiation feedbacks maintain aggregated convection in both idealized simulations and the real world. Plain Language Summary: Understanding the processes that lead to the organization of tropical rainstorms is an important challenge for weather

  16. AIM for Allostery: Using the Ising Model to Understand Information Processing and Transmission in Allosteric Biomolecular Systems.

    Science.gov (United States)

    LeVine, Michael V; Weinstein, Harel

    2015-05-01

    In performing their biological functions, molecular machines must process and transmit information with high fidelity. Information transmission requires dynamic coupling between the conformations of discrete structural components within the protein positioned far from one another on the molecular scale. This type of biomolecular "action at a distance" is termed allostery. Although allostery is ubiquitous in biological regulation and signal transduction, its treatment in theoretical models has mostly eschewed quantitative descriptions involving the system's underlying structural components and their interactions. Here, we show how Ising models can be used to formulate an approach to allostery in a structural context of interactions between the constitutive components by building simple allosteric constructs we termed Allosteric Ising Models (AIMs). We introduce the use of AIMs in analytical and numerical calculations that relate thermodynamic descriptions of allostery to the structural context, and then show that many fundamental properties of allostery, such as the multiplicative property of parallel allosteric channels, are revealed from the analysis of such models. The power of exploring mechanistic structural models of allosteric function in more complex systems by using AIMs is demonstrated by building a model of allosteric signaling for an experimentally well-characterized asymmetric homodimer of the dopamine D2 receptor.
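    The Ising-model picture of allostery can be made concrete with an exactly solvable two-spin sketch: one spin stands in for a distal (ligand-binding) site, the other for the active site, and a coupling constant carries the "action at a distance". The coupling, fields, and unit temperature below are illustrative assumptions, not parameters from the paper:

```python
import math
from itertools import product

def site_prob(J, h_a, h_b=0.0, beta=1.0):
    """Probability that the 'active' spin s_b is +1 in a two-spin Ising
    model with energy E = -J*s_a*s_b - h_a*s_a - h_b*s_b (beta = 1/kT)."""
    Z = 0.0
    p_up = 0.0
    for s_a, s_b in product((-1, 1), repeat=2):
        w = math.exp(beta * (J * s_a * s_b + h_a * s_a + h_b * s_b))
        Z += w
        if s_b == 1:
            p_up += w
    return p_up / Z

# A "ligand binding" event at the distal site (field h_a: 0 -> 1) shifts
# the active-site state only when the two sites are coupled (J != 0):
apo       = site_prob(J=1.0, h_a=0.0)  # no ligand: active site unbiased
bound     = site_prob(J=1.0, h_a=1.0)  # ligand + coupling: active site biased
uncoupled = site_prob(J=0.0, h_a=1.0)  # ligand but no coupling: no effect
print(apo, bound, uncoupled)
```

    The shift of the active-site probability upon "binding" is a minimal illustration of the kind of information transmission an AIM quantifies.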

  17. A QM-MD simulation approach to the analysis of FRET processes in (bio)molecular systems. A case study: complexes of E. coli purine nucleoside phosphorylase and its mutants with formycin A.

    Science.gov (United States)

    Sobieraj, M; Krzyśko, K A; Jarmuła, A; Kalinowski, M W; Lesyng, B; Prokopowicz, M; Cieśla, J; Gojdź, A; Kierdaszuk, B

    2015-04-01

    Predicting FRET pathways in proteins using computer simulation techniques is very important for reliable interpretation of experimental data. A novel and relatively simple methodology has been developed and applied to purine nucleoside phosphorylase (PNP) complexed with a fluorescent ligand - formycin A (FA). FRET occurs between an excited Tyr residue (D*) and FA (A). This study aims to interpret experimental data that, among others, suggests the absence of FRET for the PNPF159A mutant in complex with FA, based on novel theoretical methodology. MD simulations for the protein molecule containing D*, and complexed with A, are carried out. Interactions of D* with its molecular environment are accounted for by including changes of the ESP charges in S1, compared to S0, and computed at the SCF-CI level. The FRET probability W_F depends on the inverse sixth power of the D*-A distance, R_DA. The orientational factor k² (0 < k² < 4) between D* and A is computed and included in the analysis. Finally, W_F is time-averaged over the MD trajectories resulting in its mean value. The red-shift of the tyrosinate anion emission and thus lack of spectral overlap integral and thermal energy dissipation are the reasons for the FRET absence in the studied mutants at pH 7 and above. The presence of the tyrosinate anion results in a competitive energy dissipation channel and red-shifted emission, thus in consequence in the absence of FRET. These studies also indicate an important role of the phenyl ring of Phe159 for FRET in the wild-type PNP, which does not exist in the Ala159 mutant, and for the effective association of PNP with FA. In a more general context, our observations point out very interesting and biologically important properties of the tyrosine residue in its excited state, which may undergo spontaneous deprotonation in the biomolecular systems, resulting further in unexpected physical and/or biological phenomena. Until now, this observation has not been widely discussed in the
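    The inverse-sixth-power distance dependence of Förster-type transfer, and the k²-dependent rescaling of the Förster radius, can be sketched in a few lines. This is an illustrative textbook-style calculation, not the paper's QM-MD pipeline, and the function names and values are assumptions:

```python
def fret_efficiency(r, r0):
    """Förster transfer efficiency: inverse sixth-power dependence on the
    donor-acceptor distance r, relative to the Förster radius r0."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def r0_with_kappa(r0_iso, kappa2):
    """Rescale the Förster radius for an orientational factor kappa^2
    (0 <= kappa^2 <= 4); r0_iso is the isotropic value at kappa^2 = 2/3."""
    return r0_iso * (kappa2 / (2.0 / 3.0)) ** (1.0 / 6.0)

# At r = r0 the efficiency is exactly one half:
print(fret_efficiency(1.0, 1.0))  # → 0.5
```

    Averaging such per-frame efficiencies over an MD trajectory, with a per-frame k², is the spirit of the time-averaging described above.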

  18. Anaerobic Digestion and Biogas Potential: Simulation of Lab and Industrial-Scale Processes

    OpenAIRE

    Ihsan Hamawand; Craig Baillie

    2015-01-01

    In this study, a simulation was carried out using BioWin 3.1 to test the capability of the software to predict the biogas potential for two different anaerobic systems. The two scenarios included: (1) a laboratory-scale batch reactor; and (2) an industrial-scale anaerobic continuous lagoon digester. The measured data related to the operating conditions, the reactor design parameters and the chemical properties of influent wastewater were entered into BioWin. A sensitivity analysis was carried...

  19. Simulating Nationwide Pandemics: Applying the Multi-scale Epidemiologic Simulation and Analysis System to Human Infectious Diseases

    Energy Technology Data Exchange (ETDEWEB)

    Dombroski, M; Melius, C; Edmunds, T; Banks, L E; Bates, T; Wheeler, R

    2008-09-24

    This study uses the Multi-scale Epidemiologic Simulation and Analysis (MESA) system developed for foreign animal diseases to assess consequences of nationwide human infectious disease outbreaks. A literature review identified the state of the art in both small-scale regional models and large-scale nationwide models and characterized key aspects of a nationwide epidemiological model. The MESA system offers computational advantages over existing epidemiological models and enables a broader array of stochastic analyses of model runs to be conducted because of those computational advantages. However, it has only been demonstrated on foreign animal diseases. This paper applied the MESA modeling methodology to human epidemiology. The methodology divided 2000 US Census data at the census tract level into school-bound children, work-bound workers, elderly, and stay at home individuals. The model simulated mixing among these groups by incorporating schools, workplaces, households, and long-distance travel via airports. A baseline scenario with fixed input parameters was run for a nationwide influenza outbreak using relatively simple social distancing countermeasures. Analysis from the baseline scenario showed one of three possible results: (1) the outbreak burned itself out before it had a chance to spread regionally, (2) the outbreak spread regionally and lasted a relatively long time, although constrained geography enabled it to eventually be contained without affecting a disproportionately large number of people, or (3) the outbreak spread through air travel and lasted a long time with unconstrained geography, becoming a nationwide pandemic. These results are consistent with empirical influenza outbreak data. The results showed that simply scaling up a regional small-scale model is unlikely to account for all the complex variables and their interactions involved in a nationwide outbreak. 
There are several limitations of the methodology that should be explored in future
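    The three outcome classes described above (burn-out, contained regional spread, nationwide pandemic) are characteristic of stochastic epidemic models. A minimal well-mixed Gillespie-style SIR sketch (not the MESA methodology, which adds census-tract geography and mixing groups) shows the threshold behavior; population sizes and rates are illustrative:

```python
import random

def gillespie_sir(n, i0, beta, gamma, seed=1):
    """Minimal stochastic SIR outbreak in a well-mixed population of n
    people, starting with i0 infected; returns the final outbreak size."""
    random.seed(seed)
    s, i, r = n - i0, i0, 0
    while i > 0:
        rate_inf = beta * s * i / n   # infection events S -> I
        rate_rec = gamma * i          # recovery events I -> R
        if random.random() < rate_inf / (rate_inf + rate_rec):
            s -= 1; i += 1
        else:
            i -= 1; r += 1
    return r

# Below threshold (R0 = beta/gamma < 1) outbreaks tend to burn out;
# above it, most runs become large epidemics:
print(gillespie_sir(1000, 5, beta=0.5, gamma=1.0))
print(gillespie_sir(1000, 5, beta=3.0, gamma=1.0))
```

    Running many such trajectories and examining the distribution of final sizes is the kind of stochastic analysis the MESA system performs at far larger scale.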

  20. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10⁻⁶. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology, i

  1. Tests of peak flow scaling in simulated self-similar river networks

    Science.gov (United States)

    Menabde, M.; Veitzer, S.; Gupta, V.; Sivapalan, M.

    2001-01-01

    The effect of linear flow routing incorporating attenuation and network topology on peak flow scaling exponent is investigated for an instantaneously applied uniform runoff on simulated deterministic and random self-similar channel networks. The flow routing is modelled by a linear mass conservation equation for a discrete set of channel links connected in parallel and series, and having the same topology as the channel network. A quasi-analytical solution for the unit hydrograph is obtained in terms of recursion relations. The analysis of this solution shows that the peak flow has an asymptotically scaling dependence on the drainage area for deterministic Mandelbrot-Vicsek (MV) and Peano networks, as well as for a subclass of random self-similar channel networks. However, the scaling exponent is shown to be different from that predicted by the scaling properties of the maxima of the width functions. © 2001 Elsevier Science Ltd. All rights reserved.
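    The asymptotic scaling of peak flow with drainage area, Q_p ~ A**theta, is conventionally estimated as a log-log slope. A small sketch with synthetic data (the exponent 0.6 and prefactor are illustrative, not values from the paper):

```python
import math

def scaling_exponent(areas, peaks):
    """Least-squares slope of log(peak flow) vs log(drainage area):
    the scaling exponent theta in Q_p ~ A**theta."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(q) for q in peaks]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic peak flows generated with theta = 0.6 (illustrative only):
areas = [1.0, 10.0, 100.0, 1000.0]
peaks = [2.0 * a ** 0.6 for a in areas]
print(round(scaling_exponent(areas, peaks), 3))  # → 0.6
```

    Comparing such empirically fitted exponents against those predicted from width-function maxima is the discrepancy the abstract highlights.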

  2. The Roles of Sparse Direct Methods in Large-scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-06-27

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) project is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence for iterative methods. We have deployed our direct methods techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research.

  3. The Roles of Sparse Direct Methods in Large-scale Simulations

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-01-01

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) project is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence for iterative methods. We have deployed our direct methods techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research.

  4. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set then are quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalyses estimates of precipitation resembling the AMIP II simulations more than they do the chosen reference data. In aggregate, the simulations of land-surface latent and

  5. Establishment of DNS database in a turbulent channel flow by large-scale simulations

    OpenAIRE

    Abe, Hiroyuki; Kawamura, Hiroshi; 阿部 浩幸; 河村 洋

    2008-01-01

    In the present study, we establish statistical DNS (Direct Numerical Simulation) database in a turbulent channel flow with passive scalar transport at high Reynolds numbers and make the data available at our web site (http://murasun.me.noda.tus.ac.jp/turbulence/). The established database is reported together with the implementation of large-scale simulations, representative DNS results and results on turbulence model testing using the DNS data.

  6. Comparison of contact stresses of the test tyres used by the one third scale model mobile load simulator (MMLS3) and the full-scale test tyres of the Heavy Vehicle Simulator (HVS) - a summary

    CSIR Research Space (South Africa)

    De Beer, Morris

    2007-07-01

    Full Text Available This paper summarises the results of a study in which the maximum vertical contact stresses of the one third scale test tyres of the Model Mobile Load Simulator (MMLS3) were compared with those measured for three types of full-scale test tyres...

  7. The German VR Simulation Realism Scale--psychometric construction for virtual reality applications with virtual humans.

    Science.gov (United States)

    Poeschl, Sandra; Doering, Nicola

    2013-01-01

    Virtual training applications with high levels of immersion or fidelity (for example for social phobia treatment) produce high levels of presence and therefore belong to the most successful Virtual Reality developments. Whereas display and interaction fidelity (as sub-dimensions of immersion) and their influence on presence are well researched, realism of the displayed simulation depends on the specific application and is therefore difficult to measure. We propose to measure simulation realism by using a self-report questionnaire. The German VR Simulation Realism Scale for VR training applications was developed based on a translation of scene realism items from the Witmer-Singer Presence Questionnaire. Items for realism of virtual humans (for example for social phobia training applications) were supplemented. A sample of N = 151 students rated simulation realism of a Fear of Public Speaking application. Four factors were derived by item analysis and principal component analysis (Varimax rotation), representing Scene Realism, Audience Behavior, Audience Appearance and Sound Realism. The scale developed can be used as a starting point for future research and measurement of simulation realism for applications including virtual humans.

  8. Challenges in analysing and visualizing large-scale molecular dynamics simulations: domain and defect formation in lung surfactant monolayers

    International Nuclear Information System (INIS)

    Mendez-Villuendas, E; Baoukina, S; Tieleman, D P

    2012-01-01

    Molecular dynamics simulations have rapidly grown in size and complexity, as computers have become more powerful and molecular dynamics software more efficient. Using coarse-grained models like MARTINI system sizes of the order of 50 nm × 50 nm × 50 nm can be simulated on commodity clusters on microsecond time scales. For simulations of biological membranes and monolayers mimicking lung surfactant this enables large-scale transformation and complex mixtures of lipids and proteins. Here we use a simulation of a monolayer with three phospholipid components, cholesterol, lung surfactant proteins, water, and ions on a ten microsecond time scale to illustrate some current challenges in analysis. In the simulation, phase separation occurs followed by formation of a bilayer fold in which lipids and lung surfactant protein form a highly curved structure in the aqueous phase. We use Voronoi analysis to obtain detailed physical properties of the different components and phases, and calculate local mean and Gaussian curvatures of the bilayer fold.

  9. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    Science.gov (United States)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10⁴, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = -0.0909 to Rot = 0.3 are simulated. First, the LES of TC is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of cs = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase for increasing rotation. This is attributed to the increasing anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
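    The static Smagorinsky closure referenced above computes an eddy viscosity nu_t = (cs · Δ)² |S| from the resolved strain-rate tensor. A minimal sketch with illustrative values (the shear rate and filter width are assumptions, not the paper's setup):

```python
import math

def smagorinsky_nu_t(cs, delta, grad):
    """Smagorinsky eddy viscosity nu_t = (cs*delta)**2 * |S|, where
    |S| = sqrt(2*S_ij*S_ij) and grad is the 3x3 velocity-gradient tensor."""
    # Strain-rate tensor S_ij = 0.5*(du_i/dx_j + du_j/dx_i)
    s = [[0.5 * (grad[i][j] + grad[j][i]) for j in range(3)] for i in range(3)]
    s_mag = math.sqrt(2.0 * sum(s[i][j] ** 2 for i in range(3) for j in range(3)))
    return (cs * delta) ** 2 * s_mag

# Plane shear du1/dx2 = 100 1/s, with cs = 0.1 and a 0.01 m filter width:
grad = [[0.0, 100.0, 0.0],
        [0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0]]
print(smagorinsky_nu_t(0.1, 0.01, grad))  # → 0.0001 (m^2/s)
```

    "Over-damping" the model as described above amounts to raising cs well beyond 0.1, which increases nu_t quadratically and smooths out the small scales.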

  10. Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation

    Directory of Open Access Journals (Sweden)

    Shunli Wang

    2016-01-01

    Full Text Available We introduce an automatic optimization approach for the simulation of large-scale coastal water. To solve the singular problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is estimated by using an automatic coupling algorithm. It can handle arbitrary water depth and different underwater terrain. As a certain feature of coastal terrain, coastline is detected with the collision detection technology. Then, unnecessary water grid cells are simplified by the automatic simplification algorithm according to the depth. Finally, the model is calculated on the Central Processing Unit (CPU) and the simulation is implemented on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results which achieve real-time rendering on a consumer-level computer.

  11. A comparison of large-scale electron beam and bench-scale 60Co irradiations of simulated aqueous waste streams

    Science.gov (United States)

    Kurucz, Charles N.; Waite, Thomas D.; Otaño, Suzana E.; Cooper, William J.; Nickelsen, Michael G.

    2002-11-01

    The effectiveness of using high energy electron beam irradiation for the removal of toxic organic chemicals from water and wastewater has been demonstrated by commercial-scale experiments conducted at the Electron Beam Research Facility (EBRF) located in Miami, Florida and elsewhere. The EBRF treats various waste and water streams up to 450 l min⁻¹ (120 gal min⁻¹) with doses up to 8 kilogray (kGy). Many experiments have been conducted by injecting toxic organic compounds into various plant feed streams and measuring the concentrations of compound(s) before and after exposure to the electron beam at various doses. Extensive experimentation has also been performed by dissolving selected chemicals in 22,700 l (6000 gal) tank trucks of potable water to simulate contaminated groundwater, and pumping the resulting solutions through the electron beam. These large-scale experiments, although necessary to demonstrate the commercial viability of the process, require a great deal of time and effort. This paper compares the results of large-scale electron beam irradiations to those obtained from bench-scale irradiations using gamma rays generated by a 60Co source. Dose constants from exponential contaminant removal models are found to depend on the source of radiation and initial contaminant concentration. Possible reasons for observed differences such as a dose rate effect are discussed. Models for estimating electron beam dose constants from bench-scale gamma experiments are presented. Data used to compare the removal of organic compounds using gamma irradiation and electron beam irradiation are taken from the literature and a series of experiments designed to examine the effects of pH, the presence of turbidity, and initial concentration on the removal of various organic compounds (benzene, toluene, phenol, PCE, TCE and chloroform) from simulated groundwater.
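    The exponential contaminant-removal model behind the dose constants, C(D) = C0·exp(-k·D), can be fit by linear regression on the log concentration. A short sketch with synthetic data (the dose constant 0.5 kGy⁻¹ is illustrative, not a measured value from the paper):

```python
import math

def dose_constant(doses, concentrations):
    """Least-squares estimate of k in C(D) = C0*exp(-k*D), obtained by
    fitting ln C = ln C0 - k*D; doses in kGy, k in 1/kGy."""
    ys = [math.log(c) for c in concentrations]
    n = len(doses)
    md, my = sum(doses) / n, sum(ys) / n
    num = sum((d - md) * (y - my) for d, y in zip(doses, ys))
    den = sum((d - md) ** 2 for d in doses)
    return -num / den  # negate: the fitted slope is -k

# Synthetic removal data generated with k = 0.5 per kGy (illustrative):
doses = [0.0, 2.0, 4.0, 8.0]
conc = [100.0 * math.exp(-0.5 * d) for d in doses]
print(round(dose_constant(doses, conc), 3))  # → 0.5
```

    Comparing k values fitted this way for gamma versus electron beam data is the core of the cross-source comparison described above.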

  12. A comparison of large-scale electron beam and bench-scale 60Co irradiations of simulated aqueous waste streams

    International Nuclear Information System (INIS)

    Kurucz, Charles N.; Waite, Thomas D.; Otano, Suzana E.; Cooper, William J.; Nickelsen, Michael G.

    2002-01-01

    The effectiveness of using high energy electron beam irradiation for the removal of toxic organic chemicals from water and wastewater has been demonstrated by commercial-scale experiments conducted at the Electron Beam Research Facility (EBRF) located in Miami, Florida and elsewhere. The EBRF treats various waste and water streams up to 450 l min⁻¹ (120 gal min⁻¹) with doses up to 8 kilogray (kGy). Many experiments have been conducted by injecting toxic organic compounds into various plant feed streams and measuring the concentrations of compound(s) before and after exposure to the electron beam at various doses. Extensive experimentation has also been performed by dissolving selected chemicals in 22,700 l (6000 gal) tank trucks of potable water to simulate contaminated groundwater, and pumping the resulting solutions through the electron beam. These large-scale experiments, although necessary to demonstrate the commercial viability of the process, require a great deal of time and effort. This paper compares the results of large-scale electron beam irradiations to those obtained from bench-scale irradiations using gamma rays generated by a 60Co source. Dose constants from exponential contaminant removal models are found to depend on the source of radiation and initial contaminant concentration. Possible reasons for observed differences such as a dose rate effect are discussed. Models for estimating electron beam dose constants from bench-scale gamma experiments are presented. Data used to compare the removal of organic compounds using gamma irradiation and electron beam irradiation are taken from the literature and a series of experiments designed to examine the effects of pH, the presence of turbidity, and initial concentration on the removal of various organic compounds (benzene, toluene, phenol, PCE, TCE and chloroform) from simulated groundwater.

  13. Susceptibility of the MMPI-2-RF neurological complaints and cognitive complaints scales to over-reporting in simulated head injury.

    Science.gov (United States)

    Bolinger, Elizabeth; Reese, Caitlin; Suhr, Julie; Larrabee, Glenn J

    2014-02-01

    We examined the effect of simulated head injury on scores on the Neurological Complaints (NUC) and Cognitive Complaints (COG) scales of the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF). Young adults with a history of mild head injury were randomly assigned to simulate head injury or give their best effort on a battery of neuropsychological tests, including the MMPI-2-RF. Simulators who also showed poor effort on performance validity tests (PVTs) were compared with controls who showed valid performance on PVTs. Results showed that both scales, but especially NUC, are elevated in individuals simulating head injury, with medium to large effect sizes. Although both scales were highly correlated with all MMPI-2-RF over-reporting validity scales, the relationship of Response Bias Scale to both NUC and COG was much stronger in the simulators than controls. Even accounting for over-reporting on the MMPI-2-RF, NUC was related to general somatic complaints regardless of group membership, whereas COG was related to both psychological distress and somatic complaints in the control group only. Neither scale was related to actual neuropsychological performance, regardless of group membership. Overall, results provide further evidence that self-reported cognitive symptoms can be due to many causes, not necessarily cognitive impairment, and can be exaggerated in a non-credible manner.

  14. Electron Debye scale Kelvin-Helmholtz instability: Electrostatic particle-in-cell simulations

    International Nuclear Information System (INIS)

    Lee, Sang-Yun; Lee, Ensang; Kim, Khan-Hyuk; Lee, Dong-Hun; Seon, Jongho; Jin, Ho

    2015-01-01

    In this paper, we investigated the electron Debye scale Kelvin-Helmholtz (KH) instability using two-dimensional electrostatic particle-in-cell simulations. We introduced a velocity shear layer with a thickness comparable to the electron Debye length and examined the generation of the KH instability. The KH instability occurs in a manner similar to KH instabilities at fluid or ion scales, producing surface waves and rolled-up vortices. The strength and growth rate of the electron Debye scale KH instability are affected by the structure of the velocity shear layer: the strength depends on the magnitude of the velocity, and the growth rate on the velocity gradient of the shear layer. However, the development of the electron Debye scale KH instability is mainly determined by the electric field generated by charge separation. Significant mixing of electrons occurs across the shear layer, and a fraction of electrons can penetrate deeply into the opposite side, fairly far from the vortices across the shear layer.

  15. Three-dimensional simulations of MHD disk winds to hundred AU scale from the protostar

    Directory of Open Access Journals (Sweden)

    Staff Jan

    2014-01-01

    We present the results of four large-scale, three-dimensional magnetohydrodynamics simulations of jets launched from a Keplerian accretion disk. The jets are followed from the source out to 90 AU, a scale that covers several pixels of HST images of nearby protostellar jets. The four simulations analyzed correspond to four different initial magnetic field configurations threading the surface of the accretion disk, with varying degrees of openness of the field lines. Our simulations show that jets are heated along their length by many shocks, and we compute the line emission that is produced. We find excellent agreement with the observations and use these diagnostics to discriminate between different magnetic field configurations. A two-component jet emerges in simulations with less open field lines along the disk surface. The two components are physically and dynamically separated, with an inner fast, rotating jet and an outer slow jet. The second component weakens, and eventually a one-component jet (i.e. only the inner jet) is obtained for the most open field configurations. In all of our simulations we find that the faster inner component inherits the Keplerian profile and preserves it to large distances from the source. The outer component, on the other hand, is associated with velocity gradients mimicking rotation.

  16. Self-Adaptive Event-Driven Simulation of Multi-Scale Plasma Systems

    Science.gov (United States)

    Omelchenko, Yuri; Karimabadi, Homayoun

    2005-10-01

    Multi-scale plasmas pose a formidable computational challenge. The explicit time-stepping models suffer from the global CFL restriction. Efficient application of adaptive mesh refinement (AMR) to systems with irregular dynamics (e.g. turbulence, diffusion-convection-reaction, particle acceleration etc.) may be problematic. To address these issues, we developed an alternative approach to time stepping: self-adaptive discrete-event simulation (DES). DES has origin in operations research, war games and telecommunications. We combine finite-difference and particle-in-cell techniques with this methodology by assuming two caveats: (1) a local time increment, dt for a discrete quantity f can be expressed in terms of a physically meaningful quantum value, df; (2) f is considered to be modified only when its change exceeds df. Event-driven time integration is self-adaptive as it makes use of causality rules rather than parametric time dependencies. This technique enables asynchronous flux-conservative update of solution in accordance with local temporal scales, removes the curse of the global CFL condition, eliminates unnecessary computation in inactive spatial regions and results in robust and fast parallelizable codes. It can be naturally combined with various mesh refinement techniques. We discuss applications of this novel technology to diffusion-convection-reaction systems and hybrid simulations of magnetosonic shocks.
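    The two assumptions above can be illustrated with a toy event-driven integrator for 1D diffusion, in which each cell schedules its own next update at the time its change is predicted to reach the quantum df, so no global CFL-limited step exists. This is a loose sketch under stated assumptions, not the authors' production scheme (stale rates are simply re-evaluated lazily when an event is processed):

```python
import heapq
import math

def des_diffusion(u, d_coeff, df, t_end):
    """Toy discrete-event (DES) integration of 1D diffusion: cell i is
    updated by one quantum df only at its own locally scheduled time."""
    n = len(u)

    def rate(i):  # du/dt at cell i: D * discrete Laplacian, reflecting ends
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < n - 1 else u[i]
        return d_coeff * (left - 2.0 * u[i] + right)

    def next_time(i, now):  # time at which cell i accumulates a change df
        r = rate(i)
        return now + df / abs(r) if r != 0.0 else float("inf")

    events = [(next_time(i, 0.0), i) for i in range(n)]
    heapq.heapify(events)
    while events:
        t, i = heapq.heappop(events)
        if t > t_end:
            break
        r = rate(i)  # re-evaluate: neighbours may have changed meanwhile
        if r != 0.0:
            u[i] += math.copysign(df, r)  # apply one quantum of change
        heapq.heappush(events, (next_time(i, t), i))
    return u

# A step profile relaxes toward uniform without any global time step.
u = des_diffusion([1.0, 0.0, 0.0, 0.0], d_coeff=1.0, df=0.05, t_end=0.5)
```

    Inactive cells (zero rate) schedule their next event at infinity, which is the "eliminates unnecessary computation in inactive regions" property in miniature.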

  17. Simulating multi-scale oceanic processes around Taiwan on unstructured grids

    Science.gov (United States)

    Yu, Hao-Cheng; Zhang, Yinglong J.; Yu, Jason C. S.; Terng, C.; Sun, Weiling; Ye, Fei; Wang, Harry V.; Wang, Zhengui; Huang, Hai

    2017-11-01

    We validate a 3D unstructured-grid (UG) model for simulating the multi-scale processes that occur in the Northwestern Pacific around Taiwan, using recently developed techniques (Zhang et al., Ocean Modeling, 102, 64-81, 2016) that require no bathymetry smoothing, even in this region of prevalent steep bottom slopes and many islands. The focus is on short-term forecasts over several months rather than long-term variability. Compared with satellite products, the errors for the simulated Sea-Surface Height (SSH) and Sea-Surface Temperature (SST) are similar to those of a reference data-assimilated global model. In the nearshore region, comparison with 34 tide gauges around Taiwan indicates an average RMSE of 13 cm for tidal elevation. The average RMSE for SST at 6 coastal buoys is 1.2 °C. The mean transport and eddy kinetic energy compare reasonably with previously published values and with the reference model used to provide boundary and initial conditions. The model suggests a ∼2-day interruption of the Kuroshio east of Taiwan during a typhoon period. The effect of tidal mixing is shown to be significant nearshore. The multi-scale model is easily extendable to target regions of interest thanks to its UG framework and a flexible vertical gridding system, which is shown to be superior to terrain-following coordinates.

  18. Micromagnetic computer simulations of spin waves in nanometre-scale patterned magnetic elements

    International Nuclear Information System (INIS)

    Kim, Sang-Koog

    2010-01-01

    Current needs for further advances in the nanotechnologies of information-storage and -processing devices have attracted a great deal of interest in spin (magnetization) dynamics in nanometre-scale patterned magnetic elements. For instance, the unique dynamic characteristics of non-uniform magnetic microstructures such as various types of domain walls, magnetic vortices and antivortices, as well as spin wave dynamics in laterally restricted thin-film geometries, have been at the centre of extensive and intensive research. Understanding the fundamentals of their unique spin structure as well as their robust and novel dynamic properties allows us to implement new functionalities into existing or future devices. Although experimental tools and theoretical approaches are effective means of understanding the fundamentals of spin dynamics and of gaining new insights into them, the limitations of those same tools and approaches have left gaps of unresolved questions in the pertinent physics. As an alternative, however, micromagnetic modelling and numerical simulation has recently emerged as a powerful tool for the study of a variety of phenomena related to the spin dynamics of nanometre-scale magnetic elements. In this review paper, I summarize the recent results of simulations of the excitation, propagation and other novel wave characteristics of spin waves, highlighting how the micromagnetic computer simulation approach contributes to an understanding of the spin dynamics of nanomagnetism and considering some of the merits of numerical simulation studies. Many examples of micromagnetic modelling for numerical calculations, employing various dimensions and shapes of patterned magnetic elements, are given. The current limitations of continuum micromagnetic modelling and of simulations based on the Landau-Lifshitz-Gilbert equation of motion of magnetization are also discussed, along with further research directions for spin-wave studies.

  20. Validation and Simulation of Ares I Scale Model Acoustic Test - 2 - Simulations at 5 Foot Elevation for Evaluation of Launch Mount Effects

    Science.gov (United States)

    Strutzenberg, Louise L.; Putman, Gabriel C.

    2011-01-01

    The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well-documented set of high-fidelity measurements useful for validation, including data taken over a range of test conditions and containing phenomena like Ignition Over-Pressure and water suppression of acoustics. Expanding from initial simulations of the ASMAT setup in a held-down configuration, simulations have been performed using the Loci/CHEM computational fluid dynamics software for ASMAT tests of the vehicle at 5 ft. elevation (100 ft. real vehicle elevation) with worst-case drift in the direction of the launch tower. These tests have been performed without water suppression and have compared the acoustic emissions for launch structures with and without launch mounts. In addition, simulation results have also been compared to acoustic and imagery data collected from similar live-fire tests to assess the accuracy of the simulations. Simulations have shown a marked change in the pattern of emissions after removal of the launch mount, with a reduction in the overall acoustic environment experienced by the vehicle and the formation of highly directed acoustic waves moving across the platform deck. Comparisons of simulation results to live-fire test data showed good amplitude and temporal correlation, and imagery comparisons over the visible and infrared wavelengths showed qualitative capture of all plume and pressure wave evolution features.

  1. Cancer genetics meets biomolecular mechanism-bridging an age-old gulf.

    Science.gov (United States)

    González-Sánchez, Juan Carlos; Raimondi, Francesco; Russell, Robert B

    2018-02-01

    Increasingly available genomic sequencing data are exploited to identify genes and variants contributing to diseases, particularly cancer. Traditionally, methods to find such variants have relied heavily on allele frequency and/or familial history, often neglecting to consider any mechanistic understanding of their functional consequences. Thus, while the set of known cancer-related genes has increased, for many, their mechanistic role in the disease is not completely understood. This issue highlights a wide gap between the disciplines of genetics, which largely aims to correlate genetic events with phenotype, and molecular biology, which ultimately aims at a mechanistic understanding of biological processes. Fortunately, new methods and several systematic studies have proved illuminating for many disease genes and variants by integrating sequencing with mechanistic data, including biomolecular structures and interactions. These have provided new interpretations for known mutations and suggested new disease-relevant variants and genes. Here, we review these approaches and discuss particular examples where these have had a profound impact on the understanding of human cancers. © 2018 Federation of European Biochemical Societies.

  2. Scale issues in soil hydrology related to measurement and simulation: A case study in Colorado

    Science.gov (United States)

    State variables, such as soil water content (SWC), are typically measured or inferred at very small scales while being simulated at larger scales relevant to spatial management or hillslope areas. Thus there is an implicit spatial disparity that is often ignored. Surface runoff, on the other hand, ...

  3. A priori analysis of differential diffusion for model development for scale-resolving simulations

    Science.gov (United States)

    Hunger, Franziska; Dietzsch, Felix; Gauding, Michael; Hasse, Christian

    2018-01-01

    The present study analyzes differential diffusion and the mechanisms responsible for it at the turbulent/nonturbulent interface (TNTI), with special focus on model development for scale-resolving simulations. In order to analyze differences between resolved and subfilter phenomena, direct numerical simulation (DNS) data are compared with explicitly filtered data. The DNS database stems from a temporally evolving turbulent plane jet transporting two passive scalars with Schmidt numbers of unity and 0.25, presented by Hunger et al. [F. Hunger et al., J. Fluid Mech. 802, R5 (2016), 10.1017/jfm.2016.471]. The objective of this research is twofold: (i) to compare the position of the TNTI between the original DNS data and the filtered data and (ii) to analyze differential diffusion and the impact of the TNTI with regard to scale resolution in the filtered DNS data. For the latter, differential diffusion quantities are studied, clearly showing the decrease of differential diffusion at the resolved scales with increasing filter width. A transport equation for the scalar differences is evaluated. Finally, the physical mechanisms responsible for the separation of the two scalars (the existence of large scalar gradients, gradient alignment, and the diffusive fluxes) are compared between the resolved and subfilter scales.
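    The explicit filtering step that splits a DNS field into resolved and subfilter parts can be sketched with a simple 1D top-hat filter (an illustrative stand-in; the study filters 3D jet data and the filter kernel is an assumption here):

```python
def box_filter(field, width):
    """Explicit top-hat (box) filter of odd width with clamped end points.
    The resolved part of a field is its filtered value; the subfilter
    residual is field[i] - out[i]."""
    h = width // 2
    n = len(field)
    out = []
    for i in range(n):
        lo, hi = max(0, i - h), min(n, i + h + 1)
        out.append(sum(field[lo:hi]) / (hi - lo))
    return out

smoothed = box_filter([1.0, 2.0, 3.0, 4.0, 5.0], 3)
```

    Applying the same filter to both passive scalars and comparing their residuals as the width grows mirrors the paper's observation that differential diffusion at the resolved scales decreases with increasing filter width.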

  4. Atomistic simulations of materials: Methods for accurate potentials and realistic time scales

    Science.gov (United States)

    Tiwary, Pratyush

    This thesis deals with achieving more realistic atomistic simulations of materials, by developing accurate and robust force-fields, and algorithms for practical time scales. I develop a formalism for generating interatomic potentials for simulating atomistic phenomena occurring at energy scales ranging from lattice vibrations to crystal defects to high-energy collisions. This is done by fitting against an extensive database of ab initio results, as well as to experimental measurements for mixed oxide nuclear fuels. The applicability of these interactions to a variety of mixed environments beyond the fitting domain is also assessed. The employed formalism makes these potentials applicable across all interatomic distances without the need for any ambiguous splining to the well-established short-range Ziegler-Biersack-Littmark universal pair potential. We expect these to be reliable potentials for carrying out damage simulations (and molecular dynamics simulations in general) in nuclear fuels of varying compositions for all relevant atomic collision energies. A hybrid stochastic and deterministic algorithm is proposed that while maintaining fully atomistic resolution, allows one to achieve milliseconds and longer time scales for several thousands of atoms. The method exploits the rare event nature of the dynamics like other such methods, but goes beyond them by (i) not having to pick a scheme for biasing the energy landscape, (ii) providing control on the accuracy of the boosted time scale, (iii) not assuming any harmonic transition state theory (HTST), and (iv) not having to identify collective coordinates or interesting degrees of freedom. The method is validated by calculating diffusion constants for vacancy-mediated diffusion in iron metal at low temperatures, and comparing against brute-force high temperature molecular dynamics. We also calculate diffusion constants for vacancy diffusion in tantalum metal, where we compare against low-temperature HTST as well

  5. Plane-dependent ML scatter scaling: 3D extension of the 2D simulated single scatter (SSS) estimate

    Science.gov (United States)

    Rezaei, Ahmadreza; Salvo, Koen; Vahle, Thomas; Panin, Vladimir; Casey, Michael; Boada, Fernando; Defrise, Michel; Nuyts, Johan

    2017-08-01

    Scatter correction is typically done using a simulation of the single scatter, which is then scaled to account for multiple scatters and other possible model mismatches. This scaling factor is determined by fitting the simulated scatter sinogram to the measured sinogram, using only counts measured along LORs that do not intersect the patient body, i.e. ‘scatter-tails’. Extending previous work, we propose to scale the scatter with a plane dependent factor, which is determined as an additional unknown in the maximum likelihood (ML) reconstructions, using counts in the entire sinogram rather than only the ‘scatter-tails’. The ML-scaled scatter estimates are validated using a Monte-Carlo simulation of a NEMA-like phantom, a phantom scan with typical contrast ratios of a 68Ga-PSMA scan, and 23 whole-body 18F-FDG patient scans. On average, we observe a 12.2% change in the total amount of tracer activity of the MLEM reconstructions of our whole-body patient database when the proposed ML scatter scales are used. Furthermore, reconstructions using the ML-scaled scatter estimates are found to eliminate the typical ‘halo’ artifacts that are often observed in the vicinity of high focal uptake regions.
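    The conventional "scatter-tails" fit that the paper extends can be sketched as a single scale factor fitted over tail bins only. The least-squares form and the toy data below are assumptions for illustration; the plane-dependent ML scaling itself is not reproduced:

```python
def tail_scale(measured, simulated, tail_mask):
    """Least-squares scale factor fitting the simulated single-scatter
    sinogram to the measured sinogram over 'scatter-tail' bins only,
    i.e. bins along LORs that do not intersect the body."""
    num = sum(m * s for m, s, t in zip(measured, simulated, tail_mask) if t)
    den = sum(s * s for s, t in zip(simulated, tail_mask) if t)
    return num / den

# Toy sinogram: measured tails are 1.4x the simulated single scatter;
# the centre bin (mask False) contains true activity and is ignored.
sim = [2.0, 3.0, 5.0, 3.0, 2.0]
meas = [2.8, 4.2, 40.0, 4.2, 2.8]
mask = [True, True, False, True, True]
scale = tail_scale(meas, sim, mask)
```

    The paper's approach instead estimates one such scale per plane as an extra unknown inside the ML reconstruction, using counts from the whole sinogram rather than the tails alone.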

  6. Overcoming the solubility limit with solubility-enhancement tags: successful applications in biomolecular NMR studies

    International Nuclear Information System (INIS)

    Zhou Pei; Wagner, Gerhard

    2010-01-01

    Although the rapid progress of NMR technology has significantly expanded the range of NMR-trackable systems, preparation of NMR-suitable samples that are highly soluble and stable remains a bottleneck for studies of many biological systems. The application of solubility-enhancement tags (SETs) has been highly effective in overcoming solubility and sample stability issues and has enabled structural studies of important biological systems previously deemed unapproachable by solution NMR techniques. In this review, we provide a brief survey of the development and successful applications of the SET strategy in biomolecular NMR. We also comment on the criteria for choosing optimal SETs, such as for differently charged target proteins, and recent new developments on NMR-invisible SETs.

  7. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Science.gov (United States)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all CPs were met at the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility of this approach for decision making in large-scale watershed simulation-optimization problems.

  8. Large-scale simulations of error-prone quantum computation devices

    Energy Technology Data Exchange (ETDEWEB)

    Trieu, Doan Binh

    2009-07-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. In practice, however, all quantum computation devices suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), which simulates a generic quantum computer at gate level and comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than corrected. Fault-tolerant methods can overcome this problem, provided that the single-qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10⁻⁶. For Gaussian-distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced

  9. Solving the 0/1 Knapsack Problem by a Biomolecular DNA Computer

    Directory of Open Access Journals (Sweden)

    Hassan Taghipour

    2013-01-01

    Solving some mathematical problems, such as NP-complete problems, with conventional silicon-based computers is problematic and takes a very long time. DNA computing is an alternative method of computing which uses DNA molecules for computing purposes. DNA computers have massive degrees of parallel processing capability, which is of particular interest for solving NP-complete and hard combinatorial problems. NP-complete problems such as the knapsack problem can be solved by DNA computers in a very short time compared to conventional silicon-based computers. Sticker-based DNA computing is one of the methods of DNA computing. In this paper, sticker-based DNA computing was used to solve the 0/1 knapsack problem. First, a biomolecular solution space was constructed using appropriate DNA memory complexes. Then, by the application of a sticker-based parallel algorithm using biological operations, the knapsack problem was resolved in polynomial time.
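    The sticker model encodes every candidate subset of items as one DNA memory complex and filters all of them in parallel. On a conventional computer the same solution space can only be enumerated sequentially, which is exactly why it takes exponential rather than polynomial time. A brute-force sketch (names and data are illustrative):

```python
from itertools import product

def knapsack_01(weights, values, capacity):
    """Brute-force 0/1 knapsack: enumerate all 2^n bit strings, the serial
    analogue of the sticker-model solution space in which each 'memory
    complex' encodes one candidate subset of items."""
    best_value, best_subset = 0, tuple(0 for _ in weights)
    for bits in product((0, 1), repeat=len(weights)):
        w = sum(b * wt for b, wt in zip(bits, weights))
        if w <= capacity:  # the 'separation' step: keep feasible complexes
            v = sum(b * val for b, val in zip(bits, values))
            if v > best_value:
                best_value, best_subset = v, bits
    return best_value, best_subset

# Items (weight, value): (2,3), (3,4), (4,5), (5,6); capacity 5.
value, subset = knapsack_01([2, 3, 4, 5], [3, 4, 5, 6], capacity=5)
```

    A DNA computer performs the feasibility filtering over all complexes simultaneously, which is the source of the claimed polynomial running time.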

  10. Piezoelectric tuning fork biosensors for the quantitative measurement of biomolecular interactions

    International Nuclear Information System (INIS)

    Gonzalez, Laura; Maria Benito, Angel; Puig-Vidal, Manel; Otero, Jorge; Rodrigues, Mafalda; Pérez-García, Lluïsa

    2015-01-01

    The quantitative measurement of biomolecular interactions is of great interest in molecular biology. Atomic force microscopy (AFM) has proved its capacity to act as a biosensor and determine the affinity between biomolecules of interest. Nevertheless, the detection scheme presents certain limitations when it comes to developing a compact biosensor. Recently, piezoelectric quartz tuning forks (QTFs) have been used as laser-free detection sensors for AFM. However, only a few studies along these lines have considered soft biological samples, and even fewer constitute quantified molecular recognition experiments. Here, we demonstrate the capacity of QTF probes to perform specific interaction measurements between biotin-streptavidin complexes in buffer solution. We propose in this paper a variant of dynamic force spectroscopy based on representing adhesion energies E (aJ) against pulling rates v (nm s⁻¹). Our results are compared with conventional AFM measurements and show the great potential of these sensors in molecular interaction studies.

  11. XML-based approaches for the integration of heterogeneous bio-molecular data.

    Science.gov (United States)

    Mesiti, Marco; Jiménez-Ruiz, Ernesto; Sanz, Ismael; Berlanga-Llavori, Rafael; Perlasca, Paolo; Valentini, Giorgio; Manset, David

    2009-10-15

    Today's public database infrastructure spans a very large collection of heterogeneous biological data, opening new opportunities for molecular biology, biomedical and bioinformatics research, but also raising new problems for their integration and computational processing. In this paper we survey the most interesting and novel approaches for the representation, integration and management of different kinds of biological data by exploiting XML and related recommendations and approaches. Moreover, we present new and interesting cutting-edge approaches for the appropriate management of heterogeneous biological data represented in XML. XML has succeeded in the integration of heterogeneous biomolecular information and has established itself as the syntactic glue for biological data sources. Nevertheless, a large variety of XML-based data formats have been proposed, making effective integration of bioinformatics data schemes difficult. The adoption of a few semantically rich standard formats is urgently needed to achieve a seamless integration of current biological resources.

  12. Numerical simulation of small-scale mixing processes in the upper ocean and atmospheric boundary layer

    International Nuclear Information System (INIS)

    Druzhinin, O; Troitskaya, Yu; Zilitinkevich, S

    2016-01-01

    The processes of turbulent mixing and momentum and heat exchange occur in the upper ocean at depths up to several dozens of meters, and in the atmospheric boundary layer on scales of millimeters to dozens of meters, and cannot be resolved by known large-scale climate models. Thus small-scale processes need to be parameterized with respect to large-scale fields. This parameterization involves the so-called bulk coefficients, which relate turbulent fluxes to the gradients of large-scale fields. The bulk coefficients depend on the properties of the small-scale mixing processes, which are affected by upper-ocean stratification and the characteristics of surface and internal waves. These dependencies are not well understood at present and need to be clarified. We employ Direct Numerical Simulation (DNS) as a research tool which resolves all relevant flow scales and does not require the closure assumptions typical of Large-Eddy and Reynolds-Averaged Navier-Stokes simulations (LES and RANS). Thus DNS provides a solid ground for correct parameterization of small-scale mixing processes and can also be used for improving LES and RANS closure models. In particular, we discuss the interaction between small-scale turbulence and internal gravity waves propagating in the pycnocline of the upper ocean, as well as the impact of surface waves on the properties of the atmospheric boundary layer over a wavy water surface.

  13. Hybrid Quantum Mechanics/Molecular Mechanics/Coarse Grained Modeling: A Triple-Resolution Approach for Biomolecular Systems.

    Science.gov (United States)

    Sokkar, Pandian; Boulanger, Eliot; Thiel, Walter; Sanchez-Garcia, Elsa

    2015-04-14

    We present a hybrid quantum mechanics/molecular mechanics/coarse-grained (QM/MM/CG) multiresolution approach for solvated biomolecular systems. The chemically important active-site region is treated at the QM level. The biomolecular environment is described by an atomistic MM force field, and the solvent is modeled with the CG Martini force field using standard or polarizable (pol-CG) water. Interactions within the QM, MM, and CG regions, and between the QM and MM regions, are treated in the usual manner, whereas the CG-MM and CG-QM interactions are evaluated using the virtual sites approach. The accuracy and efficiency of our implementation are tested for two enzymes, chorismate mutase (CM) and p-hydroxybenzoate hydroxylase (PHBH). In CM, the QM/MM/CG potential energy scans along the reaction coordinate yield reaction energies that are too large for both the standard and polarizable Martini CG water models, which can be attributed to adverse effects of using large CG water beads. The inclusion of an atomistic MM water layer (10 Å for uncharged CG water and 5 Å for polarizable CG water) around the QM region improves the energy profiles compared to the reference QM/MM calculations. In analogous QM/MM/CG calculations on PHBH, the use of the pol-CG description for the outer water does not affect the stabilization of the highly charged FADHOOH-pOHB transition state compared to the fully atomistic QM/MM calculations. Detailed performance analysis in a glycine-water model system indicates that computation times for QM energy and gradient evaluations at the density functional level are typically reduced by 40-70% for QM/MM/CG relative to fully atomistic QM/MM calculations.

  14. Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.

    Science.gov (United States)

    Serebrinsky, Santiago A

    2011-03-01

    We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
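    The rejection-free (BKL) scheme named in this abstract can be illustrated with a minimal textbook-style sketch (not the paper's own derivation): each step selects an event with probability proportional to its rate, and physical time advances by an exponentially distributed increment Δt = −ln(u)/R, where R is the total rate and u is uniform on (0, 1].

```python
import math
import random

def bkl_step(rates, rng=random):
    """One rejection-free (BKL) kinetic Monte Carlo step.

    rates: sequence of transition rates r_i (in 1/time) for all events
    currently possible.  Returns (event_index, dt), where dt is the
    physical time increment, exponentially distributed with mean 1/R.
    """
    r_total = sum(rates)
    # Pick event i with probability r_i / R by scanning the cumulative sum.
    target = rng.random() * r_total
    acc = 0.0
    event = len(rates) - 1
    for i, r in enumerate(rates):
        acc += r
        if acc >= target:
            event = i
            break
    # Physical time advance; 1 - random() lies in (0, 1], so log is safe.
    dt = -math.log(1.0 - rng.random()) / r_total
    return event, dt

# Toy usage: three competing events with rates 1, 2 and 3 per unit time.
random.seed(42)
t = 0.0
for _ in range(1000):
    _, dt = bkl_step([1.0, 2.0, 3.0])
    t += dt
# After 1000 steps, t clusters around 1000 / (1 + 2 + 3) ≈ 167 time units.
```

This is the sense in which Monte Carlo steps become unnecessary as a time unit: the accumulated dt is itself the physical clock.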

  15. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    Science.gov (United States)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
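    The additive lagged Fibonacci generator named above follows the recurrence x_n = (x_{n−s} + x_{n−r}) mod 2^m. A minimal software sketch follows; the FPGA-parallel version from the study is not reproduced here, and the lags (5, 17) are a common textbook choice, not necessarily the authors'.

```python
class ALFG:
    """Additive lagged Fibonacci generator:
    x_n = (x_{n-s} + x_{n-r}) mod 2**m, with lags s < r.

    The state is a circular buffer of the last r outputs; at least one
    seed word must be odd for the generator to reach its maximal period.
    """

    def __init__(self, seed_state, s=5, r=17, m=32):
        if len(seed_state) != r:
            raise ValueError("seed_state must contain exactly r words")
        if not any(x & 1 for x in seed_state):
            raise ValueError("at least one seed word must be odd")
        self.state = list(seed_state)
        self.s, self.r = s, r
        self.mask = (1 << m) - 1
        self.n = 0

    def next(self):
        # state[n % r] holds x_{n-r}; state[(n - s) % r] holds x_{n-s}.
        i = self.n % self.r
        x = (self.state[(self.n - self.s) % self.r] + self.state[i]) & self.mask
        self.state[i] = x  # overwrite the oldest word with x_n
        self.n += 1
        return x

# Toy usage: seed the 17-word state from a simple counter (word 1 is odd).
gen = ALFG(seed_state=[i + 1 for i in range(17)])
sample = [gen.next() for _ in range(5)]
```

The appeal for FPGAs is visible in the update: one addition, one mask, and fixed-offset buffer reads, with no multiplications.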

  16. Knowledge environments representing molecular entities for the virtual physiological human.

    Science.gov (United States)

    Hofmann-Apitius, Martin; Fluck, Juliane; Furlong, Laura; Fornes, Oriol; Kolárik, Corinna; Hanser, Susanne; Boeker, Martin; Schulz, Stefan; Sanz, Ferran; Klinger, Roman; Mevissen, Theo; Gattermayer, Tobias; Oliva, Baldo; Friedrich, Christoph M

    2008-09-13

    In essence, the virtual physiological human (VPH) is a multiscale representation of human physiology spanning from the molecular level via cellular processes and multicellular organization of tissues to complex organ function. The different scales of the VPH deal with different entities, relationships and processes, and in consequence the models used to describe and simulate biological functions vary significantly. Here, we describe methods and strategies to generate knowledge environments representing molecular entities that can be used for modelling the molecular scale of the VPH. Our strategy to generate knowledge environments representing molecular entities is based on the combination of information extraction from scientific text and the integration of information from biomolecular databases. We introduce @neuLink, a first prototype of an automatically generated, disease-specific knowledge environment combining biomolecular, chemical, genetic and medical information. Finally, we provide a perspective for the future implementation and use of knowledge environments representing molecular entities for the VPH.

  17. Multi-scale simulations of droplets in generic time-dependent flows

    Science.gov (United States)

    Milan, Felix; Biferale, Luca; Sbragaglia, Mauro; Toschi, Federico

    2017-11-01

    We study the deformation and dynamics of droplets in time-dependent flows using a diffuse interface model for two immiscible fluids. The numerical simulations are first benchmarked against analytical results for steady droplet deformation, and then extended to the more interesting case of time-dependent flows. The results of these time-dependent numerical simulations are compared against analytical models available in the literature, which assume the droplet shape to be an ellipsoid at all times, with time-dependent major and minor axes. In particular we investigate the time-dependent deformation of a confined droplet in an oscillating Couette flow over the entire capillary range up to droplet break-up. In this way these multi-component simulations prove to be a useful tool to establish from ``first principles'' the dynamics of droplets in complex flows involving multiple scales. Funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No 642069, and by the European Research Council under the European Community's Seventh Framework Program, ERC Grant Agreement No 339032.

  18. A Novel Multi-scale Simulation Strategy for Turbulent Reacting Flows

    Energy Technology Data Exchange (ETDEWEB)

    James, Sutherland [University of Utah

    2018-04-12

    In this project, a new methodology was proposed to bridge the gap between Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES). This novel methodology, titled Lattice-Based Multiscale Simulation (LBMS), creates a lattice structure of One-Dimensional Turbulence (ODT) models. ODT has been shown to capture turbulent combustion with high fidelity by fully resolving interactions between turbulence and diffusion. By creating a lattice of coupled ODT models, LBMS overcomes the shortcoming of ODT, namely its inability to capture large-scale, three-dimensional flow structures. At the same time, by spacing the lattice lines significantly apart, LBMS avoids the curse of dimensionality that makes the computational cost of DNS untenable. This project has shown that LBMS is capable of reproducing statistics of isotropic turbulent flows while coarsening the spacing between lines significantly. It also investigates and resolves issues that arise when coupling ODT lines, such as flux reconstruction perpendicular to a given ODT line, preservation of conserved quantities when eddies cross a coarse cell volume, and the application of boundary conditions. Robust parallelization is also investigated.

  19. Topology of Large-Scale Structure by Galaxy Type: Hydrodynamic Simulations

    Science.gov (United States)

    Gott, J. Richard, III; Cen, Renyue; Ostriker, Jeremiah P.

    1996-07-01

    The topology of large-scale structure is studied as a function of galaxy type using the genus statistic. In hydrodynamical cosmological cold dark matter simulations, galaxies form on caustic surfaces (Zeldovich pancakes) and then slowly drain onto filaments and clusters. The earliest forming galaxies in the simulations (defined as "ellipticals") are thus seen at the present epoch preferentially in clusters (tending toward a meatball topology), while the latest forming galaxies (defined as "spirals") are seen currently in a spongelike topology. The topology is measured by the genus (number of "doughnut" holes minus number of isolated regions) of the smoothed density-contour surfaces. The measured genus curve for all galaxies as a function of density obeys approximately the theoretical curve expected for random-phase initial conditions, but the early-forming elliptical galaxies show a shift toward a meatball topology relative to the late-forming spirals. Simulations using standard biasing schemes fail to show such an effect. Large observational samples separated by galaxy type could be used to test for this effect.
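    The theoretical genus curve for random-phase (Gaussian) initial conditions referred to in this abstract has the standard form g(ν) ∝ (1 − ν²) exp(−ν²/2), where ν is the density threshold in units of the standard deviation. A minimal sketch, with the amplitude (which depends on the power spectrum and smoothing scale) left as a free parameter:

```python
import math

def gaussian_genus(nu, amplitude=1.0):
    """Genus per unit volume of iso-density surfaces of a Gaussian
    random field at threshold nu (density contrast in units of sigma):

        g(nu) = A * (1 - nu**2) * exp(-nu**2 / 2)

    g > 0 for |nu| < 1 (sponge-like: more holes than isolated regions);
    g < 0 for |nu| > 1 (isolated clusters or voids dominate).
    """
    return amplitude * (1.0 - nu * nu) * math.exp(-0.5 * nu * nu)
```

A shift of a measured curve toward negative genus at high-density thresholds, relative to this symmetric Gaussian form, is the "meatball" signature discussed above.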

  20. ROSA-IV Large Scale Test Facility (LSTF) system description for second simulated fuel assembly

    International Nuclear Information System (INIS)

    1990-10-01

    The ROSA-IV Program's Large Scale Test Facility (LSTF) is a test facility for integral simulation of thermal-hydraulic response of a pressurized water reactor (PWR) during small break loss-of-coolant accidents (LOCAs) and transients. In this facility, the PWR core nuclear fuel rods are simulated using electric heater rods. The simulated fuel assembly which was installed during the facility construction was replaced with a new one in 1988. The first test with this second simulated fuel assembly was conducted in December 1988. This report describes the facility configuration and characteristics as of this date (December 1988) including the new simulated fuel assembly design and the facility changes which were made during the testing with the first assembly as well as during the renewal of the simulated fuel assembly. (author)

  1. Numerical Simulation of a Laboratory-Scale Turbulent Slot Flame

    Energy Technology Data Exchange (ETDEWEB)

    Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski,Michael J.; Driscoll, James F.; Filatyev, Sergei A.

    2006-04-20

    We present three-dimensional, time-dependent simulations of the flowfield of a laboratory-scale slot burner. The simulations are performed using an adaptive time-dependent low Mach number combustion algorithm based on a second-order projection formulation that conserves both species mass and total enthalpy. The methodology incorporates detailed chemical kinetics and a mixture model for differential species diffusion. Methane chemistry and transport are modeled using the DRM-19 mechanism along with its associated thermodynamics and transport databases. Adaptive mesh refinement dynamically resolves the flame and turbulent structures. Detailed comparisons with experimental measurements show that the computational results provide a good prediction of the flame height, the shape of the time-averaged parabolic flame surface area, and the global consumption speed (the volume per second of reactants consumed divided by the area of the time-averaged flame). The thickness of the computed flame brush increases in the streamwise direction, and the flame surface density profiles display the same general shapes as the experiment. The structure of the simulated flame also matches the experiment; reaction layers are thin (typically thinner than 1 mm) and the wavelengths of large wrinkles are 5-10 mm. Wrinkles amplify to become long fingers of reactants which burn through at a neck region, forming isolated pockets of reactants. Thus both the simulated flame and the experiment are in the "corrugated flamelet regime."

  2. PARSEC-SCALE FARADAY ROTATION MEASURES FROM GENERAL RELATIVISTIC MAGNETOHYDRODYNAMIC SIMULATIONS OF ACTIVE GALACTIC NUCLEUS JETS

    International Nuclear Information System (INIS)

    Broderick, Avery E.; McKinney, Jonathan C.

    2010-01-01

    It is now possible to compare global three-dimensional general relativistic magnetohydrodynamic (GRMHD) jet formation simulations directly to multi-wavelength polarized VLBI observations of the pc-scale structure of active galactic nucleus (AGN) jets. Unlike the jet emission, which requires post hoc modeling of the nonthermal electrons, the Faraday rotation measures (RMs) depend primarily upon simulated quantities and thus provide a direct way to confront simulations with observations. We compute RM distributions of a three-dimensional global GRMHD jet formation simulation, extrapolated in a self-consistent manner to ∼10 pc scales, and explore the dependence upon model and observational parameters, emphasizing the signatures of structures generic to the theory of MHD jets. With typical parameters, we find that it is possible to reproduce the observed magnitudes and many of the structures found in AGN jet RMs, including the presence of transverse RM gradients. In our simulations, the RMs are generated in the circum-jet material, hydrodynamically a smooth extension of the jet itself, containing ordered toroidally dominated magnetic fields. This results in a particular bilateral morphology that is unlikely to arise due to Faraday rotation in distant foreground clouds. However, critical to efforts to probe the Faraday screen will be resolving the transverse jet structure. Therefore, the RMs of radio cores may not be reliable indicators of the properties of the rotating medium. Finally, we are able to constrain the particle content of the jet, finding that at pc scales AGN jets are electromagnetically dominated, with roughly 2% of the comoving energy in nonthermal leptons and much less in baryons.

  3. PARSEC-SCALE FARADAY ROTATION MEASURES FROM GENERAL RELATIVISTIC MAGNETOHYDRODYNAMIC SIMULATIONS OF ACTIVE GALACTIC NUCLEUS JETS

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, Avery E [Canadian Institute for Theoretical Astrophysics, 60 St. George St., Toronto, ON M5S 3H8 (Canada); McKinney, Jonathan C., E-mail: aeb@cita.utoronto.c, E-mail: jmckinne@stanford.ed [Department of Physics and Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, CA 94305-4060 (United States)

    2010-12-10

    It is now possible to compare global three-dimensional general relativistic magnetohydrodynamic (GRMHD) jet formation simulations directly to multi-wavelength polarized VLBI observations of the pc-scale structure of active galactic nucleus (AGN) jets. Unlike the jet emission, which requires post hoc modeling of the nonthermal electrons, the Faraday rotation measures (RMs) depend primarily upon simulated quantities and thus provide a direct way to confront simulations with observations. We compute RM distributions of a three-dimensional global GRMHD jet formation simulation, extrapolated in a self-consistent manner to {approx}10 pc scales, and explore the dependence upon model and observational parameters, emphasizing the signatures of structures generic to the theory of MHD jets. With typical parameters, we find that it is possible to reproduce the observed magnitudes and many of the structures found in AGN jet RMs, including the presence of transverse RM gradients. In our simulations, the RMs are generated in the circum-jet material, hydrodynamically a smooth extension of the jet itself, containing ordered toroidally dominated magnetic fields. This results in a particular bilateral morphology that is unlikely to arise due to Faraday rotation in distant foreground clouds. However, critical to efforts to probe the Faraday screen will be resolving the transverse jet structure. Therefore, the RMs of radio cores may not be reliable indicators of the properties of the rotating medium. Finally, we are able to constrain the particle content of the jet, finding that at pc scales AGN jets are electromagnetically dominated, with roughly 2% of the comoving energy in nonthermal leptons and much less in baryons.

  4. Theoretical and Numerical Properties of a Gyrokinetic Plasma: Issues Related to Transport Time Scale Simulation

    International Nuclear Information System (INIS)

    Lee, W.W.

    2003-01-01

    Particle simulation has played an important role in recent investigations of turbulence in magnetically confined plasmas. In this paper, theoretical and numerical properties of a gyrokinetic plasma as well as its relationship with magnetohydrodynamics (MHD) are discussed with the ultimate aim of simulating microturbulence on the transport time scale using massively parallel computers.

  5. Frequency-scanning MALDI linear ion trap mass spectrometer for large biomolecular ion detection.

    Science.gov (United States)

    Lu, I-Chung; Lin, Jung Lee; Lai, Szu-Hsueh; Chen, Chung-Hsuan

    2011-11-01

    This study presents the first report on the development of a matrix-assisted laser desorption ionization (MALDI) linear ion trap mass spectrometer for large biomolecular ion detection by frequency scan. We designed, installed, and tested this radio frequency (RF) scan linear ion trap mass spectrometer and its associated electronics to dramatically extend the mass region that can be detected. The RF circuit can be adjusted from 300 kHz down to 10 kHz with a set of operational amplifiers. To trap the ions produced by MALDI, a high pressure of helium buffer gas was employed to quench the extra kinetic energy of the heavy ions. The successful detection of singly charged secretory immunoglobulin A ions indicates that the detectable mass-to-charge ratio (m/z) of this system can reach ~385 000 or beyond.

  6. Large-Scale Brain Simulation and Disorders of Consciousness. Mapping Technical and Conceptual Issues

    Directory of Open Access Journals (Sweden)

    Michele Farisco

    2018-04-01

    Modeling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain’s operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain with the aim of overcoming the actual fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders of consciousness (DOCs), e.g., Coma, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State. Notwithstanding their technical limitations, we suggest that simulation technologies may offer new solutions to old practical problems, particularly in clinical contexts. We take DOCs as an illustrative case, arguing that the simulation of neural correlates of consciousness is potentially useful for improving treatments of patients with DOCs.

  7. Dislocations and elementary processes of plasticity in FCC metals: atomic scale simulations

    Energy Technology Data Exchange (ETDEWEB)

    Rodney, D

    2000-07-01

    We present atomic-scale simulations of two elementary processes of FCC crystal plasticity. The first study consists of molecular dynamics simulations, in a nickel crystal, of the interactions between an edge dislocation and glissile interstitial loops of the type that forms under irradiation in displacement cascades. The simulations show various atomic-scale interaction processes leading to the absorption and drag of the loops by the dislocation. These reactions certainly contribute to the formation of the 'clear bands' observed in deformed irradiated materials. The simulations also allow a quantitative study of the role of the glissile loops in irradiation hardening. In particular, dislocation unpinning stresses for certain pinning mechanisms are evaluated from the simulations. The second study consists first of the generalization to three dimensions of the quasi-continuum method (QCM), a multi-scale simulation method which couples atomistic techniques with the finite element method. In the QCM, regions close to dislocation cores are simulated at the atomic scale while the rest of the crystal is simulated at lower resolution by means of a discretization of the displacement fields using the finite element method. The QCM is then tested on the simulation of the formation and breaking of dislocation junctions in an aluminum crystal. Comparison of the simulations with an elastic model of dislocation junctions shows that the structure and strength of the junctions are dominated by elastic line-tension effects, as is assumed in classical theories. (author)

  8. The mechanical design and simulation of a scaled H{sup −} Penning ion source

    Energy Technology Data Exchange (ETDEWEB)

    Rutter, T., E-mail: theo.rutter@stfc.ac.uk; Faircloth, D.; Turner, D.; Lawrie, S. [Rutherford Appleton Laboratory, Didcot OX110QX (United Kingdom)

    2016-02-15

    The existing ISIS Penning H{sup −} source is unable to produce the beam parameters required for the front end test stand and so a new, high duty factor, high brightness scaled source is being developed. This paper details first the development of an electrically biased aperture plate for the existing ISIS source and second, the design, simulation, and development of a prototype scaled source.

  9. Multi-scale approach in numerical reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Guedes, Solange da Silva

    1998-07-01

    Advances in petroleum reservoir descriptions have provided an amount of data that cannot be handled directly during numerical simulations. This detailed geological information must be incorporated into a coarser model during multiphase fluid flow simulations by means of some upscaling technique. The most common approach is the use of pseudo relative permeabilities, and the most widely used method is that of Kyte and Berry (1975). In this work, a multi-scale computational model for multiphase flow is proposed that treats the upscaling implicitly, without using pseudo functions. By solving a sequence of local problems on subdomains of the refined scale, it is possible to achieve results on a coarser grid without the expensive computations of a fine grid model. The main advantage of this new procedure is to treat the upscaling step implicitly in the solution process, overcoming some practical difficulties related to the use of traditional pseudo functions. Results of two-dimensional two-phase flow simulations considering homogeneous porous media are presented. Some examples compare the results of this approach with those of the commercial upscaling program PSEUDO, a module of the reservoir simulation software ECLIPSE. (author)

  10. Simulation for scale-up of a confined jet mixer for continuous hydrothermal flow synthesis of nanomaterials

    OpenAIRE

    Ma, CY; Liu, JJ; Zhang, Y; Wang, XZ

    2015-01-01

    Reactor performance of confined jet mixers for continuous hydrothermal flow synthesis of nanomaterials is investigated for the purpose of scale-up from laboratory scale to pilot-plant scale. Computational fluid dynamics (CFD) models were applied to simulate hydrothermal fluid flow, mixing and heat transfer behaviours in the reactors at different volumetric scale-up ratios (up to 26 times). The distributions of flow and heat transfer variables were obtained using ANSYS Fluent with the tracer c...

  11. Asymmetric fluid criticality. II. Finite-size scaling for simulations.

    Science.gov (United States)

    Kim, Young C; Fisher, Michael E

    2003-10-01

    The vapor-liquid critical behavior of intrinsically asymmetric fluids is studied in finite systems of linear dimensions L, focusing on periodic boundary conditions, as appropriate for simulations. The recently propounded "complete" thermodynamic (L → ∞) scaling theory incorporating pressure mixing in the scaling fields as well as corrections to scaling [Phys. Rev. E 67, 061506 (2003)] is extended to finite L, initially in a grand canonical representation. The theory allows for a Yang-Yang anomaly in which, when L → ∞, the second temperature derivative (d²μ_σ/dT²) of the chemical potential along the phase boundary μ_σ(T) diverges when T → Tc−. The finite-size behavior of various special critical loci in the temperature-density or (T, ρ) plane, in particular, the k-inflection susceptibility loci and the Q-maximal loci, derived from Q_L(T; ⟨ρ⟩_L) ≡ ⟨m²⟩²_L/⟨m⁴⟩_L where m ≡ ρ − ⟨ρ⟩_L, is carefully elucidated and shown to be of value in estimating Tc and ρc. Concrete illustrations are presented for the hard-core square-well fluid and for the restricted primitive model electrolyte, including an estimate of the correlation exponent ν that confirms Ising-type character. The treatment is extended to the canonical representation where further complications appear.

  12. Proceedings of joint meeting of the 6th simulation science symposium and the NIFS collaboration research 'large scale computer simulation'

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-03-01

    Joint meeting of the 6th Simulation Science Symposium and the NIFS Collaboration Research 'Large Scale Computer Simulation' was held on December 12-13, 2002 at the National Institute for Fusion Science, with the aim of promoting interdisciplinary collaborations in various fields of computer simulations. The meeting, attended by more than 40 people, consisted of 11 invited and 22 contributed papers, whose topics extended not only to fusion science but also to related fields such as astrophysics, earth science, fluid dynamics, molecular dynamics, computer science, etc. (author)

  13. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    DEFF Research Database (Denmark)

    Rogowski, K.; Hansen, Martin Otto Laver; Maroński, R.

    2016-01-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads

  14. pH in atomic scale simulations of electrochemical interfaces

    DEFF Research Database (Denmark)

    Rossmeisl, Jan; Chan, Karen; Ahmed, Rizwan

    2013-01-01

    Electrochemical reaction rates can strongly depend on pH, and there is increasing interest in electrocatalysis in alkaline solution. To date, no method has been devised to address pH in atomic scale simulations. We present a simple method to determine the atomic structure of the metal|solution interface at a given pH and electrode potential. Using Pt(111)|water as an example, we show the effect of pH on the interfacial structure, and discuss its impact on reaction energies and barriers. This method paves the way for ab initio studies of pH effects on the structure and electrocatalytic activity...

  15. Large-scale agent-based social simulation : A study on epidemic prediction and control

    NARCIS (Netherlands)

    Zhang, M.

    2016-01-01

    Large-scale agent-based social simulation is gradually proving to be a versatile methodological approach for studying human societies, which could make contributions from policy making in social science, to distributed artificial intelligence and agent technology in computer science, and to theory

  16. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling of a virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. The large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  17. A small-scale dynamo in feedback-dominated galaxies - III. Cosmological simulations

    Science.gov (United States)

    Rieder, Michael; Teyssier, Romain

    2017-12-01

    Magnetic fields are widely observed in the Universe in virtually all astrophysical objects, from individual stars to entire galaxies, even in the intergalactic medium, but their specific genesis has long been debated. Due to the development of more realistic models of galaxy formation, viable scenarios are emerging to explain cosmic magnetism, thanks to both deeper observations and more efficient and accurate computer simulations. We present here a new cosmological high-resolution zoom-in magnetohydrodynamic (MHD) simulation, using the adaptive mesh refinement technique, of a dwarf galaxy with an initially weak and uniform magnetic seed field that is amplified by a small-scale dynamo (SSD) driven by supernova-induced turbulence. As first structures form from the gravitational collapse of small density fluctuations, the frozen-in magnetic field separates from the cosmic expansion and grows through compression. In a second step, star formation sets in and establishes a strong galactic fountain, self-regulated by supernova explosions. Inside the galaxy, the interstellar medium becomes highly turbulent, dominated by strong supersonic shocks, as demonstrated by the spectral analysis of the gas kinetic energy. In this turbulent environment, the magnetic field is quickly amplified via a SSD process and is finally carried out into the circumgalactic medium by a galactic wind. This realistic cosmological simulation explains how initially weak magnetic seed fields can be amplified quickly in early, feedback-dominated galaxies, and predicts, as a consequence of the SSD process, that high-redshift magnetic fields are likely to be dominated by their small-scale components.

  18. Evaluation of scalar mixing and time scale models in PDF simulations of a turbulent premixed flame

    Energy Technology Data Exchange (ETDEWEB)

    Stoellinger, Michael; Heinz, Stefan [Department of Mathematics, University of Wyoming, Laramie, WY (United States)

    2010-09-15

    Numerical simulation results obtained with a transported scalar probability density function (PDF) method are presented for a piloted turbulent premixed flame. The accuracy of the PDF method depends on the scalar mixing model and the scalar time scale model. Three widely used scalar mixing models are evaluated: the interaction by exchange with the mean (IEM) model, the modified Curl's coalescence/dispersion (CD) model and the Euclidean minimum spanning tree (EMST) model. The three scalar mixing models are combined with a simple model for the scalar time scale which assumes a constant C{sub {phi}}=12 value. A comparison of the simulation results with available measurements shows that only the EMST model calculates accurately the mean and variance of the reaction progress variable. An evaluation of the structure of the PDF's of the reaction progress variable predicted by the three scalar mixing models confirms this conclusion: the IEM and CD models predict an unrealistic shape of the PDF. Simulations using various C{sub {phi}} values ranging from 2 to 50 combined with the three scalar mixing models have been performed. The observed deficiencies of the IEM and CD models persisted for all C{sub {phi}} values considered. The value C{sub {phi}}=12 combined with the EMST model was found to be an optimal choice. To avoid the ad hoc choice for C{sub {phi}}, more sophisticated models for the scalar time scale have been used in simulations using the EMST model. A new model for the scalar time scale which is based on a linear blending between a model for flamelet combustion and a model for distributed combustion is developed. The new model has proven to be very promising as a scalar time scale model which can be applied from flamelet to distributed combustion. (author)
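    Of the three mixing models compared in this abstract, the IEM model is the simplest: every notional particle's scalar relaxes toward the ensemble mean at a rate set by C_φ and the scalar time scale. A minimal particle-update sketch follows, assuming a constant time scale τ as in the C_φ = 12 runs; function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def iem_step(phi, c_phi, tau, dt):
    """One IEM (interaction by exchange with the mean) mixing step.

    Integrates d(phi)/dt = -0.5 * (c_phi / tau) * (phi - <phi>) exactly
    over dt for each particle: the ensemble mean is conserved, while
    the scalar variance decays by the factor exp(-c_phi * dt / tau).
    """
    mean = phi.mean()
    decay = np.exp(-0.5 * c_phi * dt / tau)
    return mean + (phi - mean) * decay

# Toy usage: 10,000 particles, C_phi = 12, unit mixing time scale.
rng = np.random.default_rng(7)
phi = rng.normal(0.5, 0.1, size=10_000)
for _ in range(100):
    phi = iem_step(phi, c_phi=12.0, tau=1.0, dt=0.01)
# Mean stays near 0.5; variance has decayed by a factor of exp(-12).
```

Note that IEM only shrinks the fluctuations uniformly toward the mean, so it preserves the shape of the scalar PDF; that is consistent with the unrealistic PDF shapes reported above for the IEM model in a premixed flame.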

  19. Simulation of hydrogen release and combustion in large scale geometries: models and methods

    International Nuclear Information System (INIS)

    Beccantini, A.; Dabbene, F.; Kudriakov, S.; Magnaud, J.P.; Paillere, H.; Studer, E.

    2003-01-01

    The simulation of H2 distribution and combustion in confined geometries such as nuclear reactor containments is a challenging task from the point of view of numerical simulation, as it involves quite disparate length and time scales, which need to be resolved appropriately and efficiently. CEA is involved in the development and validation of codes to model such problems, for external clients such as IRSN (TONUS code) and Technicatome (NAUTILUS code), or for its own safety studies. This paper provides an overview of the physical and numerical models developed for such applications, as well as some insight into the current research topics which are being pursued. Examples of H2 mixing and combustion simulations are given. (authors)

  20. Multi-scale simulation of single crystal hollow turbine blade manufactured by liquid metal cooling process

    Directory of Open Access Journals (Sweden)

    Xuewei Yan

    2018-02-01

    Liquid metal cooling (LMC) is a powerful directional solidification (DS) technique prospectively used to manufacture single crystal (SC) turbine blades. An understanding of the temperature distribution and microstructure evolution in the LMC process is required in order to improve the properties of the blades. For this reason, a multi-scale model coupling the temperature field, grain growth and solute diffusion was established. The temperature distribution and mushy zone evolution of the hollow blade were simulated and discussed. According to the simulation results, the mushy zone tends to be convex and ahead of the ceramic beads at a lower withdrawal rate, concave and lagging behind at a higher withdrawal rate, and uniform and horizontal at a medium withdrawal rate. Grain growth of the blade at different withdrawal rates was also investigated. Single crystal structures were selected out at all three withdrawal rates. Moreover, the mis-orientation of the grains at 8 mm/min reached ~30°, while it was ~5° and ~15° at 10 mm/min and 12 mm/min, respectively. The model for predicting dendritic morphology was verified by a corresponding experiment. The large-scale 2D dendritic distribution over whole sections was investigated by both experiment and simulation, which showed good agreement with each other. Keywords: Hollow blade, Single crystal, Multi-scale simulation, Liquid metal cooling

  1. Commercial applications of large-scale Research and Development computer simulation technologies

    International Nuclear Information System (INIS)

    Kuok Mee Ling; Pascal Chen; Wen Ho Lee

    1998-01-01

    The potential commercial applications of two large-scale R and D computer simulation technologies are presented. One technology is based on the numerical solution of the hydrodynamics equations and is embodied in the two-dimensional Eulerian code EULE2D, which solves the hydrodynamic equations with various models for the equation of state (EOS), constitutive relations and fracture mechanics. EULE2D is an R and D code originally developed to design and analyze conventional munitions for anti-armor penetration, such as shaped charges, explosively formed projectiles, and kinetic energy rods. Simulated results agree very well with actual experiments. A commercial application presented here is the design and simulation of shaped charges for oil and gas well bore perforation. The other R and D simulation technology is based on the numerical solution of Maxwell's partial differential equations of electromagnetics in space and time, and is implemented in the three-dimensional code FDTD-SPICE, which solves Maxwell's equations in the time domain with finite differences in the three spatial dimensions and calls SPICE for information when nonlinear active devices are involved. The FDTD method has been used in radar cross-section modeling of military aircraft and many other electromagnetic phenomena. The coupling of the FDTD method with SPICE, a popular circuit and device simulation program, provides a powerful tool for the simulation and design of microwave and millimeter-wave circuits containing nonlinear active semiconductor devices. A commercial application of FDTD-SPICE presented here is the simulation of a two-element active antenna system. The simulation results and the experimental measurements are in excellent agreement. (Author)
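The FDTD half of the coupled technology can be illustrated with a minimal one-dimensional Yee update in vacuum (normalized units); this is a generic textbook sketch, not the FDTD-SPICE code itself:

```python
import numpy as np

# Generic 1D FDTD (Yee) sketch in vacuum, normalized units: E and H live on
# staggered grids and are leapfrogged in time with finite differences.
def fdtd_1d(n_cells=200, n_steps=300, src=100, courant=0.5):
    ez = np.zeros(n_cells)   # electric field
    hy = np.zeros(n_cells)   # magnetic field (staggered half-cell)
    for t in range(n_steps):
        hy[:-1] += courant * (ez[1:] - ez[:-1])   # update H from curl E
        ez[1:] += courant * (hy[1:] - hy[:-1])    # update E from curl H
        ez[src] += np.exp(-0.5 * ((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
    return ez

field = fdtd_1d()
print(float(np.abs(field).max()))  # the injected pulse has propagated outward
```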

  2. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balances

    Science.gov (United States)

    Tang, G.; Bartlein, P. J.

    2012-08-01

    Satellite-based data, such as vegetation type and fractional vegetation cover, are widely used in hydrologic models to prescribe the vegetation state in a study region. Dynamic global vegetation models (DGVMs) simulate land surface hydrology. Incorporating satellite-based data into a DGVM may enhance the model's ability to simulate land surface hydrology by reducing the task of model parameterization and providing distributed information on land characteristics. The objectives of this study are to (i) modify a DGVM for simulating land surface water balances; (ii) evaluate the modified model in simulating actual evapotranspiration (ET), soil moisture, and surface runoff at regional or watershed scales; and (iii) gain insight into the ability of both the original and modified models to simulate large spatial scale land surface hydrology. To achieve these objectives, we introduce the "LPJ-hydrology" (LH) model, which incorporates satellite-based data into the Lund-Potsdam-Jena (LPJ) DGVM. To evaluate the model we ran LH using historical (1981-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells for the conterminous US, and for the entire world using coarser climate and land cover data. We evaluated the simulated ET, soil moisture, and surface runoff against a set of observed or simulated data at different spatial scales. Our results demonstrate that spatial patterns of LH-simulated annual ET and surface runoff are in accordance with previously published data for the US; LH-modeled monthly stream flow for 12 major rivers in the US was consistent with observed values during the years 1981-2006 (R2 > 0.46). The modeled mean annual discharges for 10 major rivers worldwide also agreed well with observations. Compared with a degree-day method for snowmelt computation, the addition of the solar radiation effect on snowmelt enabled LH to better simulate monthly stream flow in winter and early spring for rivers located at mid-to-high latitudes. In addition, LH

  3. Nano-motion dynamics are determined by surface-tethered selectin mechanokinetics and bond formation.

    Directory of Open Access Journals (Sweden)

    Brian J Schmidt

    2009-12-01

    The interaction of proteins at cellular interfaces is critical for many biological processes, from intercellular signaling to cell adhesion. For example, the selectin family of adhesion receptors plays a critical role in trafficking during inflammation and immunosurveillance. Quantitative measurements of binding rates between surface-constrained proteins elicit insight into how molecular structural details and post-translational modifications contribute to function. However, nano-scale transport effects can obfuscate measurements in experimental assays. We constructed a biophysical simulation of the motion of a rigid microsphere coated with biomolecular adhesion receptors in shearing flow undergoing thermal motion. The simulation enabled in silico investigation of the effects of kinetic force dependence, molecular deformation, grouping adhesion receptors into clusters, surface-constrained bond formation, and nano-scale vertical transport on outputs that directly map to observable motions. Simulations recreated the jerky, discrete stop-and-go motions observed in P-selectin/PSGL-1 microbead assays with physiologic ligand densities. Motion statistics tied detailed simulated motion data to experimentally reported quantities. New deductions about biomolecular function for P-selectin/PSGL-1 interactions were made. Distributing adhesive forces among P-selectin/PSGL-1 molecules closely grouped in clusters was necessary to achieve bond lifetimes observed in microbead assays. Initial, capturing bond formation effectively occurred across the entire molecular contour length. However, subsequent rebinding events were enhanced by the reduced separation distance following the initial capture. The result demonstrates that vertical transport can contribute to an enhancement in the apparent bond formation rate. A detailed analysis of in silico motions prompted the proposition of wobble autocorrelation as an indicator of two-dimensional function.
Insight into two
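The "kinetic force dependence" of receptor bonds in adhesive-dynamics simulations of this kind is commonly represented by the Bell model; the sketch below is a generic illustration, and the parameters k0 and x_beta are hypothetical, not the study's fitted values:

```python
import math

# Generic Bell-model sketch for force-dependent dissociation of a slip bond.
KB_T = 4.11e-21  # thermal energy at ~298 K (J)

def bell_off_rate(force, k0=1.0, x_beta=0.3e-9):
    """Off-rate (1/s) under a tensile force (N): k0 * exp(F * x_beta / kB*T)."""
    return k0 * math.exp(force * x_beta / KB_T)

def rupture_probability(force, dt):
    """Probability that the bond breaks during a time step dt (s)."""
    return 1.0 - math.exp(-bell_off_rate(force) * dt)

print(bell_off_rate(0.0))     # the unstressed off-rate k0
print(bell_off_rate(50e-12))  # dissociation accelerates under a 50 pN load
```

In a Monte Carlo trajectory, each bond is tested per time step against `rupture_probability`, which is how force dependence couples molecular kinetics to the bead's observable stop-and-go motion.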

  4. CFD RANS Simulations on a Generic Conventional Scale Model Submarine: Comparison between Fluent and OpenFOAM

    Science.gov (United States)

    2015-09-01

    UNCLASSIFIED CFD RANS Simulations on a Generic Conventional Scale Model Submarine: Comparison between Fluent and OpenFOAM ... OpenFOAM to replace some of the Fluent simulations. The fidelity of the Fluent code has been carefully validated, but the accuracy of parts of the ... OpenFOAM code has not been so extensively tested. To test the accuracy of the OpenFOAM software, CFD simulations have been performed on the DSTO

  5. Performance of a pilot-scale constructed wetland system for treating simulated ash basin water.

    Science.gov (United States)

    Dorman, Lane; Castle, James W; Rodgers, John H

    2009-05-01

    A pilot-scale constructed wetland treatment system (CWTS) was designed and built to decrease the concentration and toxicity of constituents of concern in ash basin water from coal-burning power plants. The CWTS was designed to promote the following treatment processes for metals and metalloids: precipitation as non-bioavailable sulfides, co-precipitation with iron oxyhydroxides, and adsorption onto iron oxides. Concentrations of Zn, Cr, Hg, As, and Se in simulated ash basin water were reduced by the CWTS to less than USEPA-recommended water quality criteria. The removal efficiency (defined as the percent concentration decrease from influent to effluent) was dependent on the influent concentration of the constituent, while the extent of removal (defined as the concentration of a constituent of concern in the CWTS effluent) was independent of the influent concentration. Results from toxicity experiments illustrated that the CWTS eliminated influent toxicity with regard to survival and reduced influent toxicity with regard to reproduction. Reduction in potential for scale formation and biofouling was achieved through treatment of the simulated ash basin water by the pilot-scale CWTS.
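The two metrics defined above are easy to conflate; a small sketch makes the distinction explicit (the concentrations used are hypothetical, not measurements from the study):

```python
# "Removal efficiency" normalizes by the influent; "extent of removal" is the
# absolute effluent concentration, independent of the influent.
def removal_efficiency(c_influent, c_effluent):
    """Percent concentration decrease from influent to effluent."""
    return 100.0 * (c_influent - c_effluent) / c_influent

def extent_of_removal(c_effluent):
    """Concentration of the constituent remaining in the effluent."""
    return c_effluent

# Two influents treated down to the same effluent concentration: the extent
# of removal is identical, while the removal efficiency tracks the influent.
print(removal_efficiency(50.0, 5.0), extent_of_removal(5.0))  # 90.0 5.0
print(removal_efficiency(10.0, 5.0), extent_of_removal(5.0))  # 50.0 5.0
```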

  6. Hydrodynamic simulations of long-scale-length two-plasmon–decay experiments at the Omega Laser Facility

    International Nuclear Information System (INIS)

    Hu, S. X.; Michel, D. T.; Edgell, D. H.; Froula, D. H.; Follett, R. K.; Goncharov, V. N.; Myatt, J. F.; Skupsky, S.; Yaakobi, B.

    2013-01-01

    Direct-drive ignition designs with plastic CH ablators create plasmas of long density scale lengths (L_n ≥ 500 μm) at the quarter-critical density (N_qc) region of the driving laser. The two-plasmon decay (TPD) instability can exceed its threshold in such long-scale-length plasmas (LSPs). To investigate the scaling of TPD-induced hot electrons with laser intensity and plasma conditions, a series of planar experiments have been conducted at the Omega Laser Facility with 2-ns square pulses at the maximum laser energies available on OMEGA and OMEGA EP. Radiation-hydrodynamic simulations have been performed for these LSP experiments using the two-dimensional hydrocode DRACO. The simulated hydrodynamic evolution of such long-scale-length plasmas has been validated with time-resolved full-aperture backscattering and Thomson-scattering measurements. DRACO simulations for the CH ablator indicate that (1) ignition-relevant long-scale-length plasmas of L_n approaching ~400 μm have been created; (2) the density scale length at N_qc scales as L_n(μm) ≈ R_DPP × I^(1/4)/2; and (3) the electron temperature T_e at N_qc scales as T_e(keV) ≈ 0.95 × √I, with the incident intensity I measured in 10^14 W/cm^2, for plasmas created in both OMEGA and OMEGA EP configurations with different-sized (R_DPP) distributed phase plates. These intensity scalings are in good agreement with self-similar model predictions. The measured conversion fraction of laser energy into hot electrons, f_hot, is found to behave similarly in both configurations: a rapid growth f_hot ≈ f_c × (G_c/4)^6 for G_c < 4, followed by a slower growth f_hot ≈ f_c × (G_c/4)^1.2 for G_c ≥ 4, where the common-wave gain is defined as G_c = 3 × 10^-2 × I_qc × L_n × λ_0/T_e, with the laser intensity contributing to the common-wave gain I_qc, L_n and T_e at N_qc, and the laser wavelength λ_0 measured, respectively, in [10^14 W/cm^2], [μm], [keV], and [μm]. The saturation level f_c is observed to be f_c ≈ 10^-2 at around
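The empirical scalings quoted above can be evaluated directly; in the sketch below the intensity, phase-plate radius R_DPP, and the frequency-tripled 0.351 μm drive wavelength are illustrative assumptions, not values taken from the experiments:

```python
import math

# Units follow the abstract: I in 1e14 W/cm^2, lengths in um, Te in keV.
def te_qc(I):                 # Te(keV) ~= 0.95 * sqrt(I)
    return 0.95 * math.sqrt(I)

def ln_qc(r_dpp, I):          # Ln(um) ~= R_DPP * I**(1/4) / 2
    return r_dpp * I ** 0.25 / 2.0

def common_wave_gain(i_qc, ln_um, te_kev, lam0=0.351):
    return 3e-2 * i_qc * ln_um * lam0 / te_kev

I = 4.0                       # hypothetical 4e14 W/cm^2
te = te_qc(I)                 # 1.9 keV
ln_um = ln_qc(400.0, I)       # assuming R_DPP = 400 um
gc = common_wave_gain(I, ln_um, te)
print(te, ln_um, gc)
```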

  7. ATR-FTIR Spectroscopic Evidence for Biomolecular Phosphorus and Carboxyl Groups Facilitating Bacterial Adhesion to Iron Oxides

    Science.gov (United States)

    Parikh, Sanjai J.; Mukome, Fungai N.D.; Zhang, Xiaoming

    2014-01-01

    Attenuated total reflectance (ATR) Fourier transform infrared (FTIR) spectroscopy has been used to probe the binding of bacteria to hematite (α-Fe2O3) and goethite (α-FeOOH). In situ ATR-FTIR experiments with bacteria (Pseudomonas putida, P. aeruginosa, Escherichia coli), mixed amino acids, polypeptide extracts, deoxyribonucleic acid (DNA), and a suite of model compounds were conducted. These compounds represent carboxyl, catecholate, amide, and phosphate groups present in siderophores, amino acids, polysaccharides, phospholipids, and DNA. Due in part to the ubiquitous presence of carboxyl groups in biomolecules, numerous IR peaks corresponding to outer-sphere or unbound (1400 cm−1) and inner-sphere (1310-1320 cm−1) coordinated carboxyl groups are noted following reaction of bacteria and biomolecules with α-Fe2O3 and α-FeOOH. However, the data also reveal that the presence of low-level amounts (i.e., 0.45-0.79%) of biomolecular phosphorus groups results in strong IR bands at ~1043 cm−1, corresponding to inner-sphere Fe-O-P bonds, underscoring the importance of bacteria-associated P-containing groups in biomolecule and cell adhesion. Spectral comparisons also reveal slightly greater P-O-Fe contributions for bacteria (Pseudomonad, E. coli) deposited on α-FeOOH than on α-Fe2O3. These data demonstrate that slight differences in bacterial adhesion to Fe oxides can be attributed to bacterial species and Fe-oxide minerals. More importantly, the strong binding affinity of phosphate in all bacteria samples to both Fe oxides results in the formation of inner-sphere Fe-O-P bonds, signifying the critical role of biomolecular P in the initiation of bacterial adhesion. PMID:24859052

  8. Zwitterionic Silane Copolymer for Ultra-Stable and Bright Biomolecular Probes Based on Fluorescent Quantum Dot Nanoclusters.

    Science.gov (United States)

    Dembele, Fatimata; Tasso, Mariana; Trapiella-Alfonso, Laura; Xu, Xiangzhen; Hanafi, Mohamed; Lequeux, Nicolas; Pons, Thomas

    2017-05-31

    Fluorescent semiconductor quantum dots (QDs) exhibit several unique properties that make them suitable candidates for biomolecular sensing, including high brightness, photostability, broad excitation, and narrow emission spectra. Assembling these QDs into robust and functionalizable nanosized clusters (QD-NSCs) can provide fluorescent probes that are several orders of magnitude brighter than individual QDs, thus allowing an even greater sensitivity of detection with simplified instrumentation. However, the formation of compact, antifouling, functionalizable, and stable QD-NSCs remains a challenging task, especially for use at ultralow concentrations for single-molecule detection. Here, we describe the development of fluorescent QD-NSCs envisioned as a tool for fast and sensitive biomolecular recognition. First, QDs were assembled into very compact 100-150 nm diameter spherical aggregates; the final QD-NSCs were obtained by growing a cross-linked silica shell around these aggregates. Hydrolytic stability in several concentration and pH conditions is a key requirement for a potential and efficient single-molecule detection tool. However, the hydrolysis of Si-O-Si bonds leads to desorption of monosilane-based surface groups at very low silica concentrations or in a slightly basic medium. Thus, we designed a novel multidentate copolymer composed of multiple silane as well as zwitterionic monomers. Coating silica beads with this multidentate copolymer provided a robust surface chemistry that was demonstrated to be stable against hydrolysis, even at low concentrations. Copolymer-coated silica beads also showed low fouling properties and high colloidal stability in saline solutions. Furthermore, incorporation of additional azido-monomers enabled easy functionalization of QD-NSCs using copper-free bio-orthogonal cyclooctyne-azide click chemistry, as demonstrated by a biotin-streptavidin affinity test.

  9. A method of orbital analysis for large-scale first-principles simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ohwaki, Tsukuru [Advanced Materials Laboratory, Nissan Research Center, Nissan Motor Co., Ltd., 1 Natsushima-cho, Yokosuka, Kanagawa 237-8523 (Japan); Otani, Minoru [Nanosystem Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568 (Japan); Ozaki, Taisuke [Research Center for Simulation Science (RCSS), Japan Advanced Institute of Science and Technology (JAIST), 1-1 Asahidai, Nomi, Ishikawa 923-1292 (Japan)

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4).

  10. A method of orbital analysis for large-scale first-principles simulations

    International Nuclear Information System (INIS)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-01-01

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4).

  11. Effects of forcing time scale on the simulated turbulent flows and turbulent collision statistics of inertial particles

    International Nuclear Information System (INIS)

    Rosa, B.; Parishani, H.; Ayala, O.; Wang, L.-P.

    2015-01-01

    In this paper, we study systematically the effects of forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynolds number of 30 and less. We then study the effects of forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with the regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with the region of high energy dissipation rate
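The large-scale stochastic forcing of Eswaran and Pope drives each forced Fourier mode with an Ornstein-Uhlenbeck process whose correlation time is the forcing time scale; a one-mode sketch with illustrative parameters (not the paper's DNS setup):

```python
import numpy as np

# One-mode Ornstein-Uhlenbeck forcing sketch: the forcing decorrelates over
# the forcing time scale T_f,
#   b[i] = b[i-1] * (1 - dt/T_f) + sigma * sqrt(2 * dt / T_f) * N(0, 1),
# giving a stationary process with standard deviation ~ sigma.
def ou_forcing(t_f, sigma, dt, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    b = np.zeros(n_steps)
    for i in range(1, n_steps):
        b[i] = (b[i - 1] * (1.0 - dt / t_f)
                + sigma * np.sqrt(2.0 * dt / t_f) * rng.standard_normal())
    return b

series = ou_forcing(t_f=0.5, sigma=1.0, dt=1e-3, n_steps=200_000)
print(series.std())  # approaches the stationary value sigma
```

Varying `t_f` relative to the eddy turnover time is the knob whose effect on flow and collision statistics the study quantifies.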

  12. Applications of atomic force microscopy to the studies of biomaterials in biomolecular systems

    Science.gov (United States)

    Ma, Xiang

    Atomic force microscopy (AFM) is a unique tool for the studies of nanoscale structures and interactions. In this dissertation, I applied AFM to study transitions among multiple states of biomaterials in three different microscopic biomolecular systems: MukB-dependent DNA condensation, holdfast adhesion, and virus elasticity. To elucidate the mechanism of MukB-dependent DNA condensation, I have studied the conformational changes of MukB proteins as indicators for the strength of interactions between MukB, DNA and other molecular factors, such as magnesium and ParC proteins, using high-resolution AFM imaging. To determine the physical origins of holdfast adhesion, I have investigated the dynamics of adhesive force development of the holdfast, employing AFM force spectroscopy. By measuring rupture forces between the holdfast and the substrate, I showed that the holdfast adhesion is strongly time-dependent and involves transformations at multiple time scales. Understanding the mechanisms of adhesion force development of the holdfast will be critical for future engineering of holdfast properties for various applications. Finally, I have examined the elasticity of self-assembled hepatitis B virus-like particles (HBV VLPs) and brome mosaic virus (BMV) in response to changes of pH and salinity, using AFM nanoindentation. The distributions of elasticity were mapped on a single particle level and compared between empty, RNA- and gold-filled HBV VLPs. I found that a single HBV VLP showed a heterogeneous distribution of elasticity and a two-step buckling transition, suggesting a discrete property of HBV capsids. For BMV, I have shown that viruses containing different RNA molecules can be distinguished by mechanical measurements, while they are indistinguishable by morphology. I also studied the effect of pH on the elastic behaviors of three-particle BMV and R3/4 BMV.
This study can yield insights into RNA presentation/release mechanisms, and could help us to design novel drug

  13. Molecular-scale simulation of electroluminescence in a multilayer white organic light-emitting diode

    DEFF Research Database (Denmark)

    Mesta, Murat; Carvelli, Marco; de Vries, Rein J

    2013-01-01

    In multilayer white organic light-emitting diodes the electronic processes in the various layers--injection and motion of charges as well as generation, diffusion and radiative decay of excitons--should be concerted such that efficient, stable and colour-balanced electroluminescence can occur. Here we show that it is feasible to carry out Monte Carlo simulations including all of these molecular-scale processes for a hybrid multilayer organic light-emitting diode combining red and green phosphorescent layers with a blue fluorescent layer. The simulated current density and emission profile...

  14. Predictive Maturity of Multi-Scale Simulation Models for Fuel Performance

    International Nuclear Information System (INIS)

    Atamturktur, Sez; Unal, Cetin; Hemez, Francois; Williams, Brian; Tome, Carlos

    2015-01-01

    The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide decision makers in the allocation of Nuclear Energy's resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and the degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of reactor core cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the sources of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model. Within this

  15. Predictive Maturity of Multi-Scale Simulation Models for Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Atamturktur, Sez [Clemson Univ., SC (United States); Unal, Cetin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hemez, Francois [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Brian [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tome, Carlos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-16

    The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide decision makers in the allocation of Nuclear Energy’s resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and the degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of reactor core cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the sources of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model. Within this
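Partitioned analysis alternates between constituent codes until the exchanged quantities stop changing; the toy fixed-point sketch below uses stand-in linear models, not the actual VPSC or FE constituents:

```python
# Toy fixed-point sketch of partitioned coupling: two stand-in constituent
# "codes" exchange quantities iteratively until convergence.
def macro_model(stress, load):      # stand-in for the macro-scale FE code
    return 0.5 * (stress + load)    # returns a strain-like quantity

def meso_model(strain):             # stand-in for the meso-scale VPSC code
    return 0.8 * strain             # returns an updated stress-like quantity

def partitioned_solve(load, tol=1e-10, max_iter=1000):
    stress = 0.0
    for _ in range(max_iter):
        new_stress = meso_model(macro_model(stress, load))
        if abs(new_stress - stress) < tol:
            break                   # exchanged quantity has stopped changing
        stress = new_stress
    return stress

print(partitioned_solve(1.0))  # converges to the fixed point 0.4/0.6 ≈ 0.667
```

The transparency the abstract highlights comes from this structure: each constituent can be replaced, instrumented, or validated against separate-effect experiments without touching the other.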

  16. A Novel CPU/GPU Simulation Environment for Large-Scale Biologically-Realistic Neural Modeling

    Directory of Open Access Journals (Sweden)

    Roger V Hoang

    2013-10-01

    Computational neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across these heterogeneous clusters of CPUs and GPUs.
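Of the two built-in neuron models, the leaky integrate-and-fire model is the simpler; a scalar sketch with illustrative parameters (this is a generic textbook model, not NCS6 code):

```python
# Leaky integrate-and-fire (LIF) neuron, forward-Euler integration:
#   tau_m * dV/dt = -(V - V_rest) + R * I;  spike and reset at threshold.
def simulate_lif(i_ext, dt=1e-4, t_end=0.5, tau_m=0.02,
                 v_rest=-65e-3, v_reset=-65e-3, v_th=-50e-3, r_m=1e7):
    v, spikes = v_rest, []
    for step in range(int(t_end / dt)):
        v += dt / tau_m * (-(v - v_rest) + r_m * i_ext)  # leak + input drive
        if v >= v_th:
            spikes.append(step * dt)  # record spike time, then reset
            v = v_reset
    return spikes

spikes = simulate_lif(i_ext=2e-9)  # constant 2 nA input drives regular firing
print(len(spikes))
```

A large-scale simulator like the one described runs millions of such state updates per time step, which is why the update's data-parallel structure maps so naturally onto GPUs.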

  17. Quasi-real-time end-to-end simulations of ELT-scale adaptive optics systems on GPUs

    Science.gov (United States)

    Gratadour, Damien

    2011-09-01

    Our team has started the development of a code dedicated to GPUs for the simulation of AO systems at the E-ELT scale. It uses the CUDA toolkit and an original binding to Yorick (an open source interpreted language) to provide the user with a comprehensive interface. In this paper we present the first performance analysis of our simulation code, showing its ability to provide Shack-Hartmann (SH) images and measurements at the kHz scale for a VLT-sized AO system and in quasi-real-time (up to 70 Hz) for ELT-sized systems on a single top-end GPU. The simulation code includes multi-layer atmospheric turbulence generation, ray tracing through these layers, image formation at the focal plane of each sub-aperture of a SH sensor using either natural or laser guide stars, and centroiding on these images using various algorithms. Turbulence is generated on the fly, giving the ability to simulate hours of observations without the need to load extremely large phase screens into global memory. Because of its performance, this code additionally provides the unique ability to test real-time controllers for future AO systems under nominal conditions.
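Among the centroiding algorithms mentioned, the simplest is the center of gravity; below is a minimal CPU sketch of it for one sub-aperture image (generic, not the CUDA implementation):

```python
import numpy as np

# Center-of-gravity centroiding on one Shack-Hartmann sub-aperture image:
# the spot position is the intensity-weighted mean pixel coordinate.
def cog_centroid(img):
    img = np.asarray(img, dtype=float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

spot = np.zeros((8, 8))
spot[3, 5] = 1.0                 # point-like spot at (x=5, y=3)
print(cog_centroid(spot))        # (5.0, 3.0)
```

Since each sub-aperture is independent, this reduction parallelizes trivially across GPU threads, which is what makes kHz-rate wavefront sensing simulation feasible.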

  18. Modeling and Simulation of Multi-scale Environmental Systems with Generalized Hybrid Petri Nets

    Directory of Open Access Journals (Sweden)

    Mostafa eHerajy

    2015-07-01

    Full Text Available Predicting and studying the dynamics and properties of environmental systems necessitates the construction and simulation of mathematical models entailing different levels of complexity. Such computational experiments often require the combination of discrete and continuous variables, as well as processes operating at different time scales. Furthermore, the iterative steps of constructing and analyzing environmental models may involve researchers with different backgrounds. Hybrid Petri nets can help overcome such challenges, as they facilitate the implementation of systems integrating discrete and continuous dynamics. Additionally, the visual depiction of model components helps to bridge the gap between scientists with distinct expertise working on the same problem. Thus, modeling environmental systems with hybrid Petri nets enables the construction of complex processes while keeping the models comprehensible for researchers working on the same project with significantly divergent educational paths. In this paper we propose the use of a special class of hybrid Petri nets, Generalized Hybrid Petri Nets (GHPN), to model and simulate environmental systems exhibiting processes that interact at different time scales. GHPN integrate stochastic and deterministic semantics, as well as other types of special basic events. Moreover, a case study is presented to illustrate the use of GHPN in constructing and simulating multi-timescale environmental scenarios.
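GHPN semantics are richer than a short snippet can show, but the core idea of mixing deterministic-continuous and stochastic-discrete transitions can be sketched with a toy model (all names and rates below are invented for illustration, not GHPN syntax):

```python
import random

def hybrid_sim(t_end=10.0, dt=0.01, k_burst=0.5, burst=20.0,
               k_decay=0.3, seed=1):
    """Toy hybrid simulation in the spirit of mixed discrete/continuous
    Petri-net semantics: a quantity x decays continuously (Euler-integrated
    ODE transition) and is replenished by discrete burst events whose
    waiting times are exponentially distributed (stochastic transition)."""
    random.seed(seed)
    x, t = 0.0, 0.0
    next_event = random.expovariate(k_burst)   # time of next discrete firing
    while t < t_end:
        if t >= next_event:                    # discrete transition fires
            x += burst
            next_event += random.expovariate(k_burst)
        x -= dt * k_decay * x                  # continuous decay transition
        t += dt
    return x
```

A full GHPN engine interleaves exact stochastic simulation with an adaptive ODE solver; the fixed-step loop here (which handles at most one event per step) only illustrates the coupling.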

  19. Immobilization of simulated high-level radioactive waste in borosilicate glass: Pilot scale demonstrations

    International Nuclear Information System (INIS)

    Ritter, J.A.; Hutson, N.D.; Zamecnik, J.R.; Carter, J.T.

    1991-01-01

    The Integrated DWPF Melter System (IDMS), operated by the Savannah River Laboratory, is a pilot-scale facility used in support of the start-up and operation of the Department of Energy's Defense Waste Processing Facility. The IDMS has successfully demonstrated, on an engineering scale (one-fifth), that simulated high-level radioactive waste (HLW) sludge can be chemically treated with formic acid to adjust both its chemical and physical properties, and then blended with simulated precipitate hydrolysis aqueous (PHA) product and borosilicate glass frit to produce a melter feed which can be processed into a durable glass product. The simulated sludge, PHA and frit were blended, based on a product composition program, to optimize the loading of the waste glass as well as to minimize those components which can cause melter processing and/or glass durability problems. During all the IDMS demonstrations completed thus far, the melter feed and the resulting glass met all the required specifications, which is very encouraging for future DWPF operations. The IDMS operations also demonstrated that the volatile components of the melter feed (e.g., mercury, nitrogen and carbon, and, to a lesser extent, chlorine, fluorine and sulfur) did not adversely affect the melter performance or the glass product.

  20. Cytoscape: a software environment for integrated models of biomolecular interaction networks.

    Science.gov (United States)

    Shannon, Paul; Markiel, Andrew; Ozier, Owen; Baliga, Nitin S; Wang, Jonathan T; Ramage, Daniel; Amin, Nada; Schwikowski, Benno; Ideker, Trey

    2003-11-01

    Cytoscape is an open source software project for integrating biomolecular interaction networks with high-throughput expression data and other molecular states into a unified conceptual framework. Although applicable to any system of molecular components and interactions, Cytoscape is most powerful when used in conjunction with large databases of protein-protein, protein-DNA, and genetic interactions that are increasingly available for humans and model organisms. Cytoscape's software Core provides basic functionality to layout and query the network; to visually integrate the network with expression profiles, phenotypes, and other molecular states; and to link the network to databases of functional annotations. The Core is extensible through a straightforward plug-in architecture, allowing rapid development of additional computational analyses and features. Several case studies of Cytoscape plug-ins are surveyed, including a search for interaction pathways correlating with changes in gene expression, a study of protein complexes involved in cellular recovery to DNA damage, inference of a combined physical/functional interaction network for Halobacterium, and an interface to detailed stochastic/kinetic gene regulatory models.

  1. REVIEW ARTICLE: How do biomolecular systems speed up and regulate rates?

    Science.gov (United States)

    Zhou, Huan-Xiang

    2005-09-01

    The viability of a biological system depends upon careful regulation of the rates of various processes. These rates have limits imposed by intrinsic chemical or physical steps (e.g., diffusion). These limits can be expanded by interactions and dynamics of the biomolecules. For example, (a) a chemical reaction is catalyzed when its transition state is preferentially bound to an enzyme; (b) the folding of a protein molecule is speeded up by specific interactions within the transition-state ensemble and may be assisted by molecular chaperones; (c) the rate of specific binding of a protein molecule to a cellular target can be enhanced by mechanisms such as long-range electrostatic interactions, nonspecific binding and folding upon binding; (d) directional movement of motor proteins is generated by capturing favorable Brownian motion through intermolecular binding energy; and (e) conduction and selectivity of ions through membrane channels are controlled by interactions and the dynamics of channel proteins. Simple physical models are presented here to illustrate these processes and provide a unifying framework for understanding speed attainment and regulation in biomolecular systems.
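The diffusion limit mentioned in point (a)-(c) above is set by the Smoluchowski rate for diffusion-limited association, k = 4πDR. A minimal sketch (the parameter values are typical textbook magnitudes for protein-ligand encounter, not numbers from this review):

```python
import math

def smoluchowski_rate(D_rel, R_contact):
    """Diffusion-limited association rate k = 4*pi*D*R (Smoluchowski),
    converted from m^3/s per molecule pair to M^-1 s^-1.
    D_rel: relative diffusion coefficient (m^2/s); R_contact: contact radius (m)."""
    N_A = 6.02214076e23                       # Avogadro's number, mol^-1
    k_pair = 4.0 * math.pi * D_rel * R_contact  # m^3 s^-1 per pair
    return k_pair * N_A * 1e3                   # L mol^-1 s^-1

# Assumed illustrative values: D ~ 1e-9 m^2/s, contact radius ~ 0.5 nm
k = smoluchowski_rate(1e-9, 5e-10)
```

The result lands in the familiar 10^9-10^10 M^-1 s^-1 range; long-range electrostatic steering of the kind discussed in the review can push observed rates above this basal limit.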

  2. The reliability and validity of three questionnaires: The Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire.

    Science.gov (United States)

    Unver, Vesile; Basak, Tulay; Watts, Penni; Gaioso, Vanessa; Moss, Jacqueline; Tastan, Sevinc; Iyigun, Emine; Tosun, Nuran

    2017-02-01

    The purpose of this study was to adapt the "Student Satisfaction and Self-Confidence in Learning Scale" (SCLS), "Simulation Design Scale" (SDS), and "Educational Practices Questionnaire" (EPQ) developed by Jeffries and Rizzolo into Turkish and establish the reliability and the validity of these translated scales. A sample of 87 nursing students participated in this study. These scales were cross-culturally adapted through a process including translation, comparison with the original version, back translation, and pretesting. Construct validity was evaluated by factor analysis, and criterion validity was evaluated using the Perceived Learning Scale, Patient Intervention Self-confidence/Competency Scale, and Educational Belief Scale. Cronbach's alpha values were found to be 0.77-0.85 for the SCLS, 0.73-0.86 for the SDS, and 0.61-0.86 for the EPQ. The results of this study show that the Turkish versions of all three scales are valid and reliable measurement tools.
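The reliability statistic reported above is Cronbach's alpha, α = k/(k−1) · (1 − Σs_i²/s_T²), where s_i² are item variances and s_T² the variance of total scores. A small sketch with made-up item scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix,
    using sample variances (ddof=1) throughout."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical Likert responses: items that move together yield high alpha
data = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 3, 2], [4, 4, 5]]
alpha = cronbach_alpha(data)
```

Values in the 0.7-0.9 band, like those reported for the SCLS and SDS, are conventionally read as acceptable-to-good internal consistency.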

  3. Scaling up watershed model parameters--Flow and load simulations of the Edisto River Basin

    Science.gov (United States)

    Feaster, Toby D.; Benedict, Stephen T.; Clark, Jimmy M.; Bradley, Paul M.; Conrads, Paul

    2014-01-01

    The Edisto River is the longest and largest river system completely contained in South Carolina and is one of the longest free-flowing blackwater rivers in the United States. The Edisto River basin also has fish-tissue mercury concentrations that are among the highest recorded in the United States. As part of an effort by the U.S. Geological Survey to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River basin, analyses and simulations of the hydrology of the Edisto River basin were made with the topography-based hydrological model (TOPMODEL). The potential for scaling up a previous application of TOPMODEL for the McTier Creek watershed, a small headwater catchment of the Edisto River basin, was assessed. Scaling up was done in a step-wise process, beginning with applying the calibration parameters, meteorological data, and topographic wetness index data from the McTier Creek TOPMODEL to the Edisto River TOPMODEL. Additional changes were made in subsequent simulations, culminating in the best simulation, which included meteorological and topographic wetness index data from the Edisto River basin and updated values for some of the TOPMODEL calibration parameters. Comparison of goodness-of-fit statistics between measured and simulated daily mean streamflow for the two models showed that, with calibration, the Edisto River TOPMODEL produced slightly better results than the McTier Creek model, despite the significant difference in drainage-area size at the outlet locations for the two models (30.7 and 2,725 square miles, respectively). Along with the TOPMODEL hydrologic simulations, a visualization tool (the Edisto River Data Viewer) was developed to help assess trends and influencing variables in the stream ecosystem. Incorporated into the visualization tool were the water-quality load models TOPLOAD, TOPLOAD-H, and LOADEST.
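The topographic wetness index at the heart of TOPMODEL is ln(a/tanβ), where a is the specific upslope contributing area and β the local slope. A minimal sketch (illustrative values, not Edisto or McTier Creek data):

```python
import math

def topographic_wetness_index(upslope_area, slope_rad):
    """TOPMODEL topographic wetness index ln(a / tan(beta)):
    upslope_area is specific contributing area (m^2 per unit contour length),
    slope_rad is the local surface slope in radians."""
    return math.log(upslope_area / math.tan(slope_rad))

# A gentle, convergent cell has a higher saturation tendency...
wet = topographic_wetness_index(500.0, math.radians(2.0))
# ...than a steep, freely draining cell
dry = topographic_wetness_index(10.0, math.radians(20.0))
```

Because the index enters the model only through its basin-wide distribution, recomputing it from the Edisto terrain (rather than reusing McTier Creek's distribution) is exactly the kind of "additional change" the scaling-up experiment describes.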

  4. Uma abordagem visual para análise comparativa de redes biomoleculares com apoio de diagramas de Venn

    OpenAIRE

    Henry Heberle

    2014-01-01

    Biological systems can be represented by networks that store not only connectivity information but also information about the characteristics of their nodes. In the biomolecular context, these nodes may represent proteins, metabolites, and other types of molecules. Each molecule has characteristics annotated and stored in databases such as the Gene Ontology. The visual comparison of these networks depends on tools that allow the user to identify differences and similarities between ...

  5. Multi-scale modelling and numerical simulation of electronic kinetic transport

    International Nuclear Information System (INIS)

    Duclous, R.

    2009-11-01

    This research thesis, at the interface between numerical analysis, plasma physics and applied mathematics, deals with the kinetic modelling and numerical simulation of electron energy transport and deposition in laser-produced plasmas, with a view to the processes that assemble the fuel to the temperature and density conditions necessary to ignite fusion reactions. After a brief review of the processes at play in the collisional kinetic theory of plasmas, with a focus on basic models and methods to implement, couple and validate them, the author focuses on the collective aspects related to the free-streaming electron transport equation, in the non-relativistic limit as well as in the relativistic regime. He discusses the numerical development and analysis of the scheme for the Vlasov-Maxwell system, and the selection of a validation procedure and numerical tests. Then, he investigates more specific aspects of collective transport: multi-species transport subject to phase-space discontinuities. Dealing with the multi-scale physics of electron transport with collision source terms, he validates the accuracy of a fast multi-grid Monte Carlo solver for the Fokker-Planck-Landau electron-electron collision operator. He reports realistic simulations of kinetic electron transport in the frame of the shock-ignition scheme, and the development and validation of a reduced angular model of electron transport. He finally explores the relative importance of the processes involving electron-electron collisions at high energy by means of a multi-scale reduced model with relativistic Boltzmann terms.

  6. A new framework for the analysis of continental-scale convection-resolving climate simulations

    Science.gov (United States)

    Leutwyler, D.; Charpilloz, C.; Arteaga, A.; Ban, N.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Schulthess, T. C.; Christoph, S.

    2017-12-01

    High-resolution climate simulations at horizontal resolutions of O(1-4 km) allow explicit treatment of deep convection (thunderstorms and rain showers). Explicitly treating convection through the governing equations reduces uncertainties associated with parametrization schemes and allows a model formulation closer to physical first principles [1,2]. But kilometer-scale climate simulations with long integration periods and large computational domains are expensive, and data storage becomes unbearably voluminous; hence new approaches to performing analysis are required. In the crCLIM project we propose a new climate modeling framework that allows scientists to conduct analysis at high spatial and temporal resolution. We tackle the computational cost by using the largest available supercomputers, such as hybrid CPU-GPU architectures; for this, the COSMO model has been adapted to run on such architectures [2]. We then alleviate the I/O bottleneck by employing a simulation data virtualizer (SDaVi) that allows storage (space) to be traded for computational effort (time). This is achieved by caching the simulation outputs and efficiently launching re-simulations in case of cache misses, all transparently to the analysis applications [3]. For the re-runs this approach requires a bit-reproducible version of COSMO, that is, a model that produces identical results on different architectures, to ensure coherent recomputation of the requested data [4]. In this contribution we present a version of SDaVi, a first performance model, and a strategy to obtain bit-reproducibility across hardware architectures. [1] N. Ban, J. Schmidli, C. Schär. Evaluation of the convection-resolving regional climate modeling approach in decade-long simulations. J. Geophys. Res. Atmos., 7889-7907, 2014. [2] D. Leutwyler, O. Fuhrer, X. Lapillonne, D. Lüthi, C. Schär. Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19. Geosci. Model Dev, 3393

  7. Numerical Simulation of the Time Evolution of Small-Scale Irregularities in the F-Layer Ionospheric Plasma

    Directory of Open Access Journals (Sweden)

    O. V. Mingalev

    2011-01-01

    Full Text Available The dynamics of magnetic field-aligned small-scale irregularities in the electron concentration of the F-layer ionospheric plasma is investigated with the help of a mathematical model. The plasma is assumed to be a rarefied compound consisting of electrons and positive ions in a strong external magnetic field. In the applied model, kinetic processes in the plasma are simulated using the Vlasov-Poisson system of equations, which is solved numerically with a macroparticle method. The time evolution of a plasma irregularity with an initial cross-section dimension comparable to a Debye length is simulated over a period sufficient for the irregularity to decay completely. The results indicate that a small-scale irregularity created in the F-region ionosphere decays through periodic damped oscillations, with the process being collisionless.
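The macroparticle approach to the Vlasov-Poisson system can be sketched as a minimal 1-D electrostatic particle-in-cell step (normalized units, nearest-grid-point deposition, FFT Poisson solve; a drastic simplification of the magnetized 2-D model described above):

```python
import numpy as np

def pic_step(x, v, q_over_m, L, ncells, dt):
    """One step of a minimal 1-D electrostatic particle-in-cell update on a
    periodic domain: deposit charge (nearest grid point), solve Gauss's law
    dE/dx = rho spectrally, gather the field, then push the macroparticles."""
    dx = L / ncells
    idx = np.floor(x / dx).astype(int) % ncells
    rho = np.bincount(idx, minlength=ncells).astype(float)
    rho = rho / rho.mean() - 1.0               # neutralizing ion background
    k = 2 * np.pi * np.fft.fftfreq(ncells, d=dx)
    rho_k = np.fft.fft(rho)
    E_k = np.zeros_like(rho_k)
    nz = k != 0
    E_k[nz] = rho_k[nz] / (1j * k[nz])         # from ik * E_k = rho_k
    E = np.real(np.fft.ifft(E_k))
    v = v + dt * q_over_m * E[idx]             # gather + accelerate
    x = (x + dt * v) % L                       # drift with periodic wrap
    return x, v
```

Higher-order (e.g. cloud-in-cell) deposition and a leapfrog stagger are the usual refinements; the loop structure, however, is exactly the deposit-solve-gather-push cycle of the macroparticle method.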

  8. Properties important to mixing and simulant recommendations for WTP full-scale vessel testing

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, M. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Martino, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-12-01

    Full Scale Vessel Testing (FSVT) is being planned by Bechtel National, Inc., to demonstrate the ability of the standard high solids vessel design (SHSVD) to meet mixing requirements over the range of fluid properties planned for processing in the Pretreatment Facility (PTF) of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. WTP personnel requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in FSVT. Among the tasks assigned to SRNL was to develop a list of waste properties that are important to pulse-jet mixer (PJM) performance in WTP vessels with elevated concentrations of solids.

  9. Versatile single-molecule multi-color excitation and detection fluorescence setup for studying biomolecular dynamics

    KAUST Repository

    Sobhy, M. A.

    2011-11-07

    Single-molecule fluorescence imaging is at the forefront of tools applied to study biomolecular dynamics both in vitro and in vivo. The ability of the single-molecule fluorescence microscope to conduct simultaneous multi-color excitation and detection is a key experimental feature that is under continuous development. In this paper, we describe in detail the design and the construction of a sophisticated and versatile multi-color excitation and emission fluorescence instrument for studying biomolecular dynamics at the single-molecule level. The setup is novel, economical and compact, where two inverted microscopes share a laser combiner module with six individual laser sources that extend from 400 to 640 nm. Nonetheless, each microscope can independently and in a flexible manner select the combinations, sequences, and intensities of the excitation wavelengths. This high flexibility is achieved by the replacement of conventional mechanical shutters with acousto-optic tunable filter (AOTF). The use of AOTF provides major advancement by controlling the intensities, duration, and selection of up to eight different wavelengths with microsecond alternation time in a transparent and easy manner for the end user. To our knowledge this is the first time AOTF is applied to wide-field total internal reflection fluorescence (TIRF) microscopy even though it has been commonly used in multi-wavelength confocal microscopy. The laser outputs from the combiner module are coupled to the microscopes by two sets of four single-mode optic fibers in order to allow for the optimization of the TIRF angle for each wavelength independently. The emission is split into two or four spectral channels to allow for the simultaneous detection of up to four different fluorophores of wide selection and using many possible excitation and photoactivation schemes. 
We demonstrate the performance of this new setup by conducting two-color alternating excitation single-molecule fluorescence resonance energy

  10. Phase sensitive spectral domain interferometry for label free biomolecular interaction analysis and biosensing applications

    Science.gov (United States)

    Chirvi, Sajal

    Biomolecular interaction analysis (BIA) plays a vital role in a wide variety of fields, including biomedical research, the pharmaceutical industry, medical diagnostics, and the biotechnology industry. Study and quantification of interactions between natural biomolecules (proteins, enzymes, DNA) and artificially synthesized molecules (drugs) is routinely done using various labeled and label-free BIA techniques. Labeled BIA techniques (chemiluminescence, fluorescence, radioactive labeling) suffer from steric hindrance of labels at the interaction site, the difficulty of attaching labels to molecules, and higher assay-development cost and time. Label-free techniques with real-time detection capabilities have demonstrated advantages over traditional labeled techniques. The gold standard for label-free BIA is surface plasmon resonance (SPR), which detects and quantifies changes in the refractive index of the ligand-analyte complex with high sensitivity. Although SPR is a highly sensitive BIA technique, it requires custom-made sensor chips and is not well suited for the highly multiplexed BIA required in high-throughput applications. Moreover, implementation of SPR on various biosensing platforms is limited. In this research work, spectral-domain phase-sensitive interferometry (SD-PSI) has been developed for label-free BIA and biosensing applications to address the limitations of SPR and other label-free techniques. One distinct advantage of SD-PSI compared to other label-free techniques is that it does not require custom-fabricated biosensor substrates. Laboratory-grade, off-the-shelf glass or plastic substrates of suitable thickness, with proper surface functionalization, are used as biosensor chips. SD-PSI is tested on four separate BIA and biosensing platforms: multi-well plate, flow cell, fiber probe with integrated optics, and fiber-tip biosensor. A sensitivity of 33 ng/ml for anti-IgG is achieved using the multi-well platform. Principle of coherence multiplexing for multi

  11. Local-Scale Simulations of Nucleate Boiling on Micrometer-Featured Surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Sitaraman, Hariswaran [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Moreno, Gilberto [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Narumanchi, Sreekant V [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dede, Ercan M. [Toyota Research Institute of North America; Joshi, Shailesh N. [Toyota Research Institute of North America; Zhou, Feng [Toyota Research Institute of North America

    2017-07-12

    A high-fidelity computational fluid dynamics (CFD)-based model for bubble nucleation of the refrigerant HFE7100 on micrometer-featured surfaces is presented in this work. The single-fluid incompressible Navier-Stokes equations, along with energy transport and natural convection effects, are solved on a grid that resolves the surface features. An a priori cavity detection method is employed to convert raw profilometer data of a surface into well-defined cavities. The cavity information and surface morphology are represented in the CFD model by geometric mesh deformations. Surface morphology is observed to initiate buoyancy-driven convection in the liquid phase, which in turn results in faster nucleation of cavities. Simulations pertaining to a generic rough surface show a trend where smaller cavities nucleate at higher wall superheat. This local-scale model will serve as a self-consistent connection to larger device-scale continuum models where local feature representation is not possible.

  12. Effects of Resolution on the Simulation of Boundary-layer Clouds and the Partition of Kinetic Energy to Subgrid Scales

    Directory of Open Access Journals (Sweden)

    Anning Cheng

    2010-02-01

    Full Text Available Seven boundary-layer cloud cases are simulated with the UCLA-LES (University of California, Los Angeles large-eddy simulation) model at different horizontal and vertical gridspacings to investigate how the results depend on gridspacing. Some variables are more sensitive to horizontal gridspacing, others to vertical gridspacing, and still others to both, with similar or opposite trends. For cloud-related variables having opposite dependences on horizontal and vertical gridspacings, changing the gridspacing proportionally in both directions gives the appearance of convergence. In this study, we mainly discuss the impact of subgrid-scale (SGS) kinetic energy (KE) on the simulations as the horizontal and vertical gridspacings are coarsened. A running-mean operator is used to separate the KE of the high-resolution benchmark simulations into the part resolved at the coarse resolutions and the SGS part. The diagnosed SGS KE is compared with that parameterized by the Smagorinsky-Lilly SGS scheme at various gridspacings. It is found that the parameterized SGS KE for the coarse-resolution simulations is usually underestimated while the resolved KE is unrealistically large, compared to the benchmark simulations; the sum of resolved and SGS KE, however, is about the same for simulations at various gridspacings. The partitioning of SGS and resolved heat and moisture transports is consistent with that of SGS and resolved KE: the parameterized transports are underestimated but the resolved-scale transports are overestimated. On the whole, energy shifts to large scales as the horizontal gridspacing is coarsened, hence the size of clouds and the resolved circulation increase and the clouds become more stratiform-like, with an increase in cloud fraction, cloud liquid-water path and surface precipitation; when coarse vertical gridspacing is used, cloud sizes do not
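The running-mean scale separation used in the diagnostic above can be sketched in one dimension (a schematic of the idea, assuming a periodic field; the study applies it to 3-D LES output):

```python
import numpy as np

def split_ke(u, width):
    """Separate a periodic 1-D velocity field into resolved (running-mean)
    and subgrid-scale parts, returning the mean KE per unit mass of each.
    The periodic running mean is applied as a circular convolution."""
    n = u.size
    kernel_k = np.fft.fft(np.ones(width) / width, n)     # box filter of `width`
    u_res = np.real(np.fft.ifft(np.fft.fft(u) * kernel_k))
    u_sgs = u - u_res
    return 0.5 * np.mean(u_res ** 2), 0.5 * np.mean(u_sgs ** 2)

# A smooth, large-scale wave keeps almost all its KE in the resolved part
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ke_res, ke_sgs = split_ke(np.sin(x), width=5)
```

Widening the filter (coarser effective gridspacing) shifts KE from the resolved to the SGS part, which is exactly the partition the parameterized scheme is being checked against.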

  13. Higher order moments of the matter distribution in scale-free cosmological simulations with large dynamic range

    Science.gov (United States)

    Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

    1994-01-01

    We calculate reduced moments ξ̄_q of the matter density fluctuations, up to order q = 5, from counts in cells produced by particle-mesh numerical simulations with scale-free Gaussian initial conditions. We use power-law spectra P(k) ∝ k^n with indices n = -3, -2, -1, 0, 1. Due to the supposed absence of characteristic times or scales in our models, all quantities are expected to depend on a single scaling variable. For each model, the moments at all times can be expressed in terms of the variance ξ̄_2 alone. We look for agreement with the hierarchical scaling ansatz, according to which ξ̄_q ∝ (ξ̄_2)^(q-1). For n ≤ -2 models, we find strong deviations from the hierarchy, which are mostly due to the presence of boundary problems in the simulations. A small residual signal of deviation from the hierarchical scaling is, however, also found in n ≥ -1 models. The wide range of spectra considered and the large dynamic range, with careful checks of scaling and shot-noise effects, allow us to reliably detect evolution away from the perturbation theory result.
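Counts-in-cells moments of this kind can be estimated directly; a bare-bones sketch (central moments of the density contrast only, ignoring the shot-noise and connected-moment corrections the authors apply):

```python
import numpy as np

def reduced_moments(counts, qmax=5):
    """Central moments of the density contrast delta = N/<N> - 1 from
    counts in cells. The hierarchical scaling ansatz predicts
    moment[q] proportional to moment[2]**(q - 1)."""
    counts = np.asarray(counts, dtype=float)
    delta = counts / counts.mean() - 1.0
    return {q: np.mean(delta ** q) for q in range(2, qmax + 1)}

# Illustrative skewed cell counts (lognormal stand-in, not simulation data)
rng = np.random.default_rng(42)
m = reduced_moments(rng.lognormal(mean=0.0, sigma=0.5, size=50_000))
```

Testing the ansatz then amounts to checking whether ratios such as S_3 = m[3]/m[2]² stay constant as the variance evolves.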

  14. Large scale statistics for computational verification of grain growth simulations with experiments

    International Nuclear Information System (INIS)

    Demirel, Melik C.; Kuprat, Andrew P.; George, Denise C.; Straub, G.K.; Misra, Amit; Alexander, Kathleen B.; Rollett, Anthony D.

    2002-01-01

    It is known that desirable material properties can be achieved by controlling microstructural development. The main objective of our research is to understand and control interface-dominated material properties and, finally, to verify experimental results with computer simulations. We have previously shown a strong similarity between small-scale grain growth experiments and anisotropic three-dimensional simulations obtained from Electron Backscattered Diffraction (EBSD) measurements. Using the same technique, we obtained 5170-grain data from an aluminum film (120 μm thick) with a columnar grain structure. The experimentally obtained starting microstructure and grain boundary properties are input for the three-dimensional grain growth simulation. In the computational model, minimization of the interface energy is the driving force for grain boundary motion. The computed evolved microstructure is compared with the final experimental microstructure after annealing at 550 C. Characterization of the structures and properties of grain boundary networks (GBN) to produce desirable microstructures is one of the fundamental problems in interface science. There is ongoing research into the development of new experimental and analytical techniques to obtain and synthesize information related to GBN. The grain boundary energy and mobility data were characterized by the Electron Backscattered Diffraction (EBSD) technique and Atomic Force Microscopy (AFM) observations (i.e., for the ceramic MgO and for the metal Al). Grain boundary energies are extracted from triple junction (TJ) geometry considering the local equilibrium condition at TJs. Relative boundary mobilities were also extracted from TJs through a statistical/multiscale analysis. Additionally, there have been recent theoretical developments on grain boundary evolution in microstructures. In this paper, a new technique for three-dimensional grain growth simulations was used to simulate interface migration.

  15. Bayesian ensemble refinement by replica simulations and reweighting

    Science.gov (United States)

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
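The maximum-entropy reweighting at the heart of the EROS-style refinement described above can be sketched for a single ensemble-averaged observable (a toy one-restraint version; the paper's full treatment, with error models, replicas, and many observables, is considerably richer):

```python
import numpy as np

def maxent_reweight(y, y_target, lam_bounds=(-50.0, 50.0), tol=1e-10):
    """Maximum-entropy reweighting of an ensemble: find weights
    w_i ∝ exp(-lam * y_i) such that the weighted average of the
    observable y matches y_target. Since <y> decreases monotonically
    with lam, a simple bisection on lam suffices."""
    y = np.asarray(y, dtype=float)
    def weighted_avg(lam):
        w = np.exp(-lam * (y - y.mean()))   # shift exponent for stability
        w /= w.sum()
        return w @ y, w
    lo, hi = lam_bounds
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        avg, w = weighted_avg(mid)
        if abs(avg - y_target) < tol:
            break
        if avg > y_target:      # average too high: strengthen the restraint
            lo = mid
        else:
            hi = mid
    return w

# Hypothetical per-configuration observable values and experimental target
y = np.linspace(0.0, 1.0, 100)
w = maxent_reweight(y, 0.3)
```

The replica result quoted in the abstract corresponds to approaching this optimal weight distribution as restraint strength grows linearly with the number of replicas.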

  16. Large-Scale Covariability Between Aerosol and Precipitation Over the 7-SEAS Region: Observations and Simulations

    Science.gov (United States)

    Huang, Jingfeng; Hsu, N. Christina; Tsay, Si-Chee; Zhang, Chidong; Jeong, Myeong Jae; Gautam, Ritesh; Bettenhausen, Corey; Sayer, Andrew M.; Hansell, Richard A.; Liu, Xiaohong; hide

    2012-01-01

    One of the seven scientific areas of interest of the 7-SEAS field campaign is to evaluate the impact of aerosol on cloud and precipitation (http://7-seas.gsfc.nasa.gov). However, large-scale covariability between aerosol, cloud and precipitation is complicated not only by the ambient environment and a variety of aerosol effects, but also by effects from rain washout and climate factors. This study characterizes large-scale aerosol-cloud-precipitation covariability through synergy of long-term multi-sensor satellite observations with model simulations over the 7-SEAS region [10S-30N, 95E-130E]. Results show that climate factors such as ENSO significantly modulate aerosol and precipitation over the region simultaneously. After removal of climate-factor effects, aerosol and precipitation are significantly anti-correlated over the southern part of the region, where high aerosol loading is associated with overall reduced total precipitation, intensified rain rates, decreased rain frequency, decreased tropospheric latent heating, suppressed cloud-top height, increased outgoing longwave radiation, and enhanced clear-sky shortwave TOA flux but reduced all-sky shortwave TOA flux in deep convective regimes; such covariability becomes less notable over the northern part of the region, where low-level stratus are found. Using CO as a proxy for biomass-burning aerosols to minimize the washout effect, large-scale covariability between CO and precipitation was also investigated, and similar covariability was observed. Model simulations with NCAR CAM5 were found to show spatio-temporal patterns similar to the observations. Results from both observations and simulations are valuable for improving our understanding of this region's meteorological system and the roles of aerosol within it. Key words: aerosol; precipitation; large-scale covariability; aerosol effects; washout; climate factors; 7-SEAS; CO; CAM5
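The "removal of climate factor effects" step amounts to correlating residuals after linearly regressing each field on a climate index; schematically (synthetic data, not 7-SEAS retrievals):

```python
import numpy as np

def residual_corr(a, p, idx):
    """Correlation between two time series after linearly removing a shared
    climate signal (e.g. an ENSO index) from both."""
    def detrend(x):
        xc, ic = x - x.mean(), idx - idx.mean()
        return xc - (xc @ ic) / (ic @ ic) * ic   # subtract regression on idx
    ra, rp = detrend(a), detrend(p)
    return (ra @ rp) / np.sqrt((ra @ ra) * (rp @ rp))

# Two fields driven by the same index show a large but spurious correlation
rng = np.random.default_rng(7)
enso = rng.normal(size=500)
aer = 2.0 * enso + rng.normal(size=500)
pre = 2.0 * enso + rng.normal(size=500)
r_raw = np.corrcoef(aer, pre)[0, 1]   # large
r_res = residual_corr(aer, pre, enso) # near zero once ENSO is removed
```

Any anti-correlation that survives this removal, as reported over the southern 7-SEAS region, is then harder to attribute to the shared climate driver.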

  17. Theory-based transport simulation of tokamaks: density scaling

    International Nuclear Information System (INIS)

    Ghanem, E.S.; Kinsey, J.; Singer, C.; Bateman, G.

    1992-01-01

    There has been a sizeable amount of work in the past few years using theoretically based flux-surface-averaged transport models to simulate various types of experimental tokamak data. Here we report two such studies, concentrating on the response of the plasma to variation of the line-averaged electron density. The first study uses a transport model described by Ghanem et al. to examine the response of global energy confinement time in ohmically heated discharges. The second study uses a closely related and more recent transport model described by Bateman to examine the response of temperature profiles to changes in line-averaged density in neutral-beam-heated discharges. Work on developing a common theoretical model for these and other scaling experiments is in progress. (author) 5 refs., 2 figs

  18. Simulating Fine-Scale Marine Pollution Plumes for Autonomous Robotic Environmental Monitoring

    Directory of Open Access Journals (Sweden)

    Muhammad Fahad

    2018-05-01

    Marine plumes exhibit characteristics such as intermittency, sinuous structure, shape and flow-field coherency, and a time-varying concentration profile. Owing to the lack of experimental quantification of these characteristics for marine plumes, existing work often assumes that marine plumes behave like aerial plumes, which are commonly modeled by filament-based Lagrangian models. Our previous field experiments with Rhodamine dye plumes at the Makai Research Pier in Oahu, Hawaii, revealed that marine plumes are qualitatively similar to aerial plumes but quantitatively disparate. Based on the field data collected, this paper presents a calibrated Eulerian plume model that exhibits the qualitative and quantitative characteristics of experimentally generated marine plumes. We propose a modified model with an intermittent source, and implement it in a Robot Operating System (ROS) based simulator. Concentration time series at stationary sampling points and at dynamic sampling points across cross-sections and plume fronts are collected and analyzed for statistical parameters of the simulated plume. These parameters are then compared with statistical parameters from experimentally generated plumes. The comparison validates that the simulated plumes exhibit fine-scale qualitative and quantitative characteristics similar to experimental plumes. The ROS plume simulator facilitates future evaluations of environmental monitoring strategies by marine robots, and is made available for community use.

  19. Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow

    OpenAIRE

    Sam Ali Al; Szasz Robert; Revstedt Johan

    2015-01-01

    The performance of subgrid-scale models is studied by simulating a separated flow over a wavy channel. The first- and second-order statistical moments of the resolved velocities obtained by using large-eddy simulations at different mesh resolutions are compared with Direct Numerical Simulation data. The effectiveness of modeling the wall stresses by using a local log-law is then tested on a relatively coarse grid. The results exhibit a good agreement between highly-resolved Large Eddy Simu...

  20. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balances

    Directory of Open Access Journals (Sweden)

    G. Tang

    2012-08-01

    Satellite-based data, such as vegetation type and fractional vegetation cover, are widely used in hydrologic models to prescribe the vegetation state in a study region. Dynamic global vegetation models (DGVMs) simulate land surface hydrology. Incorporation of satellite-based data into a DGVM may enhance a model's ability to simulate land surface hydrology by reducing the task of model parameterization and providing distributed information on land characteristics. The objectives of this study are to (i) modify a DGVM for simulating land surface water balances; (ii) evaluate the modified model in simulating actual evapotranspiration (ET), soil moisture, and surface runoff at regional or watershed scales; and (iii) gain insight into the ability of both the original and modified model to simulate large spatial scale land surface hydrology. To achieve these objectives, we introduce the "LPJ-hydrology" (LH) model, which incorporates satellite-based data into the Lund-Potsdam-Jena (LPJ) DGVM. To evaluate the model we ran LH using historical (1981–2006) climate data and satellite-based land covers at 2.5 arc-min grid cells for the conterminous US, and for the entire world using coarser climate and land cover data. We evaluated the simulated ET, soil moisture, and surface runoff using a set of observed or simulated data at different spatial scales. Our results demonstrate that spatial patterns of LH-simulated annual ET and surface runoff are in accordance with previously published data for the US; LH-modeled monthly stream flow for 12 major rivers in the US was consistent with observed values during the years 1981–2006 (R2 > 0.46, p < 0.01; Nash-Sutcliffe Coefficient > 0.52). The modeled mean annual discharges for 10 major rivers worldwide also agreed well (differences < 15%) with observed values for these rivers. Compared to a degree-day method for snowmelt computation, the addition of the solar radiation effect on snowmelt

  1. Comparison of Waste Feed Delivery Small Scale Mixing Demonstration Simulant to Hanford Waste

    Energy Technology Data Exchange (ETDEWEB)

    Wells, Beric E.; Gauglitz, Phillip A.; Rector, David R.

    2012-07-10

    The Hanford double-shell tank (DST) system provides the staging location for waste that will be transferred to the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Specific WTP acceptance criteria for waste feed delivery describe the physical and chemical characteristics of the waste that must be met before the waste is transferred from the DSTs to the WTP. One of the more challenging requirements relates to the sampling and characterization of the undissolved solids (UDS) in a waste feed DST, because the waste contains solid particles that settle, and their concentration and relative proportion can change during the transfer of the waste in individual batches. A key uncertainty in the waste feed delivery system is the potential variation in UDS transferred in individual batches in comparison to an initial sample used for evaluating the acceptance criteria. To address this uncertainty, a number of small-scale mixing tests have been conducted as part of Washington River Protection Solutions' Small Scale Mixing Demonstration (SSMD) project to determine the performance of the DST mixing and sampling systems. A series of these tests used a five-part simulant composed of particles of different size and density, designed to be at least as challenging as AY-102 waste. This five-part simulant, however, has not been compared with the broad range of Hanford waste, and thus there is an additional uncertainty that this simulant may not be as challenging as the most difficult Hanford waste. The purpose of this study is to quantify how the current five-part simulant compares to all of the Hanford sludge waste, and to suggest alternate simulants that could be tested to reduce the uncertainty in applying the current testing results to potentially more challenging wastes.

  2. Modeling Coronal Mass Ejections with the Multi-Scale Fluid-Kinetic Simulation Suite

    International Nuclear Information System (INIS)

    Pogorelov, N. V.; Borovikov, S. N.; Wu, S. T.; Yalim, M. S.; Kryukov, I. A.; Colella, P. C.; Van Straalen, B.

    2017-01-01

    Solar eruptions and interacting solar wind streams are key drivers of geomagnetic storms and various related space weather disturbances that may have hazardous effects on space-borne and ground-based technological systems as well as on human health. Coronal mass ejections (CMEs) and their interplanetary counterparts, interplanetary CMEs (ICMEs), are among the strongest of these disturbances and are therefore of great importance for space weather prediction. In this paper we show a few examples of how adaptive mesh refinement makes it possible to resolve the complex CME structure and its evolution in time while a CME propagates from the inner boundary to Earth. Simulations are performed with the Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS). (paper)

  3. Scaling up watershed model parameters: flow and load simulations of the Edisto River Basin, South Carolina, 2007-09

    Science.gov (United States)

    Feaster, Toby D.; Benedict, Stephen T.; Clark, Jimmy M.; Bradley, Paul M.; Conrads, Paul

    2014-01-01

    As part of an ongoing effort by the U.S. Geological Survey to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River Basin, analyses and simulations of the hydrology of the Edisto River Basin were made using the topography-based hydrological model (TOPMODEL). A primary focus of the investigation was to assess the potential for scaling up a previous application of TOPMODEL for the McTier Creek watershed, which is a small headwater catchment to the Edisto River Basin. Scaling up was done in a step-wise manner, beginning with applying the calibration parameters, meteorological data, and topographic-wetness-index data from the McTier Creek TOPMODEL to the Edisto River TOPMODEL. Additional changes were made for subsequent simulations, culminating in the best simulation, which included meteorological and topographic wetness index data from the Edisto River Basin and updated calibration parameters for some of the TOPMODEL calibration parameters. The scaling-up process resulted in nine simulations being made. Simulation 7 best matched the streamflows at station 02175000, Edisto River near Givhans, SC, which was the downstream limit for the TOPMODEL setup, and was obtained by adjusting the scaling factor, including streamflow routing, and using NEXRAD precipitation data for the Edisto River Basin. The Nash-Sutcliffe coefficient of model-fit efficiency and Pearson’s correlation coefficient for simulation 7 were 0.78 and 0.89, respectively. Comparison of goodness-of-fit statistics between measured and simulated daily mean streamflow for the McTier Creek and Edisto River models showed that with calibration, the Edisto River TOPMODEL produced slightly better results than the McTier Creek model, despite the substantial difference in the drainage-area size at the outlet locations for the two models (30.7 and 2,725 square miles, respectively). Along with the TOPMODEL
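
    The two goodness-of-fit statistics quoted for simulation 7 are standard measures and can be computed directly; the short streamflow series below are made-up illustrative numbers, not Edisto River data.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def pearson_r(observed, simulated):
    """Pearson's correlation coefficient between the two series."""
    return np.corrcoef(observed, simulated)[0, 1]

# Hypothetical daily mean streamflow series; a perfect model gives NSE = 1.
obs = np.array([10.0, 12.0, 15.0, 11.0, 9.0])
sim = np.array([9.5, 12.5, 14.0, 11.5, 9.5])
print(round(nash_sutcliffe(obs, sim), 3))  # → 0.906
print(round(pearson_r(obs, sim), 3))       # → 0.958
```

    An NSE above roughly 0.5 is commonly read as satisfactory model fit, which is the scale on which the reported values of 0.78 and 0.89 should be interpreted.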

  4. Biomolecular Nano-Flow-Sensor to Measure Near-Surface Flow

    Directory of Open Access Journals (Sweden)

    Noji Hiroyuki

    2009-01-01

    We have proposed and experimentally demonstrated the measurement of the near-surface flow at the interface between a liquid and a solid using a 10 nm-sized biomolecular motor, F1-ATPase, as a nano-flow-sensor. For this purpose, we developed a microfluidic test-bed chip to precisely control the liquid flow acting on the F1-ATPase. In order to visualize the rotation of F1-ATPase, a several-hundred-nanometer-sized particle was immobilized at the rotational axis of F1-ATPase so that the rotation could be detected by optical microscopy. The rotational motion of F1-ATPase, which was immobilized on an inner surface of the test-bed chip, was measured to obtain the correlation between the near-surface flow and the rotation speed of F1-ATPase. As a result, we found that the rotation speed of F1-ATPase decreased linearly with increasing flow velocity. The mechanism of the correlation between the rotation speed and the near-surface flow remains unclear; however, the concept of using a biomolecule as a nano-flow-sensor was proved successfully. The online version of this article (doi:10.1007/s11671-009-9479-3) contains supplementary material, which is available to authorized users.

  5. Data for Figures and Tables in "Impacts of Different Characterizations of Large-Scale Background on Simulated Regional-Scale Ozone Over the Continental U.S."

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset contains the data used in the Figures and Tables of the manuscript "Impacts of Different Characterizations of Large-Scale Background on Simulated...

  6. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    Energy Technology Data Exchange (ETDEWEB)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento, Trento (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento, Trento (Italy)

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
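
    The rejection mechanism described above can be illustrated with a minimal thinning-style sketch in the spirit of RSSA, for a hypothetical two-reaction mass-action network A -> B (rate k1*A) and B -> A (rate k2*B); the rate constants, populations, and the 10% fluctuation interval are illustrative assumptions, not values from the paper, and the squeeze test with lower propensity bounds is omitted for brevity.

```python
import math
import random

random.seed(7)

k1, k2 = 1.0, 0.5
x = {"A": 100, "B": 0}

def propensities(state):
    return [k1 * state["A"], k2 * state["B"]]

def bounds(state, delta=0.1):
    # Over-approximate the state by +/- delta; the resulting propensity upper
    # bounds stay valid while the true state remains inside the interval.
    hi = {s: math.ceil(n * (1 + delta)) + 1 for s, n in state.items()}
    lo = {s: math.floor(n * (1 - delta)) for s, n in state.items()}
    return lo, hi, [k1 * hi["A"], k2 * hi["B"]]

t, t_end = 0.0, 5.0
lo, hi, a_up = bounds(x)
while True:
    a0_up = sum(a_up)
    t += random.expovariate(a0_up)        # every trial advances time
    if t >= t_end:
        break
    # Select a candidate reaction from the cheap upper-bound propensities.
    r = random.random() * a0_up
    j = 0 if r < a_up[0] else 1
    # Rejection test: accept the firing with probability a_j(x) / a_up_j.
    if random.random() * a_up[j] <= propensities(x)[j]:
        if j == 0:
            x["A"] -= 1; x["B"] += 1
        else:
            x["A"] += 1; x["B"] -= 1
        # Recompute bounds only when the state escapes its interval;
        # this is what lets RSSA postpone most propensity updates.
        if not all(lo[s] <= x[s] <= hi[s] for s in x):
            lo, hi, a_up = bounds(x)

print(x["A"] + x["B"])  # total population is conserved: 100
```

    The payoff is that exact propensities are evaluated only at trial time and the bounds are refreshed only when the state leaves its interval, which is where the performance gain over a standard SSA comes from.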

  7. Large-scale numerical simulations on two-phase flow behavior in a fuel bundle of RMWR with the earth simulator

    International Nuclear Information System (INIS)

    Kazuyuki, Takase; Hiroyuki, Yoshida; Hidesada, Tamai; Hajime, Akimoto; Yasuo, Ose

    2003-01-01

    Fluid flow characteristics in a fuel bundle of a reduced-moderation light water reactor (RMWR) with a tight-lattice core were analyzed numerically, under the full-bundle-size condition, using a newly developed two-phase flow analysis code. Conventional analysis methods such as sub-channel codes need constitutive equations based on experimental data; since there are no experimental data on the thermal-hydraulics of the tight-lattice core, however, it is difficult to obtain high prediction accuracy in the thermal design of the RMWR. Direct numerical simulations on the Earth Simulator were therefore chosen. The axial velocity distribution in a fuel bundle changed sharply around a grid spacer, and its quantitative evaluation was obtained from the present preliminary numerical study. These results give a high prospect of establishing the thermal design procedure of the RMWR by large-scale direct simulations. (authors)

  8. Validation of a power-law noise model for simulating small-scale breast tissue

    International Nuclear Information System (INIS)

    Reiser, I; Edwards, A; Nishikawa, R M

    2013-01-01

    We have validated a small-scale breast tissue model based on power-law noise. A set of 110 patient images served as truth. The statistical model parameters were determined by matching the radially averaged power-spectrum of the projected simulated tissue with that of the central tomosynthesis patient breast projections. Observer performance in a signal-known exactly detection task in simulated and actual breast backgrounds was compared. Observers included human readers, a pre-whitening observer model and a channelized Hotelling observer model. For all observers, good agreement between performance in the simulated and actual backgrounds was found, both in the tomosynthesis central projections and the reconstructed images. This tissue model can be used for breast x-ray imaging system optimization. The complete statistical description of the model is provided. (paper)
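
    A power-law-noise background of the kind described above can be generated by filtering white Gaussian noise in the Fourier domain so that the power spectrum follows P(f) ~ 1/f^beta; the sketch below assumes beta = 3, a commonly cited exponent for mammographic backgrounds, not the paper's fitted parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 256, 3.0

fx = np.fft.fftfreq(n)
f = np.sqrt(fx[None, :] ** 2 + fx[:, None] ** 2)  # radial spatial frequency
f[0, 0] = f[0, 1]                                 # avoid division by zero at DC

amplitude = f ** (-beta / 2.0)    # amplitude filter = sqrt of the power spectrum
white = np.fft.fft2(rng.standard_normal((n, n)))
tissue = np.real(np.fft.ifft2(white * amplitude))

print(tissue.shape)  # (256, 256)
```

    In a validation study such as this one, beta (and an overall magnitude) would instead be fitted so that the radially averaged power spectrum of the simulated projections matches that of the patient projections.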

  9. Use of a hybrid code for global-scale plasma simulation

    International Nuclear Information System (INIS)

    Swift, D.W.

    1996-01-01

    This paper presents a demonstration of the use of a hybrid code to model the Earth's magnetosphere on a global scale. The typical hybrid code calculates the interaction of fully kinetic ions and a massless electron fluid with the magnetic field. This code also includes a fluid ion component to approximate the cold ionospheric plasma that spatially overlaps with the discrete particle component. Other innovative features of the code include a numerically generated curvilinear coordinate system and subcycling of the magnetic field update to the particle push. These innovations allow the code to accommodate disparate time and distance scales. The demonstration is a simulation of the noon meridian plane of the magnetosphere. The code exhibits the formation of fast and slow-mode shocks and tearing reconnection at the magnetopause. New results include particle acceleration in the cusp and nearly field aligned currents linking the cusp and polar ionosphere. The paper also describes a density depletion instability and measures to avoid it. 27 refs., 4 figs

  10. Towards Agent-Based Simulation of Emerging and Large-Scale Social Networks. Examples of the Migrant Crisis and MMORPGs

    Directory of Open Access Journals (Sweden)

    Schatten, Markus

    2016-10-01

    Large-scale agent-based simulation of social networks is described in the context of the migrant crisis in Syria and the EU, as well as massively multiplayer online role-playing games (MMORPGs). The recipeWorld system by Terna and Fontana is proposed as a possible solution for simulating large-scale social networks. The initial system has been re-implemented using the Smart Python multi-Agent Development Environment (SPADE), and Pyinteractive was used for visualization. We present initial simulation models that we plan to develop further in future studies. This paper thus describes research in progress that will hopefully establish a novel agent-based modelling system in the context of the ModelMMORPG project.

  11. Urban Flow and Pollutant Dispersion Simulation with Multi-scale coupling of Meteorological Model with Computational Fluid Dynamic Analysis

    Science.gov (United States)

    Liu, Yushi; Poh, Hee Joo

    2014-11-01

    Computational fluid dynamics (CFD) analysis has become increasingly important in modern urban planning in order to create highly livable cities. This paper presents a multi-scale modeling methodology which couples the Weather Research and Forecasting (WRF) model with the open-source CFD simulation tool OpenFOAM. This coupling enables the simulation of wind flow and pollutant dispersion in urban built-up areas with a high-resolution mesh. In this methodology the meso-scale model WRF provides the boundary conditions for the micro-scale CFD model OpenFOAM. The advantage is that realistic weather conditions are taken into account in the CFD simulation, and the complexity of the building layout can be handled with ease by the meshing utility of OpenFOAM. The result is validated against the Joint Urban 2003 Tracer Field Tests in Oklahoma City, and there is reasonably good agreement between the CFD simulation and the field observations. The coupling of WRF and OpenFOAM provides urban planners with a reliable environmental modeling tool for actual urban built-up areas, and it can be further extended with consideration of future weather conditions for scenario studies on climate-change impact.

  12. Role of cardiolipins in the inner mitochondrial membrane: insight gained through atom-scale simulations

    DEFF Research Database (Denmark)

    Róg, Tomasz; Martinez-Seara, Hector; Munck, Nana

    2009-01-01

    , the exceptional nature of cardiolipins is characterized by their small charged head group connected to typically four hydrocarbon chains. In this work, we present atomic-scale molecular dynamics simulations of the inner mitochondrial membrane modeled as a mixture of cardiolipins (CLs), phosphatidylcholines (PCs...

  13. Multi-scale simulations of field ion microscopy images—Image compression with and without the tip shank

    International Nuclear Information System (INIS)

    Niewieczerzał, Daniel; Oleksy, Czesław; Szczepkowicz, Andrzej

    2012-01-01

    Multi-scale simulations of field ion microscopy images of faceted and hemispherical samples are performed using a 3D model. It is shown that faceted crystals have compressed images even in cases with no shank. The presence of the shank increases the compression of images of faceted crystals quantitatively in the same way as for hemispherical samples. It is hereby proven that the shank does not influence significantly the local, relative variations of the magnification caused by the atomic-scale structure of the sample. -- Highlights: ► Multi-scale simulations of field ion microscopy images. ► Faceted and hemispherical samples with and without shank. ► Shank causes overall compression, but does not influence local magnification effects. ► Image compression linearly increases with the shank angle. ► Shank changes compression of image of faceted tip in the same way as for smooth sample.

  14. Application of the Hybrid Simulation Method for the Full-Scale Precast Reinforced Concrete Shear Wall Structure

    Directory of Open Access Journals (Sweden)

    Zaixian Chen

    2018-02-01

    The hybrid simulation (HS) testing method combines physical testing and numerical simulation, and provides a viable alternative for evaluating structural seismic performance. Most studies have focused on the accuracy, stability and reliability of the HS method in small-scale tests. It is a challenge to evaluate the seismic performance of a twelve-story precast reinforced concrete shear-wall structure using the HS method, taking the full-scale bottom three-story structural model as the physical substructure and an elastic non-linear model as the numerical substructure. This paper employs an equivalent force control (EFC) method with an implicit integration algorithm to deal with the numerical integration of the equation of motion (EOM) and the control of the loading device. Because of the arrangement of the test model, an elastic non-linear numerical model is used to simulate the numerical substructure. A non-subdivision strategy for the displacement inflection point of the numerical substructure is used to simplify the simulation of the numerical substructure and thus reduce the measurement error. The parameters of the EFC method are calculated based on analytical and numerical studies and applied to the actual full-scale HS test. Finally, the accuracy and feasibility of the EFC-based HS method are verified experimentally through substructure HS tests of the precast reinforced concrete shear-wall structure model, and the test results for the descending stage can be conveniently obtained with the EFC-based HS method.

  15. Anatomically detailed and large-scale simulations studying synapse loss and synchrony using NeuroBox

    Directory of Open Access Journals (Sweden)

    Markus eBreit

    2016-02-01

    The morphology of neurons and networks plays an important role in processing electrical and biochemical signals. Based on neuronal reconstructions, which are becoming abundantly available through databases such as NeuroMorpho.org, numerical simulations of Hodgkin-Huxley-type equations, coupled to biochemical models, can be performed in order to systematically investigate the influence of cellular morphology and the connectivity pattern in networks on the underlying function. Development in the area of synthetic neural network generation and morphology reconstruction from microscopy data has brought forth the software tool NeuGen. Coupling this morphology data (from databases, synthetic generation, or reconstruction) to the simulation platform UG 4 (which harbors a neuroscientific portfolio) and to VRL-Studio has brought forth the extendible toolbox NeuroBox. NeuroBox allows users to perform numerical simulations on hybrid-dimensional morphology representations. The code base is designed in a modular way, such that e.g. new channel or synapse types can be added to the library. Workflows can be specified through scripts or through the VRL-Studio graphical workflow representation. Third-party tools, such as ImageJ, can be added to NeuroBox workflows. In this paper, NeuroBox is used to study the electrical and biochemical effects of synapse loss vs. synchrony in neurons, to investigate large morphology data sets within detailed biophysical simulations, and to demonstrate the capability of utilizing high-performance computing infrastructure for large-scale network simulations. Using new synapse distribution methods and Finite Volume based numerical solvers for compartment-type models, our results demonstrate how an increase in synaptic synchronization can compensate for synapse loss at the electrical and calcium level, and how detailed neuronal morphology can be integrated in large-scale network simulations.

  16. Criteria for Scaled Laboratory Simulations of Astrophysical MHD Phenomena

    International Nuclear Information System (INIS)

    Ryutov, D. D.; Drake, R. P.; Remington, B. A.

    2000-01-01

    We demonstrate that two systems described by the equations of the ideal magnetohydrodynamics (MHD) evolve similarly, if the initial conditions are geometrically similar and certain scaling relations hold. The thermodynamic properties of the gas must be such that the internal energy density is proportional to the pressure. The presence of the shocks is allowed. We discuss the applicability conditions of the ideal MHD and demonstrate that they are satisfied with a large margin both in a number of astrophysical objects, and in properly designed simulation experiments with high-power lasers. This allows one to perform laboratory experiments whose results can be used for quantitative interpretation of various effects of astrophysical MHD. (c) 2000 The American Astronomical Society

  17. The gyro-radius scaling of ion thermal transport from global numerical simulations of ITG turbulence

    International Nuclear Information System (INIS)

    Ottaviani, M.; Manfredi, G.

    1998-12-01

    A three-dimensional fluid code is used to study the scaling of ion thermal transport caused by Ion-Temperature-Gradient-Driven (ITG) turbulence. The code includes toroidal effects and is capable of simulating the whole torus. It is found that both close to the ITG threshold and well above threshold, the thermal transport and the turbulence structures exhibit a gyro-Bohm scaling, at least for plasmas with moderate poloidal flow. (author)

  18. Adaptive resolution simulation of salt solutions

    International Nuclear Information System (INIS)

    Bevc, Staš; Praprotnik, Matej; Junghans, Christoph; Kremer, Kurt

    2013-01-01

    We present an adaptive resolution simulation of aqueous salt (NaCl) solutions at ambient conditions using the adaptive resolution scheme. Our multiscale approach concurrently couples the atomistic and coarse-grained models of the aqueous NaCl, where water molecules and ions change their resolution while moving from one resolution domain to the other. We employ standard extended simple point charge (SPC/E) and simple point charge (SPC) water models in combination with AMBER and GROMOS force fields for ion interactions in the atomistic domain. Electrostatics in our model are described by the generalized reaction field method. The effective potentials for water–water and water–ion interactions in the coarse-grained model are derived using a structure-based coarse-graining approach, while the Coulomb interactions between ions are appropriately screened. To ensure an even distribution of water molecules and ions across the simulation box we employ thermodynamic forces. We demonstrate that the equilibrium structural properties, e.g. radial distribution functions and density distributions of all the species, and dynamical properties are correctly reproduced by our adaptive resolution method. Our multiscale approach, which is general and can be used for any classical non-polarizable force field and/or types of ions, will significantly speed up biomolecular simulations involving aqueous salt. (paper)

  19. Large eddy simulation of transitional flow in an idealized stenotic blood vessel: evaluation of subgrid scale models.

    Science.gov (United States)

    Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A; Frankel, Steven H

    2014-07-01

    In the present study, we performed large eddy simulation (LES) of axisymmetric, and 75% stenosed, eccentric arterial models with steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, "Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow," J. Fluid Mech., 582, pp. 253-280). An in-house code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a second-order accurate ghost-point immersed boundary method (IBM) (Mark, and Vanwachem, 2008, "Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method," J. Comput. Phys., 227(13), pp. 6660-6680) for enforcing boundary conditions on curved geometries, is used for the simulations. Three subgrid scale (SGS) models, namely, the classical Smagorinsky model (Smagorinsky, 1963, "General Circulation Experiments With the Primitive Equations," Mon. Weather Rev., 91(10), pp. 99-164), the recently developed Vreman model (Vreman, 2004, "An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications," Phys. Fluids, 16(10), pp. 3670-3681), and the Sigma model (Nicoud et al., 2011, "Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations," Phys. Fluids, 23(8), 085106) are evaluated in the present study. Evaluation of the SGS models suggests that the classical constant-coefficient Smagorinsky model gives the best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations are performed using the Open source field operation and manipulation (OpenFOAM) ("OpenFOAM," http://www.openfoam.org/) solver and the results are in line with those obtained with WenoHemo.
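
    The classical Smagorinsky model evaluated above computes an eddy viscosity nu_t = (Cs * Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij) the magnitude of the resolved strain-rate tensor. A minimal sketch for a single velocity-gradient sample follows; Cs, Delta, and the gradient values are assumptions for illustration, not WenoHemo settings.

```python
import numpy as np

Cs, Delta = 0.17, 1.0e-3                 # model constant, filter width [m] (assumed)

grad_u = np.array([[100.0,  50.0,  0.0],  # du_i/dx_j [1/s], illustrative sample
                   [ 20.0, -80.0,  0.0],
                   [  0.0,   0.0, -20.0]])

S = 0.5 * (grad_u + grad_u.T)            # resolved strain-rate tensor
S_mag = np.sqrt(2.0 * np.sum(S * S))     # |S| = sqrt(2 S_ij S_ij)
nu_t = (Cs * Delta) ** 2 * S_mag         # Smagorinsky eddy viscosity [m^2/s]
print(f"{nu_t:.2e}")  # ≈ 5.67e-06
```

    The Vreman and Sigma models replace |S| with invariants built from the full velocity-gradient tensor so that the eddy viscosity vanishes in laminar shear, which is why they behave differently near transition.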

  20. Single-Cell Biomolecular Analysis of Coral Algal Symbionts Reveals Opposing Metabolic Responses to Heat Stress and Expulsion

    Directory of Open Access Journals (Sweden)

    Katherina Petrou

    2018-03-01

    The success of corals in nutrient-poor environments is largely attributed to the symbiosis between the cnidarian host and its intracellular alga. Warm-water anomalies have been shown to destabilize this symbiosis, yet detailed analysis of the effect of temperature and expulsion on cell-specific carbon and nutrient allocation in the symbiont is limited. Here, we exposed colonies of the hard coral Acropora millepora to heat stress and, using synchrotron-based infrared microspectroscopy, measured the biomolecular profiles of individual in hospite and expelled symbiont cells at an acute state of bleaching. Our results showed symbiont metabolic profiles to be remarkably distinct with heat stress and expulsion, where the two effectors elicited opposing metabolic adjustments independent of treatment or cell type. Elevated temperature resulted in biomolecular changes reflecting cellular stress, with relative increases in free amino acids and phosphorylation of molecules and a concomitant decline in protein content, suggesting protein modification and degradation. This contrasted with the metabolic profiles of expelled symbionts, which showed relative decreases in free amino acids and phosphorylated molecules, but increases in proteins and lipids, suggesting that expulsion lessens the overall effect of heat stress on the metabolic signature of the algal symbionts. Interestingly, the combined effects of expulsion and thermal stress were additive, reducing the overall shifts in all biomolecules, with the notable exception of the significant accumulation of lipids and saturated fatty acids. This first use of a single-cell metabolomics approach on the coral symbiosis provides novel insight into coral bleaching and emphasizes the importance of a single-cell approach to capture the cell-to-cell variability in the physiology of coral cellular populations.