WorldWideScience

Sample records for scale biomolecular simulations

  1. Biomolecular modelling and simulations

    CERN Document Server

    Karabencheva-Christova, Tatyana

    2014-01-01

    Published continuously since 1944, the Advances in Protein Chemistry and Structural Biology series is the essential resource for protein chemists. Each volume brings forth new information about protocols and analysis of proteins, and each thematically organized volume is guest edited by leading experts in a broad range of protein-related topics. The series describes advances in biomolecular modelling and simulations; chapters are written by authorities in their field and targeted to a wide audience of researchers, specialists, and students. The information provided in the volume is well supported by a number of high-quality illustrations, figures, and tables.

  2. Stochastic Simulation of Biomolecular Reaction Networks Using the Biomolecular Network Simulator Software

    National Research Council Canada - National Science Library

    Frazier, John; Chusak, Yaroslav; Foy, Brent

    2008-01-01

    .... The software uses either exact or approximate stochastic simulation algorithms for generating Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks...

  3. Biomolecular simulations on petascale: promises and challenges

    International Nuclear Information System (INIS)

    Agarwal, Pratul K; Alam, Sadaf R

    2006-01-01

    Proteins work as highly efficient machines at the molecular level and are responsible for a variety of processes in all living cells. There is wide interest in understanding these machines for implications in biochemical/biotechnology industries as well as in health related fields. Over the last century, investigations of proteins based on a variety of experimental techniques have provided a wealth of information. More recently, theoretical and computational modeling using large scale simulations is providing novel insights into the functioning of these machines. The next generation of supercomputers, with petascale computing power, holds great promise as well as challenges for biomolecular simulation scientists. We briefly discuss the progress being made in this area.

  4. Biomolecular simulation: historical picture and future perspectives.

    Science.gov (United States)

    van Gunsteren, Wilfred F; Dolenc, Jozica

    2008-02-01

    Over the last 30 years, computation based on molecular models has come to play an increasingly important role in biology, biological chemistry and biophysics. Since only a very limited number of properties of biomolecular systems are actually accessible to measurement by experimental means, computer simulation complements experiments by providing not only averages, but also distributions and time series of any definable quantity, whether observable or not. Biomolecular simulation may be used (i) to interpret experimental data, (ii) to provoke new experiments, (iii) to replace experiments and (iv) to protect intellectual property. Progress over the last 30 years is sketched and perspectives are outlined for the future.

  5. Development of an informatics infrastructure for data exchange of biomolecular simulations: Architecture, data models and ontology.

    Science.gov (United States)

    Thibault, J C; Roe, D R; Eilbeck, K; Cheatham, T E; Facelli, J C

    2015-01-01

    Biomolecular simulations aim to simulate the structure, dynamics, interactions, and energetics of complex biomolecular systems. With recent advances in hardware, it is now possible to use more complex and accurate models and also to reach time scales that are biologically significant. Molecular simulations have become a standard tool for toxicology and pharmacology research, but organizing and sharing data - both within the same organization and among different ones - remains a substantial challenge. In this paper we review our recent work leading to the development of a comprehensive informatics infrastructure to facilitate the organization and exchange of biomolecular simulation data. Our efforts include the design of data models and dictionary tools that standardize the metadata used to describe biomolecular simulations, the development of a thesaurus and an ontology for computational reasoning when searching for biomolecular simulations in distributed environments, and the development of systems based on these models to manage and share the data at large scale (iBIOMES) and within smaller groups of researchers at the laboratory scale (iBIOMES Lite).

  6. GROMOS++ Software for the Analysis of Biomolecular Simulation Trajectories

    NARCIS (Netherlands)

    Eichenberger, A.P.; Allison, J.R.; Dolenc, J.; Geerke, D.P.; Horta, B.A.C.; Meier, K; Oostenbrink, B.C.; Schmid, N.; Steiner, D; Wang, D.; van Gunsteren, W.F.

    2011-01-01

    GROMOS++ is a set of C++ programs for pre- and postprocessing of molecular dynamics simulation trajectories and as such is part of the GROningen MOlecular Simulation (GROMOS) software for (bio)molecular simulation. It contains more than 70 programs that can be used to prepare data for the production of...

  7. GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations.

    Science.gov (United States)

    Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji

    2015-07-01

    GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.

  8. Biomolecular structure refinement using the GROMOS simulation software

    International Nuclear Information System (INIS)

    Schmid, Nathan; Allison, Jane R.; Dolenc, Jožica; Eichenberger, Andreas P.; Kunz, Anna-Pitschna E.; Gunsteren, Wilfred F. van

    2011-01-01

    Understanding cellular processes requires the molecular structures of biomolecules to be accurately determined. Initial models can be significantly improved by structure refinement techniques. Here, we present the refinement methods and analysis techniques implemented in the GROMOS software for biomolecular simulation. The methodology and some implementation details of the computation of NMR NOE data, 3J-couplings and residual dipolar couplings, of X-ray scattering intensities from crystals and solutions, and of neutron scattering intensities used in GROMOS are described, and refinement strategies and concepts are discussed using example applications. The GROMOS software allows structure refinement combining different types of experimental data with different types of restraining functions, while using a variety of methods to enhance conformational searching and sampling and the thermodynamically calibrated GROMOS force field for biomolecular simulation.

  9. Biomolecular structure refinement using the GROMOS simulation software

    Energy Technology Data Exchange (ETDEWEB)

    Schmid, Nathan; Allison, Jane R.; Dolenc, Jozica; Eichenberger, Andreas P.; Kunz, Anna-Pitschna E.; Gunsteren, Wilfred F. van, E-mail: wfvgn@igc.phys.chem.ethz.ch [Swiss Federal Institute of Technology ETH, Laboratory of Physical Chemistry (Switzerland)

    2011-11-15

    Understanding cellular processes requires the molecular structures of biomolecules to be accurately determined. Initial models can be significantly improved by structure refinement techniques. Here, we present the refinement methods and analysis techniques implemented in the GROMOS software for biomolecular simulation. The methodology and some implementation details of the computation of NMR NOE data, 3J-couplings and residual dipolar couplings, of X-ray scattering intensities from crystals and solutions, and of neutron scattering intensities used in GROMOS are described, and refinement strategies and concepts are discussed using example applications. The GROMOS software allows structure refinement combining different types of experimental data with different types of restraining functions, while using a variety of methods to enhance conformational searching and sampling and the thermodynamically calibrated GROMOS force field for biomolecular simulation.

  10. Application of Hidden Markov Models in Biomolecular Simulations.

    Science.gov (United States)

    Shukla, Saurabh; Shamsi, Zahra; Moffett, Alexander S; Selvam, Balaji; Shukla, Diwakar

    2017-01-01

    Hidden Markov models (HMMs) provide a framework for analyzing large trajectories from biomolecular simulation datasets. HMMs decompose the conformational space of a biological molecule into a finite number of states that interconvert with certain rates. HMMs simplify long-timescale trajectories for human comprehension and allow comparison of simulations with experimental data. In this chapter, we provide an overview of building HMMs for analyzing biomolecular simulation datasets. We demonstrate the procedure for building a hidden Markov model for a Met-enkephalin peptide simulation dataset and compare the timescales of the process.
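
    As a minimal sketch of the machinery this chapter builds on, the snippet below estimates a Markov transition matrix from an already-discretized state trajectory and computes its implied timescales with numpy. It assumes frames have been clustered into states beforehand; the toy data and function names are ours, not the chapter's actual HMM protocol.

    ```python
    # Estimate a row-stochastic transition matrix from a discrete trajectory
    # and compute implied timescales -- the quantities HMM/MSM analyses of
    # MD data are built on. Illustrative sketch only.
    import numpy as np

    def transition_matrix(dtraj, n_states, lag):
        counts = np.zeros((n_states, n_states))
        for a, b in zip(dtraj[:-lag], dtraj[lag:]):
            counts[a, b] += 1                 # count transitions at the lag
        return counts / counts.sum(axis=1, keepdims=True)

    def implied_timescales(T, lag, n=1):
        # t_i = -lag / ln(lambda_i) for the leading nontrivial eigenvalues
        evals = np.sort(np.linalg.eigvals(T).real)[::-1]
        return -lag / np.log(evals[1:n + 1])

    rng = np.random.default_rng(0)
    dtraj = (rng.random(10000) < 0.05).cumsum() % 2   # toy two-state labels
    T = transition_matrix(dtraj, n_states=2, lag=10)
    print(T, implied_timescales(T, lag=10))
    ```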

  11. ANCA: Anharmonic Conformational Analysis of Biomolecular Simulations.

    Science.gov (United States)

    Parvatikar, Akash; Vacaliuc, Gabriel S; Ramanathan, Arvind; Chennubhotla, S Chakra

    2018-05-08

    Anharmonicity in time-dependent conformational fluctuations is a key feature of the functional dynamics of biomolecules. Although anharmonic events are rare, long-timescale (μs-ms and beyond) simulations facilitate probing of such events. We have previously developed quasi-anharmonic analysis to resolve higher-order spatial correlations and characterize anharmonicity in biomolecular simulations. In this article, we have extended this toolbox to resolve higher-order temporal correlations and built a scalable Python package called anharmonic conformational analysis (ANCA). ANCA has modules to: 1) measure anharmonicity in the form of higher-order statistics and its variation as a function of time, 2) output a storyboard representation of the simulations to identify key anharmonic conformational events, and 3) identify putative anharmonic conformational substates and visualize transitions between these substates.
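
    One of the higher-order statistics referred to in module 1 can be illustrated in a few lines: the excess kurtosis of a coordinate's fluctuations vanishes for a harmonic (Gaussian) mode and deviates from zero when multiple substates are present. A hedged sketch on synthetic data, not ANCA's implementation:

    ```python
    # Excess kurtosis as an anharmonicity indicator: zero for a Gaussian
    # (harmonic) fluctuation, nonzero for a bimodal (two-substate) one.
    import numpy as np

    def excess_kurtosis(x):
        d = x - x.mean()
        return (d**4).mean() / (d**2).mean()**2 - 3.0

    rng = np.random.default_rng(1)
    harmonic = rng.normal(size=100_000)                       # single well
    two_state = np.concatenate([rng.normal(-2, 0.5, 50_000),  # two wells
                                rng.normal(+2, 0.5, 50_000)])
    print(excess_kurtosis(harmonic), excess_kurtosis(two_state))
    ```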

  12. A fast mollified impulse method for biomolecular atomistic simulations

    Energy Technology Data Exchange (ETDEWEB)

    Fath, L., E-mail: lukas.fath@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Hochbruck, M., E-mail: marlis.hochbruck@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Singh, C.V., E-mail: chandraveer.singh@utoronto.ca [Department of Materials Science & Engineering, University of Toronto (Canada)

    2017-03-15

    Classical integration methods for molecular dynamics are inherently limited by resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear constraint systems. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and its ease of implementation in standard software packages, without Hessians or constraint solvers. By simulating multiple realistic examples such as peptides, proteins, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computation of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.
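
    To make the fast/slow force splitting concrete, here is a sketch of the plain impulse (r-RESPA) scheme that mollified methods stabilize, applied to a toy one-dimensional system. The corotational filter proposed in the paper is not reproduced, and all forces and parameters are illustrative.

    ```python
    # Impulse multiple-time-stepping: slow forces are applied as half-kicks
    # every outer step; the stiff fast force is integrated with a small
    # inner step by velocity Verlet. Toy 1-D example in reduced units.
    def f_fast(x):
        return -100.0 * x        # stiff, bond-like force

    def f_slow(x):
        return -1.0 * x          # soft, long-range-like force

    def impulse_step(x, v, dt_outer, n_inner):
        dt = dt_outer / n_inner
        v += 0.5 * dt_outer * f_slow(x)      # slow impulse (half kick)
        for _ in range(n_inner):             # fast inner loop
            v += 0.5 * dt * f_fast(x)
            x += dt * v
            v += 0.5 * dt * f_fast(x)
        v += 0.5 * dt_outer * f_slow(x)      # slow impulse (half kick)
        return x, v

    x, v = 1.0, 0.0
    for _ in range(1000):
        x, v = impulse_step(x, v, dt_outer=0.05, n_inner=10)
    print(x, v)
    ```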

  13. Theoretical restrictions on longest implicit time scales in Markov state models of biomolecular dynamics

    Science.gov (United States)

    Sinitskiy, Anton V.; Pande, Vijay S.

    2018-01-01

    Markov state models (MSMs) have been widely used to analyze computer simulations of various biomolecular systems. They can capture conformational transitions much slower than an average or maximal length of a single molecular dynamics (MD) trajectory from the set of trajectories used to build the MSM. A rule of thumb claiming that the slowest implicit time scale captured by an MSM should be comparable by the order of magnitude to the aggregate duration of all MD trajectories used to build this MSM has been known in the field. However, this rule has never been formally proved. In this work, we present analytical results for the slowest time scale in several types of MSMs, supporting the above rule. We conclude that the slowest implicit time scale equals the product of the aggregate sampling and four factors that quantify: (1) how much statistics on the conformational transitions corresponding to the longest implicit time scale is available, (2) how good the sampling of the destination Markov state is, (3) the gain in statistics from using a sliding window for counting transitions between Markov states, and (4) a bias in the estimate of the implicit time scale arising from finite sampling of the conformational transitions. We demonstrate that in many practically important cases all these four factors are on the order of unity, and we analyze possible scenarios that could lead to their significant deviation from unity. Overall, we provide for the first time analytical results on the slowest time scales captured by MSMs. These results can guide further practical applications of MSMs to biomolecular dynamics and allow for higher computational efficiency of simulations.
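
    For reference, the implied-timescale relation this analysis rests on, written in standard MSM notation; the factor decomposition schematically restates the abstract's result, with symbols of our choosing:

    ```latex
    % Implied timescale from the i-th eigenvalue of the transition matrix T(\tau):
    t_i = -\frac{\tau}{\ln \lambda_i(\tau)}
    % The paper's claim, schematically: the slowest implied timescale equals the
    % aggregate sampling time times four factors that are typically of order unity,
    t_2 \approx T_{\mathrm{agg}} \, f_1 f_2 f_3 f_4 .
    ```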

  14. Modeling, Analysis, Simulation, and Synthesis of Biomolecular Networks

    National Research Council Canada - National Science Library

    Ruben, Harvey; Kumar, Vijay; Sokolsky, Oleg

    2006-01-01

    ...) a first example of reachability analysis applied to a biomolecular system (lactose induction), 4) a model of tetracycline resistance that discriminates between two possible mechanisms for tetracycline diffusion through the cell membrane, and 5...

  15. New Distributed Multipole Methods for Accurate Electrostatics for Large-Scale Biomolecular Simulations

    Science.gov (United States)

    Sagui, Celeste

    2006-03-01

    An accurate and numerically efficient treatment of electrostatics is essential for biomolecular simulations, as electrostatics stabilizes much of the delicate 3-D structure associated with biomolecules. Currently, force fields such as AMBER and CHARMM assign ``partial charges'' to every atom in a simulation in order to model the interatomic electrostatic forces, so the calculation of the electrostatics rapidly becomes the computational bottleneck in large-scale simulations. There are two main issues associated with the current treatment of classical electrostatics: (i) how does one eliminate, in a physically meaningful way, the artifacts associated with the point charges used in the force fields (e.g., the underdetermined nature of the current RESP fitting procedure for large, flexible molecules)? (ii) how does one efficiently simulate the very costly long-range electrostatic interactions? Recently, we have dealt with both of these challenges as follows. In order to improve the description of molecular electrostatic potentials (MEPs), a new distributed multipole analysis based on localized functions -- Wannier, Boys, and Edmiston-Ruedenberg -- was introduced, which allows for a first-principles calculation of the partial charges and multipoles. Through a suitable generalization of the particle mesh Ewald (PME) and multigrid methods, one can treat electrostatic multipoles all the way up to hexadecapoles without prohibitive extra cost. The importance of these methods for large-scale simulations will be discussed and exemplified by simulations of polarizable DNA models.
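
    The site quantities such an analysis assigns can be illustrated with point charges: total charge, dipole vector and traceless quadrupole tensor computed with numpy. This is only a hedged illustration; the paper derives the moments from localized first-principles orbitals rather than fixed point charges, and the water-like geometry below is hypothetical.

    ```python
    # Low-order multipole moments of a point-charge distribution.
    import numpy as np

    def multipole_moments(q, r):
        q, r = np.asarray(q, float), np.asarray(r, float)
        Q = q.sum()                                   # monopole (net charge)
        mu = (q[:, None] * r).sum(axis=0)             # dipole vector
        r2 = (r * r).sum(axis=1)
        theta = sum(qi * (3 * np.outer(ri, ri) - r2i * np.eye(3))
                    for qi, ri, r2i in zip(q, r, r2)) / 2.0  # traceless quadrupole
        return Q, mu, theta

    q = [-0.834, 0.417, 0.417]                        # TIP3P-like charges
    r = [[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]]  # angstrom
    print(multipole_moments(q, r))
    ```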

  16. RPYFMM: Parallel adaptive fast multipole method for Rotne-Prager-Yamakawa tensor in biomolecular hydrodynamics simulations

    Science.gov (United States)

    Guan, W.; Cheng, X.; Huang, J.; Huber, G.; Li, W.; McCammon, J. A.; Zhang, B.

    2018-06-01

    RPYFMM is a software package for the efficient evaluation of the potential field governed by the Rotne-Prager-Yamakawa (RPY) tensor interactions in biomolecular hydrodynamics simulations. In our algorithm, the RPY tensor is decomposed as a linear combination of four Laplace interactions, each of which is evaluated using the adaptive fast multipole method (FMM) (Greengard and Rokhlin, 1997) where the exponential expansions are applied to diagonalize the multipole-to-local translation operators. RPYFMM offers a unified execution on both shared and distributed memory computers by leveraging the DASHMM library (DeBuhr et al., 2016, 2018). Preliminary numerical results show that the interactions for a molecular system of 15 million particles (beads) can be computed within one second on a Cray XC30 cluster using 12,288 cores, while achieving approximately 54% strong-scaling efficiency.
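
    As a reference point for what the FMM accelerates, below is a direct O(N^2) evaluation of the RPY mobility contraction for nonoverlapping beads (r >= 2a). The overlap correction and all FMM machinery are omitted, and bead positions, forces and units are illustrative.

    ```python
    # Direct-sum Rotne-Prager-Yamakawa contraction u_i = sum_j M_ij f_j,
    # valid for bead separations r >= 2a.
    import numpy as np

    def rpy_apply(x, f, a, eta=1.0):
        u = f / (6 * np.pi * eta * a)                 # self mobility term
        for i in range(len(x)):
            for j in range(len(x)):
                if i == j:
                    continue
                d = x[i] - x[j]
                r = np.linalg.norm(d)
                rh = d / r
                c = 1.0 / (8 * np.pi * eta * r)
                u[i] += c * ((1 + 2 * a * a / (3 * r * r)) * f[j]
                             + (1 - 2 * a * a / (r * r)) * rh * np.dot(rh, f[j]))
        return u

    # 100 beads on a grid spaced 4a apart, so r >= 2a always holds
    x = 4.0 * np.array([[i, j, k] for i in range(5)
                        for j in range(5) for k in range(4)], float)
    f = np.random.default_rng(2).normal(size=(100, 3))
    print(rpy_apply(x, f, a=1.0)[0])
    ```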

  17. Electrostatics in biomolecular simulations: where are we now and where are we heading?

    NARCIS (Netherlands)

    Karttunen, M.E.J.; Rottler, J.; Vattulainen, I.; Sagui, C.

    2008-01-01

    Chapter 2. In this review, we discuss current methods and developments in the treatment of electrostatic interactions in biomolecular and soft matter simulations. We review the current 'work horses', namely Ewald summation based methods such as the Particle-Mesh Ewald, and others, and also newer...

  18. Biochemical Stability Analysis of Nano Scaled Contrast Agents Used in Biomolecular Imaging Detection of Tumor Cells

    Science.gov (United States)

    Kim, Jennifer; Kyung, Richard

    Imaging contrast agents are materials used to improve the visibility of internal body structures in the imaging process. Many agents that are used for contrast enhancement are now studied empirically and computationally by researchers. Among various imaging techniques, magnetic resonance imaging (MRI) has become a major diagnostic tool in many clinical specialties due to its non-invasive character and its safety with regard to ionizing radiation exposure. Recently, researchers have prepared aqueous fullerene nanoparticles using electrochemical methods. In this paper, computational simulations of the thermodynamic stabilities of nanoscale contrast agents that can be used in biomolecular imaging detection of tumor cells are presented, using nanomaterials such as fluorescent functionalized fullerenes. In addition, the stability and safety of different types of contrast agents composed of metal oxides a, b, and c are tested in the imaging process. Through analysis of the computational simulations, the stabilities of the contrast agents, determined by the optimized energies of the conformations, are presented and the resulting numerical data are compared. In addition, density functional theory (DFT) is used to model the electronic properties of the compounds.

  19. Bookshelf: a simple curation system for the storage of biomolecular simulation data.

    Science.gov (United States)

    Vohra, Shabana; Hall, Benjamin A; Holdbrook, Daniel A; Khalid, Syma; Biggin, Philip C

    2010-01-01

    Molecular dynamics simulations can now routinely generate data sets several hundreds of gigabytes in size. The ability to generate this data has become easier over recent years and the rate of data production is likely to increase rapidly in the near future. One major problem associated with this vast amount of data is how to store it in a way that allows it to be easily retrieved at a later date. The obvious answer to this problem is a database. However, a key issue in the development and maintenance of such a database is its sustainability, which in turn depends on the ease of the deposition and retrieval process. Encouraging users to care about metadata is difficult, and thus the success of any storage system will ultimately depend on how widely end-users adopt it. In this respect we suggest that even a minimal amount of metadata, if stored in a sensible fashion, is useful, if only at the level of individual research groups. We discuss here a simple database system, which we call 'Bookshelf', that uses Python in conjunction with a MySQL database to provide an extremely simple system for curating and keeping track of molecular simulation data. It provides a user-friendly, scriptable solution to a common problem among biomolecular simulation laboratories: the storage, logging and subsequent retrieval of large numbers of simulations. Download URL: http://sbcb.bioch.ox.ac.uk/bookshelf/
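
    The minimal-metadata idea can be sketched in a few lines. Bookshelf itself pairs Python with a MySQL database; the sketch below substitutes the standard-library sqlite3 module, and the table and field names are hypothetical rather than Bookshelf's own schema.

    ```python
    # A tiny simulation-metadata catalogue: one row per trajectory,
    # storing just enough metadata to find the data again later.
    import sqlite3

    conn = sqlite3.connect("simulations.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS simulation (
        id INTEGER PRIMARY KEY,
        system TEXT, software TEXT, length_ns REAL,
        trajectory_path TEXT, notes TEXT)""")
    conn.execute(
        "INSERT INTO simulation (system, software, length_ns, trajectory_path, notes)"
        " VALUES (?, ?, ?, ?, ?)",
        ("OmpA in DMPC bilayer", "GROMACS", 100.0,
         "/data/ompa/run1.xtc", "equilibrated 10 ns"))   # hypothetical entry
    conn.commit()
    for row in conn.execute("SELECT system, length_ns FROM simulation"):
        print(row)
    ```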

  20. SPATKIN: a simulator for rule-based modeling of biomolecular site dynamics on surfaces.

    Science.gov (United States)

    Kochanczyk, Marek; Hlavacek, William S; Lipniacki, Tomasz

    2017-11-15

    Rule-based modeling is a powerful approach for studying biomolecular site dynamics. Here, we present SPATKIN, a general-purpose simulator for rule-based modeling in two spatial dimensions. The simulation algorithm is a lattice-based method that tracks the Brownian motion of individual molecules and the stochastic firing of rule-defined reaction events. Because rules are used as event generators, the algorithm is network-free, meaning that it does not require generating the complete reaction network implied by the rules prior to simulation. In a simulation, each molecule (or complex of molecules) is taken to occupy a single lattice site that cannot be shared with another molecule (or complex). SPATKIN is capable of simulating a wide array of membrane-associated processes, including adsorption, desorption and crowding. Models are specified using an extension of the BioNetGen language, which allows the spatial features of the simulated process to be taken into account. The C++ source code for SPATKIN is distributed freely under the terms of the GNU GPLv3 license. The source code can be compiled for execution on popular platforms (Windows, Mac and Linux); an installer for 64-bit Windows and a macOS app are available. The source code and precompiled binaries are available at the SPATKIN Web site (http://pmbm.ippt.pan.pl/software/spatkin). Contact: spatkin.simulator@gmail.com. Supplementary data are available at Bioinformatics online.
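
    A toy version of the lattice rules described above, under the single-occupancy assumption: molecules perform nearest-neighbor hops on a 2-D periodic lattice, and moves into occupied sites are rejected (crowding). This is our illustration only; SPATKIN's rule-defined reaction events are not reproduced.

    ```python
    # Lattice Brownian motion with excluded volume on a periodic 2-D grid.
    import numpy as np

    rng = np.random.default_rng(3)
    L, n_mol = 32, 200
    occ = np.zeros((L, L), dtype=bool)
    pos = []
    while len(pos) < n_mol:                    # place molecules on distinct sites
        p = tuple(rng.integers(0, L, 2))
        if not occ[p]:
            occ[p] = True
            pos.append(list(p))

    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(10000):
        i = rng.integers(n_mol)                # pick a random molecule
        dx, dy = moves[rng.integers(4)]
        new = ((pos[i][0] + dx) % L, (pos[i][1] + dy) % L)
        if not occ[new]:                       # reject hops into occupied sites
            occ[tuple(pos[i])], occ[new] = False, True
            pos[i] = list(new)
    print(int(occ.sum()), "molecules on the lattice")
    ```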

  1. A coarse-grained model for the simulations of biomolecular interactions in cellular environments

    International Nuclear Information System (INIS)

    Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao

    2014-01-01

    The interactions of biomolecules constitute the key steps of cellular functions. However, in vivo binding properties differ significantly from their in vitro measurements due to the heterogeneity of cellular environments. Here we introduce a coarse-grained model based on a rigid-body representation to study how factors such as cellular crowding and membrane confinement affect molecular binding. The macroscopic parameters, such as the equilibrium constant and the kinetic rate constant, are calibrated by adjusting the microscopic coefficients used in the numerical simulations. By changing these model parameters, which are experimentally approachable, we are able to study the kinetic and thermodynamic properties of molecular binding, as well as the effects caused by specific cellular environments. We investigate the volumetric effects of crowded intracellular space on biomolecular diffusion and diffusion-limited reactions. Furthermore, because the binding constants of membrane proteins are currently difficult to measure, we provide quantitative estimates of how the binding of membrane proteins deviates from that of soluble proteins under different degrees of membrane confinement. The simulation results provide biological insights into the functions of membrane receptors on cell surfaces. Overall, our studies establish a connection between the details of molecular interactions and the heterogeneity of cellular environments.

  2. Poisson-Nernst-Planck Equations for Simulating Biomolecular Diffusion-Reaction Processes I: Finite Element Solutions.

    Science.gov (United States)

    Lu, Benzhuo; Holst, Michael J; McCammon, J Andrew; Zhou, Y C

    2010-09-20

    In this paper we developed accurate finite element methods for solving 3-D Poisson-Nernst-Planck (PNP) equations with singular permanent charges for electrodiffusion in solvated biomolecular systems. The electrostatic Poisson equation was defined in the biomolecules and in the solvent, while the Nernst-Planck equation was defined only in the solvent. We applied a stable regularization scheme to remove the singular component of the electrostatic potential induced by the permanent charges inside biomolecules, and formulated regular, well-posed PNP equations. An inexact-Newton method was used to solve the coupled nonlinear elliptic equations for the steady problems; while an Adams-Bashforth-Crank-Nicolson method was devised for time integration for the unsteady electrodiffusion. We numerically investigated the conditioning of the stiffness matrices for the finite element approximations of the two formulations of the Nernst-Planck equation, and theoretically proved that the transformed formulation is always associated with an ill-conditioned stiffness matrix. We also studied the electroneutrality of the solution and its relation with the boundary conditions on the molecular surface, and concluded that a large net charge concentration is always present near the molecular surface due to the presence of multiple species of charged particles in the solution. The numerical methods are shown to be accurate and stable by various test problems, and are applicable to real large-scale biophysical electrodiffusion problems.
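
    For reference, the steady-state PNP system discussed here takes the standard form below (notation ours): a Nernst-Planck flux balance for each mobile species in the solvent, coupled to a Poisson equation carrying the fixed biomolecular charges.

    ```latex
    % Nernst-Planck equation for species i with diffusivity D_i and charge q_i:
    \nabla \cdot \left[ D_i \left( \nabla c_i + \frac{q_i c_i}{k_B T} \nabla \phi \right) \right] = 0
    % Poisson equation with permittivity \epsilon and fixed charge density \rho_f:
    -\nabla \cdot \left( \epsilon \nabla \phi \right) = \rho_f + \sum_i q_i c_i
    ```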

  3. Sop-GPU: accelerating biomolecular simulations in the centisecond timescale using graphics processors.

    Science.gov (United States)

    Zhmurov, A; Dima, R I; Kholodov, Y; Barsegov, V

    2010-11-01

    Theoretical exploration of fundamental biological processes involving the forced unraveling of multimeric proteins, the sliding motion in protein fibers and the mechanical deformation of biomolecular assemblies under physiological force loads is challenging even for distributed computing systems. Using a Cα-based coarse-grained self-organized polymer (SOP) model, we implemented Langevin simulations of proteins on graphics processing units (the SOP-GPU program). We assessed the computational performance of an end-to-end application of the program, where all the steps of the algorithm are running on a GPU, by profiling the simulation time and memory usage for a number of test systems. The ∼90-fold computational speedup on a GPU, compared with an optimized central processing unit program, enabled us to follow the dynamics in the centisecond timescale, and to obtain the force-extension profiles using experimental pulling speeds (v_f = 1-10 μm/s) employed in atomic force microscopy and in optical tweezers-based dynamic force spectroscopy. We found that the mechanical molecular response critically depends on the conditions of force application and that the kinetics and pathways for unfolding change drastically even upon a modest 10-fold increase in v_f. This implies that, to accurately resolve the free energy landscape and to relate the results of single-molecule experiments in vitro and in silico, molecular simulations should be carried out under the experimentally relevant force loads. This can be accomplished in reasonable wall-clock time for biomolecules as large as 10^5 residues using the SOP-GPU package.
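
    The kind of dynamics propagated here can be sketched with an overdamped Langevin update for a single coarse-grained bead pulled by a moving harmonic spring, mimicking a constant-speed force-spectroscopy protocol. The SOP force field and GPU kernels are not reproduced; units and parameters are reduced and of our choosing.

    ```python
    # Overdamped (Brownian) Langevin dynamics with a moving pulling spring:
    # x += (dt/gamma) * F + sqrt(2 kT dt / gamma) * N(0, 1)
    import numpy as np

    def langevin_pull(steps, dt=0.01, gamma=1.0, kT=1.0, k_spring=0.1, v_f=0.001):
        rng = np.random.default_rng(4)
        x, traj = 0.0, []
        for n in range(steps):
            f = k_spring * (v_f * n * dt - x)          # spring anchor moves at v_f
            x += dt * f / gamma + np.sqrt(2 * kT * dt / gamma) * rng.normal()
            traj.append(x)
        return np.array(traj)

    print(langevin_pull(10000)[-5:])
    ```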

  4. Optimal use of data in parallel tempering simulations for the construction of discrete-state Markov models of biomolecular dynamics.

    Science.gov (United States)

    Prinz, Jan-Hendrik; Chodera, John D; Pande, Vijay S; Swope, William C; Smith, Jeremy C; Noé, Frank

    2011-06-28

    Parallel tempering (PT) molecular dynamics simulations have been extensively investigated as a means of efficient sampling of the configurations of biomolecular systems. Recent work has demonstrated how the short physical trajectories generated in PT simulations of biomolecules can be used to construct the Markov models describing biomolecular dynamics at each simulated temperature. While this approach describes the temperature-dependent kinetics, it does not make optimal use of all available PT data, instead estimating the rates at a given temperature using only data from that temperature. This can be problematic, as some relevant transitions or states may not be sufficiently sampled at the temperature of interest, but might be readily sampled at nearby temperatures. Further, the comparison of temperature-dependent properties can suffer from the false assumption that data collected from different temperatures are uncorrelated. We propose here a strategy in which, by a simple modification of the PT protocol, the harvested trajectories can be reweighted, permitting data from all temperatures to contribute to the estimated kinetic model. The method reduces the statistical uncertainty in the kinetic model relative to the single temperature approach and provides estimates of transition probabilities even for transitions not observed at the temperature of interest. Further, the method allows the kinetics to be estimated at temperatures other than those at which simulations were run. We illustrate this method by applying it to the generation of a Markov model of the conformational dynamics of the solvated terminally blocked alanine peptide.
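
    The underlying replica-exchange move accepts a swap between neighboring temperatures with probability min(1, exp[(beta_i - beta_j)(U_i - U_j)]). The sketch below shows only this standard acceptance step on hypothetical replica energies; the paper's cross-temperature reweighting is not reproduced.

    ```python
    # Parallel-tempering swap attempts between neighboring replicas.
    import numpy as np

    def attempt_swap(beta_i, beta_j, U_i, U_j, rng):
        delta = (beta_i - beta_j) * (U_i - U_j)
        return np.log(rng.random()) < min(0.0, delta)    # Metropolis rule

    rng = np.random.default_rng(5)
    betas = 1.0 / np.array([300.0, 320.0, 342.0, 365.0])  # roughly geometric ladder
    U = rng.normal(-100.0, 5.0, size=4)                   # hypothetical energies
    for k in range(3):
        if attempt_swap(betas[k], betas[k + 1], U[k], U[k + 1], rng):
            U[k], U[k + 1] = U[k + 1], U[k]               # exchange configurations
    print(U)
    ```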

  5. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering

    Science.gov (United States)

    Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.

    2014-12-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices.

  6. Converting biomolecular modelling data based on an XML representation.

    Science.gov (United States)

    Sun, Yudong; McKeever, Steve

    2008-08-25

    Biomolecular modelling has provided computational simulation based methods for investigating biological processes from the quantum chemical to the cellular level. Modelling such microscopic processes requires an atomic description of the biological system and proceeds in fine timesteps. Consequently the simulations are extremely computationally demanding. To tackle this limitation, different biomolecular models have to be integrated in order to achieve high-performance simulations. The integration of diverse biomolecular models requires converting molecular data between the different data representations of the different models. This data conversion is often non-trivial, requires extensive human input and is inevitably error prone. In this paper we present an automated data conversion method for biomolecular simulations between molecular dynamics and quantum mechanics/molecular mechanics models. Our approach is developed around an XML data representation called BioSimML (Biomolecular Simulation Markup Language). BioSimML provides a domain-specific data representation for biomolecular modelling which can efficiently support data interoperability between different biomolecular simulation models and data formats.
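
    To give the flavor of such a conversion step, the sketch below reads an XML molecule description and emits a PDB-like line per atom. The element names are hypothetical placeholders chosen for illustration, not the actual BioSimML schema.

    ```python
    # Read atoms from a toy XML description and print them in another format.
    import xml.etree.ElementTree as ET

    doc = ET.fromstring("""
    <simulation>
      <atom id="1" element="O" x="0.000" y="0.000" z="0.000"/>
      <atom id="2" element="H" x="0.957" y="0.000" z="0.000"/>
    </simulation>""")

    for atom in doc.iter("atom"):
        print("ATOM  {:>5} {:>2}  {:>8}{:>8}{:>8}".format(
            atom.get("id"), atom.get("element"),
            atom.get("x"), atom.get("y"), atom.get("z")))
    ```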

  7. Exploring Biomolecular Interactions Through Single-Molecule Force Spectroscopy and Computational Simulation

    OpenAIRE

    Yang, Darren

    2016-01-01

    Molecular interactions between cellular components such as proteins and nucleic acids govern the fundamental processes of living systems. Technological advancements in the past decade have allowed the characterization of these molecular interactions at the single-molecule level with high temporal and spatial resolution. Simultaneously, progress in computer simulation has enabled theoretical research at the atomistic level, assisting in the interpretation of experimental results. This thesis...

  8. H++ 3.0: automating pK prediction and the preparation of biomolecular structures for atomistic molecular modeling and simulations.

    Science.gov (United States)

    Anandakrishnan, Ramu; Aguilar, Boris; Onufriev, Alexey V

    2012-07-01

    The accuracy of atomistic biomolecular modeling and simulation studies depends on the accuracy of the input structures. Preparing these structures for an atomistic modeling task, such as a molecular dynamics (MD) simulation, can involve the use of a variety of different tools for: correcting errors, adding missing atoms, filling valences with hydrogens, predicting pK values for titratable amino acids, assigning predefined partial charges and radii to all atoms, and generating force field parameter/topology files for MD. Identifying, installing and effectively using the appropriate tools for each of these tasks can be difficult for novice users and time-consuming for experienced ones. H++ (http://biophysics.cs.vt.edu/) is a free open-source web server that automates the above key steps in the preparation of biomolecular structures for molecular modeling and simulations. H++ also performs extensive error and consistency checking, providing error/warning messages together with suggested corrections. In addition to numerous minor improvements, the latest version of H++ includes several new capabilities and options: fixing erroneous (flipped) side chain conformations for HIS, GLN and ASN, including a ligand in the input structure, processing nucleic acid structures, and generating a solvent box with a specified number of common ions for explicit solvent MD.
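
    Once a pK has been predicted, the protonation state at a given pH follows from the Henderson-Hasselbalch relation; a one-line sketch (H++'s own continuum-electrostatics machinery is far more involved):

    ```python
    # Fraction of an acid protonated at a given pH from its pKa.
    def frac_protonated(pKa, pH):
        return 1.0 / (1.0 + 10.0 ** (pH - pKa))

    print(frac_protonated(6.5, 7.0))   # a HIS-like residue near neutral pH
    ```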

  9. Biomolecular Science (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2012-04-01

    A brief fact sheet about NREL Photobiology and Biomolecular Science. The research goal of NREL's Biomolecular Science is to enable cost-competitive advanced lignocellulosic biofuels production by understanding the science critical for overcoming biomass recalcitrance and developing new product and product intermediate pathways. NREL's Photobiology focuses on understanding the capture of solar energy in photosynthetic systems and its use in converting carbon dioxide and water directly into hydrogen and advanced biofuels.

  10. Perspective: Markov models for long-timescale biomolecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Schwantes, C. R.; McGibbon, R. T. [Department of Chemistry, Stanford University, Stanford, California 94305 (United States); Pande, V. S., E-mail: pande@stanford.edu [Department of Chemistry, Stanford University, Stanford, California 94305 (United States); Department of Computer Science, Stanford University, Stanford, California 94305 (United States); Department of Structural Biology, Stanford University, Stanford, California 94305 (United States); Biophysics Program, Stanford University, Stanford, California 94305 (United States)

    2014-09-07

    Molecular dynamics simulations have the potential to provide atomic-level detail and insight to important questions in chemical physics that cannot be observed in typical experiments. However, simply generating a long trajectory is insufficient, as researchers must be able to transform the data in a simulation trajectory into specific scientific insights. Although this analysis step has often been taken for granted, it deserves further attention as large-scale simulations become increasingly routine. In this perspective, we discuss the application of Markov models to the analysis of large-scale biomolecular simulations. We draw attention to recent improvements in the construction of these models as well as several important open issues. In addition, we highlight recent theoretical advances that pave the way for a new generation of models of molecular kinetics.

  11. Perspective: Markov models for long-timescale biomolecular dynamics

    International Nuclear Information System (INIS)

    Schwantes, C. R.; McGibbon, R. T.; Pande, V. S.

    2014-01-01

    Molecular dynamics simulations have the potential to provide atomic-level detail and insight to important questions in chemical physics that cannot be observed in typical experiments. However, simply generating a long trajectory is insufficient, as researchers must be able to transform the data in a simulation trajectory into specific scientific insights. Although this analysis step has often been taken for granted, it deserves further attention as large-scale simulations become increasingly routine. In this perspective, we discuss the application of Markov models to the analysis of large-scale biomolecular simulations. We draw attention to recent improvements in the construction of these models as well as several important open issues. In addition, we highlight recent theoretical advances that pave the way for a new generation of models of molecular kinetics.

  12. Converting Biomolecular Modelling Data Based on an XML Representation

    Directory of Open Access Journals (Sweden)

    Sun Yudong

    2008-06-01

    Biomolecular modelling has provided computational simulation based methods for investigating biological processes from the quantum chemical to the cellular level. Modelling such microscopic processes requires an atomic description of the biological system and proceeds in fine timesteps. Consequently the simulations are extremely computationally demanding. To tackle this limitation, different biomolecular models have to be integrated in order to achieve high-performance simulations. The integration of diverse biomolecular models requires converting molecular data between the different data representations of the different models. This data conversion is often non-trivial, requires extensive human input and is inevitably error prone. In this paper we present an automated data conversion method for biomolecular simulations between molecular dynamics and quantum mechanics/molecular mechanics models. Our approach is developed around an XML data representation called BioSimML (Biomolecular Simulation Markup Language). BioSimML provides a domain-specific data representation for biomolecular modelling which can efficiently support data interoperability between different biomolecular simulation models and data formats.

  13. The Adaptive Multi-scale Simulation Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, William R. [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2015-09-01

    The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation metadata, AMSI allows for minimally intrusive work to adapt existing single-scale simulations for use in multi-scale simulations. Support for dynamic runtime operations, such as single- and multi-scale adaptive properties, is a key focus of AMSI. Particular attention has been devoted to the development of scale-sensitive load-balancing operations, which allow single-scale simulations incorporated into a multi-scale simulation using AMSI to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.

  14. Biomolecular condensates: organizers of cellular biochemistry.

    Science.gov (United States)

    Banani, Salman F; Lee, Hyun O; Hyman, Anthony A; Rosen, Michael K

    2017-05-01

    Biomolecular condensates are micron-scale compartments in eukaryotic cells that lack surrounding membranes but function to concentrate proteins and nucleic acids. These condensates are involved in diverse processes, including RNA metabolism, ribosome biogenesis, the DNA damage response and signal transduction. Recent studies have shown that liquid-liquid phase separation driven by multivalent macromolecular interactions is an important organizing principle for biomolecular condensates. With this physical framework, it is now possible to explain how the assembly, composition, physical properties and biochemical and cellular functions of these important structures are regulated.

  15. Prediction of Biomolecular Complexes

    KAUST Repository

    Vangone, Anna

    2017-04-12

    Almost all processes in living organisms occur through specific interactions between biomolecules. Any dysfunction of those interactions can lead to pathological events. Understanding such interactions is therefore a crucial step in the investigation of biological systems and a starting point for drug design. In recent years, experimental studies have been devoted to unravelling the principles of biomolecular interactions; however, due to experimental difficulties in solving the three-dimensional (3D) structure of biomolecular complexes, the number of available, high-resolution experimental 3D structures does not fulfill the current needs. Therefore, complementary computational approaches to model such interactions are necessary to assist experimentalists, since a full understanding of how biomolecules interact (and consequently how they perform their function) only comes from 3D structures, which provide crucial atomic details about binding and recognition processes. In this chapter we review approaches to predict biomolecular complexes, introducing the concept of molecular docking, a technique which uses a combination of geometric, steric and energetic considerations to predict the 3D structure of a biological complex starting from the individual structures of its constituent parts. We provide a mini-guide to docking concepts, its potential and challenges, along with post-docking analysis and a list of related software.

  16. Prediction of Biomolecular Complexes

    KAUST Repository

    Vangone, Anna; Oliva, Romina; Cavallo, Luigi; Bonvin, Alexandre M. J. J.

    2017-01-01

    Almost all processes in living organisms occur through specific interactions between biomolecules. Any dysfunction of those interactions can lead to pathological events. Understanding such interactions is therefore a crucial step in the investigation of biological systems and a starting point for drug design. In recent years, experimental studies have been devoted to unravelling the principles of biomolecular interactions; however, due to experimental difficulties in solving the three-dimensional (3D) structure of biomolecular complexes, the number of available, high-resolution experimental 3D structures does not fulfill the current needs. Therefore, complementary computational approaches to model such interactions are necessary to assist experimentalists, since a full understanding of how biomolecules interact (and consequently how they perform their function) only comes from 3D structures, which provide crucial atomic details about binding and recognition processes. In this chapter we review approaches to predict biomolecular complexes, introducing the concept of molecular docking, a technique which uses a combination of geometric, steric and energetic considerations to predict the 3D structure of a biological complex starting from the individual structures of its constituent parts. We provide a mini-guide to docking concepts, its potential and challenges, along with post-docking analysis and a list of related software.

  17. ORAC: a molecular dynamics simulation program to explore free energy surfaces in biomolecular systems at the atomistic level.

    Science.gov (United States)

    Marsili, Simone; Signorini, Giorgio Federico; Chelli, Riccardo; Marchi, Massimo; Procacci, Piero

    2010-04-15

    We present the new release of the ORAC engine (Procacci et al., J Comput Chem 1997, 18, 1834), a FORTRAN suite to simulate complex biosystems at the atomistic level. The previous release of the ORAC code included multiple time step integration, the smooth particle mesh Ewald method, and constant pressure and constant temperature simulations. The present release has been supplemented with the most advanced techniques for enhanced sampling in atomistic systems, including replica exchange with solute tempering, metadynamics and steered molecular dynamics. All these computational technologies have been implemented for parallel architectures using the standard MPI communication protocol. ORAC is an open-source program distributed free of charge under the GNU General Public License (GPL) at http://www.chim.unifi.it/orac.

  18. Programming in biomolecular computation

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2011-01-01

    Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We identify a number of common features in programming that seem conspicuously absent from the literature on biomolecular computing; to partially redress this absence, we introduce a model of computation that is evidently programmable, by programs reminiscent of low-level computer machine code, and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only executable, but are also compilable and interpretable. It is universal: all computable functions can be computed (in natural ways...

  19. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    The recent trend toward large-scale simulations of fusion plasmas and processing plasmas is briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to analyses of processing plasmas. (author)

  20. Thermodynamic properties of water solvating biomolecular surfaces

    Science.gov (United States)

    Heyden, Matthias

    Changes in the potential energy and entropy of water molecules hydrating biomolecular interfaces play a significant role in biomolecular solubility and association. Free energy perturbation and thermodynamic integration methods allow calculation of free energy differences between two states from simulations. However, these methods are computationally demanding and do not provide insights into individual thermodynamic contributions, i.e. changes in the solvent energy or entropy. Here, we employ methods to spatially resolve distributions of the thermodynamic properties of hydration water in the vicinity of biomolecular surfaces. This allows direct insights into the thermodynamic signatures of the hydration of hydrophobic and hydrophilic solvent-accessible sites of proteins and small molecules, and comparisons to ideal model surfaces. We correlate dynamic properties of hydration water molecules, i.e. translational and rotational mobility, with their thermodynamics. The latter can be used as a guide to extract thermodynamic information from experimental measurements of site-resolved water dynamics. Further, we study energy-entropy compensation of water at different hydration sites of biomolecular surfaces. This work is supported by the Cluster of Excellence RESOLV (EXC 1069) funded by the Deutsche Forschungsgemeinschaft.
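
    The free energy perturbation route mentioned above rests on Zwanzig's identity, dF = -kT ln <exp(-dU/kT)>_0. A hedged numpy sketch on synthetic Gaussian energy differences, for which the analytic answer is mu - sigma^2/(2 kT):

    ```python
    # Free energy perturbation via a numerically stable exponential average.
    import numpy as np

    def fep_delta_F(dU, kT=1.0):
        dU = np.asarray(dU, float)
        # log <exp(-dU/kT)> computed with logaddexp to avoid overflow
        log_mean = np.logaddexp.reduce(-dU / kT) - np.log(len(dU))
        return -kT * log_mean

    rng = np.random.default_rng(6)
    dU = rng.normal(2.0, 1.0, 100_000)   # synthetic energy differences (kT units)
    print(fep_delta_F(dU))               # analytic value: 2.0 - 0.5 = 1.5 kT
    ```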

  1. Biomolecular engineering for nanobio/bionanotechnology

    Science.gov (United States)

    Nagamune, Teruyuki

    2017-04-01

    Biomolecular engineering can be used to purposefully manipulate biomolecules, such as peptides, proteins, nucleic acids and lipids, within the framework of the relations among their structures, functions and properties, as well as their applicability to such areas as developing novel biomaterials, biosensing, bioimaging, and clinical diagnostics and therapeutics. Nanotechnology can also be used to design and tune the sizes, shapes, properties and functionality of nanomaterials. As such, there are considerable overlaps between nanotechnology and biomolecular engineering, in that both are concerned with the structure and behavior of materials on the nanometer scale or smaller. Therefore, in combination with nanotechnology, biomolecular engineering is expected to open up new fields of nanobio/bionanotechnology and to contribute to the development of novel nanobiomaterials, nanobiodevices and nanobiosystems. This review highlights recent studies using engineered biological molecules (e.g., oligonucleotides, peptides, proteins, enzymes, polysaccharides, lipids, biological cofactors and ligands) combined with functional nanomaterials in nanobio/bionanotechnology applications, including therapeutics, diagnostics, biosensing, bioanalysis and biocatalysts. Furthermore, this review focuses on five areas of recent advances in biomolecular engineering: (a) nucleic acid engineering, (b) gene engineering, (c) protein engineering, (d) chemical and enzymatic conjugation technologies, and (e) linker engineering. Precisely engineered nanobiomaterials, nanobiodevices and nanobiosystems are anticipated to emerge as next-generation platforms for bioelectronics, biosensors, biocatalysts, molecular imaging modalities, biological actuators, and biomedical applications.

  2. Modeling and simulation with operator scaling

    OpenAIRE

    Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan

    2010-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...

  3. Biomolecular Sciences: uniting Biology and Chemistry

    NARCIS (Netherlands)

    Vrieling, Engel

    2017-01-01

    Biomolecular Sciences: uniting Biology and Chemistry (www.rug.nl/research/gbb). The scientific discoveries in biomolecular sciences have benefitted enormously from technological innovations. At the Groningen Biomolecular Science and Biotechnology Institute (GBB) we now sequence a genome in days, ...

  4. Biomolecular EPR spectroscopy

    CERN Document Server

    Hagen, Wilfred Raymond

    2008-01-01

    Comprehensive, up-to-date coverage of spectroscopy theory and its applications to biological systems. Although a multitude of books have been published about spectroscopy, most of them only occasionally refer to biological systems and the specific problems of biomolecular EPR (bioEPR). Biomolecular EPR Spectroscopy provides a practical introduction to bioEPR and demonstrates how this remarkable tool allows researchers to delve into the structural, functional, and analytical analysis of paramagnetic molecules found in the biochemistry of all species on the planet. A must-have reference in an intrinsically multidisciplinary field, this authoritative work seamlessly covers all important bioEPR applications, including low-spin and high-spin metalloproteins, spin traps and spin labels, interaction between active sites, and redox systems. It is loaded with practical tricks as well as do's and don'ts that are based on the author's 30 years of experience in the field. The book also comes with an unprecedented set of...

  5. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  6. Scalable Molecular Dynamics for Large Biomolecular Systems

    Directory of Open Access Journals (Sweden)

    Robert K. Brunner

    2000-01-01

    We present an optimized parallelization scheme for molecular dynamics simulations of large biomolecular systems, implemented in the production-quality molecular dynamics program NAMD. With an object-based hybrid force and spatial decomposition scheme, and an aggressive measurement-based predictive load balancing framework, we have attained speeds and speedups that are much higher than any reported in the literature so far. The paper first summarizes the broad methodology we are pursuing and the basic parallelization scheme we used. It then describes the optimizations that were instrumental in increasing performance, and presents performance results on benchmark simulations.
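
    The spatial-decomposition idea can be sketched by the binning step alone: atoms are assigned to cubic patches of roughly the cutoff size, so that interactions need only be computed within and between neighboring patches. A hedged sketch, not NAMD's actual patch or load-balancing code.

    ```python
    # Bin atoms of a cubic periodic box into cutoff-sized patches.
    import numpy as np
    from collections import defaultdict

    def build_patches(coords, box, cutoff):
        n_cells = int(box // cutoff)
        patches = defaultdict(list)
        for i, r in enumerate(coords):
            key = tuple((r // cutoff).astype(int) % n_cells)
            patches[key].append(i)
        return patches

    coords = np.random.default_rng(7).uniform(0.0, 40.0, (1000, 3))
    patches = build_patches(coords, box=40.0, cutoff=10.0)
    print(len(patches), "patches; largest holds",
          max(len(v) for v in patches.values()), "atoms")
    ```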

  7. Biomolecular electrostatics and solvation: a computational perspective.

    Science.gov (United States)

    Ren, Pengyu; Chun, Jaehun; Thomas, Dennis G; Schnieders, Michael J; Marucho, Marcelo; Zhang, Jiajing; Baker, Nathan A

    2012-11-01

    An understanding of molecular interactions is essential for insight into biological systems at the molecular scale. Among the various components of molecular interactions, electrostatics are of special importance because of their long-range nature and their influence on polar or charged molecules, including water, aqueous ions, proteins, nucleic acids, carbohydrates, and membrane lipids. In particular, robust models of electrostatic interactions are essential for understanding the solvation properties of biomolecules and the effects of solvation upon biomolecular folding, binding, enzyme catalysis, and dynamics. Electrostatics, therefore, are of central importance to understanding biomolecular structure and modeling interactions within and among biological molecules. This review discusses the solvation of biomolecules with a computational biophysics view toward describing the phenomenon. While our main focus lies on the computational aspect of the models, we provide an overview of the basic elements of biomolecular solvation (e.g. solvent structure, polarization, ion binding, and non-polar behavior) in order to provide a background to understand the different types of solvation models.

  8. Molecular Dynamics Simulations of Kinetic Models for Chiral Dominance in Soft Condensed Matter

    DEFF Research Database (Denmark)

    Toxvaerd, Søren

    2001-01-01

    Molecular dynamics simulation, models for isomerization kinetics, origin of biomolecular chirality.

  9. Simulations of atomic-scale sliding friction

    DEFF Research Database (Denmark)

    Sørensen, Mads Reinholdt; Jacobsen, Karsten Wedel; Stoltze, Per

    1996-01-01

    Simulation studies of atomic-scale sliding friction have been performed for a number of tip-surface and surface-surface contacts consisting of copper atoms. Both geometrically very simple tip-surface structures and more realistic interface necks formed by simulated annealing have been studied... Kinetic friction is observed to be caused by atomic-scale stick and slip, which occurs by nucleation and subsequent motion of dislocations, preferably between close-packed {111} planes. Stick and slip seems to occur in different situations. For single crystalline contacts without grain boundaries... pinning of atoms near the boundary of the interface and is therefore more easily observed for smaller contacts. Depending on crystal orientation and load, frictional wear can also be seen in the simulations. In particular, for the annealed interface necks which model contacts created by scanning tunneling...

  10. From dynamics to structure and function of model biomolecular systems

    NARCIS (Netherlands)

    Fontaine-Vive-Curtaz, F.

    2007-01-01

    The purpose of this thesis was to extend recent work on the structure and dynamics of hydrogen-bonded crystals to model biomolecular systems and biological processes. The tools that we have used are neutron scattering (NS) and density functional theory (DFT) and force field (FF) based simulations.

  11. Laser photodissociation and spectroscopy of mass-separated biomolecular ions

    CERN Document Server

    Polfer, Nicolas C

    2014-01-01

    This lecture notes book presents how enhanced structural information of biomolecular ions can be obtained from interaction with photons of specific frequency - laser light. The methods described in the book "Laser photodissociation and spectroscopy of mass-separated biomolecular ions" make use of the fact that the discrete energy and fast time scale of photoexcitation can provide more control in ion activation. This activation is the crucial process producing structure-informative product ions that cannot be generated with more conventional heating methods, such as collisional activation. Th...

  12. Computer simulations for the nano-scale

    International Nuclear Information System (INIS)

    Stich, I.

    2007-01-01

    A review of methods for computations at the nano-scale is presented. The paper should provide a convenient starting point into computations for the nano-scale as well as a more in-depth presentation for those already working in the field of atomic/molecular-scale modeling. The argument is divided into chapters covering the methods for description of (i) electrons, (ii) ions, and (iii) techniques for efficiently solving the underlying equations. A fairly broad view is taken, covering the Hartree-Fock approximation, density functional techniques and quantum Monte Carlo techniques for electrons. The customary quantum chemistry methods, such as post-Hartree-Fock techniques, are only briefly mentioned. Description of both classical and quantum ions is presented. The techniques cover Ehrenfest, Born-Oppenheimer, and Car-Parrinello dynamics. The strong and weak points, of both principal and technical nature, are analyzed. In the second part we introduce a number of applications to demonstrate the different approximations and techniques introduced in the first part. They cover a wide range of applications such as non-simple liquids, surfaces, molecule-surface interactions, applications in nanotechnology, etc. These more in-depth presentations, while certainly not exhaustive, should provide information on technical aspects of the simulations, typical parameters used, and ways of analyzing the huge amounts of data generated in these large-scale supercomputer simulations. (author)

  13. Spin valve sensor for biomolecular identification: Design, fabrication, and characterization

    Science.gov (United States)

    Li, Guanxiong

    Biomolecular identification, e.g., DNA recognition, has broad applications in biology and medicine such as gene expression analysis, disease diagnosis, and DNA fingerprinting. Therefore, we have been developing a magnetic biodetection technology based on giant magnetoresistive spin valve sensors and magnetic nanoparticle labels. An analytical model has been developed for the magnetic nanoparticle detection, assuming the equivalent average field of the magnetic nanoparticles and the coherent rotation of the spin valve free layer magnetization. Micromagnetic simulations have also been performed for the spin valve sensors. The analytical model and micromagnetic simulations are found consistent with each other and are in good agreement with experiments. The prototype spin valve sensors have been fabricated at both micron and submicron scales. We demonstrated the detection of a single 2.8-μm magnetic microbead by micron-sized spin valve sensors. Based on polymer-mediated self-assembly and fine lithography, a bilayer lift-off process was developed to deposit magnetic nanoparticles onto the sensor surface in a controlled manner. With the lift-off deposition method, we have successfully demonstrated the room temperature detection of monodisperse 16-nm Fe3O4 nanoparticles in quantities from a few tens to several hundreds by submicron spin valve sensors, proving the feasibility of the nanoparticle detection. As desired for quantitative biodetection, a fairly linear dependence of sensor signal on the number of nanoparticles has been confirmed. The initial detection of DNA hybridization events labeled by magnetic nanoparticles further proved the magnetic biodetection concept.

  14. Perspective: Watching low-frequency vibrations of water in biomolecular recognition by THz spectroscopy

    Science.gov (United States)

    Xu, Yao; Havenith, Martina

    2015-11-01

    Terahertz (THz) spectroscopy has turned out to be a powerful tool which is able to shed new light on the role of water in biomolecular processes. The low-frequency spectrum of the solvated biomolecule in combination with MD simulations provides deep insights into the collective hydrogen bond dynamics on the sub-ps time scale. The absorption spectrum between 1 THz and 10 THz of solvated biomolecules is sensitive to changes in the fast fluctuations of the water network. Systematic studies on mutants of antifreeze proteins indicate a direct correlation between biological activity and a retardation of the (sub-)ps hydration dynamics at the protein binding site, i.e., a "hydration funnel." Kinetic THz absorption studies probe the temporal changes of THz absorption during a biological process, and give access to the kinetics of the coupled protein-hydration dynamics. When combined with simulations, the observed results can be explained in terms of a two-tier model involving local binding and a long-range influence on the hydrogen bond dynamics of the water around the binding site, which highlights the significance of the changes in hydration dynamics at the recognition site for biomolecular recognition. Water is shown to assist molecular recognition processes.

  15. Large-scale Intelligent Transporation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  16. Sensitivity technologies for large scale simulation

    International Nuclear Information System (INIS)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large-scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady-state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady-state internal flows subject to convection-diffusion. Real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first...

  17. Ensembler: Enabling High-Throughput Molecular Simulations at the Superfamily Scale.

    Directory of Open Access Journals (Sweden)

    Daniel L Parton

    2016-06-01

    Full Text Available The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences, from a single sequence to an entire superfamily, and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics, such as Markov state models (MSMs), which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human...
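
    The staged flow of such a preparation pipeline can be sketched as a simple chain of functions. The sketch below is purely illustrative of the modeling-then-refinement flow described in the abstract and does not use Ensembler's actual API; every name in it is hypothetical.

      from dataclasses import dataclass, field

      @dataclass
      class Model:
          sequence: str
          log: list = field(default_factory=list)

      def comparative_model(sequence, templates):
          # One comparative-modeling pass per target (hypothetical stage).
          m = Model(sequence)
          m.log.append(f"modeled against {len(templates)} PDB templates")
          return m

      def refine(model):
          # Loop rebuilding, protonation, solvation, brief MD (hypothetical stage).
          model.log.append("loops rebuilt; protons assigned; solvated; refined")
          return model

      def prepare(sequences, templates):
          return [refine(comparative_model(s, templates)) for s in sequences]

      for m in prepare(["MKTAYIAK", "MADEEKLP"], ["1ABC", "2XYZ"]):
          print(m.sequence, "->", "; ".join(m.log))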

  18. NWChem: Quantum Chemistry Simulations at Scale

    Energy Technology Data Exchange (ETDEWEB)

    Apra, Edoardo; Kowalski, Karol; Hammond, Jeff R.; Klemm, Michael

    2015-01-17

    Methods based on quantum mechanics equations have been developed since the 1930s with the purpose of accurately studying the electronic structure of molecules. However, it is only during the last two decades that intense development of new computational algorithms has opened the possibility of performing accurate simulations of challenging molecular processes with high-order many-body methods. A wealth of evidence indicates that the proper inclusion of instantaneous interactions between electrons (the so-called electron correlation effects) is indispensable for the accurate characterization of chemical reactivity, molecular properties, and interactions of light with matter. The availability of reliable methods for benchmarking medium-size molecular systems also provides a unique chance to propagate high-level accuracy across spatial scales through multiscale methodologies. Some of these methods have the potential to utilize computational resources in an efficient way, since they are characterized by high numerical complexity and an appropriate level of data granularity, which can be efficiently distributed over multi-processor architectures. The broad spectrum of coupled cluster (CC) methods falls into this class of methodologies. Several recent CC implementations have clearly demonstrated the scalability of CC formalisms on architectures composed of hundreds of thousands of computational cores. In this context NWChem provides a collection of Tensor Contraction Engine (TCE) generated parallel implementations of various coupled cluster methods capable of taking advantage of many thousands of cores on leadership-class parallel architectures.

  19. Large scale molecular simulations of nanotoxicity.

    Science.gov (United States)

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large-scale molecular simulations. Carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussion of quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence showing that nanotoxicity can have implications in de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting the enzyme MMP-9, and graphene is shown to disrupt bacterial cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential in using the cytotoxicity of nanoparticles against cancer or bacteria cells. © 2014 Wiley Periodicals, Inc.

  20. Physics at the biomolecular interface fundamentals for molecular targeted therapy

    CERN Document Server

    Fernández, Ariel

    2016-01-01

    This book focuses primarily on the role of interfacial forces in understanding biological phenomena at the molecular scale. By providing a suitable statistical mechanical apparatus to handle the biomolecular interface, the book becomes uniquely positioned to address core problems in molecular biophysics. It highlights the importance of interfacial tension in delineating a solution to the protein folding problem, in unravelling the physico-chemical basis of enzyme catalysis and protein associations, and in rationally designing molecular targeted therapies. Thus grounded in fundamental science, the book develops a powerful technological platform for drug discovery, and it is set to inspire scientists at any stage of their careers who are determined to address the major challenges in molecular biophysics. The acknowledgment of how exquisitely the structure and dynamics of proteins and their aqueous environment are related attests to the overdue recognition that biomolecular phenomena cannot be effectively understood w...

  1. Integrative NMR for biomolecular research

    International Nuclear Information System (INIS)

    Lee, Woonghee; Cornilescu, Gabriel; Dashti, Hesam; Eghbalnia, Hamid R.; Tonelli, Marco; Westler, William M.; Butcher, Samuel E.; Henzler-Wildman, Katherine A.; Markley, John L.

    2016-01-01

    NMR spectroscopy is a powerful technique for determining structural and functional features of biomolecules in physiological solution as well as for observing their intermolecular interactions in real-time. However, complex steps associated with its practice have made the approach daunting for non-specialists. We introduce an NMR platform that makes biomolecular NMR spectroscopy much more accessible by integrating tools, databases, web services, and video tutorials that can be launched by simple installation of NMRFAM software packages or using a cross-platform virtual machine that can be run on any standard laptop or desktop computer. The software package can be downloaded freely from the NMRFAM software download page (http://pine.nmrfam.wisc.edu/download-packages.html), and detailed instructions are available from the Integrative NMR Video Tutorial page (http://pine.nmrfam.wisc.edu/integrative.html).

  2. Integrative NMR for biomolecular research

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Woonghee, E-mail: whlee@nmrfam.wisc.edu; Cornilescu, Gabriel; Dashti, Hesam; Eghbalnia, Hamid R.; Tonelli, Marco; Westler, William M.; Butcher, Samuel E.; Henzler-Wildman, Katherine A.; Markley, John L., E-mail: markley@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison and Biochemistry Department (United States)

    2016-04-15

    NMR spectroscopy is a powerful technique for determining structural and functional features of biomolecules in physiological solution as well as for observing their intermolecular interactions in real-time. However, complex steps associated with its practice have made the approach daunting for non-specialists. We introduce an NMR platform that makes biomolecular NMR spectroscopy much more accessible by integrating tools, databases, web services, and video tutorials that can be launched by simple installation of NMRFAM software packages or using a cross-platform virtual machine that can be run on any standard laptop or desktop computer. The software package can be downloaded freely from the NMRFAM software download page (http://pine.nmrfam.wisc.edu/download-packages.html), and detailed instructions are available from the Integrative NMR Video Tutorial page (http://pine.nmrfam.wisc.edu/integrative.html).

  3. A QM-MD simulation approach to the analysis of FRET processes in (bio)molecular systems. A case study: complexes of E. coli purine nucleoside phosphorylase and its mutants with formycin A.

    Science.gov (United States)

    Sobieraj, M; Krzyśko, K A; Jarmuła, A; Kalinowski, M W; Lesyng, B; Prokopowicz, M; Cieśla, J; Gojdź, A; Kierdaszuk, B

    2015-04-01

    Predicting FRET pathways in proteins using computer simulation techniques is very important for reliable interpretation of experimental data. A novel and relatively simple methodology has been developed and applied to purine nucleoside phosphorylase (PNP) complexed with a fluorescent ligand, formycin A (FA). FRET occurs between an excited Tyr residue (D*) and FA (A). This study aims to interpret experimental data that, among others, suggest the absence of FRET for the PNPF159A mutant in complex with FA, based on the novel theoretical methodology. MD simulations for the protein molecule containing D*, and complexed with A, are carried out. Interactions of D* with its molecular environment are accounted for by including changes of the ESP charges in S1, compared to S0, computed at the SCF-CI level. The FRET probability W_F depends on the inverse six-power of the D*-A distance, R_da. The orientational factor 0 < κ² < 4 between D* and A is computed and included in the analysis. Finally, W_F is time-averaged over the MD trajectories, resulting in its mean value. The red shift of the tyrosinate anion emission, and thus the lack of spectral overlap integral, together with thermal energy dissipation, are the reasons for the FRET absence in the studied mutants at pH 7 and above. The presence of the tyrosinate anion results in a competitive energy dissipation channel and red-shifted emission, and thus in the absence of FRET. These studies also indicate an important role of the phenyl ring of Phe159 for FRET in the wild-type PNP, which does not exist in the Ala159 mutant, and for the effective association of PNP with FA. In a more general context, our observations point out very interesting and biologically important properties of the tyrosine residue in its excited state, which may undergo spontaneous deprotonation in biomolecular systems, resulting in unexpected physical and/or biological phenomena. Until now, this observation has not been widely discussed in the...
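
    For reference, the standard Förster relations behind the quantities named in this abstract take the following textbook form (a generic statement of Förster theory, not the paper's exact working equations):

      W_F = \frac{1}{1 + \left( R_{da} / R_0 \right)^{6}},
      \qquad
      \kappa^{2} = \left( \cos\theta_T - 3 \cos\theta_D \cos\theta_A \right)^{2},
      \qquad
      \langle W_F \rangle = \frac{1}{T} \int_0^T W_F\big( R_{da}(t), \kappa^{2}(t) \big) \, \mathrm{d}t

    where the Förster radius R_0 scales as \left( \kappa^{2} J \Phi_D n^{-4} \right)^{1/6}, with J the spectral overlap integral, \Phi_D the donor quantum yield and n the refractive index; the time average is taken along the MD trajectory.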

  4. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting to summarize the collaboration activities for FY2003 on the Large Scale Computer Simulation Research was held January 15-16, 2004 at Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  5. Conducting polymer based biomolecular electronic devices

    Indian Academy of Sciences (India)

    Conducting polymers; LB films; biosensor microactuators; monolayers. ... have been projected for applications in a wide range of biomolecular electronic devices such as optical, electronic, drug-delivery, memory and biosensing devices.

  6. Biomolecular System Design: Architecture, Synthesis, and Simulation

    OpenAIRE

    Chiang , Katherine

    2015-01-01

    The advancements in systems and synthetic biology have been broadening the range of realizable systems with increasing complexity both in vitro and in vivo. Systems for digital logic operations, signal processing, analog computation, program flow control, as well as those composed of different functions – for example an on-site diagnostic system based on multiple biomarker measurements and signal processing – have been realized successfully. However, the efforts to date tend to tackle each de...

  7. Multiscale Persistent Functions for Biomolecular Structure Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Kelin [Nanyang Technological University (Singapore). Division of Mathematical Sciences, School of Physical, Mathematical Sciences and School of Biological Sciences; Li, Zhiming [Central China Normal University, Wuhan (China). Key Laboratory of Quark and Lepton Physics (MOE) and Institute of Particle Physics; Mu, Lin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Computer Science and Mathematics Division

    2017-11-02

    In this paper, we introduce multiscale persistent functions for biomolecular structure characterization. The essential idea is to combine our multiscale rigidity functions (MRFs) with persistent homology analysis, so as to construct a series of multiscale persistent functions, particularly multiscale persistent entropies, for structure characterization. To clarify the fundamental idea of our method, the multiscale persistent entropy (MPE) model is discussed in great detail. Mathematically, unlike the previous persistent entropy (Chintakunta et al. in Pattern Recognit 48(2):391–401, 2015; Merelli et al. in Entropy 17(10):6872–6892, 2015; Rucco et al. in: Proceedings of ECCS 2014, Springer, pp 117–128, 2016), a special resolution parameter is incorporated into our model. Various scales can be achieved by tuning its value. Physically, our MPE can be used in conformational entropy evaluation. More specifically, it is found that our method incorporates a natural classification scheme. This is achieved through a density filtration of an MRF built from angular distributions. To further validate our model, a systematic comparison with the traditional entropy evaluation model is carried out. Additionally, it is found that our model is able to preserve the intrinsic topological features of biomolecular data much better than traditional approaches, particularly for resolutions in the intermediate range. Moreover, by comparing with traditional entropies from various grid sizes, bond angle-based methods and a persistent homology-based support vector machine method (Cang et al. in Mol Based Math Biol 3:140–162, 2015), we find that our MPE method gives the best results in terms of average true positive rate in a classic protein structure classification test. More interestingly, all-alpha and all-beta protein classes can be clearly separated from each other with zero error only in our model. Finally, a special protein structure index (PSI) is proposed, for the first...

  8. The latest full-scale PWR simulator in Japan

    International Nuclear Information System (INIS)

    Nishimuru, Y.; Tagi, H.; Nakabayashi, T.

    2004-01-01

    The latest MHI full-scale simulator has an excellent system configuration, in both flexibility and extendability, and achieves highly sophisticated PWR simulation through the adoption of the CANAC-II and PRETTY codes. It also serves an instructive purpose by displaying the plant's internal status, such as RCS conditions, through animation. Further, the simulation has been verified, after evaluation of its accuracy, against a functional examination at a model plant and against scale-model test results for a two-phase flow event. Thus, the simulator can support sophisticated and broad training courses on PWR operation. (author)

  9. Large Scale Simulation Platform for NODES Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Sotorrio, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Qin, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Min, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-27

    This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator; it includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate scalability under a scenario of 33% RPS in California, with an additional 17% of DERs coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating more than 10,000 individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/VAR control.

  10. Improving the Performance of the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2014-01-01

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation-based toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation management overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement, reducing the simulation overhead for running the NAS Parallel Benchmark suite inside the simulator from 1,020% to 238% for the conjugate gradient (CG) benchmark and from 102% to 0% for the embarrassingly parallel (EP) benchmark, as well as from 37,511% to 13,808% for CG and from 3,332% to 204% for EP with accurate process failure simulation.

  11. Multi-Scale Simulation of High Energy Density Ionic Liquids

    National Research Council Canada - National Science Library

    Voth, Gregory A

    2007-01-01

    The focus of this AFOSR project was the molecular dynamics (MD) simulation of ionic liquid structure, dynamics, and interfacial properties, as well as multi-scale descriptions of these novel liquids (e.g...

  12. Synergy of Two Highly Specific Biomolecular Recognition Events

    DEFF Research Database (Denmark)

    Ejlersen, Maria; Christensen, Niels Johan; Sørensen, Kasper K

    2018-01-01

    Two highly specific biomolecular recognition events, nucleic acid duplex hybridization and DNA-peptide recognition in the minor groove, were coalesced in a miniature ensemble for the first time by covalently attaching a natural AT-hook peptide motif to nucleic acid duplexes via a 2'-amino-LNA scaffold. A combination of molecular dynamics simulations and ultraviolet thermal denaturation studies revealed high sequence-specific affinity of the peptide-oligonucleotide conjugates (POCs) when binding to complementary DNA strands, leveraging the bioinformation encrypted in the minor groove of DNA...

  13. Simulating Catchment Scale Afforestation for Mitigating Flooding

    Science.gov (United States)

    Barnes, M. S.; Bathurst, J. C.; Quinn, P. F.; Birkinshaw, S.

    2016-12-01

    After the 2013-14 and the more recent 2015-16 winter floods in the UK, there were calls to 'forest the uplands' as a solution to reducing flood risk across the nation. However, the role of forests as a natural flood management practice remains highly controversial, owing to a distinct lack of robust evidence of its effectiveness in reducing flood risk during extreme events. This project aims to improve understanding of the impacts of upland afforestation on flood risk at the sub-catchment and full catchment scales. This will be achieved through an integrated fieldwork and modelling approach, with a series of process-based hydrological models used to scale up and examine the effects forestry can have on flooding. Furthermore, there is a need to analyse the extent to which land management practices, catchment system engineering and the installation of runoff attenuation features (RAFs), such as engineered log jams, in headwater catchments can attenuate flood-wave movement and potentially reduce downstream flood risk. Additionally, the proportion of a catchment or riparian reach that would need to be forested in order to achieve a significant impact on downstream flooding will be defined. The consequential impacts of a corresponding reduction in agriculturally productive farmland and the potential decline of water resource availability will also be considered, in order to safeguard the UK's food security and satisfy the global demand on water resources.

  14. Multi-scale simulation for homogenization of cement media

    International Nuclear Information System (INIS)

    Abballe, T.

    2011-01-01

    To solve diffusion problems on cement media, two scales must be taken into account: a fine scale, which describes the micrometer-wide microstructures present in the media, and a work scale, which is usually a few meters long. Direct numerical simulations are almost impossible because of the huge computational resources (memory, CPU time) required to resolve both scales at the same time. To overcome this problem, we present in this thesis multi-scale resolution methods using both Finite Volumes and Finite Elements, along with their efficient implementations. More precisely, we developed a multi-scale simulation tool which uses the SALOME platform to mesh domains and post-process data, and the parallel calculation code MPCube to solve problems. This SALOME/MPCube tool can perform multi-scale simulations automatically and efficiently. The parallel structure of computer clusters can be used to dispatch the more time-consuming tasks. We optimized most functions to account for the specificities of cement media. We present numerical experiments on various cement media samples, e.g. mortar and cement paste. From these results, we compute a numerical effective diffusivity of our cement media and reconstruct a fine-scale solution. (author)

  15. Microfluidic Devices for Studying Biomolecular Interactions

    Science.gov (United States)

    Wilson, Wilbur W.; Garcia, Carlos d.; Henry, Charles S.

    2006-01-01

    Microfluidic devices for monitoring biomolecular interactions have been invented. These devices are basically highly miniaturized liquid-chromatography columns. They are intended to be prototypes of miniature analytical devices of the laboratory-on-a-chip type that could be fabricated rapidly and inexpensively and that, because of their small sizes, would yield analytical results from very small amounts of expensive analytes (typically, proteins). Other advantages to be gained by this scaling down of liquid-chromatography columns may include increases in resolution and speed, decreases in the consumption of reagents, and the possibility of performing multiple simultaneous and highly integrated analyses by use of multiple devices of this type, each possibly containing multiple parallel analytical microchannels. The principle of operation is the same as that of a macroscopic liquid-chromatography column: The column is a channel packed with particles, upon which are immobilized molecules of the protein of interest (or one of the proteins of interest if there are more than one). Starting at a known time, a solution or suspension containing molecules of the protein or other substance of interest is pumped into the channel at its inlet. The liquid emerging from the outlet of the channel is monitored to detect the molecules of the dissolved or suspended substance(s). The time that it takes these molecules to flow from the inlet to the outlet is a measure of the degree of interaction between the immobilized and the dissolved or suspended molecules. Depending on the precise natures of the molecules, this measure can be used for diverse purposes: examples include screening for solution conditions that favor crystallization of proteins, screening for interactions between drugs and proteins, and determining the functions of biomolecules.

  16. Photochirogenesis: Photochemical Models on the Origin of Biomolecular Homochirality

    Directory of Open Access Journals (Sweden)

    Cornelia Meinert

    2010-05-01

    Full Text Available Current research focuses on a better understanding of the origin of biomolecular asymmetry through the identification and detection of what may have been the first chiral molecules involved in the appearance and evolution of life on Earth. We have reasons to assume that these molecules were specific chiral amino acids. Chiral amino acids have been identified in both chondritic meteorites and simulated interstellar ices. Circularly polarized electromagnetic radiation has been identified in interstellar environments, and present research reasons that an asymmetric interstellar photon-molecule interaction might have triggered biomolecular symmetry breaking. We review the possible prebiotic interaction of 'chiral photons', in the form of circularly polarized light, with early chiral organic molecules. We highlight recent studies on the enantioselective photolysis of racemic amino acids by circularly polarized light and experiments on the asymmetric photochemical synthesis of amino acids, from molecules containing only one carbon and one nitrogen atom, under simulated interstellar conditions. Both approaches are based on circular dichroic transitions of amino acids, which will be presented as well.

  17. SIMON: Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Sugawara, Akihiro; Kishimoto, Yasuaki

    2003-01-01

    The development of the SIMON (SImulation MONitoring) system is described. SIMON aims to investigate the many physical phenomena of tokamak-type nuclear fusion plasmas by simulation, and to exchange information and carry out joint research with scientists around the world over the internet. The characteristics of SIMON are the following: 1) a reduced simulation load through the trigger sending method; 2) visualization of simulation results and a hierarchical structure of analysis; 3) a reduced number of licenses, by using the command line to run software; 4) improved support for networked use of simulation data output through HTML (Hyper Text Markup Language); 5) avoidance of complex built-in work in the client part; and 6) small and portable software. The visualization method for large-scale simulation, the remote collaboration system based on HTML, the trigger sending method, the hierarchical analysis method, the introduction into a three-dimensional electromagnetic transport code, and the technologies of the SIMON system are explained. (S.Y.)

  18. Formalizing Knowledge in Multi-Scale Agent-Based Simulations.

    Science.gov (United States)

    Somogyi, Endre; Sluka, James P; Glazier, James A

    2016-10-01

    Multi-scale, agent-based simulations of cellular and tissue biology are increasingly common. These simulations combine and integrate a range of components from different domains. Simulations continuously create, destroy and reorganize constituent elements, causing their interactions to change dynamically. For example, the multi-cellular tissue development process coordinates molecular, cellular and tissue-scale objects with biochemical, biomechanical, spatial and behavioral processes to form a dynamic network. Different domain-specific languages can describe these components in isolation, but cannot describe their interactions. No current programming language is designed to represent, in a human-readable and reusable form, the domain-specific knowledge contained in these components and interactions. We present a new hybrid programming language paradigm that naturally expresses the complex multi-scale objects and dynamic interactions in a unified way and allows domain knowledge to be captured, searched, formalized, extracted and reused.

  19. Development of porous structure simulator for multi-scale simulation of irregular porous catalysts

    International Nuclear Information System (INIS)

    Koyama, Michihisa; Suzuki, Ai; Sahnoun, Riadh; Tsuboi, Hideyuki; Hatakeyama, Nozomu; Endou, Akira; Takaba, Hiromitsu; Kubo, Momoji; Del Carpio, Carlos A.; Miyamoto, Akira

    2008-01-01

    Efficient development of the highly functional porous materials used as catalysts in the automobile industry demands meticulous knowledge of the nano-scale interface at the electronic and atomistic levels. However, it is often difficult to correlate microscopic interfacial interactions with macroscopic characteristics of the materials; for instance, to correlate the interaction between a precious metal and its support oxide with the long-term sintering properties of the catalyst. Multi-scale computational chemistry approaches can help bridge the gap between micro- and macroscopic characteristics of these materials; however, this type of multi-scale simulation has been difficult to apply, especially to porous materials. To overcome this problem, we have developed a novel mesoscopic approach based on a porous structure simulator. This simulator can automatically construct irregular porous structures on a computer, enabling simulations with complex meso-scale structures. Moreover, in this work we have developed a new method to simulate the long-term sintering properties of metal particles on porous catalysts. Finally, we have applied the method to the simulation of the sintering properties of Pt on an alumina support. This newly developed method has enabled us to propose a multi-scale simulation approach for porous catalysts.

  20. Probabilistic Simulation of Multi-Scale Composite Behavior

    Science.gov (United States)

    Chamis, Christos C.

    2012-01-01

    A methodology is developed to computationally assess the non-deterministic composite response at all composite scales (from micro to structural) due to uncertainties in the constituent (fiber and matrix) properties, in the fabrication process and in structural variables (primitive variables). The methodology is computationally efficient for simulating the probability distributions of composite behavior, such as material properties and laminate and structural responses. By-products of the methodology are probabilistic sensitivities of the composite primitive variables. The methodology has been implemented in the computer codes PICAN (Probabilistic Integrated Composite ANalyzer) and IPACS (Integrated Probabilistic Assessment of Composite Structures). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in typical composite laminates and comparing the results with the Monte Carlo simulation method. Available experimental data of composite laminate behavior at all scales fall within the scatter predicted by PICAN. Multi-scaling is extended to simulate probabilistic thermo-mechanical fatigue and the probabilistic design of a composite radome, in order to illustrate its versatility. Results show that probabilistic fatigue can be simulated for different temperature amplitudes and for different cyclic stress magnitudes. Results also show that laminate configurations can be selected to increase the radome reliability by several orders of magnitude without increasing the laminate thickness--a unique feature of structural composites. The old reference denotes that nothing fundamental has been done since that time.
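
    The flavor of such a Monte Carlo cross-check can be conveyed with a toy uncertainty propagation through the rule of mixtures for a unidirectional ply; this is a generic sketch with assumed input scatter, not the PICAN/IPACS formulation.

      import numpy as np

      # Rule of mixtures: E = Vf*Ef + (1 - Vf)*Em, with scatter on each input.
      rng = np.random.default_rng(1)
      n = 100_000
      Ef = rng.normal(230.0, 10.0, n)   # fiber modulus, GPa (assumed scatter)
      Em = rng.normal(3.5, 0.2, n)      # matrix modulus, GPa (assumed scatter)
      Vf = rng.normal(0.60, 0.02, n)    # fiber volume fraction (assumed scatter)
      E = Vf * Ef + (1.0 - Vf) * Em     # sampled longitudinal ply modulus
      print(f"mean = {E.mean():.1f} GPa, std = {E.std():.1f} GPa, "
            f"95th pct = {np.percentile(E, 95):.1f} GPa")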

  1. Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams

    Science.gov (United States)

    Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping

    2018-06-01

    A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but non-homogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach, which uses volumetric radiative properties in the equivalent participating medium, and to the direct discrete-scale approach, which employs the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used for computations of the absorbed radiative fluxes and the apparent radiative behavior of the metal foams. The results obtained by the three approaches are in reasonable agreement. The scale-coupled approach is fully validated for calculating the apparent radiative behavior of metal foams composed of struts ranging from very absorbing to very reflective and from very rough to very smooth. This new approach reduces computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on the local geometry-dependent features and, at the same time, the equivalent-medium features in an integrated simulation. The new approach promises to combine the advantages of the continuous-scale approach (rapid calculations) and the direct discrete-scale approach (accurate prediction of local radiative quantities).
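
    The elementary Monte Carlo ray-tracing step, sampling an absorption free path from the Beer-Lambert law, can be illustrated with a toy transmittance estimate through a homogeneous absorbing slab (coefficients assumed for illustration; the paper's coupled foam geometry is far richer):

      import numpy as np

      rng = np.random.default_rng(2)
      beta = 50.0         # absorption coefficient, 1/m (assumed)
      thickness = 0.02    # slab thickness, m (assumed)
      n_rays = 1_000_000
      # Free path to absorption sampled from the exponential attenuation law.
      free_paths = -np.log(rng.random(n_rays)) / beta
      mc_tau = np.count_nonzero(free_paths > thickness) / n_rays
      print(f"MC transmittance = {mc_tau:.4f}, exact = {np.exp(-beta * thickness):.4f}")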

  2. Multiple time-scale methods in particle simulations of plasmas

    International Nuclear Information System (INIS)

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit-moment-equation method, the direct implicit method, orbit averaging, and subcycling.
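
    As a minimal illustration of the subcycling idea, the sketch below advances a stiff (fast) force with small substeps inside each large step of a weak (slow) force, in the spirit of multiple-time-step integrators; it is a generic toy, not one of the surveyed plasma schemes.

      # Two harmonic forces with well-separated time scales (assumed toy system).
      def f_fast(x):
          return -100.0 * x   # stiff spring, fast oscillation

      def f_slow(x):
          return -1.0 * x     # weak spring, slow oscillation

      x, v = 1.0, 0.0
      DT, n_sub = 0.05, 10    # outer step for the slow force, substeps for the fast
      dt = DT / n_sub
      for _ in range(200):                # 10 time units in total
          v += 0.5 * DT * f_slow(x)       # half-kick with the slow force
          for _ in range(n_sub):          # subcycle the fast force (velocity Verlet)
              v += 0.5 * dt * f_fast(x)
              x += dt * v
              v += 0.5 * dt * f_fast(x)
          v += 0.5 * DT * f_slow(x)       # closing half-kick with the slow force
      print(f"x = {x:.4f}, v = {v:.4f}")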

  3. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulation applications. The intention is to identify new research directions in this field and...

  4. Believability in simplifications of large scale physically based simulation

    KAUST Repository

    Han, Donghui; Hsu, Shu-wei; McNamara, Ann; Keyser, John

    2013-01-01

    We verify two hypotheses which are assumed to be true only intuitively in many rigid body simulations. I: In large-scale rigid body simulation, viewers may not be able to perceive the distortion incurred by an approximated simulation method. II: Fixing objects under a pile of objects does not affect visual plausibility. The visual plausibility of scenarios simulated with these hypotheses assumed true is measured using subjective ratings from viewers. As expected, analysis of the results supports the validity of the hypotheses under certain simulation environments. However, our analysis identified four factors which may affect the validity of these hypotheses: the number of collisions simulated simultaneously, the homogeneity of colliding object pairs, the distance from the scene under simulation to the camera position, and the simulation method used. We also tried to find an objective metric of visual plausibility from eye-tracking data collected from viewers. Analysis of these results indicates that eye-tracking does not present a suitable proxy for measuring plausibility or distinguishing between types of simulations. © 2013 ACM.

  5. Large-scale simulations with distributed computing: Asymptotic scaling of ballistic deposition

    International Nuclear Information System (INIS)

    Farnudi, Bahman; Vvedensky, Dimitri D

    2011-01-01

    Extensive kinetic Monte Carlo simulations are reported for ballistic deposition (BD) in (1 + 1) dimensions. The large system sizes L observed for the onset of asymptotic scaling (L ≈ 2^12) explain the widespread discrepancies in previous reports for exponents of BD in one and likely in higher dimensions. The exponents obtained directly from our simulations, α = 0.499 ± 0.004 and β = 0.336 ± 0.004, capture the exact values α = 1/2 and β = 1/3 for the one-dimensional Kardar-Parisi-Zhang equation. An analysis of our simulations suggests a criterion for identifying the onset of true asymptotic scaling, which enables a more informed evaluation of exponents for BD in higher dimensions. These simulations were made possible by the Simulation through Social Networking project at the Institute for Advanced Studies in Basic Sciences in 2007, which was re-launched in November 2010.
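
    The BD growth rule itself fits in a few lines; the toy below (periodic boundaries, a system far smaller than the L ≈ 2^12 the study needed for asymptotic scaling) implements the nearest-neighbor sticking rule that generates the KPZ exponents.

      import numpy as np

      rng = np.random.default_rng(3)
      L, n_deposits = 1024, 200_000
      h = np.zeros(L, dtype=int)
      for _ in range(n_deposits):
          i = rng.integers(L)
          # Ballistic deposition: the particle sticks at the height of the
          # tallest of (left neighbor, own column + 1, right neighbor).
          h[i] = max(h[(i - 1) % L], h[i] + 1, h[(i + 1) % L])
      # The interface width W = std(h) grows as t^beta before saturating as L^alpha.
      print(f"mean height = {h.mean():.1f}, width = {h.std():.2f}")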

  6. Laboratory-scale simulations with hydrated lime and organic ...

    African Journals Online (AJOL)

    Laboratory-scale simulations with hydrated lime and organic polymer to evaluate the effect of pre-chlorination on motile Ceratium hirundinella cells during ... When organic material is released from algal cells as a result of physical-chemical impacts on the cells, it may result in taste- and odour-related problems or the ...

  7. Microsecond atomic-scale molecular dynamics simulations of polyimides

    NARCIS (Netherlands)

    Lyulin, S.V.; Gurtovenko, A.A.; Larin, S.V.; Nazarychev, V.M.; Lyulin, A.V.

    2013-01-01

    We employ microsecond atomic-scale molecular dynamics simulations to gain insight into the structural and thermal properties of heat-resistant bulk polyimides. As electrostatic interactions are essential for the polyimides considered, we propose a two-step equilibration protocol that includes long...

  8. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred; Douglas, Craig C.; Haase, Gundolf; Horvá th, Zoltá n

    2010-01-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia GeForce GTX 295 boards. The aim of this research is to enable large-scale fluid dynamics simulations with up to one...

  9. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    Energy Technology Data Exchange (ETDEWEB)

    Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations at extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately resolve, in high-dimensional spaces, stochastic problems with limited smoothness, even those containing discontinuities.

  10. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description in fuel performance codes can be achieved by deriving models and parameters from atomistic-scale simulations rather than by fitting models empirically to experimental data. The same argument applies to modeling the deposition of corrosion products on fuel rods (CRUD). Presented here are some results from publications in 2016, carried out using the CASL allocation at LANL.

  11. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overhead.

  12. Fully kinetic simulations of megajoule-scale dense plasma focus

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, A.; Link, A.; Tang, V.; Halvorson, C.; May, M. [Lawrence Livermore National Laboratory, Livermore California 94550 (United States); Welch, D. [Voss Scientific, LLC, Albuquerque, New Mexico 87108 (United States); Meehan, B. T.; Hagen, E. C. [National Security Technologies, LLC, Las Vegas, Nevada 89030 (United States)

    2014-10-15

    Dense plasma focus (DPF) Z-pinch devices are sources of copious high-energy electrons and ions, x-rays, and neutrons. Megajoule-scale DPFs can generate 10^12 neutrons per pulse in deuterium gas through a combination of thermonuclear and beam-target fusion. However, the details of the neutron production are not fully understood, and past optimization efforts for these devices have been largely empirical. Previously, we reported on the first fully kinetic simulations of a kilojoule-scale DPF and demonstrated that both kinetic ions and kinetic electrons are needed to reproduce experimentally observed features, such as charged-particle beam formation and anomalous resistivity. Here, we present the first fully kinetic simulation of a megajoule DPF, with predicted ion and neutron spectra, neutron anisotropy, neutron spot size, and time history of neutron production. The total yield predicted by the simulation is in agreement with measured values, validating the kinetic model in a second energy regime.

  13. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability. An adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
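
    For reference, the two-dimensional shallow water equations that such Godunov-type finite volume models discretize can be written in the standard conservation form (bed slope and friction collected into the source term S):

      \frac{\partial \mathbf{U}}{\partial t}
        + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x}
        + \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y} = \mathbf{S}(\mathbf{U}),
      \qquad
      \mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix},
      \quad
      \mathbf{F} = \begin{pmatrix} hu \\ hu^{2} + \tfrac{1}{2} g h^{2} \\ huv \end{pmatrix},
      \quad
      \mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^{2} + \tfrac{1}{2} g h^{2} \end{pmatrix}

    where h is the water depth, (u, v) the depth-averaged velocity and g the gravitational acceleration; a wet/dry front method sets a small depth threshold below which cells are treated as dry to keep the scheme stable.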

  14. Computational methods to study the structure and dynamics of biomolecules and biomolecular processes from bioinformatics to molecular quantum mechanics

    CERN Document Server

    2014-01-01

    Since the second half of the 20th century, machine computations have played a critical role in science and engineering. Computer-based techniques have become especially important in molecular biology, since they often represent the only viable way to gain insights into the behavior of a biological system as a whole. The complexity of biological systems, which usually needs to be analyzed on different time- and size-scales and with different levels of accuracy, requires the application of different approaches, ranging from comparative analysis of sequences and structural databases, to the analysis of networks of interdependence between cell components and processes, through coarse-grained modeling to atomically detailed simulations, and finally to molecular quantum mechanics. This book provides a comprehensive overview of modern computer-based techniques for computing the structure, properties and dynamics of biomolecules and biomolecular processes. The twenty-two chapters, written by scientists from all over t...

  15. A new scaling approach for the mesoscale simulation of magnetic domain structures using Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Radhakrishnan, B., E-mail: radhakrishnb@ornl.gov; Eisenbach, M.; Burress, T.A.

    2017-06-15

    Highlights: • Developed new scaling technique for dipole–dipole interaction energy. • Developed new scaling technique for exchange interaction energy. • Used scaling laws to extend atomistic simulations to micrometer length scale. • Demonstrated transition from mono-domain to vortex magnetic structure. • Simulated domain wall width and transition length scale agree with experiments. - Abstract: A new scaling approach has been proposed for the spin exchange and the dipole–dipole interaction energy as a function of the system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimentally measured values for an Fe–Si alloy. The transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.
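
    For context, the sketch below shows the core Metropolis Monte Carlo update that such moment-evolution simulations are built on, here for a 2-D Ising-like lattice with a single effective exchange constant J_eff standing in for the paper's size-dependent scaling laws (all parameter values are hypothetical):

        import numpy as np

        rng = np.random.default_rng(0)

        def metropolis_sweep(spins, J_eff, kT):
            n = spins.shape[0]
            for _ in range(spins.size):
                i, j = rng.integers(n, size=2)
                nb = spins[(i+1) % n, j] + spins[(i-1) % n, j] \
                   + spins[i, (j+1) % n] + spins[i, (j-1) % n]
                dE = 2.0 * J_eff * spins[i, j] * nb   # energy cost of flipping (i, j)
                if dE <= 0 or rng.random() < np.exp(-dE / kT):
                    spins[i, j] *= -1
            return spins

        spins = rng.choice([-1, 1], size=(32, 32))
        spins = metropolis_sweep(spins, J_eff=1.0, kT=2.0)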

  16. Simple scaling for faster tracking simulation in accelerator multiparticle dynamics

    International Nuclear Information System (INIS)

    MacLachlan, J.A.

    2001-01-01

    Macroparticle tracking is a direct and attractive approach to following the evolution of a phase space distribution. When the particles interact through short range wake fields or when inter-particle force is included, calculations of this kind require a large number of macroparticles. It is possible to reduce both the number of macroparticles required and the number of tracking steps per unit simulated time by employing a simple scaling which can be inferred directly from the single-particle equations of motion. In many cases of practical importance the speed of calculation improves with the fourth power of the scaling constant. Scaling has been implemented in an existing longitudinal tracking code; early experience supports the concept and promises major time savings. Limitations on the scaling are discussed

  17. EXTENDED SCALING LAWS IN NUMERICAL SIMULATIONS OF MAGNETOHYDRODYNAMIC TURBULENCE

    International Nuclear Information System (INIS)

    Mason, Joanne; Cattaneo, Fausto; Perez, Jean Carlos; Boldyrev, Stanislav

    2011-01-01

    Magnetized turbulence is ubiquitous in astrophysical systems, where it notoriously spans a broad range of spatial scales. Phenomenological theories of MHD turbulence describe the self-similar dynamics of turbulent fluctuations in the inertial range of scales. Numerical simulations serve to guide and test these theories. However, the computational power that is currently available restricts the simulations to Reynolds numbers that are significantly smaller than those in astrophysical settings. In order to increase computational efficiency and, therefore, probe a larger range of scales, one often takes into account the fundamental anisotropy of field-guided MHD turbulence, with gradients being much slower in the field-parallel direction. The simulations are then optimized by employing the reduced MHD equations and relaxing the field-parallel numerical resolution. In this work we explore a different possibility. We propose that there exist certain quantities that are remarkably stable with respect to the Reynolds number. As an illustration, we study the alignment angle between the magnetic and velocity fluctuations in MHD turbulence, measured as the ratio of two specially constructed structure functions. We find that the scaling of this ratio can be extended surprisingly well into the regime of relatively low Reynolds number. However, the extended scaling easily becomes spoiled when the dissipation range in the simulations is underresolved. Thus, taking the numerical optimization methods too far can lead to spurious numerical effects and erroneous representation of the physics of MHD turbulence, which in turn can affect our ability to identify correctly the physical mechanisms that are operating in astrophysical systems.
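
    The diagnostic named here, the alignment angle measured as a ratio of structure functions, can be sketched compactly. Assuming the ratio takes the commonly used form sin(theta_r) ~ <|dv x db|> / <|dv||db|> over field increments at separation r (the inputs below are illustrative), a minimal implementation is:

        import numpy as np

        def alignment_angle(dv, db):
            """dv, db: (N, 3) arrays of velocity/magnetic increments at scale r."""
            cross = np.linalg.norm(np.cross(dv, db), axis=1)
            prod = np.linalg.norm(dv, axis=1) * np.linalg.norm(db, axis=1)
            return np.degrees(np.arcsin(cross.mean() / prod.mean()))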

  18. Application of Nanodiamonds in Biomolecular Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Ping Cheng

    2010-03-01

    Full Text Available The combination of nanodiamond (ND with biomolecular mass spectrometry (MS makes rapid, sensitive detection of biopolymers from complex biosamples feasible. Due to its chemical inertness, optical transparency and biocompatibility, the advantage of NDs in MS study is unique. Furthermore, functionalization on the surfaces of NDs expands their application in the fields of proteomics and genomics for specific requirements greatly. This review presents methods of MS analysis based on solid phase extraction and elution on NDs and different application examples including peptide, protein, DNA, glycan and others. Owing to the quick development of nanotechnology, surface chemistry, new MS methods and the intense interest in proteomics and genomics, a huge increase of their applications in biomolecular MS analysis in the near future can be predicted.

  19. Membrane-based biomolecular smart materials

    International Nuclear Information System (INIS)

    Sarles, Stephen A; Leo, Donald J

    2011-01-01

    Membrane-based biomolecular materials are a new class of smart material that feature networks of artificial lipid bilayers contained within durable synthetic substrates. Bilayers contained within this modular material platform provide an environment that can be tailored to host an enormous diversity of functional biomolecules, where the functionality of the global material system depends on the type(s) and organization(s) of the biomolecules that are chosen. In this paper, we review a series of biomolecular material platforms developed recently within the Leo Group at Virginia Tech and we discuss several novel coupling mechanisms provided by these hybrid material systems. The platforms developed demonstrate that the functions of biomolecules and the properties of synthetic materials can be combined to operate in concert, and the examples provided demonstrate how the formation and properties of a lipid bilayer can respond to a variety of stimuli including mechanical forces and electric fields

  20. Evaluation of convergence behavior of metamodeling techniques for bridging scales in multi-scale multimaterial simulation

    International Nuclear Information System (INIS)

    Sen, Oishik; Davis, Sean; Jacobs, Gustaaf; Udaykumar, H.S.

    2015-01-01

    The effectiveness of several metamodeling techniques, viz. the Polynomial Stochastic Collocation method, Adaptive Stochastic Collocation method, a Radial Basis Function Neural Network, a Kriging method and a Dynamic Kriging (DKG) method, is evaluated. This is done with the express purpose of using metamodels to bridge scales between micro- and macro-scale models in a multi-scale multimaterial simulation. The rate of convergence of the error when used to reconstruct hypersurfaces of known functions is studied. For a sufficiently large number of training points, Stochastic Collocation methods generally converge faster than the other metamodeling techniques, while the DKG method converges faster when the number of input points is less than 100 in a two-dimensional parameter space. Because the input points correspond to computationally expensive micro/meso-scale computations, the DKG is favored for bridging scales in a multi-scale solver
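
    The convergence test described here is easy to reproduce in miniature: train a surrogate on N samples of a known function and track the reconstruction error as N grows. The sketch below uses a radial basis function interpolant as one representative metamodel; the target function and parameters are illustrative.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(1)
        f = lambda x: np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])  # known test surface

        for n_train in (25, 50, 100, 200):
            X = rng.uniform(0, 1, size=(n_train, 2))
            surrogate = RBFInterpolator(X, f(X))
            Xtest = rng.uniform(0, 1, size=(1000, 2))
            err = np.sqrt(np.mean((surrogate(Xtest) - f(Xtest))**2))
            print(n_train, err)  # error should fall as training points increase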

  1. Aligning Biomolecular Networks Using Modular Graph Kernels

    Science.gov (United States)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

    Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
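
    As a toy illustration of kernel-based subnetwork scoring, the sketch below compares two graphs through a normalized inner product of their degree histograms. BiNA's actual graph kernels are far richer; this stand-in only shows the shape of the computation.

        import numpy as np
        import networkx as nx

        def degree_kernel(G1, G2, max_deg=50):
            # Histogram of node degrees, truncated to a fixed length.
            h = lambda G: np.bincount([d for _, d in G.degree()],
                                      minlength=max_deg)[:max_deg]
            v1, v2 = h(G1), h(G2)
            return v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)

        print(degree_kernel(nx.erdos_renyi_graph(100, 0.05, seed=0),
                            nx.erdos_renyi_graph(100, 0.05, seed=1)))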

  2. NMRbox: A Resource for Biomolecular NMR Computation.

    Science.gov (United States)

    Maciejewski, Mark W; Schuyler, Adam D; Gryk, Michael R; Moraru, Ion I; Romero, Pedro R; Ulrich, Eldon L; Eghbalnia, Hamid R; Livny, Miron; Delaglio, Frank; Hoch, Jeffrey C

    2017-04-25

    Advances in computation have been enabling many recent advances in biomolecular applications of NMR. Due to the wide diversity of applications of NMR, the number and variety of software packages for processing and analyzing NMR data is quite large, with labs relying on dozens, if not hundreds of software packages. Discovery, acquisition, installation, and maintenance of all these packages are a burdensome task. Because the majority of software packages originate in academic labs, persistence of the software is compromised when developers graduate, funding ceases, or investigators turn to other projects. To simplify access to and use of biomolecular NMR software, foster persistence, and enhance reproducibility of computational workflows, we have developed NMRbox, a shared resource for NMR software and computation. NMRbox employs virtualization to provide a comprehensive software environment preconfigured with hundreds of software packages, available as a downloadable virtual machine or as a Platform-as-a-Service supported by a dedicated compute cloud. Ongoing development includes a metadata harvester to regularize, annotate, and preserve workflows and facilitate and enhance data depositions to BioMagResBank, and tools for Bayesian inference to enhance the robustness and extensibility of computational analyses. In addition to facilitating use and preservation of the rich and dynamic software environment for biomolecular NMR, NMRbox fosters the development and deployment of a new class of metasoftware packages. NMRbox is freely available to not-for-profit users.

  3. Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Kishimoto, Yasuaki; Sugahara, Akihiro; Li, J.Q.

    2008-01-01

    Large scale simulation using super-computers, which generally requires long CPU time and produces large amounts of data, has been extensively studied as a third pillar in various advanced science fields, in parallel to theory and experiment. Such simulations are expected to lead to new scientific discoveries through the elucidation of various complex phenomena that are hardly identified by conventional theoretical and experimental approaches alone. In order to assist such large simulation studies, in which many collaborators working at geographically different places participate and contribute, we have developed a unique remote collaboration system, referred to as SIMON (simulation monitoring system), which is based on client-server control and introduces an idea of up-date processing, in contrast to the widely used post-processing. As a key ingredient, we have developed a trigger method, which transmits various requests for the up-date processing from the simulation (client) running on a super-computer to a workstation (server). Namely, the simulation running on a super-computer actively controls the timing of up-date processing. The server, having received requests from the ongoing simulation for data transfer, data analyses, visualizations, and so on, starts these operations during the simulation. The server makes the latest results available to web browsers, so that the collaborators can monitor the results at any place and time in the world. By applying the system to a specific simulation project of laser-matter interaction, we have confirmed that the system works well and plays an important role as a collaboration platform on which many collaborators work with one another

  4. Direct Numerical Simulation of Low Capillary Number Pore Scale Flows

    Science.gov (United States)

    Esmaeilzadeh, S.; Soulaine, C.; Tchelepi, H.

    2017-12-01

    The arrangement of void spaces and the granular structure of a porous medium determines multiple macroscopic properties of the rock such as porosity, capillary pressure, and relative permeability. Therefore, it is important to study the microscopic structure of the reservoir pores and understand the dynamics of fluid displacements through them. One approach for doing this is direct numerical simulation of pore-scale flow, which requires a robust numerical tool for prediction of fluid dynamics and a detailed understanding of the physical processes occurring at the pore-scale. In pore scale flows with a low capillary number, Eulerian multiphase methods are well-known to produce additional vorticity close to the interface. This is mainly due to discretization errors which lead to an imbalance of capillary pressure and surface tension forces that causes unphysical spurious currents. At the pore scale, these spurious currents can become significantly stronger than the average velocity in the phases, and lead to unphysical displacement of the interface. In this work, we first investigate the capability of the algebraic Volume of Fluid (VOF) method in OpenFOAM for low capillary number pore scale flow simulations. Afterward, we compare VOF results with a Coupled Level-Set Volume of Fluid (CLSVOF) method and an Iso-Advector method. The former has been shown to reduce the VOF method's unphysical spurious currents in some cases, and both are known to capture interfaces more sharply than VOF. In conclusion, we investigate whether the use of CLSVOF or Iso-Advector leads to weaker spurious velocities and more accurate results for capillary driven pore-scale multiphase flows. Keywords: Pore-scale multiphase flow, Capillary driven flows, Spurious currents, OpenFOAM

  5. A statistical nanomechanism of biomolecular patterning actuated by surface potential

    Science.gov (United States)

    Lin, Chih-Ting; Lin, Chih-Hao

    2011-02-01

    Biomolecular patterning on a nanoscale/microscale on chip surfaces is one of the most important techniques used in in vitro biochip technologies. Here, we report upon a stochastic mechanics model we have developed for biomolecular patterning controlled by surface potential. The probabilistic biomolecular surface adsorption behavior can be modeled by considering the potential difference between the binding and nonbinding states. To verify our model, we experimentally implemented a method of electroactivated biomolecular patterning technology, and the resulting fluorescence intensity matched the prediction of the developed model quite well. Based on this result, we also experimentally demonstrated the creation of a bovine serum albumin pattern with a width of 200 nm within a 5 min operation. This submicron noncovalent-binding biomolecular pattern can be maintained for hours after removing the applied electrical voltage. These stochastic understandings and experimental results not only prove the feasibility of submicron biomolecular patterns on chips but also pave the way for nanoscale interfacial-bioelectrical engineering.
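
    The two-state adsorption picture described above admits a compact illustration: with dE the potential difference between the binding and nonbinding states, the occupation probability follows Boltzmann statistics. A minimal sketch (the energy values below are hypothetical, not the paper's fitted parameters):

        import numpy as np

        kT = 4.11e-21  # thermal energy at ~298 K, in joules

        def p_bound(dE):
            """Probability of the bound state given dE = E_bound - E_free (J)."""
            return 1.0 / (1.0 + np.exp(dE / kT))

        # A modest negative potential difference strongly favors adsorption:
        print(p_bound(-5 * kT))  # ~0.993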

  6. Experimental simulation of microinteractions in large scale explosions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety]

    1998-01-01

    This paper presents data and analysis of recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large scale explosions. Specifically, the fragmentation behavior of a high temperature molten steel drop under high pressure (beyond critical) conditions is investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea, and the ESPROSE.m prediction of fragmentation rate. (author)

  7. A real scale simulator for high frequency LEMP

    Science.gov (United States)

    Gauthier, D.; Serafin, D.

    1991-01-01

    The real scale simulator designed by the Centre d'Etudes de Gramat (CEG) to study the coupling of fast rise time lightning electromagnetic pulses (LEMP) into a fighter aircraft is described. The system's capability of generating the right electromagnetic environment was studied using a Finite Difference Time Domain (FDTD) computer program. First, data on internal stresses are shown. Then, time domain and frequency domain approaches are presented and compared.

  8. Simulating Biomass Fast Pyrolysis at the Single Particle Scale

    Energy Technology Data Exchange (ETDEWEB)

    Ciesielski, Peter [National Renewable Energy Laboratory (NREL)]; Wiggins, Gavin [ORNL]; Daw, C Stuart [ORNL]; Jakes, Joseph E. [U.S. Forest Service, Forest Products Laboratory, Madison, Wisconsin, USA]

    2017-07-01

    Simulating fast pyrolysis at the scale of single particles allows for the investigation of the impacts of feedstock-specific parameters such as particle size, shape, and species of origin. For this reason particle-scale modeling has emerged as an important tool for understanding how variations in feedstock properties affect the outcomes of pyrolysis processes. The origins of feedstock properties are largely dictated by the composition and hierarchical structure of biomass, from the microstructural porosity to the external morphology of milled particles. These properties may be accounted for in simulations of fast pyrolysis by several different computational approaches depending on the level of structural and chemical complexity included in the model. The predictive utility of particle-scale simulations of fast pyrolysis can still be enhanced substantially by advancements in several areas. Most notably, considerable progress would be facilitated by the development of pyrolysis kinetic schemes that are decoupled from transport phenomena, predict product evolution from whole-biomass with increased chemical speciation, and are still tractable with present-day computational resources.

  9. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time. (orig.)
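
    The locality-preserving sort the authors describe can be illustrated in a few lines: reorder the particle arrays by cell index so that particles adjacent in space become adjacent in memory, reducing random accesses to paged-out memory during charge accumulation. A minimal sketch, with illustrative array layouts:

        import numpy as np

        def sort_by_cell(pos, charge, cell_size, nx):
            # Flattened cell index of each particle on an nx-wide 2-D grid.
            cells = (pos[:, 0] // cell_size).astype(int) \
                  + nx * (pos[:, 1] // cell_size).astype(int)
            order = np.argsort(cells, kind="stable")
            return pos[order], charge[order]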

  10. Numerical simulation of small scale soft impact tests

    International Nuclear Information System (INIS)

    Varpasuo, Pentti

    2008-01-01

    This paper describes the small scale soft missile impact tests. The purpose of the test program is to provide data for the calibration of the numerical simulation models for impact simulation. In the experiments, both dry and fluid filled missiles are used. The tests with fluid filled missiles investigate the release speed and the droplet size of the fluid release. This data is important in quantifying the fire hazard of flammable liquid after the release. The spray release velocity and droplet size are also input data for analytical and numerical simulation of the liquid spread in the impact. The behaviour of the impact target is the second investigative goal of the test program. The response of reinforced and pre-stressed concrete walls is studied with the aid of displacement and strain monitoring. (authors)

  11. Lightweight computational steering of very large scale molecular dynamics simulations

    International Nuclear Information System (INIS)

    Beazley, D.M.

    1996-01-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy-to-use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages
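
    The flavor of scripting-based steering can be sketched as follows: a small command table maps user input to operations on a running simulation. This is an illustrative Python sketch, not the paper's system (which wrapped existing C code for Tcl/Tk, Perl, and Python); the sim object and its methods are hypothetical.

        class Steering:
            def __init__(self, sim):
                self.sim = sim  # hypothetical handle to a running MD simulation
                self.commands = {
                    "run":  lambda n: self.sim.advance(int(n)),
                    "temp": lambda: print(self.sim.temperature()),
                    "dump": lambda path: self.sim.write_snapshot(path),
                }

            def repl(self):
                # Dispatch typed commands between timestep batches.
                while (line := input("md> ").strip()) != "quit":
                    if not line:
                        continue
                    cmd, *args = line.split()
                    self.commands.get(cmd, lambda *a: print("unknown command"))(*args)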

  12. Meeting the memory challenges of brain-scale network simulation

    Directory of Open Access Journals (Sweden)

    Susanne eKunkel

    2012-01-01

    Full Text Available The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are one or two orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been studied in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as the network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Blue Gene/P architecture where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of a neuronal simulator as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of key contributing components to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place.
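
    A linear memory model of the kind described can be stated in a few lines; the sketch below is a hypothetical instance with placeholder coefficients (m_neuron, m_synapse, m_base, and k_syn are invented and would have to be fitted to a real simulator).

        def mem_per_core(N, P, m_neuron=1.5e3, m_synapse=48.0,
                         m_base=5e8, k_syn=1e4):
            """Bytes of memory on one core for N neurons spread over P cores."""
            neurons_local = N / P             # neurons hosted on this core
            synapses_local = k_syn * N / P    # incoming synapses stored locally
            return m_base + m_neuron * neurons_local + m_synapse * synapses_local

        # Saturation check against, e.g., a 2 GB/core Blue Gene/P-like budget:
        print(mem_per_core(1e8, 65536) / 2**30, "GiB per core")  # ~1.15 GiB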

  13. Understanding bulk behavior of particulate materials from particle scale simulations

    Science.gov (United States)

    Deng, Xiaoliang

    Particulate materials play an increasingly significant role in various industries, such as pharmaceutical manufacturing, food, mining, and civil engineering. The objective of this research is to better understand bulk behaviors of particulate materials from particle scale simulations. Packing properties of assemblies of particles are investigated first, focusing on the effects of particle size, surface energy, and aspect ratio on the coordination number, porosity, and packing structures. The simulation results show that particle sizes, surface energy, and aspect ratio all influence the porosity of packing to various degrees. The heterogeneous force networks within a particle assembly under external compressive loading are investigated as well. The results show that coarse-coarse contacts dominate the strong network and coarse-fine contacts dominate the total network. Next, DEM models are developed to simulate the particle dynamics inside a conical screen mill (comil) and a magnetically assisted impaction mixer (MAIM), both of which are important particle processing devices. For the comil, the mean residence time (MRT), spatial distribution of particles, along with the collision dynamics between particles as well as particle and vessel geometries are examined as a function of the various operating parameters such as impeller speed, screen hole size, open area, and feed rate. The simulation results can help better understand dry coating experimental results using the comil. For the MAIM system, the magnetic force is incorporated into the contact model, allowing the interactions between magnets to be described. The simulation results reveal the connections between homogeneity of the mixture and particle scale variables such as the size of magnets and the surface energy of non-magnets. In particular, at a fixed mass ratio of magnets to non-magnets and fixed surface energy, smaller magnets lead to better homogeneity of mixing, which is in good agreement with previously published experimental results. Last but not

  14. Orientation of biomolecular assemblies in a microfluidic jet

    International Nuclear Information System (INIS)

    Priebe, M; Kalbfleisch, S; Tolkiehn, M; Salditt, T; Koester, S; Abel, B; Davies, R J

    2010-01-01

    We have investigated multilamellar lipid assemblies in a microfluidic jet, operating at high shear rates of the order of 10^7 s^-1. Compared to classical Couette cells or rheometers, the shear rate was increased by at least 2-3 orders of magnitude, and the sample volume was scaled down correspondingly. At the same time, the jet is characterized by high extensional stress due to elongational flow. A focused synchrotron x-ray beam was used to measure the structure and orientation of the lipid assemblies in the jet. The diffraction patterns indicate conventional multilamellar phases, aligned with the membrane normals oriented along the velocity gradient of the jet. The results indicate that the setup may be well suited for coherent diffractive imaging of oriented biomolecular assemblies and macromolecules at the future x-ray free electron laser (XFEL) sources.

  15. DNA-assisted swarm control in a biomolecular motor system.

    Science.gov (United States)

    Keya, Jakia Jannat; Suzuki, Ryuhei; Kabir, Arif Md Rashedul; Inoue, Daisuke; Asanuma, Hiroyuki; Sada, Kazuki; Hess, Henry; Kuzuya, Akinori; Kakugo, Akira

    2018-01-31

    In nature, swarming behavior has evolved repeatedly among motile organisms because it confers a variety of beneficial emergent properties. These include improved information gathering, protection from predators, and resource utilization. Some organisms, e.g., locusts, switch between solitary and swarm behavior in response to external stimuli. Aspects of swarming behavior have been demonstrated for motile supramolecular systems composed of biomolecular motors and cytoskeletal filaments, where cross-linkers induce large scale organization. The capabilities of such supramolecular systems may be further extended if the swarming behavior can be programmed and controlled. Here, we demonstrate that the swarming of DNA-functionalized microtubules (MTs) propelled by surface-adhered kinesin motors can be programmed and reversibly regulated by DNA signals. Emergent swarm behavior, such as translational and circular motion, can be selected by tuning the MT stiffness. Photoresponsive DNA containing azobenzene groups enables switching between solitary and swarm behavior in response to stimulation with visible or ultraviolet light.

  16. THz time domain spectroscopy of biomolecular conformational modes

    International Nuclear Information System (INIS)

    Markelz, Andrea; Whitmire, Scott; Hillebrecht, Jay; Birge, Robert

    2002-01-01

    We discuss the use of terahertz time domain spectroscopy for studies of conformational flexibility and conformational change in biomolecules. Protein structural dynamics are vital to biological function, with protein flexibility affecting enzymatic reaction rates and sensory transduction cycling times. Conformational mode dynamics occur on the picosecond timescale, with the collective vibrational modes associated with these large scale structural motions lying in the 1-100 cm^-1 range. We have performed THz time domain spectroscopy (TTDS) of several biomolecular systems to explore the sensitivity of TTDS to distinguish different molecular species, different mutations within a single species and different conformations of a given biomolecule. We compare the measured absorbances to normal mode calculations and find that the TTDS absorbance reflects the density of normal modes determined by molecular mechanics calculations, and is sensitive to both conformation and mutation. These early studies demonstrate some of the advantages and limitations of using TTDS for the study of biomolecules

  17. Numerical simulation of a small-scale biomass boiler

    International Nuclear Information System (INIS)

    Collazo, J.; Porteiro, J.; Míguez, J.L.; Granada, E.; Gómez, M.A.

    2012-01-01

    Highlights: ► Simplified model for biomass combustion was developed. ► Porous zone conditions are used in the bed. ► Model is fully integrated in a commercial CFD code to simulate a small scale pellet boiler. ► Pollutant emissions are well predicted. ► Simulation provides extensive information about the behaviour of the boiler. - Abstract: This paper presents a computational fluid dynamic simulation of a domestic pellet boiler. Combustion of the solid fuel in the burner is an important issue when discussing the simulation of this type of system. A simplified method based on a thermal balance was developed in this work to introduce the effects provoked by pellet combustion in the boiler simulation. The model predictions were compared with the experimental measurements, and a good agreement was found. The results of the boiler analysis show that the position of the water tubes, the distribution of the air inlets and the air infiltrations are the key factors leading to the high emission levels present in this type of system.

  18. Atomistic simulations of graphite etching at realistic time scales.

    Science.gov (United States)

    Aussems, D U B; Bal, K M; Morgan, T W; van de Sanden, M C M; Neyts, E C

    2017-10-01

    Hydrogen-graphite interactions are relevant to a wide variety of applications, ranging from astrophysics to fusion devices and nano-electronics. In order to shed light on these interactions, atomistic simulation using Molecular Dynamics (MD) has been shown to be an invaluable tool. It suffers, however, from severe time-scale limitations. In this work we apply the recently developed Collective Variable-Driven Hyperdynamics (CVHD) method to hydrogen etching of graphite for varying inter-impact times up to a realistic value of 1 ms, which corresponds to a flux of ∼10^20 m^-2 s^-1. The results show that the erosion yield, hydrogen surface coverage and species distribution are significantly affected by the time between impacts. This can be explained by the higher probability of C-C bond breaking due to the prolonged exposure to thermal stress and the subsequent transition from ion- to thermal-induced etching. This latter regime of thermal-induced etching - chemical erosion - is here accessed for the first time using atomistic simulations. In conclusion, this study demonstrates that accounting for long time-scales significantly affects ion bombardment simulations and should not be neglected in a wide range of conditions, in contrast to what is typically assumed.

  19. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    Full Text Available A new package for accelerating large-scale phase-field simulations was developed by using GPU based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneity, including long-range elastic, magnetostatic, and electrostatic interactions. Using a specific algorithm in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, Cahn-Hilliard equation, and a phase-field model with long-range interaction were each solved with the GPU-based algorithm to test the performance of the package. Comparing the calculation results between the solver executed on a single CPU and the one on GPU, the GPU solver was found to run up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
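
    To make the numerical scheme concrete, below is a minimal CPU sketch of a semi-implicit Fourier step for the Allen-Cahn equation du/dt = -M (u^3 - u - kappa * lap u), with the stiff gradient term treated implicitly in Fourier space. Parameter values are illustrative; the real package maps the same FFT steps to CUDA.

        import numpy as np

        n, dx, dt, M, kappa = 256, 1.0, 0.1, 1.0, 1.0
        k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
        k2 = k[:, None]**2 + k[None, :]**2

        u = 0.01 * np.random.default_rng(0).standard_normal((n, n))
        for _ in range(1000):
            f_hat = np.fft.fft2(u**3 - u)        # explicit bulk driving force
            u_hat = (np.fft.fft2(u) - dt * M * f_hat) / (1.0 + dt * M * kappa * k2)
            u = np.fft.ifft2(u_hat).real         # implicit treatment of -kappa*lap u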

  20. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    Directory of Open Access Journals (Sweden)

    Lorenzo L. Pesce

    2013-01-01

    Full Text Available Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  1. Large-scale modeling of epileptic seizures: scaling properties of two parallel neuronal network simulation algorithms.

    Science.gov (United States)

    Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  2. Validation of Simulation Model for Full Scale Wave Simulator and Discrete Fluid Power PTO System

    DEFF Research Database (Denmark)

    Hansen, Anders Hedegaard; Pedersen, Henrik C.; Hansen, Rico Hjerm

    2014-01-01

    In controller development for large scale machinery, a good simulation model may serve as a time and money saving factor as well as a safety precaution. Having good models enables the developer to design and test control strategies in a safe and possibly less time consuming environment. For applic...

  3. Micro and Nanotechnologies Enhanced Biomolecular Sensing

    Directory of Open Access Journals (Sweden)

    Tza-Huei Wang

    2013-07-01

    Full Text Available This editorial summarizes some of the recent advances of micro and nanotechnology-based tools and devices for biomolecular detection. These include the incorporation of nanomaterials into a sensor surface or directly interfacing with molecular probes to enhance target detection via more rapid and sensitive responses, and the use of self-assembled organic/inorganic nanocomposites that exhibit exceptional spectroscopic properties to enable facile homogeneous assays with efficient binding kinetics. Discussions also include some insight into the microfluidic principles behind the development of an integrated sample preparation and biosensor platform toward a miniaturized and fully functional system for point of care applications.

  4. Fire spread simulation of a full scale cable tunnel

    International Nuclear Information System (INIS)

    Huhtanen, R.

    1999-11-01

    A fire simulation of a full scale tunnel was performed by using the commercial code EFFLUENT as the simulation platform. Estimates were made for fire spread on the stacked cable trays, the possibility of fire spread to the cable trays on the opposite wall of the tunnel, the detection time of smoke detectors in the smouldering phase and the response of sprinkler heads in the flaming phase. According to the simulation, the rise of temperature in the smouldering phase is minimal, only of the order of 1 °C. The estimates of optical density of smoke show that normal smoke detectors should give an alarm within 2-4 minutes from the beginning of the smouldering phase, depending on the distance to the detector (in this case it was assumed that the thermal source connected to the smoke source was 50 W). The flow conditions at smoke detectors may be challenging, because the velocity magnitude is rather low at this phase. At 4 minutes the maximum velocity at the detectors is 0.12 m/s. During the flaming phase (beginning at 11 minutes) fire spreads on the stacked cable trays in an expected way, although the ignition criterion seems to perform poorly when ignition of new objects is considered. The upper cable trays are forced to ignite by boundary condition definitions according to the experience from the full scale experiment and an earlier simulation. After 30 minutes the hot layer in the room becomes so hot that it speeds up the fire spread and the rate of heat release of burning objects. Further, the hot layer ignites the cable trays on the opposite wall of the tunnel after 45 minutes. It is estimated that the sprinkler heads would be activated at 20-22 minutes near the fire source and at 24-28 minutes a little further from the fire source when fast sprinkler heads are used. The slow heads are activated between 26-32 minutes. (orig.)

  5. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    DEFF Research Database (Denmark)

    Rogowski, K.; Hansen, Martin Otto Laver; Maroński, R.

    2016-01-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads...

  6. pH in atomic scale simulations of electrochemical interfaces

    DEFF Research Database (Denmark)

    Rossmeisl, Jan; Chan, Karen; Ahmed, Rizwan

    2013-01-01

    Electrochemical reaction rates can strongly depend on pH, and there is increasing interest in electrocatalysis in alkaline solution. To date, no method has been devised to address pH in atomic scale simulations. We present a simple method to determine the atomic structure of the metal|solution interface at a given pH and electrode potential. Using Pt(111)|water as an example, we show the effect of pH on the interfacial structure, and discuss its impact on reaction energies and barriers. This method paves the way for ab initio studies of pH effects on the structure and electrocatalytic activity...

  7. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred

    2010-08-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one billion elements. We investigate communication protocols for the GPU cluster to compensate for the slow Gigabit Ethernet network between the GPU compute nodes and to maintain overall efficiency. A diesel engine intake-port and a nozzle, meshed in different resolutions, give good real world examples for the scalability tests on the GPU cluster.

  8. Multi-Scale Coupling Between Monte Carlo Molecular Simulation and Darcy-Scale Flow in Porous Media

    KAUST Repository

    Saad, Ahmed Mohamed; Kadoura, Ahmad Salim; Sun, Shuyu

    2016-01-01

    In this work, an efficient coupling between Monte Carlo (MC) molecular simulation and Darcy-scale flow in porous media is presented. The cell centered finite difference method with a non-uniform rectangular mesh was used to discretize the simulation

  9. Simulation of flow in dual-scale porous media

    Science.gov (United States)

    Tan, Hua

    Liquid composite molding (LCM) is one of the most effective processes for manufacturing near net-shaped parts from fiber-reinforced polymer composites. The quality of LCM products and the efficiency of the process depend strongly on the wetting of fiber preforms during the mold-filling stage of LCM. Mold-filling simulation is a very effective approach to optimize the LCM process and mold design. Recent studies have shown that the flow modeling for the single-scale fiber preforms (made from random mats) has difficulties in accurately predicting the wetting in the dual-scale fiber preforms (made from woven and stitched fabrics); the latter are characterized by the presence of unsaturated flow created due to two distinct length-scales of pores (i.e., large pores outside the tows and small pores inside the tows) in the same media. In this study, we first develop a method to evaluate the accuracy of the permeability-measuring devices for LCM, and conduct a series of 1-D mold-filling experiments for different dual-scale fabrics. The volume averaging method is then applied to derive the averaged governing equations for modeling the macroscopic flow through the dual-scale fabrics. The two sets of governing equations are coupled with each other through the sink terms representing the absorptions of mass, energy, and species (degree of resin cure) from the global flow by the local fiber tows. The finite element method (FEM) coupled with the control volume method, also known as the finite element/control volume (FE/CV) method, is employed to solve the governing equations and track the moving boundary signifying the moving liquid-front. The numerical computations are conducted with the help of an in-house developed computer program called PORE-FLOW©. We develop the flux-corrected transport (FCT) based FEM to stabilize the convection-dominated energy and species equations. A fast methodology is proposed to simulate the dual-scale flow under isothermal conditions, where flow

  10. High-speed AFM for Studying Dynamic Biomolecular Processes

    Science.gov (United States)

    Ando, Toshio

    2008-03-01

    Biological molecules show their vital activities only in aqueous solutions. It had been one of the dreams in biological sciences to directly observe biological macromolecules (protein, DNA) at work under physiological conditions, because such observation is the most straightforward way to understand their dynamic behaviors and functional mechanisms. Optical microscopy has insufficient spatial resolution, and electron microscopy is not applicable to in-liquid samples. Atomic force microscopy (AFM) can visualize molecules in liquids at high resolution, but its imaging rate was too low to capture dynamic biological processes. This slow imaging rate is because AFM employs mechanical probes (cantilevers) and mechanical scanners to detect the sample height at each pixel. It is quite difficult to quickly move a mechanical device of macroscopic size with sub-nanometer accuracy without producing unwanted vibrations. It is also difficult to maintain the delicate contact between a probe tip and fragile samples. Two key techniques are required to realize high-speed AFM for biological research: fast feedback control to maintain a weak tip-sample interaction force and a technique to suppress mechanical vibrations of the scanner. Various efforts have been carried out in the past decade to materialize high-speed AFM. The current high-speed AFM can capture images on video at 30-60 frames/s for a scan range of 250 nm and 100 scan lines, without significantly disturbing weak biomolecular interactions. Our recent studies demonstrated that this new microscope can reveal biomolecular processes such as myosin V walking along actin tracks and association/dissociation dynamics of chaperonin GroEL-GroES that occurs in a negatively cooperative manner. The capacity for nanometer-scale visualization of dynamic processes in liquids will drive innovation in biological research. In addition, it will open a new way to study dynamic chemical/physical processes of various phenomena that occur at liquid-solid interfaces.

  11. From micro-scale 3D simulations to macro-scale model of periodic porous media

    Science.gov (United States)

    Crevacore, Eleonora; Tosco, Tiziana; Marchisio, Daniele; Sethi, Rajandrea; Messina, Francesca

    2015-04-01

    In environmental engineering, the transport of colloidal suspensions in porous media is studied to understand the fate of potentially harmful nano-particles and to design new remediation technologies. In this perspective, averaging techniques applied to micro-scale numerical simulations are a powerful tool to extrapolate accurate macro-scale models. Choosing two simplified packing configurations of soil grains and starting from a single elementary cell (module), it is possible to take advantage of the periodicity of the structures to reduce the computational cost of full 3D simulations. Steady-state flow simulations for an incompressible fluid in the laminar regime are implemented. Transport simulations are based on the pore-scale advection-diffusion equation, which can be enriched by also introducing the Stokes velocity (to account for the effect of gravity) and the interception mechanism. Simulations are carried out on a domain composed of several elementary modules, which serve as control volumes in a finite volume method for the macro-scale model. The periodicity of the medium implies the periodicity of the flow field, and this is of great importance during the up-scaling procedure, allowing relevant simplifications. Micro-scale numerical data are treated in order to compute the mean concentration (volume and area averages) and fluxes on each module. The simulation results are used to compare the micro-scale averaged equation to the integral form of the macroscopic one, making a distinction between those terms that can be computed exactly and those for which a closure is needed. Of particular interest is the investigation of the origin of macro-scale terms such as dispersion and tortuosity, and the attempt to describe them with known micro-scale quantities. Traditionally, to study colloidal transport many simplifications are introduced, such as those concerning ultra-simplified geometries that account for a single collector. Gradual removal of such hypotheses leads to a

  12. Smartphones for cell and biomolecular detection.

    Science.gov (United States)

    Liu, Xiyuan; Lin, Tung-Yi; Lillehoj, Peter B

    2014-11-01

    Recent advances in biomedical science and technology have played a significant role in the development of new sensors and assays for cell and biomolecular detection. Generally, these efforts are aimed at reducing the complexity and costs associated with diagnostic testing so that it can be performed outside of a laboratory or hospital setting, requiring minimal equipment and user involvement. In particular, point-of-care (POC) testing offers immense potential for many important applications including medical diagnosis, environmental monitoring, food safety, and biosecurity. When coupled with smartphones, POC systems can offer portability, ease of use and enhanced functionality while maintaining performance. This review article focuses on recent advancements and developments in smartphone-based POC systems within the last 6 years with an emphasis on cell and biomolecular detection. These devices typically comprise multiple components, such as detectors, sample processors, disposable chips, batteries, and software, which are integrated with a commercial smartphone. One of the most important aspects of developing these systems is the integration of these components onto a compact and lightweight platform that requires minimal power. Researchers have demonstrated several promising approaches employing various detection schemes and device configurations, and it is expected that further developments in biosensors, battery technology and miniaturized electronics will enable smartphone-based POC technologies to become more mainstream tools in the scientific and biomedical communities.

  13. Multi-Scale Initial Conditions For Cosmological Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, Oliver [KIPAC, Menlo Park]; Abel, Tom [KIPAC, Menlo Park; ZAH, Heidelberg; HITS, Heidelberg]

    2011-11-04

    We discuss a new algorithm to generate multi-scale initial conditions with multiple levels of refinements for cosmological 'zoom-in' simulations. The method uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). The new algorithm achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing. An optional hybrid multi-grid and Fast Fourier Transform (FFT) based scheme is introduced which has identical Fourier-space behaviour as traditional approaches. Using a suite of re-simulations of a galaxy cluster halo, our real-space-based approach is found to reproduce correlation functions, density profiles, key halo properties and subhalo abundances with per cent level accuracy. Finally, we generalize our approach for two-component baryon and dark-matter simulations and demonstrate that the power spectrum evolution is in excellent agreement with linear perturbation theory. For initial baryon density fields, it is suggested to use the local Lagrangian approximation in order to generate a density field for mesh-based codes that is consistent with the Lagrangian perturbation theory instead of the current practice of using the Eulerian linearly scaled densities.
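
    The core step behind such generators, convolving Gaussian white noise with a transfer function to imprint a target power spectrum, can be sketched at a single resolution level (the paper's contribution is doing this adaptively, in real space, across nested refinement levels). A toy power law stands in for a realistic transfer function below, and the box size is arbitrary.

        import numpy as np

        n, boxsize = 128, 100.0  # grid cells per side, box length (illustrative)
        k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
        kk = np.sqrt(k[:, None, None]**2 + k[None, :, None]**2 + k[None, None, :]**2)
        kk[0, 0, 0] = 1.0  # avoid division by zero at the DC mode

        noise = np.random.default_rng(42).standard_normal((n, n, n))
        transfer = kk ** (-3.0 / 2.0)   # toy P(k) ~ k^-3, amplitude arbitrary
        transfer[0, 0, 0] = 0.0         # enforce zero-mean density perturbation
        delta = np.fft.ifftn(np.fft.fftn(noise) * transfer).real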

  14. Global-Scale Hydrology: Simple Characterization of Complex Simulation

    Science.gov (United States)

    Koster, Randal D.

    1999-01-01

    Atmospheric general circulation models (AGCMs) are unique and valuable tools for the analysis of large-scale hydrology. AGCM simulations of climate provide tremendous amounts of hydrological data with a spatial and temporal coverage unmatched by observation systems. To the extent that the AGCM behaves realistically, these data can shed light on the nature of the real world's hydrological cycle. In the first part of the seminar, I will describe the hydrological cycle in a typical AGCM, with some emphasis on the validation of simulated precipitation against observations. The second part of the seminar will focus on a key goal in large-scale hydrology studies, namely the identification of simple, overarching controls on hydrological behavior hidden amidst the tremendous amounts of data produced by the highly complex AGCM parameterizations. In particular, I will show that a simple 50-year-old climatological relation (and a recent extension we made to it) successfully predicts, to first order, both the annual mean and the interannual variability of simulated evaporation and runoff fluxes. The seminar will conclude with an example of a practical application of global hydrology studies. The accurate prediction of weather statistics several months in advance would have tremendous societal benefits, and conventional wisdom today points at the use of coupled ocean-atmosphere-land models for such seasonal-to-interannual prediction. Understanding the hydrological cycle in AGCMs is critical to establishing the potential for such prediction. Our own studies show, among other things, that soil moisture retention can lead to significant precipitation predictability in many midlatitude and tropical regions.

  15. Huge-scale molecular dynamics simulation of multibubble nuclei

    KAUST Repository

    Watanabe, Hiroshi

    2013-12-01

    We have developed molecular dynamics codes for a short-range interaction potential that adopt both the flat-MPI and MPI/OpenMP hybrid parallelizations on the basis of a full domain decomposition strategy. Benchmark simulations involving up to 38.4 billion Lennard-Jones particles were performed on Fujitsu PRIMEHPC FX10, consisting of 4800 SPARC64 IXfx 1.848 GHz processors, at the Information Technology Center of the University of Tokyo, and a performance of 193 teraflops was achieved, which corresponds to a 17.0% execution efficiency. Cavitation processes were also simulated on PRIMEHPC FX10 and SGI Altix ICE 8400EX at the Institute of Solid State Physics of the University of Tokyo, which involved 1.45 billion and 22.9 million particles, respectively. Ostwald-like ripening was observed after multibubble nucleation. Our results demonstrate that direct simulations of multiscale phenomena involving phase transitions from the atomic scale are possible and that the molecular dynamics method is a promising method that can be applied to petascale computers. © 2013 Elsevier B.V. All rights reserved.
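
    For a flavor of what such codes integrate at every step, here is a minimal serial Lennard-Jones sketch in Python/NumPy (velocity-Verlet, periodic boundaries, all-pairs forces): the opposite extreme of the paper's domain-decomposed, 38.4-billion-particle runs, but the same underlying physics. All parameter values are illustrative reduced units.

    import numpy as np

    def lj_forces(pos, box, eps=1.0, sigma=1.0):
        """All-pairs Lennard-Jones forces with the minimum-image convention (O(N^2))."""
        f = np.zeros_like(pos)
        pe = 0.0
        for i in range(len(pos) - 1):
            d = pos[i + 1:] - pos[i]
            d -= box * np.round(d / box)              # minimum image
            r2 = (d * d).sum(axis=1)
            inv6 = (sigma * sigma / r2) ** 3
            pe += float((4.0 * eps * (inv6 * inv6 - inv6)).sum())
            fmag = 24.0 * eps * (2.0 * inv6 * inv6 - inv6) / r2
            fij = fmag[:, None] * d                   # force on the j-particles
            f[i] -= fij.sum(axis=0)                   # reaction on particle i
            f[i + 1:] += fij
        return f, pe

    box, dt, m = 6.0, 0.002, 4                        # 4^3 = 64 particles
    g = (np.arange(m) + 0.5) * (box / m)              # start on a cubic lattice
    pos = np.array(np.meshgrid(g, g, g, indexing="ij")).reshape(3, -1).T.copy()
    vel = np.random.default_rng(0).normal(0.0, 0.5, pos.shape)

    f, _ = lj_forces(pos, box)
    for step in range(200):                           # velocity-Verlet time stepping
        vel += 0.5 * dt * f
        pos = (pos + dt * vel) % box
        f, pe = lj_forces(pos, box)
        vel += 0.5 * dt * f
    print("potential energy per particle:", pe / len(pos))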

  16. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
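
    To make the Newton/Broyden distinction concrete, a bare-bones "good" Broyden iteration updates an approximate Jacobian from secant information instead of re-evaluating it every step. The Python sketch below omits all of the report's limited-memory and large-scale machinery; the seeding finite-difference Jacobian and the two-equation test system are invented for illustration.

    import numpy as np

    def fd_jacobian(f, x, h=1e-6):
        """Forward-difference Jacobian, used only to seed the Broyden iteration."""
        fx = f(x)
        J = np.empty((len(fx), len(x)))
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (f(xp) - fx) / h
        return J

    def broyden(f, x0, tol=1e-10, max_iter=100):
        """Broyden's 'good' method: rank-one secant updates of the Jacobian."""
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        J = fd_jacobian(f, x)
        for _ in range(max_iter):
            if np.linalg.norm(fx) < tol:
                break
            s = np.linalg.solve(J, -fx)               # quasi-Newton step
            x = x + s
            fx_new = f(x)
            J += np.outer(fx_new - fx - J @ s, s) / (s @ s)
            fx = fx_new
        return x

    # Toy system: x^2 + y - 3 = 0 and x + y^2 - 5 = 0, with root (1, 2)
    g = lambda v: np.array([v[0]**2 + v[1] - 3.0, v[0] + v[1]**2 - 5.0])
    print(broyden(g, [1.5, 1.5]))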

  17. Asymmetric fluid criticality. II. Finite-size scaling for simulations.

    Science.gov (United States)

    Kim, Young C; Fisher, Michael E

    2003-10-01

    The vapor-liquid critical behavior of intrinsically asymmetric fluids is studied in finite systems of linear dimensions L focusing on periodic boundary conditions, as appropriate for simulations. The recently propounded "complete" thermodynamic (L → ∞) scaling theory incorporating pressure mixing in the scaling fields as well as corrections to scaling [Phys. Rev. E 67, 061506 (2003)] is extended to finite L, initially in a grand canonical representation. The theory allows for a Yang-Yang anomaly in which, when L → ∞, the second temperature derivative d²μσ/dT² of the chemical potential along the phase boundary μσ(T) diverges when T → Tc⁻. The finite-size behavior of various special critical loci in the temperature-density or (T, ρ) plane, in particular, the k-inflection susceptibility loci and the Q-maximal loci, derived from Q_L(T; ⟨ρ⟩_L) ≡ ⟨m²⟩_L²/⟨m⁴⟩_L where m ≡ ρ − ⟨ρ⟩_L, is carefully elucidated and shown to be of value in estimating Tc and ρc. Concrete illustrations are presented for the hard-core square-well fluid and for the restricted primitive model electrolyte including an estimate of the correlation exponent ν that confirms Ising-type character. The treatment is extended to the canonical representation where further complications appear.
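
    The moment ratio Q_L in the abstract is straightforward to evaluate from simulation output. A minimal sketch (synthetic Gaussian samples stand in for grand canonical density measurements; for purely Gaussian fluctuations Q → 1/3, whereas at criticality Q approaches a nontrivial universal value):

    import numpy as np

    def q_parameter(rho_samples):
        """Moment ratio Q_L = <m^2>_L^2 / <m^4>_L with m = rho - <rho>_L."""
        m = rho_samples - rho_samples.mean()
        return (m**2).mean() ** 2 / (m**4).mean()

    # Synthetic check: near-Gaussian density fluctuations give Q -> 1/3
    rng = np.random.default_rng(1)
    print(q_parameter(rng.normal(0.5, 0.01, 200_000)))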

  18. Evaluation of the airway of the SimMan full-scale patient simulator

    DEFF Research Database (Denmark)

    Hesselfeldt, R; Kristensen, M S; Rasmussen, L S

    2005-01-01

    SimMan is a full-scale patient simulator, capable of simulating normal and pathological airways. The performance of SimMan has never been critically evaluated.

  19. Large-scale ground motion simulation using GPGPU

    Science.gov (United States)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computation resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of different simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumptions of the source models for future earthquakes. To conquer the problem of restricted computational resources, we introduced the use of GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation that has traditionally been conducted by the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the function for GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in the two horizontal directions and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong-scaling test using a model with about 22 million grid points and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs, respectively. Next, we examined a weak-scaling test in which the model sizes (numbers of grid points) are increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grid points using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number
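
    The kernel such GPU ports accelerate is, at heart, a finite-difference stencil swept over the grid each time step. A minimal 2-D acoustic (scalar wave) update in Python/NumPy is sketched below; in a CUDA implementation each grid point would map to a thread, and the model would be split across GPUs as the abstract describes. Grid size, velocity, and the impulsive source are all illustrative, not GMS parameters.

    import numpy as np

    nx, nz, dx, dt, c = 200, 200, 10.0, 1e-3, 3000.0   # grid, spacing [m], step [s], velocity [m/s]
    assert c * dt / dx < 1.0 / np.sqrt(2.0)            # CFL stability condition in 2-D

    u_prev = np.zeros((nx, nz))
    u = np.zeros((nx, nz))
    u[nx // 2, nz // 2] = 1.0                          # impulsive point source

    for step in range(500):
        # second-order five-point Laplacian on the interior
        lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
               - 4.0 * u[1:-1, 1:-1])
        u_next = np.zeros_like(u)
        u_next[1:-1, 1:-1] = (2.0 * u[1:-1, 1:-1] - u_prev[1:-1, 1:-1]
                              + (c * dt / dx) ** 2 * lap)
        u_prev, u = u, u_next                          # rigid (zero) boundaries

    print("wavefield energy proxy:", float((u ** 2).sum()))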

  20. Multi-scale imaging and elastic simulation of carbonates

    Science.gov (United States)

    Faisal, Titly Farhana; Awedalkarim, Ahmed; Jouini, Mohamed Soufiane; Jouiad, Mustapha; Chevalier, Sylvie; Sassi, Mohamed

    2016-05-01

    for this current unresolved phase is important. In this work we take a multi-scale imaging approach by first extracting a smaller 0.5" core and scanning it at approximately 13 µm, then further extracting a 5 mm diameter core scanned at 5 µm. From this last scale, regions of interest (containing unresolved areas) are identified for scanning at higher resolutions using the Focused Ion Beam (FIB/SEM) scanning technique, reaching 50 nm resolution. Numerical simulation is run on such a small unresolved section to obtain a better estimate of the effective moduli, which is then used as input for simulations performed using CT images. Results are compared with experimental acoustic test moduli obtained also at two scales: 1.5" and 0.5" diameter cores.

  1. Convective aggregation in realistic convective-scale simulations

    Science.gov (United States)

    Holloway, Christopher E.

    2017-06-01

    To investigate the real-world relevance of idealized-model convective self-aggregation, five 15-day cases of real organized convection in the tropics are simulated. These include multiple simulations of each case to test sensitivities of the convective organization and mean states to interactive radiation, interactive surface fluxes, and evaporation of rain. These simulations are compared to self-aggregation seen in the same model configured to run in idealized radiative-convective equilibrium. Analysis of the budget of the spatial variance of column-integrated frozen moist static energy shows that control runs have significant positive contributions to organization from radiation and negative contributions from surface fluxes and transport, similar to idealized runs once they become aggregated. Despite identical lateral boundary conditions for all experiments in each case, systematic differences in mean column water vapor (CWV), CWV distribution shape, and CWV autocorrelation length scale are found between the different sensitivity runs, particularly for those without interactive radiation, showing that there are at least some similarities in sensitivities to these feedbacks in both idealized and realistic simulations (although the organization of precipitation shows less sensitivity to interactive radiation). The magnitudes and signs of these systematic differences are consistent with a rough equilibrium between (1) equalization due to advection from the lateral boundaries and (2) disaggregation due to the absence of interactive radiation, implying disaggregation rates comparable to those in idealized runs with aggregated initial conditions and noninteractive radiation. This points to a plausible similarity in the way that radiation feedbacks maintain aggregated convection in both idealized simulations and the real world. Plain Language Summary: Understanding the processes that lead to the organization of tropical rainstorms is an important challenge for weather

  2. Electron-correlated fragment-molecular-orbital calculations for biomolecular and nano systems.

    Science.gov (United States)

    Tanaka, Shigenori; Mochizuki, Yuji; Komeiji, Yuto; Okiyama, Yoshio; Fukuzawa, Kaori

    2014-06-14

    Recent developments in the fragment molecular orbital (FMO) method for theoretical formulation, implementation, and application to nano and biomolecular systems are reviewed. The FMO method has enabled ab initio quantum-mechanical calculations for large molecular systems such as protein-ligand complexes at a reasonable computational cost in a parallelized way. There has been a wealth of application outcomes from the FMO method in the fields of biochemistry, medicinal chemistry and nanotechnology, in which the electron correlation effects play vital roles. With the aid of the advances in high-performance computing, the FMO method promises larger, faster, and more accurate simulations of biomolecular and related systems, including the descriptions of dynamical behaviors in solvent environments. The current status and future prospects of the FMO scheme are addressed in these contexts.
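
    The bookkeeping behind the two-body FMO expansion (FMO2) is easy to state: the total energy is assembled from monomer energies plus pairwise corrections, E ≈ Σ_I E_I + Σ_{I<J} (E_IJ − E_I − E_J). A toy illustration in Python; the fragment labels and energies below are invented numbers standing in for the ab initio monomer and dimer calculations the method actually performs.

    # Hypothetical fragment (monomer) energies E_I and dimer energies E_IJ, in hartree
    e_mono = {"A": -76.01, "B": -56.22, "C": -40.52}
    e_dimer = {("A", "B"): -132.26, ("A", "C"): -116.55, ("B", "C"): -96.75}

    def fmo2_energy(e_mono, e_dimer):
        """FMO2 total energy: sum of monomers plus pairwise interaction corrections."""
        total = sum(e_mono.values())
        for (i, j), e_ij in e_dimer.items():
            total += e_ij - e_mono[i] - e_mono[j]
        return total

    print(f"FMO2 total energy: {fmo2_energy(e_mono, e_dimer):.4f} hartree")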

  3. Characteristics of Tornado-Like Vortices Simulated in a Large-Scale Ward-Type Simulator

    Science.gov (United States)

    Tang, Zhuo; Feng, Changda; Wu, Liang; Zuo, Delong; James, Darryl L.

    2018-02-01

    Tornado-like vortices are simulated in a large-scale Ward-type simulator to further advance the understanding of such flows, and to facilitate future studies of tornado wind loading on structures. Measurements of the velocity fields near the simulator floor and the resulting floor surface pressures are interpreted to reveal the mean and fluctuating characteristics of the flow as well as the characteristics of the static-pressure deficit. We focus on the manner in which the swirl ratio and the radial Reynolds number affect these characteristics. The transition of the tornado-like flow from a single-celled vortex to a dual-celled vortex with increasing swirl ratio and the impact of this transition on the flow field and the surface-pressure deficit are closely examined. The mean characteristics of the surface-pressure deficit caused by tornado-like vortices simulated at a number of swirl ratios compare well with the corresponding characteristics recorded during full-scale tornadoes.

  4. Modeling and Simulation of a lab-scale Fluidised Bed

    Directory of Open Access Journals (Sweden)

    Britt Halvorsen

    2002-04-01

    The flow behaviour of a lab-scale fluidised bed with a central jet has been simulated. The study has been performed with an in-house computational fluid dynamics (CFD) model named FLOTRACS-MP-3D. The CFD model is based on a multi-fluid Eulerian description of the phases, where the kinetic theory for granular flow forms the basis for turbulence modelling of the solid phases. A two-dimensional Cartesian co-ordinate system is used to describe the geometry. This paper discusses whether bubble formation and bed height are influenced by coefficient of restitution, drag model and number of solid phases. Measurements of the same fluidised bed with a digital video camera are performed. Computational results are compared with the experimental results, and the discrepancies are discussed.

  5. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    Science.gov (United States)

    Rogowski, K.; Hansen, M. O. L.; Maroński, R.; Lichota, P.

    2016-09-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads and wake velocity profiles behind the rotor are compared with experimental data taken from literature. The level of agreement between CFD and experimental results is reasonable.

  6. Design and Control of Full Scale Wave Energy Simulator System

    DEFF Research Database (Denmark)

    Pedersen, Henrik C.; Hansen, Anders Hedegaard; Hansen, Rico Hjerm

    2012-01-01

    For wave energy to become feasible it is a requirement that the efficiency and reliability of the power take-off (PTO) systems are significantly improved. The costs of installing and testing PTO-systems at sea are however very high, and the focus of the current paper is therefore on the design of a full scale wave simulator for testing PTO-systems for point absorbers. The main challenge here is to design a system which mimics the behavior of a wave when interacting with a given PTO-system. The paper includes a description of the developed system, located at Aalborg University, and the considerations behind the design. Based on the description, a model of the system is presented, which, along with a description of the wave theory applied, forms the foundation for the control strategy. The objective of the control strategy is to emulate not only the wave behavior, but also the dynamic wave

  7. Molecular Dynamics Simulations for Resolving Scaling Laws of Polyethylene Melts

    Directory of Open Access Journals (Sweden)

    Kazuaki Z. Takahashi

    2017-01-01

    Long-timescale molecular dynamics simulations were performed to estimate the actual physical nature of a united-atom model of polyethylene (PE). Several scaling laws for representative polymer properties are compared to theoretical predictions. Internal structure results indicate a clear departure from theoretical predictions that assume ideal chain statics. Chain motion deviates from predictions that assume ideal motion of short chains. With regard to linear viscoelasticity, the presence or absence of entanglements strongly affects the duration of the theoretical behavior. Overall, the results indicate that Gaussian statics and dynamics are not necessarily established for real atomistic models of PE. Moreover, the actual physical nature should be carefully considered when using atomistic models for applications that expect typical polymer behaviors.
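
    One of the scaling laws at issue, the size of an ideal chain growing as R ~ N^(1/2), can be checked numerically in a few lines. The sketch below uses freely-jointed random walks rather than a united-atom PE model (an assumption made purely for brevity) and estimates the exponent by regression on log R versus log N:

    import numpy as np

    rng = np.random.default_rng(7)

    def end_to_end(n_steps, n_chains=2000):
        """RMS end-to-end distance of freely-jointed 3-D random walks (ideal chains)."""
        steps = rng.normal(size=(n_chains, n_steps, 3))
        steps /= np.linalg.norm(steps, axis=2, keepdims=True)   # unit bond vectors
        r = steps.sum(axis=1)                                   # end-to-end vectors
        return np.sqrt((r * r).sum(axis=1).mean())

    lengths = np.array([16, 32, 64, 128, 256])
    sizes = np.array([end_to_end(n) for n in lengths])
    nu = np.polyfit(np.log(lengths), np.log(sizes), 1)[0]
    print(f"estimated scaling exponent nu = {nu:.3f} (ideal-chain value: 0.5)")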

  8. Criteria for Scaled Laboratory Simulations of Astrophysical MHD Phenomena

    International Nuclear Information System (INIS)

    Ryutov, D. D.; Drake, R. P.; Remington, B. A.

    2000-01-01

    We demonstrate that two systems described by the equations of the ideal magnetohydrodynamics (MHD) evolve similarly, if the initial conditions are geometrically similar and certain scaling relations hold. The thermodynamic properties of the gas must be such that the internal energy density is proportional to the pressure. The presence of the shocks is allowed. We discuss the applicability conditions of the ideal MHD and demonstrate that they are satisfied with a large margin both in a number of astrophysical objects, and in properly designed simulation experiments with high-power lasers. This allows one to perform laboratory experiments whose results can be used for quantitative interpretation of various effects of astrophysical MHD. (c) 2000 The American Astronomical Society
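
    In practice, two ideal-MHD systems evolve similarly when, besides geometric similarity, a small set of dimensionless combinations match. A hedged sketch of such a check follows; the Euler number Eu = v(ρ/p)^(1/2) is the invariant commonly quoted in this literature, plasma beta is a standard companion, and all numerical values below are invented for illustration.

    import numpy as np

    MU0 = 4e-7 * np.pi   # vacuum permeability [H/m]

    def similarity_numbers(v, rho, p, B):
        """Dimensionless invariants for ideal-MHD scaling (Euler number, plasma beta)."""
        eu = v * np.sqrt(rho / p)
        beta = p / (B**2 / (2.0 * MU0))
        return eu, beta

    # Illustrative only: a laser-driven plasma vs. a scaled astrophysical flow.
    # Similar evolution requires these pairs of numbers to (approximately) match.
    lab = similarity_numbers(v=2.0e5, rho=1.0e3, p=1.0e11, B=10.0)
    astro = similarity_numbers(v=2.0e6, rho=1.0e-20, p=1.0e-10, B=3.2e-10)
    print("lab   (Eu, beta):", lab)
    print("astro (Eu, beta):", astro)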

  9. Theory-based transport simulation of tokamaks: density scaling

    International Nuclear Information System (INIS)

    Ghanem, E.S.; Kinsey, J.; Singer, C.; Bateman, G.

    1992-01-01

    There has been a sizeable amount of work in the past few years using theoretically based flux-surface-average transport models to simulate various types of experimental tokamak data. Here we report two such studies, concentrating on the response of the plasma to variation of the line averaged electron density. The first study reported here uses a transport model described by Ghanem et al. to examine the response of global energy confinement time in ohmically heated discharges. The second study reported here uses a closely related and more recent transport model described by Bateman to examine the response of temperature profiles to changes in line-average density in neutral-beam-heated discharges. Work on developing a common theoretical model for these and other scaling experiments is in progress. (author) 5 refs., 2 figs

  10. Contact area of rough spheres: Large scale simulations and simple scaling laws

    Science.gov (United States)

    Pastewka, Lars; Robbins, Mark O.

    2016-05-01

    We use molecular simulations to study the nonadhesive and adhesive atomic-scale contact of rough spheres with radii ranging from nanometers to micrometers over more than ten orders of magnitude in applied normal load. At the lowest loads, the interfacial mechanics is governed by the contact mechanics of the first asperity that touches. The dependence of contact area on normal force becomes linear at intermediate loads and crosses over to Hertzian at the largest loads. By combining theories for the limiting cases of nominally flat rough surfaces and smooth spheres, we provide parameter-free analytical expressions for contact area over the whole range of loads. Our results establish a range of validity for common approximations that neglect curvature or roughness in modeling objects on scales from atomic force microscope tips to ball bearings.

  11. Contact area of rough spheres: Large scale simulations and simple scaling laws

    Energy Technology Data Exchange (ETDEWEB)

    Pastewka, Lars, E-mail: lars.pastewka@kit.edu [Institute for Applied Materials & MicroTribology Center muTC, Karlsruhe Institute of Technology, Engelbert-Arnold-Straße 4, 76131 Karlsruhe (Germany); Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, Maryland 21218 (United States); Robbins, Mark O., E-mail: mr@pha.jhu.edu [Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, Maryland 21218 (United States)

    2016-05-30

    We use molecular simulations to study the nonadhesive and adhesive atomic-scale contact of rough spheres with radii ranging from nanometers to micrometers over more than ten orders of magnitude in applied normal load. At the lowest loads, the interfacial mechanics is governed by the contact mechanics of the first asperity that touches. The dependence of contact area on normal force becomes linear at intermediate loads and crosses over to Hertzian at the largest loads. By combining theories for the limiting cases of nominally flat rough surfaces and smooth spheres, we provide parameter-free analytical expressions for contact area over the whole range of loads. Our results establish a range of validity for common approximations that neglect curvature or roughness in modeling objects on scales from atomic force microscope tips to ball bearings.
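
    The two limiting regimes being interpolated between are both classical results: a nominally flat rough surface has a contact area growing linearly with load, roughly A ≈ κF/(h'_rms E*) with κ ≈ 2, while a smooth sphere follows Hertz theory, A = π(3FR/4E*)^(2/3). A small Python comparison; the prefactor κ and all material and roughness values are approximate and illustrative, not taken from the paper.

    import numpy as np

    def area_rough_flat(force, e_star, h_rms_slope, kappa=2.0):
        """Linear contact-area regime of a nominally flat rough surface."""
        return kappa * force / (h_rms_slope * e_star)

    def area_hertz(force, e_star, radius):
        """Hertzian contact area of a smooth sphere: A = pi * (3FR / 4E*)^(2/3)."""
        return np.pi * (3.0 * force * radius / (4.0 * e_star)) ** (2.0 / 3.0)

    e_star, radius, slope = 1.0e9, 1.0e-6, 0.1      # Pa, m, rms slope (made up)
    for f in np.logspace(-9, -3, 7):                 # loads in N
        print(f"F = {f:9.2e} N  rough: {area_rough_flat(f, e_star, slope):9.2e} m^2"
              f"  Hertz: {area_hertz(f, e_star, radius):9.2e} m^2")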

  12. Unique temporal and spatial biomolecular emission profile on individual zinc oxide nanorods

    Science.gov (United States)

    Singh, Manpreet; Song, Sheng; Hahm, Jong-In

    2013-12-01

    Zinc oxide nanorods (ZnO NRs) have emerged in recent years as extremely useful, optical signal-enhancing platforms in DNA and protein detection. Although the use of ZnO NRs in biodetection has been demonstrated so far in systems involving many ZnO NRs per detection element, their future applications will likely take place in a miniaturized setting while exploiting single ZnO NRs in a low-volume, high-throughput bioanalysis. In this paper, we investigate temporal and spatial characteristics of the biomolecular fluorescence on individual ZnO NR systems. Quantitative and qualitative examinations of the biomolecular intensity and photostability are carried out as a function of two important criteria, the time and position along the long axis (length) of NRs. Photostability profiles are also measured with respect to the position on NRs and compared to those characteristics of biomolecules on polymeric control platforms. Unlike the uniformly distributed signal observed on the control platforms, both the fluorescence intensity and photostability are position-dependent on individual ZnO NRs. We have identified a unique phenomenon of highly localized, fluorescence intensification on the nanorod ends (FINE) of well-characterized, individual ZnO nanostructures. When compared to the polymeric controls, the biomolecular fluorescence intensity and photostability are determined to be higher on individual ZnO NRs regardless of the position on NRs. We have also carried out finite-difference time-domain simulations, the results of which are in good agreement with the observed FINE. The outcomes of our investigation will offer a much needed basis for signal interpretation for biodetection devices and platforms consisting of single ZnO NRs and, at the same time, contribute significantly to provide insight in understanding the biomolecular fluorescence observed from ZnO NR ensemble-based systems.

  13. On the consistency of scale among experiments, theory, and simulation

    Science.gov (United States)

    McClure, James E.; Dye, Amanda L.; Miller, Cass T.; Gray, William G.

    2017-02-01

    As a tool for addressing problems of scale, we consider an evolving approach known as the thermodynamically constrained averaging theory (TCAT), which has broad applicability to hydrology. We consider the case of modeling of two-fluid-phase flow in porous media, and we focus on issues of scale as they relate to various measures of pressure, capillary pressure, and state equations needed to produce solvable models. We apply TCAT to perform physics-based data assimilation to understand how the internal behavior influences the macroscale state of two-fluid porous medium systems. A microfluidic experimental method and a lattice Boltzmann simulation method are used to examine a key deficiency associated with standard approaches. In a hydrologic process such as evaporation, the water content will ultimately be reduced below the irreducible wetting-phase saturation determined from experiments. This is problematic since the derived closure relationships cannot predict the associated capillary pressures for these states. We demonstrate that the irreducible wetting-phase saturation is an artifact of the experimental design, caused by the fact that the boundary pressure difference does not approximate the true capillary pressure. Using averaging methods, we compute the true capillary pressure for fluid configurations at and below the irreducible wetting-phase saturation. Results of our analysis include a state function for the capillary pressure expressed as a function of fluid saturation and interfacial area.

  14. Uncertainties in the simulation of groundwater recharge at different scales

    Directory of Open Access Journals (Sweden)

    H. Bogena

    2005-01-01

    Digital spatial data always imply some kind of uncertainty. The source of this uncertainty can be found in their compilation as well as the conceptual design that causes a more or less exact abstraction of the real world, depending on the scale under consideration. Within the framework of hydrological modelling, in which numerous data sets from diverse sources of uneven quality are combined, the various uncertainties are accumulated. In this study, the GROWA model is taken as an example to examine the effects of different types of uncertainties on the calculated groundwater recharge. Distributed input errors are determined for the parameters slope and aspect using a Monte Carlo approach. Land-cover classification uncertainties are analysed by using the conditional probabilities of a remote sensing classification procedure. The uncertainties of data ensembles at different scales and study areas are discussed. The present uncertainty analysis showed that the Gaussian error propagation method is a useful technique for analysing the influence of input data on the simulated groundwater recharge. The uncertainties involved in the land use classification procedure and the digital elevation model can be significant in some parts of the study area. However, for the specific model used in this study it was shown that the precipitation uncertainties have the greatest impact on the total groundwater recharge error.
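
    The two uncertainty techniques named here are generic enough to demonstrate on a toy model. The sketch below compares first-order Gaussian error propagation, σ_y² ≈ Σ (∂f/∂x_i)² σ_i², against a Monte Carlo ensemble; the two-parameter recharge function and all values are invented, not the GROWA model.

    import numpy as np

    def recharge(precip, slope):
        """Toy recharge model [mm/yr]: more rain raises recharge, steeper slope lowers it."""
        return 0.4 * precip * np.exp(-0.02 * slope)

    p0, s0 = 800.0, 10.0          # mean precipitation [mm/yr] and slope [deg]
    sp, ss = 80.0, 2.0            # their standard deviations

    # First-order Gaussian error propagation with numerical partial derivatives
    h = 1e-4
    dfdp = (recharge(p0 + h, s0) - recharge(p0 - h, s0)) / (2 * h)
    dfds = (recharge(p0, s0 + h) - recharge(p0, s0 - h)) / (2 * h)
    sigma_gauss = np.sqrt((dfdp * sp) ** 2 + (dfds * ss) ** 2)

    # Monte Carlo propagation of the same input errors
    rng = np.random.default_rng(3)
    samples = recharge(rng.normal(p0, sp, 100_000), rng.normal(s0, ss, 100_000))
    print(f"Gaussian: {sigma_gauss:.1f} mm/yr   Monte Carlo: {samples.std():.1f} mm/yr")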

  15. Biomolecular Markers in Cancer of the Tongue

    Directory of Open Access Journals (Sweden)

    Daris Ferrari

    2009-01-01

    The incidence of tongue cancer is increasing worldwide, and its aggressiveness remains high regardless of treatment. Genetic changes and the expression of abnormal proteins have been frequently reported in the case of head and neck cancers, but the little information that has been published concerning tongue tumours is often contradictory. This review will concentrate on the immunohistochemical expression of biomolecular markers and their relationships with clinical behaviour and prognosis. Most of these proteins are associated with nodal stage, tumour progression and metastases, but there is still controversy concerning their impact on disease-free and overall survival, and treatment response. More extensive clinical studies are needed to identify the patterns of molecular alterations and the most reliable predictors in order to develop tailored anti-tumour strategies based on the targeting of hypoxia markers, vascular and lymphangiogenic factors, epidermal growth factor receptors, intracytoplasmatic signalling and apoptosis.

  16. Micro- and nanodevices integrated with biomolecular probes.

    Science.gov (United States)

    Alapan, Yunus; Icoz, Kutay; Gurkan, Umut A

    2015-12-01

    Understanding how biomolecules, proteins and cells interact with their surroundings and other biological entities has become the fundamental design criterion for most biomedical micro- and nanodevices. Advances in biology, medicine, and nanofabrication technologies complement each other and allow us to engineer new tools based on biomolecules utilized as probes. Engineered micro/nanosystems and biomolecules in nature have remarkably robust compatibility in terms of function, size, and physical properties. This article presents the state of the art in micro- and nanoscale devices designed and fabricated with biomolecular probes as their vital constituents. General design and fabrication concepts are presented and three major platform technologies are highlighted: microcantilevers, micro/nanopillars, and microfluidics. Overview of each technology, typical fabrication details, and application areas are presented by emphasizing significant achievements, current challenges, and future opportunities. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. EON: software for long time simulations of atomic scale systems

    Science.gov (United States)

    Chill, Samuel T.; Welborn, Matthew; Terrell, Rye; Zhang, Liang; Berthet, Jean-Claude; Pedersen, Andreas; Jónsson, Hannes; Henkelman, Graeme

    2014-07-01

    The EON software is designed for simulations of the state-to-state evolution of atomic scale systems over timescales greatly exceeding that of direct classical dynamics. States are defined as collections of atomic configurations from which a minimization of the potential energy gives the same inherent structure. The time evolution is assumed to be governed by rare events, where transitions between states are uncorrelated and infrequent compared with the timescale of atomic vibrations. Several methods for calculating the state-to-state evolution have been implemented in EON, including parallel replica dynamics, hyperdynamics and adaptive kinetic Monte Carlo. Global optimization methods, including simulated annealing, basin hopping and minima hopping are also implemented. The software has a client/server architecture where the computationally intensive evaluations of the interatomic interactions are calculated on the client-side and the state-to-state evolution is managed by the server. The client supports optimization for different computer architectures to maximize computational efficiency. The server is written in Python so that developers have access to the high-level functionality without delving into the computationally intensive components. Communication between the server and clients is abstracted so that calculations can be deployed on a single machine, clusters using a queuing system, large parallel computers using a message passing interface, or within a distributed computing environment. A generic interface to the evaluation of the interatomic interactions is defined so that empirical potentials, such as in LAMMPS, and density functional theory as implemented in VASP and GPAW can be used interchangeably. Examples are given to demonstrate the range of systems that can be modeled, including surface diffusion and island ripening of adsorbed atoms on metal surfaces, molecular diffusion on the surface of ice and global structural optimization of nanoparticles.
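
    Of the methods listed, adaptive kinetic Monte Carlo is the easiest to sketch: given a catalog of escape rates from the current state, an event is chosen with probability proportional to its rate and the clock advances by an exponentially distributed increment. A minimal residence-time (BKL-style) step in Python follows; the rate table is made up, standing in for the saddle-point searches EON actually performs to build it.

    import numpy as np

    rng = np.random.default_rng(11)

    def kmc_step(rates):
        """One rejection-free KMC step: pick an event ~ rate, advance time ~ Exp(1/R)."""
        rates = np.asarray(rates, dtype=float)
        total = rates.sum()
        event = np.searchsorted(np.cumsum(rates), rng.uniform(0.0, total))
        dt = -np.log(rng.uniform()) / total
        return event, dt

    # Hypothetical escape rates [1/s] out of the current state (e.g., surface hops)
    rates = [1.2e6, 3.0e5, 4.5e4]
    t = 0.0
    for _ in range(5):
        event, dt = kmc_step(rates)
        t += dt
        print(f"took event {event}, t = {t:.3e} s")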

  18. DNA algorithms of implementing biomolecular databases on a biological computer.

    Science.gov (United States)

    Chang, Weng-Long; Vasilakos, Athanasios V

    2015-01-01

    In this paper, DNA algorithms are proposed to perform eight operations of relational algebra (calculus), which include Cartesian product, union, set difference, selection, projection, intersection, join, and division, on biomolecular relational databases.
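
    For reference, the eight operations named have simple classical set-theoretic definitions; the paper's DNA algorithms implement these same semantics with biomolecular encodings. A compact conventional Python illustration of three of them (selection, projection, and natural join) on a toy schema of my own invention:

    # Relations as lists of dicts; the schema is implied by the keys (toy example)
    employees = [{"id": 1, "dept": "bio"}, {"id": 2, "dept": "chem"}]
    salaries = [{"id": 1, "pay": 50}, {"id": 2, "pay": 60}]

    def select(rel, pred):                 # sigma: keep tuples satisfying a predicate
        return [t for t in rel if pred(t)]

    def project(rel, attrs):               # pi: keep only the named attributes
        return [{a: t[a] for a in attrs} for t in rel]

    def join(r, s):                        # natural join on shared attribute names
        shared = set(r[0]) & set(s[0])
        return [{**t, **u} for t in r for u in s
                if all(t[a] == u[a] for a in shared)]

    print(select(employees, lambda t: t["dept"] == "bio"))
    print(project(employees, ["dept"]))
    print(join(employees, salaries))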

  19. DARHT Axis-I Diode Simulations II: Geometrical Scaling

    Energy Technology Data Exchange (ETDEWEB)

    Ekdahl, Carl A. Jr. [Los Alamos National Laboratory

    2012-06-14

    Flash radiography of large hydrodynamic experiments driven by high explosives is a venerable diagnostic technique in use at many laboratories. Many of the largest hydrodynamic experiments study mockups of nuclear weapons, and are often called hydrotests for short. The dual-axis radiography for hydrodynamic testing (DARHT) facility uses two electron linear-induction accelerators (LIA) to produce the radiographic source spots for perpendicular views of a hydrotest. The first of these LIAs produces a single pulse, with a fixed ~60 ns pulsewidth. The second-axis LIA produces as many as four pulses within 1.6 µs, with variable pulsewidths and separation. There are a wide variety of hydrotest geometries, each with a unique radiographic requirement, so there is a need to adjust the radiographic dose for the best images. This can be accomplished on the second axis by simply adjusting the pulsewidths, but is more problematic on the first axis. Changing the beam energy or introducing radiation attenuation also changes the spectrum, which is undesirable. Moreover, using radiation attenuation introduces significant blur, increasing the effective spot size. The dose can also be adjusted by changing the beam kinetic energy. This is a very sensitive method, because the dose scales as the ~2.8 power of the energy, but it would require retuning the accelerator. This leaves manipulating the beam current as the best means for adjusting the dose, and one way to do this is to change the size of the cathode. This method has been proposed, and is being tested. This article describes simulations undertaken to develop scaling laws for use as design tools in changing the Axis-1 beam current by changing the cathode size.

  20. Simulation of fatigue crack growth under large scale yielding conditions

    Science.gov (United States)

    Schweizer, Christoph; Seifert, Thomas; Riedel, Hermann

    2010-07-01

    A simple mechanism-based model for fatigue crack growth assumes a linear correlation between the cyclic crack-tip opening displacement (ΔCTOD) and the crack growth increment (da/dN). The objective of this work is to compare analytical estimates of ΔCTOD with results of numerical calculations under large-scale yielding conditions and to verify the physical basis of the model by comparing the predicted and the measured evolution of the crack length in a 10%-chromium steel. The material is described by a rate-independent cyclic plasticity model with power-law hardening and Masing behavior. During the tension-going part of the cycle, nodes at the crack tip are released such that the crack growth increment corresponds approximately to the crack-tip opening. The finite element analysis performed in ABAQUS is continued for as many cycles as are needed to reach a stabilized value of ΔCTOD. The analytical model contains an interpolation formula for the J-integral, which is generalized to account for cyclic loading and crack closure. Both simulated and estimated ΔCTOD are reasonably consistent. The predicted crack length evolution is found to be in good agreement with the behavior of microcracks observed in a 10%-chromium steel.
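
    The mechanism-based model referred to is, in its simplest form, da/dN = β·ΔCTOD, integrated cycle by cycle. A toy Python integration is shown below; the ΔCTOD estimate used here (ΔCTOD ≈ ΔK²/(2σ_y E) with ΔK = Δσ√(πa), a Dugdale-type small-scale-yielding approximation) is a stand-in for the paper's large-scale-yielding J-integral interpolation, and all values are illustrative.

    import numpy as np

    beta = 0.1            # crack advance per unit delta-CTOD (model parameter)
    dsigma = 200.0e6      # stress range [Pa]
    sig_y = 600.0e6       # cyclic yield stress [Pa]
    E = 200.0e9           # Young's modulus [Pa]

    a = 1.0e-4            # initial crack length [m]
    for cycle in range(1, 80_001):
        dK = dsigma * np.sqrt(np.pi * a)          # stress intensity range [Pa sqrt(m)]
        dctod = dK**2 / (2.0 * sig_y * E)         # small-scale-yielding estimate [m]
        a += beta * dctod                         # da/dN = beta * dCTOD
        if cycle % 20_000 == 0:
            print(f"N = {cycle:6d}  a = {a * 1e3:.3f} mm")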

  1. Slurry spray distribution within a simulated laboratory scale spray dryer

    International Nuclear Information System (INIS)

    Bertone, P.C.

    1979-01-01

    It was found that the distribution of liquid striking the sides of a simulated room-temperature spray dryer was not significantly altered by the choice of nozzles, nor by a variation in nozzle operating conditions. Instead, it was found to be a function of the spray dryer's configuration. A cocurrent flow of air down the drying cylinder, not possible with PNL's closed top, favorably altered the spray distribution by both decreasing the amount of liquid striking the interior of the cylinder from 72 to 26% of the feed supplied, and by shifting the zone of maximum impact from 1.0 to 1.7 feet from the nozzle. These findings led to the redesign of the laboratory-scale spray dryer to be tested at the Savannah River Plant. The diameter of the drying chamber was increased from 5 to 8 inches, and a cocurrent flow of air was established with a closed recycle. Finally, this investigation suggested a drying scheme which offers all the advantages of spray drying without many of its limitations

  2. Contextual Compression of Large-Scale Wind Turbine Array Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clyne, John [National Center for Atmospheric Research (NCAR)

    2017-12-04

    Data sizes are becoming a critical issue particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed to be the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while providing the user control over where data loss, and thus reduction in accuracy, occurs in the analysis. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.
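
    The flavor of such a wavelet storage model can be sketched with PyWavelets: decompose a field, keep coefficients aggressively where they are deemed salient and sparsely elsewhere, then reconstruct. This is a loose analogue of the block-based model described; the saliency rule, threshold, and synthetic field below are all invented, and the real system stores heterogeneous per-block coefficient budgets rather than a single global threshold.

    import numpy as np
    import pywt   # PyWavelets

    rng = np.random.default_rng(5)
    field = rng.normal(size=(256, 256)).cumsum(axis=0).cumsum(axis=1)  # smooth-ish field

    coeffs = pywt.wavedec2(field, "db4", level=4)

    # Hypothetical saliency rule: keep only detail coefficients that are large
    thr = 0.05 * np.abs(coeffs[-1][0]).max()
    compressed = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="hard") for d in level) for level in coeffs[1:]
    ]

    recon = pywt.waverec2(compressed, "db4")
    kept = sum(np.count_nonzero(d) for lvl in compressed[1:] for d in lvl)
    total = sum(d.size for lvl in coeffs[1:] for d in lvl)
    err = np.abs(recon - field).max() / np.abs(field).max()
    print(f"kept {kept / total:.1%} of detail coefficients, max rel. error {err:.2e}")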

  3. Biomolecular surface construction by PDE transform.

    Science.gov (United States)

    Zheng, Qiong; Yang, Siyang; Wei, Guo-Wei

    2012-03-01

    This work proposes a new framework for the surface generation based on the partial differential equation (PDE) transform. The PDE transform has recently been introduced as a general approach for the mode decomposition of images, signals, and data. It relies on the use of arbitrarily high-order PDEs to achieve the time-frequency localization, control the spectral distribution, and regulate the spatial resolution. The present work provides a new variational derivation of high-order PDE transforms. The fast Fourier transform is utilized to accomplish the PDE transform so as to avoid stringent stability constraints in solving high-order PDEs. As a consequence, the time integration of high-order PDEs can be done efficiently with the fast Fourier transform. The present approach is validated with a variety of test examples in two-dimensional and three-dimensional settings. We explore the impact of the PDE transform parameters, such as the PDE order and propagation time, on the quality of resulting surfaces. Additionally, we utilize a set of 10 proteins to compare the computational efficiency of the present surface generation method and a standard approach in Cartesian meshes. Moreover, we analyze the present method by examining some benchmark indicators of biomolecular surface, that is, surface area, surface-enclosed volume, solvation free energy, and surface electrostatic potential. A test set of 13 protein molecules is used in the present investigation. The electrostatic analysis is carried out via the Poisson-Boltzmann equation model. To further demonstrate the utility of the present PDE transform-based surface method, we solve the Poisson-Nernst-Planck equations with a PDE transform surface of a protein. Second-order convergence is observed for the electrostatic potential and concentrations. Finally, to test the capability and efficiency of the present PDE transform-based surface generation method, we apply it to the construction of an excessively large biomolecule, a
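
    The numerical trick highlighted, integrating arbitrarily high-order PDEs stably by working in Fourier space, is easy to demonstrate in 1-D. For the linear model equation u_t = −(−Δ)^m u, the FFT decouples the PDE into independent ODEs with the exact solution û(t) = exp(−k^(2m) t) û(0). A minimal sketch follows; the order m, grid, and noisy square-wave signal are illustrative and not the paper's surface-generation pipeline.

    import numpy as np

    n, L, m, t = 512, 2.0 * np.pi, 3, 1e-4       # grid, domain, PDE half-order, time
    x = np.linspace(0.0, L, n, endpoint=False)
    rng = np.random.default_rng(9)
    u0 = np.sign(np.sin(3 * x)) + 0.2 * rng.normal(size=n)

    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    # Exact spectral propagator for u_t = -(-Laplacian)^m u; no stability restriction
    u_t = np.fft.ifft(np.exp(-(k ** (2 * m)) * t) * np.fft.fft(u0)).real
    print("mean-square signal before/after smoothing:",
          float((u0**2).mean()), float((u_t**2).mean()))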

  4. An efficient non-hydrostatic dynamical core for high-resolution simulations down to the urban scale

    International Nuclear Information System (INIS)

    Bonaventura, L.; Cesari, D.

    2005-01-01

    Numerical simulations of idealized stratified flows over obstacles at different spatial scales demonstrate the very general applicability and the parallel efficiency of a new non-hydrostatic dynamical core for the simulation of mesoscale flows over complex terrain

  5. Verification of Gyrokinetic Particle Simulation of Device Size Scaling of Turbulent Transport

    Institute of Scientific and Technical Information of China (English)

    LIN Zhihong; S. ETHIER; T. S. HAHM; W. M. TANG

    2012-01-01

    Verification and historical perspective are presented on the gyrokinetic particle simulations that discovered the device size scaling of turbulent transport and identified the geometry model as the source of the long-standing disagreement between gyrokinetic particle and continuum simulations.

  6. Enabling parallel simulation of large-scale HPC network systems

    International Nuclear Information System (INIS)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip

    2016-01-01

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations
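
    At its core, the discrete-event engine such frameworks build on maintains a priority queue of timestamped events; conservative engines never process out of order, while optimistic (Time Warp) engines like ROSS speculate and roll back. A minimal sequential event loop in Python, the common ancestor of both with none of ROSS's optimistic machinery, on an invented two-node ping-pong workload:

    import heapq

    events = []                       # priority queue ordered by timestamp
    heapq.heappush(events, (0.0, "ping", "node0"))

    def handle(now, kind, node):
        """Toy event handler: each message triggers a reply after a fixed latency."""
        if now > 5.0:                 # stop condition: schedule nothing further
            return
        reply = "pong" if kind == "ping" else "ping"
        dest = "node1" if node == "node0" else "node0"
        heapq.heappush(events, (now + 0.5, reply, dest))

    while events:
        t, kind, node = heapq.heappop(events)   # always the minimum timestamp
        print(f"t={t:4.1f}  {node} processes {kind}")
        handle(t, kind, node)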

  7. Simulating large-scale spiking neuronal networks with NEST

    OpenAIRE

    Schücker, Jannis; Eppler, Jochen Martin

    2014-01-01

    The Neural Simulation Tool NEST [1, www.nest-simulator.org] is the simulator for spiking neural network models of the HBP that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. Its simulation kernel is written in C++ and it runs on computing hardware ranging from simple laptops to clusters and supercomputers with thousands of processor cores. The development of NEST is coordinated by the NEST Initiative [www.nest-initiative.or...
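
    For a sense of the level of abstraction, a minimal PyNEST script builds and simulates a network in a handful of calls. The sketch below follows the NEST 3.x Python API as I understand it (model and device names such as "iaf_psc_alpha" and "spike_recorder" are standard built-ins there; in NEST 2.x the recorder was called "spike_detector"); all parameter values are illustrative.

    import nest

    nest.ResetKernel()

    # 100 leaky integrate-and-fire neurons driven by a shared Poisson input
    neurons = nest.Create("iaf_psc_alpha", 100)
    noise = nest.Create("poisson_generator", params={"rate": 8000.0})   # spikes/s
    recorder = nest.Create("spike_recorder")

    nest.Connect(noise, neurons, syn_spec={"weight": 10.0})             # pA
    nest.Connect(neurons, recorder)

    nest.Simulate(1000.0)                                               # ms
    print("total spikes recorded:", recorder.get("n_events"))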

  8. Integrated Spintronic Platforms for Biomolecular Recognition Detection

    Science.gov (United States)

    Martins, V. C.; Cardoso, F. A.; Loureiro, J.; Mercier, M.; Germano, J.; Cardoso, S.; Ferreira, R.; Fonseca, L. P.; Sousa, L.; Piedade, M. S.; Freitas, P. P.

    2008-06-01

    This paper covers recent developments in magnetoresistive-based biochip platforms fabricated at INESC-MN, and their application to the detection and quantification of pathogenic waterborne microorganisms in water samples for human consumption. Such platforms are intended to respond to the increasing concern related to microbially contaminated water sources. The presented results concern the development of biologically active DNA chips and protein chips and the demonstration of the detection capability of the present platforms. Two platforms are described, one including spintronic sensors only (spin-valve based or magnetic tunnel junction based), and the other, a fully scalable platform where each probe site consists of a MTJ in series with a thin film diode (TFD). Two microfluidic systems are described, for cell separation and concentration, and finally, the read-out and control integrated electronics are described, allowing the realization of bioassays with a portable point-of-care unit. The present platforms already allow the detection of complementary biomolecular target recognition with 1 pM concentration.

  9. A multiscale modeling approach for biomolecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Bowling, Alan, E-mail: bowling@uta.edu; Haghshenas-Jaryani, Mahdi, E-mail: mahdi.haghshenasjaryani@mavs.uta.edu [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)

    2015-04-15

    This paper presents a new multiscale molecular dynamic model for investigating the effects of external interactions, such as contact and impact, during stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties, ensuring that the result satisfies Newton's second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; however, the particle model does provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that the motor protein's movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the differences between the motion predicted by the old and new modeling approaches are compared using a simplified model of myosin V.
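
    The role of the random (Brownian) forces discussed here can be made concrete with a standard overdamped Langevin integrator, x_{n+1} = x_n + (F/γ)Δt + √(2kBT Δt/γ)·ξ. The sketch below simulates a generic nanoscale bead in a harmonic trap, not the authors' rigid-body motor-protein model; all constants are illustrative order-of-magnitude values.

    import numpy as np

    kBT = 4.1e-21        # thermal energy at ~300 K [J]
    gamma = 6.0e-11      # Stokes drag coefficient [kg/s] (~60 nm bead in water)
    k_trap = 1.0e-6      # harmonic restoring stiffness [N/m]
    dt = 1.0e-7          # time step [s]

    rng = np.random.default_rng(13)
    x = 0.0
    trace = []
    for _ in range(100_000):
        force = -k_trap * x                               # deterministic force
        x += force * dt / gamma + np.sqrt(2 * kBT * dt / gamma) * rng.standard_normal()
        trace.append(x)

    # Equipartition check: <x^2> should approach kBT / k_trap
    print("measured <x^2>:", np.var(trace), " expected:", kBT / k_trap)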

  10. A Group Simulation of the Development of the Geologic Time Scale.

    Science.gov (United States)

    Bennington, J. Bret

    2000-01-01

    Explains how to demonstrate to students that the relative dating of rock layers is redundant. Uses two column diagrams to simulate stratigraphic sequences from two different geological time scales and asks students to complete the time scale. (YDS)

  11. Integrated multi-scale modelling and simulation of nuclear fuels

    International Nuclear Information System (INIS)

    Valot, C.; Bertolus, M.; Masson, R.; Malerba, L.; Rachid, J.; Besmann, T.; Phillpot, S.; Stan, M.

    2015-01-01

    This chapter aims at discussing the objectives, implementation and integration of multi-scale modelling approaches applied to nuclear fuel materials. We will first show why the multi-scale modelling approach is required, due to the nature of the materials and by the phenomena involved under irradiation. We will then present the multiple facets of multi-scale modelling approach, while giving some recommendations with regard to its application. We will also show that multi-scale modelling must be coupled with appropriate multi-scale experiments and characterisation. Finally, we will demonstrate how multi-scale modelling can contribute to solving technology issues. (authors)

  12. Verification of Simulation Results Using Scale Model Flight Test Trajectories

    National Research Council Canada - National Science Library

    Obermark, Jeff

    2004-01-01

    .... A second compromise scaling law was investigated as a possible improvement. For ejector-driven events at minimum sideslip, the most important variables for scale model construction are the mass moment of inertia and ejector...

  13. Convective aggregation in realistic convective-scale simulations

    OpenAIRE

    Holloway, Christopher E.

    2017-01-01

    To investigate the real-world relevance of idealized-model convective self-aggregation, five 15-day cases of real organized convection in the tropics are simulated. These include multiple simulations of each case to test sensitivities of the convective organization and mean states to interactive radiation, interactive surface fluxes, and evaporation of rain. These simulations are compared to self-aggregation seen in the same model configured to run in idealized radiative-convective equilibriu...

  14. A simple analytical scaling method for a scaled-down test facility simulating SB-LOCAs in a passive PWR

    International Nuclear Information System (INIS)

    Lee, Sang Il

    1992-02-01

    A simple analytical scaling method is developed for a scaled-down test facility simulating SB-LOCAs in a passive PWR. The whole scenario of a SB-LOCA is divided into two phases on the basis of the pressure trend: the depressurization phase and the pot-boiling phase. The pressure and the core mixture level are selected as the most critical parameters to be preserved between the prototype and the scaled-down model. In each phase, the most important phenomena influencing the critical parameters are identified, and the scaling parameters governing these phenomena are generated by the present method. To validate the models used, the Marviken CFT and a 336-rod-bundle experiment are simulated. The models overpredict both the pressure and the two-phase mixture level, but show at least qualitative agreement with the experimental results. In order to validate whether the scaled-down model well represents the important phenomena, we simulate the nondimensional pressure response of a cold-leg 4-inch break transient for AP-600 and the scaled-down model. The results of the present method are in excellent agreement with those of AP-600. It can be concluded that the present method is suitable for scaling a test facility simulating SB-LOCAs in a passive PWR

  15. Biomolecular ions in superfluid helium nanodroplets

    International Nuclear Information System (INIS)

    Gonzalez Florez, Ana Isabel

    2016-01-01

    The function of a biological molecule is closely related to its structure. As a result, understanding and predicting biomolecular structure has become the focus of an extensive field of research. However, the investigation of molecular structure can be hampered by two main difficulties: the inherent complications that may arise from studying biological molecules in their native environment, and the potential congestion of the experimental results as a consequence of the large number of degrees of freedom present in these molecules. In this work, a new experimental setup has been developed and established in order to overcome the aforementioned limitations, combining structure-sensitive gas-phase methods with superfluid helium droplets. First, biological molecules are ionised and brought into the gas phase, often referred to as a clean-room environment, where the species of interest are isolated from their surroundings and, thus, intermolecular interactions are absent. The mass-to-charge selected biomolecules are then embedded inside clusters of superfluid helium with an equilibrium temperature of ≈0.37 K. As a result, the internal energy of the molecules is lowered, thereby reducing the number of populated quantum states. Finally, the local hydrogen bonding patterns of the molecules are investigated by probing specific vibrational modes using the Fritz Haber Institute's free electron laser as a source of infrared radiation. Although the structure of a wide variety of molecules has been studied making use of the sub-Kelvin environment provided by superfluid helium droplets, the suitability of this method for the investigation of biological molecular ions was still unclear. However, the experimental results presented in this thesis demonstrate the applicability of this experimental approach in order to study the structure of intact, large biomolecular ions and the first vibrational spectrum of the protonated pentapeptide leu-enkephalin embedded in helium

  16. Biomolecular ions in superfluid helium nanodroplets

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez Florez, Ana Isabel

    2016-07-01

    The function of a biological molecule is closely related to its structure. As a result, understanding and predicting biomolecular structure has become the focus of an extensive field of research. However, the investigation of molecular structure can be hampered by two main difficulties: the inherent complications that may arise from studying biological molecules in their native environment, and the potential congestion of the experimental results as a consequence of the large number of degrees of freedom present in these molecules. In this work, a new experimental setup has been developed and established in order to overcome the aforementioned limitations, combining structure-sensitive gas-phase methods with superfluid helium droplets. First, biological molecules are ionised and brought into the gas phase, often referred to as a clean-room environment, where the species of interest are isolated from their surroundings and, thus, intermolecular interactions are absent. The mass-to-charge selected biomolecules are then embedded inside clusters of superfluid helium with an equilibrium temperature of ≈0.37 K. As a result, the internal energy of the molecules is lowered, thereby reducing the number of populated quantum states. Finally, the local hydrogen bonding patterns of the molecules are investigated by probing specific vibrational modes using the Fritz Haber Institute's free electron laser as a source of infrared radiation. Although the structure of a wide variety of molecules has been studied making use of the sub-Kelvin environment provided by superfluid helium droplets, the suitability of this method for the investigation of biological molecular ions was still unclear. However, the experimental results presented in this thesis demonstrate the applicability of this experimental approach in order to study the structure of intact, large biomolecular ions and the first vibrational spectrum of the protonated pentapeptide leu-enkephalin embedded in helium

  17. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah; Carns, Philip; Ross, Robert; Li, Jianping Kelvin; Ma, Kwan-Liu

    2016-11-13

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of highperformance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a

  18. Combining hardware and simulation for datacenter scaling studies

    DEFF Research Database (Denmark)

    Ruepp, Sarah Renée; Pilimon, Artur; Thrane, Jakob

    2017-01-01

    and simulation to illustrate the scalability and performance of datacenter networks. We simulate a Datacenter network and interconnect it with real-world traffic generation hardware. Analysis of the introduced packet conversion and virtual queueing delays shows that the conversion efficiency is on the order...

  19. Simulation of large scale air detritiation operations by computer modeling and bench-scale experimentation

    International Nuclear Information System (INIS)

    Clemmer, R.G.; Land, R.H.; Maroni, V.A.; Mintz, J.M.

    1978-01-01

    Although some experience has been gained in the design and construction of 0.5 to 5 m³/s air-detritiation systems, little information is available on the performance of these systems under realistic conditions. Recently completed studies at ANL have attempted to provide some perspective on this subject. A time-dependent computer model was developed to study the effects of various reaction and soaking mechanisms that could occur in a typically-sized fusion reactor building (approximately 10⁵ m³) following a range of tritium releases (2 to 200 g). In parallel with the computer study, a small (approximately 50 liter) test chamber was set up to investigate cleanup characteristics under conditions which could also be simulated with the computer code. Whereas results of computer analyses indicated that only approximately 10⁻³ percent of the tritium released to an ambient enclosure should be converted to tritiated water, the bench-scale experiments gave evidence of conversions to water greater than 1%. Furthermore, although the amounts (both calculated and observed) of soaked-in tritium are usually only a very small fraction of the total tritium release, the soaked tritium is significant, in that its continuous return to the enclosure extends the cleanup time beyond the predicted value in the absence of any soaking mechanisms
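
    For orientation on the magnitudes quoted above, a single-pass cleanup model with no soaking term gives an exponential decay of the enclosure tritium concentration. Taking the largest quoted flow rate and the stated building volume (the symbols and the no-soaking assumption are added here for illustration, not taken from the report):

        \frac{dC}{dt} = -\frac{Q}{V}\,C
        \quad\Longrightarrow\quad
        C(t) = C_0\, e^{-t/\tau},
        \qquad
        \tau = \frac{V}{Q} \approx \frac{10^{5}\ \mathrm{m^3}}{5\ \mathrm{m^3/s}}
             = 2\times 10^{4}\ \mathrm{s} \approx 5.6\ \mathrm{h\ per\ e\text{-}fold}.

    The soaked-in tritium described above acts as a slow source term on the right-hand side of this balance, which is why the observed cleanup times exceed such simple estimates.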

  20. Simulation in full-scale mock-ups: an ergonomics evaluation method?

    DEFF Research Database (Denmark)

    Andersen, Simone Nyholm; Broberg, Ole

    2014-01-01

    This paper presents an exploratory study of four simulation sessions in full-scale mock-ups of future hospital facilities.

  1. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    OpenAIRE

    Qiang Liu; Yi Qin; Guodong Li

    2018-01-01

    Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...

  2. Long time scale simulation of a grain boundary in copper

    DEFF Research Database (Denmark)

    Pedersen, A.; Henkelman, G.; Schiøtz, Jakob

    2009-01-01

    A general, twisted and tilted, grain boundary in copper has been simulated using the adaptive kinetic Monte Carlo method to study the atomistic structure of the non-crystalline region and the mechanism of annealing events that occur at low temperature. The simulated time interval spanned 67 μs...... was also observed. In the final low-energy configurations, the thickness of the region separating the crystalline grains corresponds to just one atomic layer, in good agreement with reported experimental observations. The simulated system consists of 1307 atoms and atomic interactions were described using...
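
    For readers unfamiliar with the method, the core of a standard (non-adaptive) kinetic Monte Carlo step is sketched below: choose an event with probability proportional to its harmonic transition-state rate, then advance the clock by an exponentially distributed residence time. The adaptive variant used in the paper additionally finds the relevant saddle points on the fly, which is omitted here; the barriers and prefactor are placeholders.

        import math, random

        def kmc_step(barriers_eV, T=300.0, prefactor=1e12):
            """One residence-time KMC step; returns (chosen event, time increment)."""
            kT = 8.617e-5 * T                    # Boltzmann constant in eV/K
            rates = [prefactor * math.exp(-Eb / kT) for Eb in barriers_eV]
            total = sum(rates)
            # Select an event with probability proportional to its rate.
            r, acc, chosen = random.random() * total, 0.0, 0
            for i, rate in enumerate(rates):
                acc += rate
                if r <= acc:
                    chosen = i
                    break
            dt = -math.log(1.0 - random.random()) / total   # residence time
            return chosen, dt

        # Example: three annealing events with 0.3-0.5 eV barriers at 300 K.
        event, dt = kmc_step([0.3, 0.4, 0.5])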

  3. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu; Duan, Benchun; Taylor, Valerie

    2011-01-01

    , such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over past several decades. In particular

  4. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    Energy Technology Data Exchange (ETDEWEB)

    Wilke, Jeremiah J [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Kenny, Joseph P. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, designs can first be iterated through a simulator. This is particularly useful when test beds cannot be used, i.e., to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
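
    The coupling of a discrete-event core to user-space threads can be caricatured with coroutines: each simulated process yields the virtual duration of its next phase, and the event core advances virtual time between wake-ups. This is a toy sketch of the idea only; SST's macroscale API and threading framework are far richer, and every name below is illustrative.

        import heapq

        def worker(name, phases):
            for dt in phases:
                yield dt              # "run" for dt units of virtual time
                # real application code would execute between yields

        def run(processes):
            now, events = 0.0, []
            for seq, p in enumerate(processes):      # first wake-up of each process
                events.append((next(p), seq, p))
            heapq.heapify(events)
            while events:
                now, seq, p = heapq.heappop(events)  # jump to next event time
                try:
                    heapq.heappush(events, (now + next(p), seq, p))
                except StopIteration:
                    pass                             # this process has finished
            return now                               # final virtual time

        total = run([worker("a", [1.0, 2.5]), worker("b", [0.5, 0.5, 0.5])])
        print(total)   # -> 3.5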

  5. Scaling for integral simulation of thermal-hydraulic phenomena in SBWR during LOCA

    Energy Technology Data Exchange (ETDEWEB)

    Ishii, M.; Revankar, S.T.; Dowlati, R. [Purdue Univ., West Lafayette, IN (United States)] [and others]

    1995-09-01

    A scaling study has been conducted for simulation of thermal-hydraulic phenomena in the Simplified Boiling Water Reactor (SBWR) during a loss of coolant accident. The scaling method consists of a three-level scaling approach. The integral system scaling (global scaling or top down approach) consists of two levels, the integral response function scaling which forms the first level, and the control volume and boundary flow scaling which forms the second level. The bottom up approach is carried out by local phenomena scaling which forms the third level scaling. Based on this scaling study the design of the model facility called Purdue University Multi-Dimensional Integral Test Assembly (PUMA) has been carried out. The PUMA facility has 1/4 height and 1/100 area ratio scaling, corresponding to the volume scaling of 1/400. The PUMA power scaling based on the integral scaling is 1/200. The present scaling method predicts that PUMA time scale will be one-half that of the SBWR. The system pressure for PUMA is full scale, therefore, a prototypic pressure is maintained. PUMA is designed to operate at and below 1.03 MPa (150 psi), which allows it to simulate the prototypic SBWR accident conditions below 1.03 MPa (150 psi). The facility includes models for all components of importance.
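
    The quoted ratios are mutually consistent. Writing l_R = 1/4 for the height ratio and a_R = 1/100 for the area ratio, and using the standard reduced-height results that velocity and time scale with the square root of the height ratio (this consistency check is added here for illustration; it is not part of the abstract):

        V_R = l_R\, a_R = \tfrac{1}{400}, \qquad
        u_R = \sqrt{l_R} = \tfrac{1}{2}, \qquad
        \tau_R = \frac{l_R}{u_R} = \tfrac{1}{2}, \qquad
        P_R = \frac{u_R}{l_R}\, V_R = 2 \times \tfrac{1}{400} = \tfrac{1}{200},

    reproducing both the stated 1/200 power scaling and the prediction that the PUMA time scale is one-half that of the SBWR.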

  6. Persistence of Initial Conditions in Continental Scale Air Quality Simulations

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset contains the data used in Figures 1 – 6 and Table 2 of the technical note "Persistence of Initial Conditions in Continental Scale Air Quality...

  7. Manufacturing Process Simulation of Large-Scale Cryotanks

    Science.gov (United States)

    Babai, Majid; Phillips, Steven; Griffin, Brian

    2003-01-01

    NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting between the various manufacturing approaches. By creating 3-D manufacturing process simulations, the varying approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows for cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI. As part of the SLI, The Boeing Company was awarded a basic period contract to research and propose options for both a metallic and a composite cryotank. Boeing then entered into a task agreement with the Marshall Space Flight Center to provide manufacturing

  8. Scale-up and optimization of biohydrogen production reactor from laboratory-scale to industrial-scale on the basis of computational fluid dynamics simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xu; Ding, Jie; Guo, Wan-Qian; Ren, Nan-Qi [State Key Laboratory of Urban Water Resource and Environment, Harbin Institute of Technology, 202 Haihe Road, Nangang District, Harbin, Heilongjiang 150090 (China)

    2010-10-15

    The objective of conducting experiments in a laboratory is to gain data that helps in designing and operating large-scale biological processes. However, the scale-up and design of industrial-scale biohydrogen production reactors is still uncertain. In this paper, an established and proven Eulerian-Eulerian computational fluid dynamics (CFD) model was employed to perform hydrodynamics assessments of an industrial-scale continuous stirred-tank reactor (CSTR) for biohydrogen production. The merits of the laboratory-scale CSTR and industrial-scale CSTR were compared and analyzed on the basis of CFD simulation. The outcomes demonstrated that there are many parameters that need to be optimized in the industrial-scale reactor, such as the velocity field and stagnation zone. According to the results of hydrodynamics evaluation, the structure of industrial-scale CSTR was optimized and the results are positive in terms of advancing the industrialization of biohydrogen production. (author)

  9. Impacts of different characterizations of large-scale background on simulated regional-scale ozone over the continental United States

    Science.gov (United States)

    Hogrefe, Christian; Liu, Peng; Pouliot, George; Mathur, Rohit; Roselle, Shawn; Flemming, Johannes; Lin, Meiyun; Park, Rokjin J.

    2018-03-01

    This study analyzes simulated regional-scale ozone burdens both near the surface and aloft, estimates process contributions to these burdens, and calculates the sensitivity of the simulated regional-scale ozone burden to several key model inputs with a particular emphasis on boundary conditions derived from hemispheric or global-scale models. The Community Multiscale Air Quality (CMAQ) model simulations supporting this analysis were performed over the continental US for the year 2010 within the context of the Air Quality Model Evaluation International Initiative (AQMEII) and Task Force on Hemispheric Transport of Air Pollution (TF-HTAP) activities. CMAQ process analysis (PA) results highlight the dominant role of horizontal and vertical advection on the ozone burden in the mid-to-upper troposphere and lower stratosphere. Vertical mixing, including mixing by convective clouds, couples fluctuations in free-tropospheric ozone to ozone in lower layers. Hypothetical bounding scenarios were performed to quantify the effects of emissions, boundary conditions, and ozone dry deposition on the simulated ozone burden. Analysis of these simulations confirms that the characterization of ozone outside the regional-scale modeling domain can have a profound impact on simulated regional-scale ozone. This was further investigated by using data from four hemispheric or global modeling systems (Chemistry - Integrated Forecasting Model (C-IFS), CMAQ extended for hemispheric applications (H-CMAQ), the Goddard Earth Observing System model coupled to chemistry (GEOS-Chem), and AM3) to derive alternate boundary conditions for the regional-scale CMAQ simulations. The regional-scale CMAQ simulations using these four different boundary conditions showed that the largest ozone abundance in the upper layers was simulated when using boundary conditions from GEOS-Chem, followed by the simulations using C-IFS, AM3, and H-CMAQ boundary conditions, consistent with the analysis of the ozone fields

  10. A Framework for Parallel Numerical Simulations on Multi-Scale Geometries

    KAUST Repository

    Varduhn, Vasco

    2012-06-01

    In this paper, an approach for performing numerical multi-scale simulations on finely detailed geometries is presented. In particular, the focus lies on the generation of sufficiently fine mesh representations, where a resolution of tens of millions of voxels is inevitable in order to sufficiently represent the geometry. Furthermore, the propagation of boundary conditions is investigated by using simulation results on the coarser simulation scale as input boundary conditions on the next finer scale. Finally, the applicability of our approach is shown on a two-phase simulation of flooding scenarios in urban structures, running from a city-wide scale to a finely detailed indoor scale on feature-rich building geometries. © 2012 IEEE.

  11. Representative Sinusoids for Hepatic Four-Scale Pharmacokinetics Simulations.

    Directory of Open Access Journals (Sweden)

    Lars Ole Schwen

    Full Text Available The mammalian liver plays a key role for metabolism and detoxification of xenobiotics in the body. The corresponding biochemical processes are typically subject to spatial variations at different length scales. Zonal enzyme expression along sinusoids leads to zonated metabolization already in the healthy state. Pathological states of the liver may involve liver cells affected in a zonated manner or heterogeneously across the whole organ. This spatial heterogeneity, however, cannot be described by most computational models which usually consider the liver as a homogeneous, well-stirred organ. The goal of this article is to present a methodology to extend whole-body pharmacokinetics models by a detailed liver model, combining different modeling approaches from the literature. This approach results in an integrated four-scale model, from single cells via sinusoids and the organ to the whole organism, capable of mechanistically representing metabolization inhomogeneity in livers at different spatial scales. Moreover, the model shows circulatory mixing effects due to a delayed recirculation through the surrounding organism. To show that this approach is generally applicable for different physiological processes, we show three applications as proofs of concept, covering a range of species, compounds, and diseased states: clearance of midazolam in steatotic human livers, clearance of caffeine in mouse livers regenerating from necrosis, and a parameter study on the impact of different cell entities on insulin uptake in mouse livers. The examples illustrate how variations only discernible at the local scale influence substance distribution in the plasma at the whole-body level. In particular, our results show that simultaneously considering variations at all relevant spatial scales may be necessary to understand their impact on observations at the organism scale.

  12. Dislocations and elementary processes of plasticity in FCC metals: atomic scale simulations

    International Nuclear Information System (INIS)

    Rodney, D.

    2000-01-01

    We present atomic-scale simulations of two elementary processes of FCC crystal plasticity. The first study is a molecular dynamics simulation, in a nickel crystal, of the interactions between an edge dislocation and glissile interstitial loops of the type that form under irradiation in displacement cascades. The simulations show various atomic-scale interaction processes leading to the absorption and drag of the loops by the dislocation. These reactions certainly contribute to the formation of the 'clear bands' observed in deformed irradiated materials. The simulations also allow a quantitative study of the role of the glissile loops in irradiation hardening. In particular, dislocation unpinning stresses for certain pinning mechanisms are evaluated from the simulations. The second study consists first of generalizing the quasi-continuum method (QCM), a multi-scale simulation method which couples atomistic techniques and the finite element method, to three dimensions. In the QCM, regions close to dislocation cores are simulated at the atomic scale while the rest of the crystal is simulated at lower resolution by means of a discretization of the displacement fields using the finite element method. The QCM is then tested on the simulation of the formation and breaking of dislocation junctions in an aluminum crystal. Comparison of the simulations with an elastic model of dislocation junctions shows that the structure and strength of the junctions are dominated by elastic line tension effects, as is assumed in classical theories. (author)

  13. Comparison of scale analysis and numerical simulation for saturated zone convective mixing processes

    International Nuclear Information System (INIS)

    Oldenburg, C.M.

    1998-01-01

    Scale analysis can be used to predict a variety of quantities arising from natural systems where processes are described by partial differential equations. For example, scale analysis can be applied to estimate the effectiveness of convective mixing on the dilution of contaminants in groundwater. Scale analysis involves substituting simple quotients for partial derivatives and identifying and equating the dominant terms in an order-of-magnitude sense. For free convection due to sidewall heating of saturated porous media, scale analysis shows that the vertical convective velocity in the thermal boundary layer region is proportional to the Rayleigh number, the horizontal convective velocity is proportional to the square root of the Rayleigh number, and the thermal boundary layer thickness is proportional to the inverse square root of the Rayleigh number. These scale analysis estimates are corroborated by numerical simulations of an idealized system. A scale analysis estimate of mixing time for a tracer mixing by hydrodynamic dispersion in a convection cell also agrees well with numerical simulation for two different Rayleigh numbers. Scale analysis for the heating-from-below scenario produces estimates of maximum velocity one-half as large as in the sidewall case. At small values of the Rayleigh number, this estimate is confirmed by numerical simulation. For larger Rayleigh numbers, simulation results suggest maximum velocities are similar to the sidewall heating scenario. In general, agreement between scale analysis estimates and numerical simulation results serves to validate the method of scale analysis. Application is to radioactive repositories
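
    Written out with symbols (α the effective thermal diffusivity, H the domain height, Ra the porous-medium Rayleigh number), the proportionalities quoted above for the sidewall-heating case take the common form

        u_v \sim \frac{\alpha}{H}\,\mathrm{Ra}, \qquad
        u_h \sim \frac{\alpha}{H}\,\mathrm{Ra}^{1/2}, \qquad
        \delta \sim H\,\mathrm{Ra}^{-1/2}.

    The choice of α/H as the velocity scale is an assumption made here for concreteness; the abstract states only the Ra-dependences.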

  14. Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation

    International Nuclear Information System (INIS)

    Hoshi, T; Fujiwara, T

    2009-01-01

    An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.

  15. Modeling and Simulation in Tribology Across Scales : an Overview

    NARCIS (Netherlands)

    Vakis, Antonis I.; Yastrebov, V.A.; Scheibert, J.; Nicola, L; Dini, D.; Minfray, C.; Almqvist, A.; Paggi, M.; Lee, S.; Limbert, G.; Molinari, J.F.; Anciaux, G.; Echeverri Restrepo, S.; Papangelo, A.; Cammarata, A.; Nicolini, P.; Aghababaei, R.; Putignano, C.; Stupkiewicz, S.; Lengiewicz, J.; Costagliola, G.; Bosia, F.; Guarino, R.; Pugno, N.M.; Carbone, G.; Müser, Martin H.; Ciavarella, M.

    2018-01-01

    This review summarizes recent advances in the area of tribology based on the outcome of a Lorentz Center workshop surveying various physical, chemical and mechanical phenomena across scales. Among the main themes discussed were those of rough surface representations, the breakdown of continuum

  16. Modeling and simulation in tribology across scales: An overview

    DEFF Research Database (Denmark)

    Vakis, A.I.; Yastrebov, V.A.; Scheibert, J.

    2018-01-01

    theories at the nano- and micro-scales, as well as multiscale and multiphysics aspects for analytical and computational models relevant to applications spanning a variety of sectors, from automotive to biotribology and nanotechnology. Significant effort is still required to account for complementary...

  17. The universal statistical distributions of the affinity, equilibrium constants, kinetics and specificity in biomolecular recognition.

    Directory of Open Access Journals (Sweden)

    Xiliang Zheng

    2015-04-01

    Full Text Available We uncovered the universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity, the equilibrium constants, the kinetics and the specificity by exploring the different ligands binding with a particular receptor. The results of the analytical studies are confirmed by the microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of the binding follow a log-normal distribution around the mean and a power law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding and the optimization of which becomes the maximization of the ratio of the free energy gap between the native state and the average of non-native states versus the roughness measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function from different ligands binding with a specific receptor or equivalently specific ligand binding with different receptors. The elucidation of distributions of the kinetics and free energy has guiding roles in studying biomolecular recognition and function through small-molecule evolution and chemical genetics.
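
    One of the stated relations, Gaussian binding free energies implying log-normally distributed equilibrium constants near the mean, follows directly from K = exp(-ΔG/kT) (the exponential of a Gaussian variable is log-normal) and can be checked numerically in a few lines; the distribution parameters below are arbitrary and chosen only for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        kT = 0.593                       # kcal/mol at ~298 K
        dG = rng.normal(loc=-8.0, scale=1.5, size=100_000)   # Gaussian affinities
        K = np.exp(-dG / kT)             # equilibrium constants

        # log K is Gaussian, hence K itself is log-normal around the mean:
        print(np.mean(np.log(K)), np.std(np.log(K)))   # ~13.5 and ~2.5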

  18. Molecular-dynamics simulations of polymeric surfaces for biomolecular applications

    NARCIS (Netherlands)

    Muntean, S.A.

    2013-01-01

    In-vitro diagnostics plays a very important role in the present healthcare system. It consists of a large variety of medical devices designed to diagnose a medical condition by measuring a target molecule in a sample, such as blood or urine. In vitro is the Latin term for 'in glass' and refers here to

  19. The MARTINI force field : Coarse grained model for biomolecular simulations

    NARCIS (Netherlands)

    Marrink, Siewert J.; Risselada, H. Jelger; Yefimov, Serge; Tieleman, D. Peter; de Vries, Alex H.

    2007-01-01

    We present an improved and extended version of our coarse grained lipid model. The new version, coined the MARTINI force field, is parametrized in a systematic way, based on the reproduction of partitioning free energies between polar and apolar phases of a large number of chemical compounds. To

  20. From fuel cells to batteries: Synergies, scales and simulation methods

    OpenAIRE

    Bessler, Wolfgang G.

    2011-01-01

    The recent years have shown a dynamic growth of battery research and development activities both in academia and industry, supported by large governmental funding initiatives throughout the world. A particular focus is being put on lithium-based battery technologies. This situation provides a stimulating environment for the fuel cell modeling community, as there are considerable synergies in the modeling and simulation methods for fuel cells and batteries. At the same time, batter...

  1. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.

  2. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-01-01

    Earthquakes are one of the most destructive natural hazards on our planet Earth. Huge earthquakes striking offshore may cause devastating tsunamis, as evidenced by the 11 March 2011 Japan (moment magnitude Mw9.0) and the 26 December 2004 Sumatra (Mw9.1) earthquakes. Earthquake prediction (in terms of the precise time, place, and magnitude of a coming earthquake) is arguably unfeasible in the foreseeable future. To mitigate seismic hazards from future earthquakes in earthquake-prone areas, such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular, ground motion simulations for past and future (possible) significant earthquakes have been performed to understand factors that affect ground shaking in populated areas, and to provide ground shaking characteristics and synthetic seismograms for emergency preparation and design of earthquake-resistant structures. These simulation results can guide the development of more rational seismic provisions, leading to safer, more efficient, and economical structures in earthquake-prone regions.

  3. Full-scale retrieval of simulated buried transuranic waste

    International Nuclear Information System (INIS)

    Valentich, D.J.

    1993-09-01

    This report describes the results of a field test conducted to determine the effectiveness of using conventional construction equipment for the retrieval of buried transuranic (TRU) waste. A cold (nonhazardous and nonradioactive) test pit (1,100 yd³ volume) was constructed with boxes and drums filled with simulated waste materials, such as metal, plastic, wood, concrete, and sludge. Large objects, including truck beds, tanks, vaults, pipes, and beams, were also placed in the pit. These materials were intended to simulate the type of wastes found in TRU buried waste pits and trenches. A series of commercially available equipment items, such as excavators and tracked loaders outfitted with different end effectors, were used to remove the simulated waste. Work was performed from both the abovegrade and belowgrade positions. During the demonstration, a number of observations, measurements, and analyses were performed to determine which equipment was the most effective in removing the waste. The retrieval rates for the various excavation techniques were recorded. The inherent dust control capabilities of the excavation methods used were observed. The feasibility of teleoperating retrieval equipment was also addressed

  4. Optimal number of coarse-grained sites in different components of large biomolecular complexes.

    Science.gov (United States)

    Sinitskiy, Anton V; Saunders, Marissa G; Voth, Gregory A

    2012-07-26

    The computational study of large biomolecular complexes (molecular machines, cytoskeletal filaments, etc.) is a formidable challenge facing computational biophysics and biology. To achieve biologically relevant length and time scales, coarse-grained (CG) models of such complexes usually must be built and employed. One of the important early stages in this approach is to determine an optimal number of CG sites in different constituents of a complex. This work presents a systematic approach to this problem. First, a universal scaling law is derived and numerically corroborated for the intensity of the intrasite (intradomain) thermal fluctuations as a function of the number of CG sites. Second, this result is used for derivation of the criterion for the optimal number of CG sites in different parts of a large multibiomolecule complex. In the zeroth-order approximation, this approach validates the empirical rule of taking one CG site per fixed number of atoms or residues in each biomolecule, previously widely used for smaller systems (e.g., individual biomolecules). The first-order corrections to this rule are derived and numerically checked by the case studies of the Escherichia coli ribosome and Arp2/3 actin filament junction. In different ribosomal proteins, the optimal number of amino acids per CG site is shown to differ by a factor of 3.5, and an even wider spread may exist in other large biomolecular complexes. Therefore, the method proposed in this paper is valuable for the optimal construction of CG models of such complexes.
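
    In its zeroth-order form, the rule validated above is trivial to implement: residues are mapped to CG sites in fixed-size blocks. The paper's first-order corrections would make the block size differ between the components of a complex (by up to the quoted factor of 3.5 for ribosomal proteins); the sketch below shows only the uncorrected rule.

        def assign_cg_sites(n_residues, residues_per_site=4):
            """Map residue indices to CG site indices, one site per fixed block."""
            return [i // residues_per_site for i in range(n_residues)]

        mapping = assign_cg_sites(12)   # -> [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]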

  5. Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame

    Science.gov (United States)

    Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank

    2017-10-01

    This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using six degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, additional test data are required to draw general conclusions.

  6. The HADDOCK web server for data-driven biomolecular docking

    NARCIS (Netherlands)

    de Vries, S.J.|info:eu-repo/dai/nl/304837717; van Dijk, M.|info:eu-repo/dai/nl/325811113; Bonvin, A.M.J.J.|info:eu-repo/dai/nl/113691238

    2010-01-01

    Computational docking is the prediction or modeling of the three-dimensional structure of a biomolecular complex, starting from the structures of the individual molecules in their free, unbound form. HADDOCK is a popular docking program that takes a data-driven approach to docking, with support for

  7. Improvements to the APBS biomolecular solvation software suite.

    Science.gov (United States)

    Jurrus, Elizabeth; Engel, Dave; Star, Keith; Monson, Kyle; Brandi, Juan; Felberg, Lisa E; Brookes, David H; Wilson, Leighton; Chen, Jiahui; Liles, Karina; Chun, Minju; Li, Peter; Gohara, David W; Dolinsky, Todd; Konecny, Robert; Koes, David R; Nielsen, Jens Erik; Head-Gordon, Teresa; Geng, Weihua; Krasny, Robert; Wei, Guo-Wei; Holst, Michael J; McCammon, J Andrew; Baker, Nathan A

    2018-01-01

    The Adaptive Poisson-Boltzmann Solver (APBS) software was developed to solve the equations of continuum electrostatics for large biomolecular assemblages and has had impact in the study of a broad range of chemical, biological, and biomedical applications. APBS addresses the three key technology challenges for understanding solvation and electrostatics in biomedical applications: accurate and efficient models for biomolecular solvation and electrostatics, robust and scalable software for applying those theories to biomolecular systems, and mechanisms for sharing and analyzing biomolecular electrostatics data in the scientific community. To address new research applications and advancing computational capabilities, we have continually updated APBS and its suite of accompanying software since its release in 2001. In this article, we discuss the models and capabilities that have recently been implemented within the APBS software package including a Poisson-Boltzmann analytical and a semi-analytical solver, an optimized boundary element solver, a geometry-based geometric flow solvation model, a graph theory-based algorithm for determining pKa values, and an improved web-based visualization tool for viewing electrostatics. © 2017 The Protein Society.
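
    For reference, the central equation solved by APBS-class codes is the nonlinear Poisson-Boltzmann equation, quoted here in its standard dimensionless Gaussian-units textbook form for orientation (it is not taken from the article): φ is the reduced electrostatic potential, ε(r) the position-dependent dielectric coefficient, κ̄²(r) the ion-accessibility-modified screening coefficient, and ρᶠ the fixed biomolecular charge density,

        \nabla \cdot \left[ \varepsilon(\mathbf{r})\, \nabla \phi(\mathbf{r}) \right]
        - \bar{\kappa}^{2}(\mathbf{r})\, \sinh \phi(\mathbf{r})
        = -4\pi\, \rho^{f}(\mathbf{r}),

    with the linearized variant obtained by the replacement sinh φ ≈ φ.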

  8. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    Full Text Available A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing are involved in this framework: multiple local distributed computing environments connected by a local network to form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models using various scales. These correlated multi-scale structural system tasks are distributed among clusters, connected together in a multi-level hierarchy, and then coordinated over the internet. The software framework for supporting the multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to realize the proposed concept. The simulation results show that the software framework can increase the speedup performance of the structural analysis. Based on this result, the proposed grid-computing framework is suitable for performing simulations in multi-scale structural analysis.

  9. Biomolecular strategies for cell surface engineering

    Science.gov (United States)

    Wilson, John Tanner

    Islet transplantation has emerged as a promising cell-based therapy for the treatment of diabetes, but its clinical efficacy remains limited by deleterious host responses that underlie islet destruction. In this dissertation, we describe the assembly of ultrathin conformal coatings that confer molecular-level control over the composition and biophysicochemical properties of the islet surface with implications for improving islet engraftment. Significantly, this work provides novel biomolecular strategies for cell surface engineering with broad biomedical and biotechnological applications in cell-based therapeutics and beyond. Encapsulation of cells and tissue offers a rational approach for attenuating deleterious host responses towards transplanted cells, but a need exists to develop cell encapsulation strategies that minimize transplant volume. Towards this end, we endeavored to generate nanothin films of diverse architecture with tunable properties on the extracellular surface of individual pancreatic islets through a process of layer-by-layer (LbL) self assembly. We first describe the formation of poly(ethylene glycol) (PEG)-rich conformal coatings on islets via LbL self assembly of poly(L-lysine)-g-PEG(biotin) and streptavidin. Multilayer thin films conformed to the geometrically and chemically heterogeneous islet surface, and could be assembled without loss of islet viability or function. Significantly, coated islets performed comparably to untreated controls in a murine model of allogenic intraportal islet transplantation, and, to our knowledge, this is the first study to report in vivo survival and function of nanoencapsulated cells or cell aggregates. Based on these findings, we next postulated that structurally similar PLL-g-PEG copolymers comprised of shorter PEG grafts might be used to initiate and propagate the assembly of polyelectrolyte multilayer (PEM) films on pancreatic islets, while simultaneously preserving islet viability. Through control of PLL

  10. Modeling and simulation in tribology across scales: An overview

    DEFF Research Database (Denmark)

    Vakis, A.I.; Yastrebov, V.A.; Scheibert, J.

    2018-01-01

    This review summarizes recent advances in the area of tribology based on the outcome of a Lorentz Center workshop surveying various physical, chemical and mechanical phenomena across scales. Among the main themes discussed were those of rough surface representations, the breakdown of continuum...... nonlinear effects of plasticity, adhesion, friction, wear, lubrication and surface chemistry in tribological models. For each topic, we propose some research directions....

  11. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    Science.gov (United States)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-01

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
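
    In pseudocode terms, the subcolumn strategy described above amounts to driving the SCM once per forcing column and averaging afterwards. The sketch below is schematic: run_scam5 is a hypothetical stand-in for the actual model invocation, and the array shapes are invented.

        import numpy as np

        def run_scam5(forcing_column):
            """Hypothetical stand-in for one SCAM5 run driven by a single
            subcolumn of the gridded forcing; returns e.g. a precipitation series."""
            return forcing_column.mean(axis=-1)    # placeholder physics

        def run_domain(gridded_forcing):
            # One SCM integration per subcolumn, then the domain mean.
            results = [run_scam5(col) for col in gridded_forcing]
            return np.mean(results, axis=0)

        # 16 subcolumns x 48 output times x 30 model levels (invented shapes):
        domain_mean = run_domain(np.random.rand(16, 48, 30))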

  12. Groundwater flow simulation on local scale. Setting boundary conditions of groundwater flow simulation on site scale model in the step 4

    International Nuclear Information System (INIS)

    Onoe, Hironori; Saegusa, Hiromitsu; Ohyama, Takuya

    2007-03-01

    Japan Atomic Energy Agency has been conducting a wide range of geoscientific research in order to build a foundation for multidisciplinary studies of the deep geological environment as a basis of research and development for geological disposal of nuclear wastes. Ongoing geoscientific research programs include the Regional Hydrogeological Study (RHS) project and the Mizunami Underground Research Laboratory (MIU) project in the Tono region, Gifu Prefecture. The main goal of these projects is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment at several spatial scales. The RHS project is a Local scale study for understanding the groundwater flow system from the recharge area to the discharge area. The Surface-based Investigation Phase of the MIU project is a Site scale study for understanding the deep geological environment immediately surrounding the MIU construction site using a multiphase, iterative approach. In this study, hydrogeological modeling and groundwater flow simulation on the Local scale were carried out in order to set boundary conditions for the Site scale model, based on the data obtained from surface-based investigations in Step 4 in the Site scale of the MIU project. As a result of the study, boundary conditions for groundwater flow simulation on the Site scale model of Step 4 could be obtained. (author)

  13. Screening wells by multi-scale grids for multi-stage Markov Chain Monte Carlo simulation

    DEFF Research Database (Denmark)

    Akbari, Hani; Engsig-Karup, Allan Peter

    2018-01-01

    /production wells, aiming at accurate breakthrough capturing as well as the above-mentioned efficiency goals. However, this short-time simulation needs the fine-scale structure of the geological model around wells, and running a fine-scale model is not as cheap as necessary for screening steps. On the other hand, applying it to a coarse-scale model discards important data around wells and causes inaccurate results, particularly for breakthrough capturing, which is important for prediction applications. Therefore we propose a multi-scale grid which preserves the fine-scale model around wells (as well as highly permeable regions...... and fractures), coarsens the rest of the field, and keeps efficiency and accuracy for the well screening stage and coarse-scale simulation as well. A discrete wavelet transform is used as a powerful tool to generate the desired unstructured multi-scale grid efficiently. Finally an accepted proposal on coarse
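
    The wavelet-flagging idea can be sketched in a few lines with PyWavelets: large detail coefficients mark heterogeneous (or near-well) regions that must stay at fine resolution. The field and the thresholding rule below are illustrative only, not the scheme of the paper.

        import numpy as np
        import pywt

        perm = np.random.lognormal(mean=0.0, sigma=2.0, size=(64, 64))  # toy field
        cA, (cH, cV, cD) = pywt.dwt2(np.log(perm), 'haar')  # one-level Haar DWT

        detail = np.abs(cH) + np.abs(cV) + np.abs(cD)   # local heterogeneity measure
        keep_fine = detail > np.percentile(detail, 90)  # top 10% stays fine-scale
        # Each True entry flags a 2x2 block of fine cells to preserve; the rest
        # of the field can be represented on the coarse grid.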

  14. Multi-Scale Coupling Between Monte Carlo Molecular Simulation and Darcy-Scale Flow in Porous Media

    KAUST Repository

    Saad, Ahmed Mohamed

    2016-06-01

    In this work, an efficient coupling between Monte Carlo (MC) molecular simulation and Darcy-scale flow in porous media is presented. The cell-centered finite difference method with a non-uniform rectangular mesh was used to discretize the simulation domain and solve the governing equations. To speed up the MC simulations, we implemented a recently developed scheme that quickly generates MC Markov chains out of pre-computed ones, based on the reweighting and reconstruction algorithm. This method dramatically reduces the computational time required by the MC simulations, from hours to seconds. To demonstrate the strength of the proposed coupling in terms of computational time efficiency and numerical accuracy in fluid properties, various numerical experiments covering different compressible single-phase flow scenarios were conducted. The novelty of the introduced scheme lies in allowing an efficient coupling of the molecular scale and the Darcy scale in reservoir simulators. This leads to an accurate description of the thermodynamic behavior of the simulated reservoir fluids, consequently enhancing the confidence in the flow predictions in porous media.
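
    The identity underlying the reuse of pre-computed chains is standard Boltzmann reweighting: samples generated at inverse temperature β₀ can be reweighted to estimate averages at a nearby β₁ without rerunning the chain. The paper's reweighting-and-reconstruction scheme is more elaborate; this sketch shows only the basic step.

        import numpy as np

        def reweighted_average(observable, energy, beta0, beta1):
            """Estimate <observable> at beta1 from samples generated at beta0."""
            w = np.exp(-(beta1 - beta0) * (energy - energy.min()))  # stable shift
            return np.sum(w * observable) / np.sum(w)

        # Usage with a pre-computed chain of energies U and an observable rho:
        # rho_at_beta1 = reweighted_average(rho, U, beta0=1.0, beta1=1.05)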

  15. GPU-Accelerated Sparse Matrix Solvers for Large-Scale Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Many large-scale numerical simulations can be broken down into common mathematical routines. While the applications may differ, the need to perform functions such as...

  16. Large Scale Earth's Bow Shock with Northern IMF as Simulated by ...

    Indian Academy of Sciences (India)

    results with the available MHD simulations under the same scaled solar wind (SW) and (IMF) ... their effects in dissipating flow-energy, in heating matter, in accelerating particles to high, presumably ... such as hybrid models (Omidi et al. 2013 ...

  17. A study on a nano-scale materials simulation using a PC cluster

    International Nuclear Information System (INIS)

    Choi, Deok Kee; Ryu, Han Kyu

    2002-01-01

    Many scientists have paid attention to the application of molecular dynamics to chemistry, biology and physics. With the recent popularity of nanotechnology, nano-scale analysis has become a major subject in various engineering fields. The underlying nano-scale analysis is based on classical molecular theories, represented by molecular dynamics. Based on Newton's laws of motion, the movement of each particle is determined by numerical integration. As the size of the computation is closely related to the number of molecules, materials simulation takes up a huge amount of computer resources, and it is only recently that the application of molecular dynamics to materials simulation has drawn attention from many researchers. Thanks to high-performance computers, materials simulation via molecular dynamics looks promising. In this study, a PC cluster consisting of multiple commodity PCs is established and nano-scale materials simulations are carried out. Micro-sized crack propagation inside a nano material is displayed by the simulation
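
    The numerical integration referred to above is typically a velocity-Verlet update of Newton's equations. A minimal single-particle version, with a placeholder harmonic force and arbitrary units, looks as follows; a production cluster code parallelizes the force evaluation across the PCs, which this sketch does not attempt.

        import numpy as np

        def velocity_verlet(x, v, force, mass, dt, n_steps):
            """Integrate Newton's equations with the velocity-Verlet scheme."""
            f = force(x)
            for _ in range(n_steps):
                v = v + 0.5 * dt * f / mass   # half-kick
                x = x + dt * v                # drift
                f = force(x)                  # recompute forces
                v = v + 0.5 * dt * f / mass   # half-kick
            return x, v

        # Harmonic test particle (period 2*pi in these units):
        x, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                               lambda x: -x, mass=1.0, dt=0.01, n_steps=1000)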

  18. Adaptive resolution simulation of a biomolecule and its hydration shell: Structural and dynamical properties

    International Nuclear Information System (INIS)

    Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt

    2015-01-01

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations

  19. Adaptive resolution simulation of a biomolecule and its hydration shell: Structural and dynamical properties

    Energy Technology Data Exchange (ETDEWEB)

    Fogarty, Aoife C., E-mail: fogarty@mpip-mainz.mpg.de; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de; Kremer, Kurt, E-mail: kremer@mpip-mainz.mpg.de [Max Planck Institute for Polymer Research, Ackermannweg 10, 55128 Mainz (Germany)

    2015-05-21

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
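
    In its original force-interpolation form, the on-the-fly change of resolution in AdResS couples the atomistic (AT) and coarse-grained (CG) force fields through a smooth switching function w(x), equal to 1 in the atomistic region and 0 in the reservoir (this general form is quoted from the broader AdResS literature for orientation, not from the article itself):

        \mathbf{F}_{\alpha\beta} =
        w(x_\alpha)\, w(x_\beta)\, \mathbf{F}^{\mathrm{AT}}_{\alpha\beta}
        + \left[ 1 - w(x_\alpha)\, w(x_\beta) \right] \mathbf{F}^{\mathrm{CG}}_{\alpha\beta},

    so that molecules crossing the hybrid region gradually gain or lose their atomistic detail.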

  20. Adequacy of power-to-volume scaling philosophy to simulate natural circulation in Integral Test Facilities

    International Nuclear Information System (INIS)

    Nayak, A.K.; Vijayan, P.K.; Saha, D.; Venkat Raj, V.; Aritomi, Masanori

    1998-01-01

    Theoretical and experimental investigations were carried out to study the adequacy of power-to-volume scaling philosophy for the simulation of natural circulation and to establish the scaling philosophy applicable for the design of the Integral Test Facility (ITF-AHWR) for the Indian Advanced Heavy Water Reactor (AHWR). The results indicate that a reduction in the flow channel diameter of the scaled facility as required by the power-to-volume scaling philosophy may affect the simulation of natural circulation behaviour of the prototype plants. This is caused by the distortions due to the inability to simulate the frictional resistance of the scaled facility. Hence, it is recommended that the flow channel diameter of the scaled facility should be as close as possible to the prototype. This was verified by comparing the natural circulation behaviour of a prototype 220 MWe Indian PHWR and its scaled facility (FISBE-1) designed based on power-to-volume scaling philosophy. It is suggested from examinations using a mathematical model and a computer code that the FISBE-1 simulates the steady state and the general trend of transient natural circulation behaviour of the prototype reactor adequately. Finally the proposed scaling method was applied for the design of the ITF-AHWR. (author)

  1. Towards an integrated multiscale simulation of turbulent clouds on PetaScale computers

    International Nuclear Information System (INIS)

    Wang Lianping; Ayala, Orlando; Parishani, Hossein; Gao, Guang R; Kambhamettu, Chandra; Li Xiaoming; Rossi, Louis; Orozco, Daniel; Torres, Claudio; Grabowski, Wojciech W; Wyszogrodzki, Andrzej A; Piotrowski, Zbigniew

    2011-01-01

    The development of precipitating warm clouds is affected by several effects of small-scale air turbulence including enhancement of droplet-droplet collision rate by turbulence, entrainment and mixing at the cloud edges, and coupling of mechanical and thermal energies at various scales. Large-scale computation is a viable research tool for quantifying these multiscale processes. Specifically, top-down large-eddy simulations (LES) of shallow convective clouds typically resolve scales of turbulent energy-containing eddies while the effects of turbulent cascade toward viscous dissipation are parameterized. Bottom-up hybrid direct numerical simulations (HDNS) of cloud microphysical processes resolve fully the dissipation-range flow scales but only partially the inertial subrange scales. It is desirable to systematically decrease the grid length in LES and increase the domain size in HDNS so that they can be better integrated to address the full range of scales and their coupling. In this paper, we discuss computational issues and physical modeling questions in expanding the ranges of scales realizable in LES and HDNS, and in bridging LES and HDNS. We review our on-going efforts in transforming our simulation codes towards PetaScale computing, in improving physical representations in LES and HDNS, and in developing better methods to analyze and interpret the simulation results.

  2. Numerical simulation of lubrication mechanisms at mesoscopic scale

    DEFF Research Database (Denmark)

    Hubert, C.; Bay, Niels; Christiansen, Peter

    2011-01-01

    The mechanisms of liquid lubrication in metal forming are studied at a mesoscopic scale, adopting a 2D sequential fluid-solid weak coupling approach earlier developed in the first author's laboratory. This approach involves two computation steps. The first one is a fully coupled fluid-structure F...... of pyramidal indentations. The tests are performed with variable reduction and drawing speed under controlled front and back tension forces. Visual observations through a transparent die of the fluid entrapment and escape from the cavities using a CCD camera show the mechanisms of MicroPlastoHydroDynamic Lubrication (MPHDL) as well as cavity shrinkage due to lubricant compression and escape and strip deformation.

  3. Proceedings of joint meeting of the 6th simulation science symposium and the NIFS collaboration research 'large scale computer simulation'

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-03-01

    Joint meeting of the 6th Simulation Science Symposium and the NIFS Collaboration Research 'Large Scale Computer Simulation' was held on December 12-13, 2002 at the National Institute for Fusion Science, with the aim of promoting interdisciplinary collaborations in various fields of computer simulation. The meeting, attended by more than 40 people, consisted of 11 invited and 22 contributed papers, whose topics extended not only to fusion science but also to related fields such as astrophysics, earth science, fluid dynamics, molecular dynamics, and computer science. (author)

  4. Simulating atomic-scale phenomena on surfaces of unconventional superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Kreisel, Andreas; Andersen, Brian [Niels Bohr Institute (Denmark); Choubey, Peayush; Hirschfeld, Peter [Univ. of Florida (United States); Berlijn, Tom [CNMS and CSMD, Oak Ridge National Laboratory (United States)

    2016-07-01

    Interest in atomic-scale effects in superconductors has increased because of two general developments. First, the discovery of new materials such as the cuprate superconductors, heavy-fermion and Fe-based superconductors, where the coherence length of the Cooper pairs is so small as to be comparable to the lattice constant, rendering small-scale effects important. Second, the experimental ability to image sub-atomic features using scanning tunneling microscopy, which allows numerous physical properties of the homogeneous system to be unraveled, such as the quasiparticle excitation spectra or various types of competing order, as well as properties of local disorder. On the theoretical side, the available methods are based on lattice models, restricting the spatial resolution of such calculations. In the present project we combine lattice calculations using the Bogoliubov-de Gennes equations describing the superconductor with wave-function information containing sub-atomic resolution obtained from ab initio approaches. This allows us to calculate phenomena on surfaces of superconductors as directly measured in scanning tunneling experiments and therefore opens the possibility to identify underlying properties of these materials and explain observed features of disorder. It will be shown how this method applies to the cuprate material Bi₂Sr₂CaCu₂O₈ and an Fe-based superconductor.

  5. Development of the Transport Class Model (TCM) Aircraft Simulation From a Sub-Scale Generic Transport Model (GTM) Simulation

    Science.gov (United States)

    Hueschen, Richard M.

    2011-01-01

    A six degree-of-freedom, flat-earth dynamics, non-linear, and non-proprietary aircraft simulation was developed that is representative of a generic mid-sized twin-jet transport aircraft. The simulation was developed from a non-proprietary, publicly available, subscale twin-jet transport aircraft simulation using scaling relationships and a modified aerodynamic database. The simulation has an extended aerodynamics database with aero data outside the normal transport-operating envelope (large angle-of-attack and sideslip values). The simulation has representative transport aircraft surface actuator models with variable rate-limits and generally fixed position limits. The simulation contains a generic 40,000 lb sea level thrust engine model. The engine model is a first order dynamic model with a variable time constant that changes according to simulation conditions. The simulation provides a means for interfacing a flight control system to use the simulation sensor variables and to command the surface actuators and throttle position of the engine model.

  6. Small Reservoir Impact on Simulated Watershed-Scale Nutrient Yield

    Directory of Open Access Journals (Sweden)

    Shane J. Prochnow

    2007-01-01

    The Soil and Water Assessment Tool (SWAT) is used to assess the influence of small upland reservoirs (PL566) on watershed nutrient yield. SWAT simulates the impact of collectively increasing and decreasing PL566 magnitudes (size parameters) on the watershed. Totally removing PL566 reservoirs results in a 100% increase in total phosphorus and an 82% increase in total nitrogen, while a total maximum daily load (TMDL) calling for a 50% reduction in total phosphorus can be achieved with a 500% increase in the magnitude of PL566s in the watershed. PL566 reservoirs capture agricultural pollution in surface flow, providing long-term storage of these constituents when they settle to the reservoir beds. A potential strategy to reduce future downstream nutrient loading is to enhance or construct new PL566 reservoirs in the upper basin to better capture agricultural runoff.

  7. Numerical simulations of a large scale oxy-coal burner

    Energy Technology Data Exchange (ETDEWEB)

    Chae, Taeyoung [Korea Institute of Industrial Technology, Cheonan (Korea, Republic of). Energy System R and D Group; Sungkyunkwan Univ., Suwon (Korea, Republic of). School of Mechanical Engineering; Park, Sanghyun; Ryu, Changkook [Sungkyunkwan Univ., Suwon (Korea, Republic of). School of Mechanical Engineering; Yang, Won [Korea Institute of Industrial Technology, Cheonan (Korea, Republic of). Energy System R and D Group

    2013-07-01

    Oxy-coal combustion is one of the promising carbon dioxide capture and storage (CCS) technologies; it uses oxygen and recirculated CO{sub 2} as the oxidizer instead of air. Due to the differences in physical properties between CO{sub 2} and N{sub 2}, oxy-coal combustion requires the development of burners and boilers based on a fundamental understanding of the flame shape, temperature, radiation and heat flux. For the design of a new oxy-coal combustion system, computational fluid dynamics (CFD) is an essential tool to evaluate detailed combustion characteristics and supplement experimental results. In this study, CFD analysis was performed to understand the combustion characteristics inside a tangential-vane swirl type 30 MW coal burner for air-mode and oxy-mode operations. In oxy-mode operation, various compositions of the primary and secondary oxidizers were assessed, depending on the recirculation ratio of the flue gas. For the simulations, devolatilization of coal and char burnout by O{sub 2}, CO{sub 2} and H{sub 2}O were predicted with a Lagrangian particle tracking method considering the size distribution of the pulverized coal and turbulent dispersion. The radiative heat transfer was solved by employing the discrete ordinates method with the weighted sum of gray gases model (WSGGM) optimized for oxy-coal combustion. In the simulation results for oxy-mode operation, the reduced swirl strength of the secondary oxidizer increased the flame length, due to the lower specific volume of CO{sub 2} compared with N{sub 2}. The flame length was also sensitive to the flow rate of the primary oxidizer. Because the oxidizer contains no N{sub 2}, thermal NO{sub x} formation is suppressed, making NO{sub x} lower in oxy-mode than in air-mode. The predicted results showed trends similar to the measured temperature profiles for various oxidizer compositions. Further numerical investigations, combined with more detailed experimental results, are required to improve the burner design.

  8. Site scale groundwater flow in Olkiluoto - complementary simulations

    International Nuclear Information System (INIS)

    Loefman, J.

    2000-06-01

    This work comprises complementary simulations to the previous groundwater flow analysis at the Olkiluoto site. The objective is to study the effects of flow porosity, the conceptual model for solute transport, fracture zones, land uplift and initial conditions on the results. The numerical simulations are carried out up to 10000 years into the future, employing the same modelling approach and site-specific flow and transport model as in the previous work, except for the differences in the case descriptions. The result quantities considered are the salinity and the driving force in the vicinity of the repository. The salinity field and the driving force are sensitive to the flow porosity and the conceptual model for solute transport. Ten-fold flow porosity and the dual-porosity approach retard the transport of solutes in the bedrock, resulting in brackish groundwater conditions at the repository at 10000 years A.P. (in the previous work the groundwater in the repository turned fresh). The higher driving forces can be attributed to the higher concentration gradients resulting from the opposite effects of the land uplift, which pushes fresh water deeper and deeper into the bedrock, and the higher flow porosity and the dual-porosity model, which retard the transport of solutes. The cases computed (unrealistically) without fracture zones and postglacial land uplift show that both have an effect on the results and cannot be ignored in coupled and transient groundwater flow analyses. The salinity field and the driving force are also sensitive to the initial salinity field, especially during the first 500 years A.P. This sensitivity will, however, diminish as soon as fresh water dilutes the brackish and saline water and decreases the concentration gradients. Fresh water conditions also result in a steady state for the driving force in the repository area. (orig.)

  9. Mercury and methylmercury stream concentrations in a Coastal Plain watershed: a multi-scale simulation analysis.

    Science.gov (United States)

    Knightes, C D; Golden, H E; Journey, C A; Davis, G M; Conrads, P A; Marvin-DiPasquale, M; Brigham, M E; Bradley, P M

    2014-04-01

    Mercury is a ubiquitous global environmental toxicant responsible for most US fish advisories. Processes governing mercury concentrations in rivers and streams are not well understood, particularly at multiple spatial scales. We investigate how insights gained from reach-scale mercury data and model simulations can be applied at broader watershed scales using a spatially and temporally explicit watershed hydrology and biogeochemical cycling model, VELMA. We simulate fate and transport using reach-scale (0.1 km²) study data and evaluate applications to multiple watershed scales. The reach-scale VELMA parameterization was applied to two nested sub-watersheds (28 km² and 25 km²) and the encompassing watershed (79 km²). Results demonstrate that simulated flow and total mercury concentrations compare reasonably to observations at different scales, but simulated methylmercury concentrations are out of phase with observations. These findings suggest that the intricacies of methylmercury biogeochemical cycling and transport are under-represented in VELMA and underscore the complexity of simulating mercury fate and transport. Published by Elsevier Ltd.

  10. Theoretical and Numerical Properties of a Gyrokinetic Plasma: Issues Related to Transport Time Scale Simulation

    International Nuclear Information System (INIS)

    Lee, W.W.

    2003-01-01

    Particle simulation has played an important role in recent investigations of turbulence in magnetically confined plasmas. In this paper, theoretical and numerical properties of a gyrokinetic plasma as well as its relationship with magnetohydrodynamics (MHD) are discussed, with the ultimate aim of simulating microturbulence on the transport time scale using massively parallel computers

  11. Scaling of two-phase flow transients using reduced pressure system and simulant fluid

    International Nuclear Information System (INIS)

    Kocamustafaogullari, G.; Ishii, M.

    1987-01-01

    Scaling criteria for a natural circulation loop under single-phase flow conditions are derived. Based on these criteria, practical applications for designing a scaled-down model are considered. Particular emphasis is placed on scaling a test model at reduced pressure levels compared to the prototype and on fluid-to-fluid scaling. The large number of similarity groups which are to be matched between model and prototype makes the design of a scale model a challenging task. The present study demonstrates a new approach to this classical problem using two-phase flow scaling parameters. It indicates that real-time scaling is not a practical solution and that a scaled-down model should have an accelerated (shortened) time scale. An important result is the proposed new scaling methodology for simulating pressure transients. It is obtained by considering the changes of the fluid property groups which appear within the two-phase similarity parameters and the single-phase to two-phase flow transition parameters. Sample calculations are performed for modeling two-phase flow transients of a high-pressure water system by a low-pressure water system or a Freon system. It is shown that modeling is possible in both cases for simulating pressure transients. However, simulation of phase-change transitions is not possible with a reduced-pressure water system without distortion in either power or time. (orig.)

  12. A small-scale experimental reactor combined with a simulator for training purposes

    International Nuclear Information System (INIS)

    Destot, M.; Hagendorf, M.; Vanhumbeeck, D.; Lecocq-Bernard, J.

    1981-01-01

    The authors discuss how a small-scale reactor combined with a training simulator can be a valuable aid in all forms of training. They describe the CEN-based SILOETTE reactor in Grenoble and its combined simulator. They also look at prospects for the future of the system in the light of experience acquired with the ARIANE reactor and the trends in the development of simulators for training purposes [fr

  13. Pilot scale simulation of cokemaking in integrated steelworks

    Energy Technology Data Exchange (ETDEWEB)

    Mahoney, M.; Andriopoulos, N.; Keating, J.; Loo, C.E.; McGuire, S. [Newcastle Technology Centre, Wallsend (Australia)

    2005-12-01

    Pilot-scale coke ovens are widely used to produce coke samples for characterisation and also to assess the coking behaviour of coal blends. The Newcastle Technology Centre of BHP Billiton has built a sophisticated 400 kg oven, which can produce cokes under a range of carefully controlled bulk densities and heating rates. A freely movable heating wall allows the thrust generated at this wall at the different stages of coking to be determined. This paper describes comparative work carried out to determine a laboratory stabilisation technique for laboratory cokes. The strength of stabilised cokes is characterised using a number of tumble tests, and correlations between different drum sizes are also given, since a major constraint in laboratory testing is the limited mass of sample available. Typical oven wall pressure results, and results obtained from temperature and pressure probes embedded in the charge during coking, are also presented.

  14. Modeling and simulation of large scale stirred tank

    Science.gov (United States)

    Neuville, John R.

    The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process by the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software, and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants, which had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer and suspend solid particles. The DOE sites at Hanford in Richland, Washington, West Valley in New York, and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat-bottom cylindrical mixing vessel with a centrally located helical coil and agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top and bottom regions of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full-scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham plastic. Particles in the mixture have an abrasive character that causes excessive erosion to internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the

  15. Modeling Group Perceptions Using Stochastic Simulation: Scaling Issues in the Multiplicative AHP

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; van den Honert, Robin; Salling, Kim Bang

    2016-01-01

    This paper proposes a new decision support approach for applying stochastic simulation to the multiplicative analytic hierarchy process (AHP) in order to deal with issues concerning the scale parameter. The paper suggests a new approach that captures the influence from the scale parameter by maki...
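
    Although the record above is truncated, the method it describes lends itself to a compact illustration: in the multiplicative AHP, priorities are normalized row geometric means of the pairwise comparison matrix, and stochastic simulation propagates judgment uncertainty through that calculation. The sketch below is a minimal, hypothetical example; the matrix entries, noise level and sample count are assumptions, not values from the paper:

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical 3x3 pairwise comparison matrix on a geometric scale
        # (entry a_ij estimates how many times alternative i beats j).
        A = np.array([[1.0, 2.0, 4.0],
                      [0.5, 1.0, 3.0],
                      [0.25, 1 / 3, 1.0]])

        def multiplicative_ahp_weights(A):
            """Priorities as normalized row geometric means (multiplicative AHP)."""
            gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
            return gm / gm.sum()

        def perturbed(A, sigma, rng):
            """Perturb upper-triangle judgments with lognormal noise, keep reciprocity."""
            B = A.copy()
            n = A.shape[0]
            for i in range(n):
                for j in range(i + 1, n):
                    B[i, j] = A[i, j] * rng.lognormal(0.0, sigma)
                    B[j, i] = 1.0 / B[i, j]
            return B

        samples = np.array([multiplicative_ahp_weights(perturbed(A, 0.3, rng))
                            for _ in range(10_000)])
        print("mean weights:", samples.mean(axis=0))
        print("std  weights:", samples.std(axis=0))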

  16. Scaling of mesoscale simulations of polymer melts with the bare friction coefficient

    NARCIS (Netherlands)

    Kindt, P.; Kindt, P.; Briels, Willem J.

    2005-01-01

    Both the Rouse and reptation model predict that the dynamics of a polymer melt scale inversely proportionally with the Langevin friction coefficient ξ. Mesoscale Brownian dynamics simulations of polyethylene validate these scaling predictions, providing the reptational friction ξR = ξ + ξC is

  17. Sensitivity of the scale partition for variational multiscale large-eddy simulation of channel flow

    NARCIS (Netherlands)

    Holmen, J.; Hughes, T.J.R.; Oberai, A.A.; Wells, G.N.

    2004-01-01

    The variational multiscale method has been shown to perform well for large-eddy simulation (LES) of turbulent flows. The method relies upon a partition of the resolved velocity field into large- and small-scale components. The subgrid model then acts only on the small scales of motion, unlike

  18. Simulating space-time uncertainty in continental-scale gridded precipitation fields for agrometeorological modelling

    NARCIS (Netherlands)

    Wit, de A.J.W.; Bruin, de S.

    2006-01-01

    Previous analyses of the effects of uncertainty in precipitation fields on the output of EU Crop Growth Monitoring System (CGMS) demonstrated that the influence on simulated crop yield was limited at national scale, but considerable at local and regional scales. We aim to propagate uncertainty due

  19. Large-eddy simulation with accurate implicit subgrid-scale diffusion

    NARCIS (Netherlands)

    B. Koren (Barry); C. Beets

    1996-01-01

    A method for large-eddy simulation is presented that does not use an explicit subgrid-scale diffusion term. Subgrid-scale effects are modelled implicitly through an appropriate monotone (in the sense of Spekreijse 1987) discretization method for the advective terms. Special attention is

  20. The mechanical design and simulation of a scaled H⁻ Penning ion source.

    Science.gov (United States)

    Rutter, T; Faircloth, D; Turner, D; Lawrie, S

    2016-02-01

    The existing ISIS Penning H⁻ source is unable to produce the beam parameters required for the front end test stand and so a new, high duty factor, high brightness scaled source is being developed. This paper details first the development of an electrically biased aperture plate for the existing ISIS source and second, the design, simulation, and development of a prototype scaled source.

  1. The mechanical design and simulation of a scaled H- Penning ion source

    Science.gov (United States)

    Rutter, T.; Faircloth, D.; Turner, D.; Lawrie, S.

    2016-02-01

    The existing ISIS Penning H- source is unable to produce the beam parameters required for the front end test stand and so a new, high duty factor, high brightness scaled source is being developed. This paper details first the development of an electrically biased aperture plate for the existing ISIS source and second, the design, simulation, and development of a prototype scaled source.

  2. Scale issues in soil hydrology related to measurement and simulation: A case study in Colorado

    Science.gov (United States)

    State variables, such as soil water content (SWC), are typically measured or inferred at very small scales while being simulated at larger scales relevant to spatial management or hillslope areas. Thus there is an implicit spatial disparity that is often ignored. Surface runoff, on the other hand, ...

  3. The interplay of intrinsic and extrinsic bounded noises in biomolecular networks.

    Directory of Open Access Journals (Sweden)

    Giulio Caravagna

    After being considered a nuisance to be filtered out, it has recently become clear that biochemical noise plays a complex, often fully functional, role for a biomolecular network. The influence of intrinsic and extrinsic noises on biomolecular networks has been intensively investigated in the last ten years, though contributions on the co-presence of both are sparse. Extrinsic noise is usually modeled as an unbounded white or colored Gaussian stochastic process, even though realistic stochastic perturbations are clearly bounded. In this paper we consider Gillespie-like stochastic models of nonlinear networks, i.e. the intrinsic noise, where the model jump rates are affected by colored bounded extrinsic noises synthesized by a suitable biochemical state-dependent Langevin system. These systems are described by a master equation, and a simulation algorithm to analyze them is derived. This new modeling paradigm should enlarge the class of systems amenable to modeling. We investigated the influence of both the amplitude and the autocorrelation time of an extrinsic Sine-Wiener noise on: (i) the Michaelis-Menten approximation of noisy enzymatic reactions, which we show to be applicable also in the co-presence of both intrinsic and extrinsic noise, (ii) a model of an enzymatic futile cycle and (iii) a genetic toggle switch. In (ii) and (iii) we show that the presence of a bounded extrinsic noise induces qualitative modifications in the probability densities of the involved chemicals, where new modes emerge, thus suggesting the possible functional role of bounded noises.
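
    A minimal sketch of the idea can be given for a birth-death process whose production rate is modulated by a bounded Sine-Wiener noise η(t) = B sin(√(2/τ) W(t)), with W(t) a Wiener process. The network, the rate constants and the quasi-static treatment of the noise over each jump interval are illustrative simplifications, not the exact algorithm derived in the paper:

        import numpy as np

        rng = np.random.default_rng(0)

        B, tau = 0.5, 1.0            # bound and autocorrelation time of the noise
        k_prod, k_deg = 10.0, 0.1    # birth-death rate constants (illustrative)

        t, t_end = 0.0, 500.0
        x = 0                        # molecule count (intrinsic noise via the SSA)
        w = 0.0                      # underlying Wiener process of the bounded noise
        ts, xs = [t], [x]

        while t < t_end:
            eta = B * np.sin(np.sqrt(2.0 / tau) * w)   # bounded Sine-Wiener noise
            a1 = k_prod * (1.0 + eta)                  # modulated production propensity
            a2 = k_deg * x                             # degradation propensity
            a0 = a1 + a2
            dt = rng.exponential(1.0 / a0)             # Gillespie time to next event
            # quasi-static simplification: noise held fixed across the jump interval
            w += np.sqrt(dt) * rng.normal()
            x += 1 if rng.random() < a1 / a0 else -1
            t += dt
            ts.append(t); xs.append(x)

        print("mean copy number (second half):", np.mean(xs[len(xs) // 2:]))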

  4. The interplay of intrinsic and extrinsic bounded noises in biomolecular networks.

    Science.gov (United States)

    Caravagna, Giulio; Mauri, Giancarlo; d'Onofrio, Alberto

    2013-01-01

    After being considered a nuisance to be filtered out, it has recently become clear that biochemical noise plays a complex, often fully functional, role for a biomolecular network. The influence of intrinsic and extrinsic noises on biomolecular networks has been intensively investigated in the last ten years, though contributions on the co-presence of both are sparse. Extrinsic noise is usually modeled as an unbounded white or colored Gaussian stochastic process, even though realistic stochastic perturbations are clearly bounded. In this paper we consider Gillespie-like stochastic models of nonlinear networks, i.e. the intrinsic noise, where the model jump rates are affected by colored bounded extrinsic noises synthesized by a suitable biochemical state-dependent Langevin system. These systems are described by a master equation, and a simulation algorithm to analyze them is derived. This new modeling paradigm should enlarge the class of systems amenable to modeling. We investigated the influence of both the amplitude and the autocorrelation time of an extrinsic Sine-Wiener noise on: (i) the Michaelis-Menten approximation of noisy enzymatic reactions, which we show to be applicable also in the co-presence of both intrinsic and extrinsic noise, (ii) a model of an enzymatic futile cycle and (iii) a genetic toggle switch. In (ii) and (iii) we show that the presence of a bounded extrinsic noise induces qualitative modifications in the probability densities of the involved chemicals, where new modes emerge, thus suggesting the possible functional role of bounded noises.

  5. Dynamic subgrid scale model of large eddy simulation of cross bundle flows

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Barsamian, H.R.

    1996-01-01

    The dynamic subgrid-scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep-bundle simulations. The advantage of the dynamic subgrid-scale model is that it requires no input model coefficient: the coefficient is evaluated dynamically at each nodal location in the flow domain. Dynamic subgrid-scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid-scale model, the Smagorinsky eddy viscosity model (which serves as the base model for the dynamic subgrid-scale model) and available experimental data. Spectral results of the dynamic subgrid-scale model correlate better with the experimental data. Satisfactory turbulence characteristics are observed through flow visualization
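
    The essence of the dynamic procedure is the Germano identity combined with a least-squares evaluation of the coefficient from the resolved field itself (Lilly's refinement of the original procedure). The following is a one-dimensional scalar analogue, not the full 3D tensor implementation used in GUST, that shows the mechanics of test filtering and the dynamic coefficient computation:

        import numpy as np

        N, L = 256, 2 * np.pi
        x = np.linspace(0.0, L, N, endpoint=False)
        u = np.sin(x) + 0.5 * np.sin(4 * x + 1.3) + 0.1 * np.sin(17 * x)  # toy resolved field

        k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # wavenumbers (periodic domain)

        def ddx(f):
            """Spectral derivative of a real periodic field."""
            return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

        def test_filter(f):
            """Sharp spectral cutoff at half the grid cutoff (test filter, width 2*dx)."""
            fh = np.fft.fft(f)
            fh[np.abs(k) > np.max(np.abs(k)) / 2] = 0.0
            return np.real(np.fft.ifft(fh))

        dx = L / N
        delta, delta_t = dx, 2 * dx

        S = ddx(u)                    # resolved "strain rate" (1D surrogate)
        St = ddx(test_filter(u))      # strain rate of the test-filtered field

        # Germano identity terms (scalar analogues of L_ij and M_ij)
        Lg = test_filter(u * u) - test_filter(u) * test_filter(u)
        M = -2.0 * (delta_t ** 2 * np.abs(St) * St
                    - test_filter(delta ** 2 * np.abs(S) * S))

        C_dyn = np.mean(Lg * M) / np.mean(M * M)   # Lilly least squares, averaged
        print("dynamic coefficient C =", C_dyn)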

  6. Scale Model Simulation of Enhanced Geothermal Reservoir Creation

    Science.gov (United States)

    Gutierrez, M.; Frash, L.; Hampton, J.

    2012-12-01

    Geothermal energy technology has successfully provided a means of generating stable base-load electricity for many years. However, implementation has been spatially limited by the restricted availability of high-quality traditional hydrothermal resources, which require the combination of a shallow high heat flow anomaly and an aquifer with sufficient permeability and continuous fluid recharge. Enhanced Geothermal Systems (EGS) have been proposed as a potential solution to enable additional energy production from non-conventional hydrothermal resources. Hydraulic fracturing is considered the primary means of creating functional EGS reservoirs at sites where the permeability of the rock is too limited to allow cost-effective heat recovery. EGS reservoir creation requires improved fracturing methodology, rheologically controllable fracturing fluids, and temperature-hardened proppants. Although large fracture volumes (several cubic km) have been created in the field, circulating fluid through these full volumes and maintaining fracture volumes have proven difficult. Stimulation technology and methodology as used in the oil and gas industry for sedimentary formations are well developed; however, they have not been sufficiently demonstrated for EGS reservoir creation. Insufficient data and measurements under geothermal conditions make it difficult to directly translate experience from the oil and gas industries to EGS applications. To demonstrate the feasibility of EGS reservoir creation and subsequent geothermal energy production, and to improve the understanding of hydraulic fracturing and propping in EGS reservoirs, a heated true-triaxial load cell with a high-pressure fluid injection system was developed to simulate an EGS system from stimulation to production. This apparatus is capable of loading a 30 × 30 × 30 cm rock sample with independent principal stresses up to 13 MPa while simultaneously providing heating up to 180 degrees C. Multiple oriented boreholes of 5 to 10 mm

  7. Gyrokinetic simulations of turbulent transport: size scaling and chaotic behaviour

    International Nuclear Information System (INIS)

    Villard, L; Brunner, S; Casati, A; Aghdam, S Khosh; Lapillonne, X; McMillan, B F; Bottino, A; Dannert, T; Goerler, T; Hatzky, R; Jenko, F; Merz, F; Chowdhury, J; Ganesh, R; Garbet, X; Grandgirard, V; Latu, G; Sarazin, Y; Idomura, Y; Jolliet, S

    2010-01-01

    Important steps towards the understanding of turbulent transport have been made with the development of the gyrokinetic framework for describing turbulence and with the emergence of numerical codes able to solve the set of gyrokinetic equations. This paper presents some of the main recent advances in gyrokinetic theory and computing of turbulence. Solving 5D gyrokinetic equations for each species requires state-of-the-art high performance computing techniques involving massively parallel computers and parallel scalable algorithms. The various numerical schemes that have been explored until now, Lagrangian, Eulerian and semi-Lagrangian, each have their advantages and drawbacks. A past controversy regarding the finite size effect (finite ρ*) in ITG turbulence has now been resolved. It has triggered an intensive benchmarking effort and careful examination of the convergence properties of the different numerical approaches. Now, both Eulerian and Lagrangian global codes are shown to agree and to converge to the flux-tube result in the ρ* → 0 limit. It is found, however, that an appropriate treatment of geometrical terms is necessary: inconsistent approximations that are sometimes used can lead to important discrepancies. Turbulent processes are characterized by chaotic behaviour, often accompanied by bursts and avalanches. Performing ensemble averages of statistically independent simulations, starting from different initial conditions, is presented as a way to assess the intrinsic variability of turbulent fluxes and obtain reliable estimates of the standard deviation. Further developments concerning non-adiabatic electron dynamics around mode-rational surfaces and electromagnetic effects are discussed.
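
    The ensemble-averaging strategy mentioned above is straightforward to sketch: run statistically independent realizations, time-average each after discarding the transient, and report the spread across realizations. The AR(1) surrogate below merely stands in for a gyrokinetic run; all numbers are illustrative:

        import numpy as np

        def run_simulation(seed, n_steps=2000):
            """Stand-in for one independent turbulence run: a bursty AR(1) flux signal."""
            r = np.random.default_rng(seed)
            flux, x = np.empty(n_steps), 0.0
            for i in range(n_steps):
                x = 0.95 * x + r.normal()      # autocorrelated background fluctuation
                flux[i] = x ** 2               # positive, bursty "turbulent flux"
            return flux[n_steps // 2:].mean()  # time-average after discarding transient

        fluxes = np.array([run_simulation(seed) for seed in range(16)])
        print(f"ensemble mean flux:     {fluxes.mean():.3f}")
        print(f"std over realizations:  {fluxes.std(ddof=1):.3f}")
        print(f"standard error of mean: {fluxes.std(ddof=1) / np.sqrt(len(fluxes)):.3f}")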

  8. Anomalous scaling of structure functions and dynamic constraints on turbulence simulations

    International Nuclear Information System (INIS)

    Yakhot, Victor; Sreenivasan, Katepalli R.

    2006-12-01

    The connection between anomalous scaling of structure functions (intermittency) and numerical methods for turbulence simulations is discussed. It is argued that the computational work for direct numerical simulations (DNS) of fully developed turbulence increases as Re{sup 4}, and not as Re{sup 3} expected from Kolmogorov's theory, where Re is a large-scale Reynolds number. Various relations for the moments of acceleration and velocity derivatives are derived. An infinite set of exact constraints on dynamically consistent subgrid models for Large Eddy Simulations (LES) is derived from the Navier-Stokes equations, and some problems of principle associated with existing LES models are highlighted. (author)
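
    The arithmetic behind these estimates is worth spelling out: with the Kolmogorov scale η ~ L Re^(-3/4), a DNS needs ~Re^(9/4) grid points and ~Re^(3/4) time steps per large-eddy turnover, i.e. work ~Re^3, while the paper argues that intermittency raises this to ~Re^4. A few illustrative numbers:

        # Classical Kolmogorov estimate of DNS cost vs. the intermittency-corrected one.
        for Re in (1e3, 1e4, 1e5):
            grid_points = Re ** (9 / 4)   # (L / eta)^3 with eta ~ L * Re^(-3/4)
            time_steps = Re ** (3 / 4)    # CFL-limited steps over one large-eddy time
            work_k41 = grid_points * time_steps   # ~ Re^3 (Kolmogorov)
            work_anom = Re ** 4                   # scaling argued in this paper
            print(f"Re={Re:.0e}: K41 work ~ {work_k41:.2e}, "
                  f"anomalous ~ {work_anom:.2e}, ratio {work_anom / work_k41:.1f}")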

  9. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    Science.gov (United States)

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically-based Water Erosion Prediction Project (WEPP) model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 and 50 t ha⁻¹ yr⁻¹ depending on the spatial location on the hillslope and have only limited correspondence with the results of the 137Cs technique. These differences in absolute soil loss values could be due either to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimates by the 137Cs technique are more appropriate in reflecting both the spatial extent and the magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially-distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment rates of 0.6 t ha⁻¹ yr⁻¹ from the model are in close agreement with the observed sediment yield calculated from stratigraphical changes and downcore variations in 137Cs concentrations. Sediment erosion rates averaged over the entire catchment, of 1 to 2 t ha⁻¹ yr⁻¹, are significantly lower than the results obtained at hillslope scale, confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model. The

  10. Fully predictive simulation of real-scale cable tray fire based on small-scale laboratory experiments

    Energy Technology Data Exchange (ETDEWEB)

    Beji, Tarek; Merci, Bart [Ghent Univ. (Belgium). Dept. of Flow, Heat and Combustion Mechanics; Bonte, Frederick [Bel V, Brussels (Belgium)

    2015-12-15

    This paper presents a computational fluid dynamics (CFD)-based modelling strategy for real-scale cable tray fires. The challenge was to perform fully predictive simulations (that could be called 'blind' simulations) using solely information from laboratory-scale experiments, in addition to the geometrical arrangement of the cables. The results of the latter experiments were used (1) to construct the fuel molecule and the chemical reaction for combustion, and (2) to estimate the overall pyrolysis and burning behaviour. More particularly, the strategy regarding the second point consists of adopting a surface-based pyrolysis model. Since the burning behaviour of each cable could not be tracked individually (due to computational constraints), 'groups' of cables were modelled with an overall cable surface area equal to the actual value. The results obtained for one large-scale test (a stack of five horizontal trays) are quite encouraging, especially for the peak heat release rate (HRR), which was predicted with a relative deviation of 3 %. The time to reach the peak is, however, overestimated by 4.7 min (i.e. 94 %). Also, the fire duration is overestimated by 5 min (i.e. 24 %). These discrepancies are mainly attributed to differences in the HRRPUA (heat release rate per unit area) profiles between the small and large scales. The latter was calculated by estimating the burning area of the cables using video fire analysis (VFA).

  11. Tailoring the Variational Implicit Solvent Method for New Challenges: Biomolecular Recognition and Assembly

    Directory of Open Access Journals (Sweden)

    Clarisse Gravina Ricci

    2018-02-01

    Predicting solvation free energies and describing the complex water behavior that plays an important role in essentially all biological processes is a major challenge from the computational standpoint. While an atomistic, explicit description of the solvent can turn out to be too expensive in large biomolecular systems, most implicit solvent methods fail to capture “dewetting” effects and heterogeneous hydration by relying on a pre-established (i.e., guessed) solvation interface. Here we focus on the Variational Implicit Solvent Method, an implicit solvent method that adds water “plasticity” back to the picture by formulating the solvation free energy as a functional of all possible solvation interfaces. We survey VISM's applications to the problem of molecular recognition and report some of the most recent efforts to tailor VISM for more challenging scenarios, with the ultimate goal of including thermal fluctuations into the framework. The advances reported herein pave the way to make VISM a uniquely successful approach to characterize complex solvation properties in the recognition and binding of large-scale biomolecular complexes.

  12. Tailoring the Variational Implicit Solvent Method for New Challenges: Biomolecular Recognition and Assembly

    Science.gov (United States)

    Ricci, Clarisse Gravina; Li, Bo; Cheng, Li-Tien; Dzubiella, Joachim; McCammon, J. Andrew

    2018-01-01

    Predicting solvation free energies and describing the complex water behavior that plays an important role in essentially all biological processes is a major challenge from the computational standpoint. While an atomistic, explicit description of the solvent can turn out to be too expensive in large biomolecular systems, most implicit solvent methods fail to capture “dewetting” effects and heterogeneous hydration by relying on a pre-established (i.e., guessed) solvation interface. Here we focus on the Variational Implicit Solvent Method, an implicit solvent method that adds water “plasticity” back to the picture by formulating the solvation free energy as a functional of all possible solvation interfaces. We survey VISM's applications to the problem of molecular recognition and report some of the most recent efforts to tailor VISM for more challenging scenarios, with the ultimate goal of including thermal fluctuations into the framework. The advances reported herein pave the way to make VISM a uniquely successful approach to characterize complex solvation properties in the recognition and binding of large-scale biomolecular complexes. PMID:29484300

  13. Mathematical simulation of column flotation in pilot scale

    International Nuclear Information System (INIS)

    Simpson, J.; Jordan, D.; Cifuentes, G.; Morales, A.; Briones, L.

    2010-01-01

    The Procemin-I area of the Centro Minero Metalurgico Tecnologia y Servicio (CIMM T and S) has a full milling and flotation pilot plant in which several kinds of work are carried out, such as optimization of circuits, plant design and determination of operating parameters. One of the units in operation is a pilot-scale flotation column with a medium level of automation. The problem presented by the operation of the flotation column is the weak relationship, during operation, between the basic operating parameters and the metallurgical results. The mathematical models used today to estimate the metallurgical results (i.e. concentrate, tailings, enrichment and recovery) depend on variables that are manipulated by hand according to operator experience, but the process engineer needs tools free of subjective judgement to obtain the best performance of the column. The method used to assist column operation was a mathematical model based on stepwise regression, considering empirical relationships between operational variables and experimental results. All the mathematical relationships developed in this study show good correlation (above 90% precision), except one (about 70%), owing to a non-regular mineralogical feed. (Author) 7 refs.
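
    As an illustration of the modelling approach, forward stepwise regression greedily adds the regressor that most reduces the residual sum of squares. The sketch below uses hypothetical operating variables and synthetic data, since the actual column variables are not given in the record:

        import numpy as np

        rng = np.random.default_rng(7)

        # Hypothetical operating variables (e.g. air rate, wash water, froth depth, feed rate)
        n = 120
        X = rng.normal(size=(n, 4))
        y = 2.0 * X[:, 0] - 1.2 * X[:, 2] + rng.normal(scale=0.5, size=n)  # "recovery"

        def forward_stepwise(X, y, max_vars=None):
            """Greedy forward selection: add the regressor that most reduces the RSS."""
            remaining, selected = list(range(X.shape[1])), []
            max_vars = max_vars or X.shape[1]
            while remaining and len(selected) < max_vars:
                rss = {}
                for j in remaining:
                    cols = selected + [j]
                    A = np.column_stack([np.ones(len(y)), X[:, cols]])
                    beta, res, *_ = np.linalg.lstsq(A, y, rcond=None)
                    rss[j] = res[0] if res.size else np.sum((y - A @ beta) ** 2)
                best = min(rss, key=rss.get)
                selected.append(best)
                remaining.remove(best)
            return selected

        print("selected variables (in order):", forward_stepwise(X, y, max_vars=2))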

  14. Simulation test of PIUS-type reactor with large scale experimental apparatus

    International Nuclear Information System (INIS)

    Tamaki, M.; Tsuji, Y.; Ito, T.; Tasaka, K.; Kukita, Yutaka

    1995-01-01

    A large-scale experimental apparatus for simulating the PIUS-type reactor has been constructed, keeping the volumetric scaling ratio of the realistic reactor model. Fundamental experiments such as steady-state operation and a pump trip simulation were performed, and the experimental results were compared with those obtained with the small-scale apparatus at JAERI. We have already reported the effectiveness of feedback control of the primary loop pump speed (PI control) for stable operation. In this paper this feedback system is modified and PID control is introduced. The new system worked well for the operation of the PIUS-type reactor even under rapid transient conditions. (author)

  15. Large-scale micromagnetics simulations with dipolar interaction using all-to-all communications

    Directory of Open Access Journals (Sweden)

    Hiroshi Tsukahara

    2016-05-01

    We implement in our micromagnetics simulator low-complexity parallel fast-Fourier-transform algorithms, which reduce the number of all-to-all communications from six to two. Almost all the computation time of a micromagnetics simulation is taken up by the calculation of the magnetostatic field, which can be computed using the fast Fourier transform method. The results show that the simulation time is decreased with good scalability, even when the micromagnetics simulation is performed using 8192 physical cores. This high parallelization efficiency enables large-scale micromagnetics simulations with over one billion cells to be performed. Because massively parallel computing is needed to simulate the magnetization dynamics of real permanent magnets composed of many micron-sized grains, it is expected that our simulator will reveal how magnetization dynamics influences the coercivity of permanent magnets.
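
    The structure of the expensive step is a convolution of the magnetization with a demagnetizing kernel, which the FFT turns into a pointwise product; in a distributed-memory 3D FFT, the transpose steps are the all-to-all communications whose count the paper reduces. A one-dimensional sketch of the FFT-convolution structure follows; note that the kernel here is a simple decaying surrogate chosen only to show the mechanics, not the exact dipolar kernel:

        import numpy as np

        N = 1024
        dx = 1.0e-9                  # cell size (m), illustrative
        Ms = 8.0e5                   # saturation magnetization (A/m), illustrative

        m = np.zeros(N)              # x-component of unit magnetization, 1D chain
        m[N // 4: 3 * N // 4] = 1.0  # a magnetized block, toy setup

        # Surrogate demag kernel in Fourier space, decaying with |k| purely to
        # demonstrate the FFT-convolution structure (NOT a physically exact kernel).
        k = np.fft.fftfreq(N, d=dx)
        K_hat = -1.0 / (1.0 + (np.abs(k) * dx * 10) ** 2)

        # Convolution theorem: field = inverse FFT of (kernel * FFT of magnetization)
        H_demag = Ms * np.real(np.fft.ifft(K_hat * np.fft.fft(m)))
        print("field at block centre:", H_demag[N // 2], "A/m")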

  16. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Tianwu; Yu, Linglin; Zhang, Chong [Applied Physics Program and Department of Bioengineering, Rice University, Houston, Texas 77005 (United States); Ma, Jianpeng, E-mail: jpma@bcm.tmc.edu [Applied Physics Program and Department of Bioengineering, Rice University, Houston, Texas 77005 (United States); Verna and Marrs McLean Department of Biochemistry and Molecular Biology, Baylor College of Medicine, One Baylor Plaza, BCM-125, Houston, Texas 77030 (United States)

    2014-07-28

    In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studying large complex systems. It mainly inherits the continuous simulated tempering (CST) method from our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Differing from conventional PT methods, despite the large stride of the total temperature range, the PCST method requires very few copies of the simulation, typically 2-3 copies, yet is still capable of maintaining a high rate of exchange between neighboring copies. Furthermore, in the PCST method the size of the system does not dramatically affect the number of copies needed, because the exchange rate is independent of the total potential energy, thus providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested on the two-dimensional Ising model, a Lennard-Jones liquid and an all-atom folding simulation of the small globular protein trp-cage in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent.
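
    For contrast with PCST, the conventional PT swap criterion accepts an exchange between neighboring replicas with probability min(1, exp[(β_i − β_j)(E_i − E_j)]), which involves the total potential energy; since energy fluctuations grow like √N, the acceptable temperature gap shrinks as the system grows, which is the size problem the record says PCST avoids. A toy demonstration of this baseline effect, using harmonic-like energies with all parameters illustrative:

        import numpy as np

        rng = np.random.default_rng(3)

        def swap_accepted(beta_i, beta_j, E_i, E_j):
            """Standard parallel-tempering Metropolis swap criterion."""
            return np.log(rng.random()) < (beta_i - beta_j) * (E_i - E_j)

        # Harmonic-like toy system with N degrees of freedom: <E> = N/beta,
        # fluctuations ~ sqrt(N)/beta.  The swap rate collapses as N grows.
        beta_i, beta_j = 1.00, 1.05
        for N in (100, 1_000, 10_000):
            hits = sum(
                swap_accepted(beta_i, beta_j,
                              rng.normal(N / beta_i, np.sqrt(N) / beta_i),
                              rng.normal(N / beta_j, np.sqrt(N) / beta_j))
                for _ in range(5_000)
            )
            print(f"N={N:>6}: swap acceptance ~ {hits / 5_000:.3f}")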

  17. Large Scale Monte Carlo Simulation of Neutrino Interactions Using the Open Science Grid and Commercial Clouds

    International Nuclear Information System (INIS)

    Norman, A.; Boyd, J.; Davies, G.; Flumerfelt, E.; Herner, K.; Mayer, N.; Mhashilhar, P.; Tamsett, M.; Timm, S.

    2015-01-01

    Modern long-baseline neutrino experiments like the NOvA experiment at Fermilab require large-scale, compute-intensive simulations of their neutrino beam fluxes and of backgrounds induced by cosmic rays. The amount of simulation required to keep the systematic uncertainties in the simulation from dominating the final physics results is often 10x to 100x that of the actual detector exposure. For the first physics results from NOvA this has meant the simulation of more than 2 billion cosmic ray events in the far detector and more than 200 million NuMI beam spill simulations. Performing simulation at these high statistics levels has been made possible for NOvA through the use of the Open Science Grid and through large-scale runs on commercial clouds like Amazon EC2. We detail the challenges in performing large-scale simulation in these environments and how the computing infrastructure for the NOvA experiment has been adapted to seamlessly support the running of different simulation and data processing tasks on these resources. (paper)

  18. Multi-Scale Fusion of Information for Uncertainty Quantification and Management in Large-Scale Simulations

    Science.gov (United States)

    2015-12-02

    ...of completely new nonlinear Malliavin calculus. This type of calculus is important for the analysis and simulation of stationary and/or “causal... been limited by the fact that it requires the solution of an optimization problem with noisy gradients. When using deterministic optimization schemes... under uncertainty. We tested new developments on nonlinear Malliavin calculus, combining reduced basis methods with ANOVA, model validation, on...

  19. Development and validation of the Simulation Learning Effectiveness Scale for nursing students.

    Science.gov (United States)

    Pai, Hsiang-Chu

    2016-11-01

    To develop and validate the Simulation Learning Effectiveness Scale, which is based on Bandura's social cognitive theory. A simulation programme is a significant teaching strategy for nursing students. Nevertheless, there are few evidence-based instruments that validate the effectiveness of simulation learning in Taiwan. This is a quantitative descriptive design. In Study 1, a nonprobability convenience sample of 151 student nurses completed the Simulation Learning Effectiveness Scale. Exploratory factor analysis was used to examine the factor structure of the instrument. In Study 2, which involved 365 student nurses, confirmatory factor analysis and structural equation modelling were used to analyse the construct validity of the Simulation Learning Effectiveness Scale. In Study 1, exploratory factor analysis yielded three components: self-regulation, self-efficacy and self-motivation. The three factors explained 29·09%, 27·74% and 19·32% of the variance, respectively. The final 12-item instrument with the three factors explained 76·15% of the variance. Cronbach's alpha was 0·94. In Study 2, confirmatory factor analysis identified a second-order factor termed the Simulation Learning Effectiveness Scale. Goodness-of-fit indices showed an acceptable fit overall for the full model (χ²/df (51) = 3·54, comparative fit index = 0·96, Tucker-Lewis index = 0·95 and standardised root-mean-square residual = 0·035). In addition, teacher's competence in encouraging learning, and self-reflection and insight, were significantly and positively associated with the Simulation Learning Effectiveness Scale. Teacher's competence in encouraging learning was also significantly and positively associated with self-reflection and insight. Overall, these variables explained 21·9% of the variance in students' learning effectiveness. The Simulation Learning Effectiveness Scale is a reliable and valid means to assess simulation learning effectiveness for nursing students

  20. Application of biomolecular recognition via magnetic nanoparticle in nanobiotechnology

    Science.gov (United States)

    Shen, Wei-Zheng; Cetinel, Sibel; Montemagno, Carlo

    2018-05-01

    The marriage of biomolecular recognition and magnetic nanoparticles creates tremendous opportunities in the development of advanced technology, both in academic research and in industrial sectors. In this paper, we review current progress on magnetic nanoparticle-biomolecule hybrid systems, particularly those employing the recognition pairs DNA-DNA, DNA-protein, protein-protein, and protein-inorganics, in several nanobiotechnology application areas, including molecular biology, diagnostics, medical treatment, industrial biocatalysis, and environmental separations.

  1. ROSA-IV Large Scale Test Facility (LSTF) system description for second simulated fuel assembly

    International Nuclear Information System (INIS)

    1990-10-01

    The ROSA-IV Program's Large Scale Test Facility (LSTF) is a test facility for integral simulation of thermal-hydraulic response of a pressurized water reactor (PWR) during small break loss-of-coolant accidents (LOCAs) and transients. In this facility, the PWR core nuclear fuel rods are simulated using electric heater rods. The simulated fuel assembly which was installed during the facility construction was replaced with a new one in 1988. The first test with this second simulated fuel assembly was conducted in December 1988. This report describes the facility configuration and characteristics as of this date (December 1988) including the new simulated fuel assembly design and the facility changes which were made during the testing with the first assembly as well as during the renewal of the simulated fuel assembly. (author)

  2. Large-scale derived flood frequency analysis based on continuous simulation

    Science.gov (United States)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km²), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into the catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km². 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes the several
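
    The core of such a multisite weather generator can be sketched in a few lines: spatially correlated daily innovations from a Cholesky factor of the inter-site correlation matrix, temporal persistence through an AR(1) recursion, and a transformation of the Gaussian anomalies into a skewed, intermittent precipitation-like variable. All parameters below are illustrative, not the calibrated values used in the study:

        import numpy as np

        rng = np.random.default_rng(11)

        n_sites, n_days = 5, 3650

        # Hypothetical inter-site correlation, decaying with "distance"
        dist = np.abs(np.subtract.outer(np.arange(n_sites), np.arange(n_sites)))
        C = 0.8 ** dist
        Lc = np.linalg.cholesky(C)

        rho = 0.7                      # lag-1 autocorrelation (temporal persistence)
        z = np.zeros(n_sites)
        fields = np.empty((n_days, n_sites))
        for t in range(n_days):
            eps = Lc @ rng.normal(size=n_sites)        # spatially correlated innovations
            z = rho * z + np.sqrt(1 - rho ** 2) * eps  # AR(1) in time, unit variance
            fields[t] = z

        # Transform Gaussian anomalies into a precipitation-like variable:
        # wet day if z exceeds a threshold, amount from the exceedance (illustrative).
        wet = fields > 0.5
        precip = np.where(wet, np.expm1(fields - 0.5) * 5.0, 0.0)
        print("site-0/site-1 correlation:", np.corrcoef(fields[:, 0], fields[:, 1])[0, 1])
        print("wet-day fraction:", wet.mean())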

  3. Using a million cell simulation of the cerebellum: network scaling and task generality.

    Science.gov (United States)

    Li, Wen-Ke; Hausknecht, Matthew J; Stone, Peter; Mauk, Michael D

    2013-11-01

    Several factors combine to make it feasible to build computer simulations of the cerebellum and to test them in biologically realistic ways. These simulations can be used to help understand the computational contributions of various cerebellar components, including the relevance of the enormous number of neurons in the granule cell layer. In previous work we have used a simulation containing 12000 granule cells to develop new predictions and to account for various aspects of eyelid conditioning, a form of motor learning mediated by the cerebellum. Here we demonstrate the feasibility of scaling up this simulation to over one million granule cells using parallel graphics processing unit (GPU) technology. We observe that this increase in the number of granule cells requires only twice the execution time of the smaller simulation on the GPU. We demonstrate that this simulation, like its smaller predecessor, can emulate certain basic features of conditioned eyelid responses, with a slight improvement in performance in one measure. We also use this simulation to examine the generality of the computational properties that we have derived from studying eyelid conditioning. We demonstrate that this scaled-up simulation can learn a high level of performance in a classic machine learning task, the cart-pole balancing task. These results suggest that this parallel GPU technology can be used to build very large-scale simulations whose connectivity ratios match those of the real cerebellum, and that these simulations can be used to guide future studies on cerebellar-mediated tasks and on machine learning problems. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    Directory of Open Access Journals (Sweden)

    José Gaite

    2013-05-01

    Halo models of the large scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations, and by taking into account previous studies of self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure, or to define their size in terms of small-scale baryonic physics.

  5. Practice-oriented optical thin film growth simulation via multiple scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Turowski, Marcus, E-mail: m.turowski@lzh.de [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); Jupé, Marco [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany); Melzig, Thomas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Moskovkin, Pavel [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Daniel, Alain [Centre for Research in Metallurgy, CRM, 21 Avenue du bois Saint Jean, Liège 4000 (Belgium); Pflug, Andreas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Lucas, Stéphane [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Ristau, Detlev [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany)

    2015-10-01

    Simulation of the coating process is a very promising approach to understanding thin film formation. Nevertheless, this complex matter cannot be covered by a single simulation technique. To consider all mechanisms and processes influencing the optical properties of the growing thin films, various established theoretical methods have been combined into a multi-scale model approach. The simulation techniques have been selected in order to describe all processes in the coating chamber, especially the various mechanisms of thin film growth, and to enable the analysis of the resulting structural as well as optical and electronic layer properties. All methods are merged with adapted communication interfaces to achieve optimum compatibility of the different approaches and to generate physically meaningful results. The present contribution offers an approach for the full simulation of an Ion Beam Sputtering (IBS) coating process combining direct simulation Monte Carlo, classical molecular dynamics, kinetic Monte Carlo, and density functional theory. The simulation is performed, as an example, for an existing IBS coating plant in order to validate the developed multi-scale approach. Finally, the modeled results are compared to experimental data. - Highlights: • A model approach for simulating an Ion Beam Sputtering (IBS) process is presented. • In order to combine the different techniques, optimized interfaces are developed. • The transport of atomic species in the coating chamber is calculated. • We model structural and optical film properties based on simulated IBS parameters. • The modeled and the experimental refractive index data agree very well.
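
    Of the four coupled techniques, the kinetic Monte Carlo link is the easiest to illustrate compactly: an event loop that selects deposition or thermally activated hops with Arrhenius rates and advances time by an exponential increment (for brevity the sketch below allows occasional null hop moves). The solid-on-solid model and all barriers are generic illustrations, not the actual IBS growth model:

        import numpy as np

        rng = np.random.default_rng(5)

        L_sites, kT = 64, 0.025               # lattice size, thermal energy (eV)
        h = np.zeros(L_sites, dtype=int)      # film height per column (solid-on-solid)
        F = 1.0                               # deposition rate per site (1/s)
        nu0, E_d = 1e12, 0.6                  # attempt frequency (1/s), hop barrier (eV)

        def hop_rate(i):
            """Arrhenius hop rate; lateral neighbours raise the barrier (0.2 eV each)."""
            n_lat = (h[(i - 1) % L_sites] >= h[i]) + (h[(i + 1) % L_sites] >= h[i])
            return nu0 * np.exp(-(E_d + 0.2 * n_lat) / kT)

        t = 0.0
        while h.mean() < 5.0:                 # grow ~5 monolayers
            rates = np.array([F] * L_sites + [hop_rate(i) for i in range(L_sites)])
            R = rates.sum()
            t += rng.exponential(1.0 / R)     # kMC time increment
            ev = rng.choice(2 * L_sites, p=rates / R)
            if ev < L_sites:
                h[ev] += 1                    # deposition event
            else:
                i = ev - L_sites              # hop toward the lower neighbour
                j = (i + 1) % L_sites if h[(i + 1) % L_sites] < h[(i - 1) % L_sites] \
                    else (i - 1) % L_sites
                if h[j] < h[i]:
                    h[i] -= 1; h[j] += 1
        print(f"grew {h.mean():.1f} monolayers in {t:.2e} s; roughness {h.std():.2f}")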

  6. Model abstraction addressing long-term simulations of chemical degradation of large-scale concrete structures

    International Nuclear Information System (INIS)

    Jacques, D.; Perko, J.; Seetharam, S.; Mallants, D.

    2012-01-01

    This paper presents a methodology to assess the spatial-temporal evolution of chemical degradation fronts in real-size concrete structures typical of a near-surface radioactive waste disposal facility. The methodology consists of the abstraction of a so-called full (complicated) model accounting for the multicomponent - multi-scale nature of concrete to an abstracted (simplified) model which simulates chemical concrete degradation based on a single component in the aqueous and solid phase. The abstracted model is verified against chemical degradation fronts simulated with the full model under both diffusive and advective transport conditions. Implementation in the multi-physics simulation tool COMSOL allows simulation of the spatial-temporal evolution of chemical degradation fronts in large-scale concrete structures. (authors)
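
    The abstracted single-component model can be illustrated with a one-dimensional sketch: an aggressive species diffuses in from the exposed face and is consumed by a solid buffering phase, and the degradation front is the depth to which the buffer has been depleted. All parameters below are illustrative only, not values from the paper:

        import numpy as np

        # 1D diffusion of an aggressive species into concrete, with consumption of a
        # buffering solid phase (single-component abstraction; parameters illustrative).
        nx, L = 200, 0.5                 # cells, domain depth (m)
        dx = L / nx
        D = 1.0e-11                      # effective diffusivity (m^2/s)
        c = np.zeros(nx); c[0] = 1.0     # normalized concentration, fixed at boundary
        s = np.ones(nx)                  # normalized solid buffer inventory
        k = 1.0e-8                       # consumption rate constant (1/s)

        dt = 0.4 * dx * dx / D           # explicit stability limit
        years = 300.0
        for _ in range(int(years * 3.15e7 / dt)):
            lap = np.zeros(nx)
            lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
            react = k * c * (s > 0)                 # consume only while buffer remains
            c[1:] += dt * (D * lap[1:] - react[1:])
            s = np.maximum(s - dt * react, 0.0)
            c[0] = 1.0                              # Dirichlet boundary (exposed face)

        front = dx * np.argmax(s > 0.5)             # depth where buffer half-consumed
        print(f"degradation front after {years:.0f} y: {front * 100:.1f} cm")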

  7. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    Science.gov (United States)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

    Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models, owing to computational efficiency while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is impacted by (1) the formulation of hydrologic response units (HRUs), and (2) the aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment-scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECSs) representative of a hillslope in first-order sub-basins. Earlier investigations have shown that formulating ECSs at the scale of a first-order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment-scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine-scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations and spatially based model evaluation metrics.

  8. A comparison of large-scale electron beam and bench-scale ⁶⁰Co irradiations of simulated aqueous waste streams

    Science.gov (United States)

    Kurucz, Charles N.; Waite, Thomas D.; Otaño, Suzana E.; Cooper, William J.; Nickelsen, Michael G.

    2002-11-01

    The effectiveness of using high energy electron beam irradiation for the removal of toxic organic chemicals from water and wastewater has been demonstrated by commercial-scale experiments conducted at the Electron Beam Research Facility (EBRF) located in Miami, Florida and elsewhere. The EBRF treats various waste and water streams up to 450 l min⁻¹ (120 gal min⁻¹) with doses up to 8 kilogray (kGy). Many experiments have been conducted by injecting toxic organic compounds into various plant feed streams and measuring the concentrations of compound(s) before and after exposure to the electron beam at various doses. Extensive experimentation has also been performed by dissolving selected chemicals in 22,700 l (6000 gal) tank trucks of potable water to simulate contaminated groundwater, and pumping the resulting solutions through the electron beam. These large-scale experiments, although necessary to demonstrate the commercial viability of the process, require a great deal of time and effort. This paper compares the results of large-scale electron beam irradiations to those obtained from bench-scale irradiations using gamma rays generated by a ⁶⁰Co source. Dose constants from exponential contaminant removal models are found to depend on the source of radiation and initial contaminant concentration. Possible reasons for observed differences such as a dose rate effect are discussed. Models for estimating electron beam dose constants from bench-scale gamma experiments are presented. Data used to compare the removal of organic compounds using gamma irradiation and electron beam irradiation are taken from the literature and a series of experiments designed to examine the effects of pH, the presence of turbidity, and initial concentration on the removal of various organic compounds (benzene, toluene, phenol, PCE, TCE and chloroform) from simulated groundwater.
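
    The exponential removal model and its dose constant can be made concrete with a short fitting sketch; the dose-concentration pairs below are placeholders, not measurements from this study.

      import numpy as np

      # Exponential contaminant removal: C(D) = C0 * exp(-k * D), with dose
      # D in kGy and dose constant k in 1/kGy. A straight-line fit of ln C
      # against D recovers k. Data points below are placeholders.
      dose = np.array([0.0, 1.0, 2.0, 4.0, 8.0])        # kGy
      conc = np.array([100.0, 55.0, 30.0, 9.5, 0.9])    # residual conc.
      slope, ln_c0 = np.polyfit(dose, np.log(conc), 1)
      print(f"dose constant k = {-slope:.2f} 1/kGy, C0 = {np.exp(ln_c0):.1f}")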

  9. A comparison of large-scale electron beam and bench-scale ⁶⁰Co irradiations of simulated aqueous waste streams

    International Nuclear Information System (INIS)

    Kurucz, Charles N.; Waite, Thomas D.; Otano, Suzana E.; Cooper, William J.; Nickelsen, Michael G.

    2002-01-01

    The effectiveness of using high energy electron beam irradiation for the removal of toxic organic chemicals from water and wastewater has been demonstrated by commercial-scale experiments conducted at the Electron Beam Research Facility (EBRF) located in Miami, Florida and elsewhere. The EBRF treats various waste and water streams up to 450 l min⁻¹ (120 gal min⁻¹) with doses up to 8 kilogray (kGy). Many experiments have been conducted by injecting toxic organic compounds into various plant feed streams and measuring the concentrations of compound(s) before and after exposure to the electron beam at various doses. Extensive experimentation has also been performed by dissolving selected chemicals in 22,700 l (6000 gal) tank trucks of potable water to simulate contaminated groundwater, and pumping the resulting solutions through the electron beam. These large-scale experiments, although necessary to demonstrate the commercial viability of the process, require a great deal of time and effort. This paper compares the results of large-scale electron beam irradiations to those obtained from bench-scale irradiations using gamma rays generated by a ⁶⁰Co source. Dose constants from exponential contaminant removal models are found to depend on the source of radiation and initial contaminant concentration. Possible reasons for observed differences such as a dose rate effect are discussed. Models for estimating electron beam dose constants from bench-scale gamma experiments are presented. Data used to compare the removal of organic compounds using gamma irradiation and electron beam irradiation are taken from the literature and a series of experiments designed to examine the effects of pH, the presence of turbidity, and initial concentration on the removal of various organic compounds (benzene, toluene, phenol, PCE, TCE and chloroform) from simulated groundwater.

  10. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations

    Science.gov (United States)

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-01

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such, it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact, at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated with nucleation due to the small, finite size of typical simulation boxes. In this work the time-scale problem is addressed with a recently developed enhanced sampling method while simultaneously correcting for finite size effects. We demonstrate our approach by studying the condensation of argon, and show that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.
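
    The enhanced sampling route referred to here (infrequent/well-tempered metadynamics in the Tiwary-Parrinello spirit) recovers unbiased nucleation times by rescaling each biased first-passage time with an acceleration factor and checking that the rescaled times are Poisson distributed. A sketch with synthetic inputs (requires NumPy and SciPy):

      import numpy as np
      from scipy import stats

      # Each biased run yields a first-passage time t_md and an acceleration
      # factor alpha = <exp(beta * V_bias)> accumulated along the run. The
      # rescaled times t_md * alpha should be exponentially distributed;
      # their mean is the characteristic nucleation time. Synthetic data.
      rng = np.random.default_rng(0)
      t_md = rng.exponential(2.0, size=50)       # biased times [ns]
      alpha = rng.lognormal(8.0, 0.5, size=50)   # per-run acceleration
      t_true = t_md * alpha
      tau = t_true.mean()
      ks = stats.kstest(t_true / tau, "expon")   # Poisson-process check
      print(f"rate = {1.0 / tau:.3e} 1/ns, KS p-value = {ks.pvalue:.2f}")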

  11. Concept of scaled test facility for simulating the PWR thermalhydraulic behaviour

    International Nuclear Information System (INIS)

    Silva Filho, E.

    1990-01-01

    This work deals with the design of a scaled test facility representing a typical pressurized water reactor plant, for the simulation of small-break loss-of-coolant accidents. The computer code RELAP5/MOD1 has been used to simulate the accident and to compare the test facility behaviour with that of the reactor plant. The results demonstrate similar thermal-hydraulic behaviour of the two systems. (author)

  12. Benchmarking and scaling studies of pseudospectral code Tarang for turbulence simulations

    KAUST Repository

    VERMA, MAHENDRA K

    2013-09-21

    Tarang is a general-purpose pseudospectral parallel code for simulating flows involving fluids, magnetohydrodynamics, and Rayleigh–Bénard convection in turbulence and instability regimes. In this paper we present code validation and benchmarking results of Tarang. We performed our simulations on 1024³, 2048³, and 4096³ grids using the HPC system of IIT Kanpur and Shaheen of KAUST. We observe good ‘weak’ and ‘strong’ scaling for Tarang on these systems.
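
    The quoted ‘weak’ and ‘strong’ scaling are conventionally reported as parallel efficiencies; the bookkeeping is a few lines (timings below are placeholders, not Tarang measurements):

      # Strong scaling: fixed total problem size, increasing core count.
      # Weak scaling: fixed work per core. Ideal efficiency is 1.0.
      cores = [1024, 2048, 4096, 8192]
      t_strong = [100.0, 52.0, 27.5, 15.0]   # s/step, placeholders
      t_weak = [40.0, 41.5, 43.0, 46.0]      # s/step, per-core load fixed
      for n, t in zip(cores, t_strong):
          eff = t_strong[0] * cores[0] / (n * t)
          print(f"strong, {n} cores: speedup {t_strong[0] / t:.1f}, eff {eff:.2f}")
      for n, t in zip(cores, t_weak):
          print(f"weak,   {n} cores: eff {t_weak[0] / t:.2f}")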

  13. Establishment of DNS database in a turbulent channel flow by large-scale simulations

    OpenAIRE

    Abe, Hiroyuki; Kawamura, Hiroshi; 阿部 浩幸; 河村 洋

    2008-01-01

    In the present study, we establish a statistical DNS (Direct Numerical Simulation) database of turbulent channel flow with passive scalar transport at high Reynolds numbers and make the data available at our web site (http://murasun.me.noda.tus.ac.jp/turbulence/). The established database is reported together with the implementation of the large-scale simulations, representative DNS results, and results on turbulence model testing using the DNS data.

  14. Benchmarking and scaling studies of pseudospectral code Tarang for turbulence simulations

    KAUST Repository

    VERMA, MAHENDRA K; CHATTERJEE, ANANDO; REDDY, K SANDEEP; YADAV, RAKESH K; PAUL, SUPRIYO; CHANDRA, MANI; Samtaney, Ravi

    2013-01-01

    Tarang is a general-purpose pseudospectral parallel code for simulating flows involving fluids, magnetohydrodynamics, and Rayleigh–Bénard convection in turbulence and instability regimes. In this paper we present code validation and benchmarking results of Tarang. We performed our simulations on 1024³, 2048³, and 4096³ grids using the HPC system of IIT Kanpur and Shaheen of KAUST. We observe good ‘weak’ and ‘strong’ scaling for Tarang on these systems.

  15. Numerical Simulation on Hydromechanical Coupling in Porous Media Adopting Three-Dimensional Pore-Scale Model

    Science.gov (United States)

    Liu, Jianjun; Song, Rui; Cui, Mengmeng

    2014-01-01

    A novel approach to simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore and confining pressures, are measured at laboratory scale. A micro-CT scanner is employed to scan the samples for three-dimensional images, used as input to construct the model. Accordingly, four physical models possessing the same pore and rock matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, three-dimensional finite element models of both the rock matrix and the pore space are established on the MIMICS and ICEM software platforms. The Navier-Stokes equation and the elastic constitutive equation are used as the mathematical model for the simulation. A hydromechanical coupling analysis in a pore-scale finite element model of porous media is performed with the ANSYS and CFX software, and the permeability of the sandstone samples under different pore and confining pressures is predicted. The simulation results agree well with the benchmark data. By reproducing the stress state of the rock underground, the prediction accuracy of porous rock permeability in pore-scale simulation is improved. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view. PMID:24955384

  16. Initial condition effects on large scale structure in numerical simulations of plane mixing layers

    Science.gov (United States)

    McMullan, W. A.; Garrett, S. J.

    2016-01-01

    In this paper, Large Eddy Simulations are performed on the spatially developing plane turbulent mixing layer. The simulated mixing layers originate from initially laminar conditions. The focus of this research is on the effect of the nature of the imposed fluctuations on the large-scale spanwise and streamwise structures in the flow. Two simulations are performed: one with low-level three-dimensional inflow fluctuations obtained from pseudo-random numbers, the other with physically correlated fluctuations of the same magnitude obtained from an inflow generation technique. Where white-noise fluctuations provide the inflow disturbances, no spatially stationary streamwise vortex structure is observed, and the large-scale spanwise turbulent vortical structures grow continuously and linearly. These structures are observed to have a three-dimensional internal geometry with branches and dislocations. Where physically correlated fluctuations provide the inflow disturbances, a "streaky" streamwise structure that is spatially stationary is observed, with the large-scale turbulent vortical structures growing with the square root of time. These large-scale structures are quasi-two-dimensional, on top of which the secondary structure rides. The simulation results are discussed in the context of the varying interpretations of mixing layer growth that have been postulated. Recommendations are made concerning the data required from experiments in order to produce accurate numerical simulation recreations of real flows.

  17. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in a L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
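
    The filtering step described above, a scale-dependent linear bias applied to a density field in Fourier space, can be sketched in a few lines. The bias form and all parameter values below are illustrative stand-ins for the paper's calibrated parametric function:

      import numpy as np

      # delta_z(k) = b(k) * delta_m(k): filter a density contrast field with
      # a scale-dependent bias to get a reionization-redshift field. We
      # assume b(k) = b0 / (1 + k / k0)**alpha with made-up parameters.
      n, box = 128, 100.0                       # grid points, Mpc/h
      rng = np.random.default_rng(1)
      delta_m = rng.normal(size=(n, n, n))      # stand-in density field
      k1d = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
      kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
      kmag = np.sqrt(kx**2 + ky**2 + kz**2)
      b0, k0, alpha = 0.6, 0.2, 1.0             # illustrative bias parameters
      bias = b0 / (1.0 + kmag / k0) ** alpha
      delta_z = np.fft.ifftn(bias * np.fft.fftn(delta_m)).real
      z_reion = 8.0 + 1.0 * delta_z             # map contrast to z_reion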

  18. Biomolecular transport and separation in nanotubular networks.

    Energy Technology Data Exchange (ETDEWEB)

    Stachowiak, Jeanne C.; Stevens, Mark Jackson (Sandia National Laboratories, Albuquerque, NM); Robinson, David B.; Branda, Steven S.; Zendejas, Frank; Meagher, Robert J.; Sasaki, Darryl Yoshio; Bachand, George David (Sandia National Laboratories, Albuquerque, NM); Hayden, Carl C.; Sinha, Anupama; Abate, Elisa; Wang, Julia; Carroll-Portillo, Amanda (Sandia National Laboratories, Albuquerque, NM); Liu, Haiqing (Sandia National Laboratories, Albuquerque, NM)

    2010-09-01

    Cell membranes are dynamic substrates that achieve a diverse array of functions through multi-scale reconfigurations. We explore the morphological changes that occur upon protein interaction with model membrane systems, which induce deformation of their planar structure to yield nanotube assemblies. In the two examples shown in this report, we describe the formation of lipid nanotubes via mechanical stretching driven by membrane adhesion and particle trajectory, and the induction of membrane curvature through steric pressure from protein adsorption onto domains. Through this work the relationships between membrane bending rigidity, protein affinity, and line tension of phase-separated structures were examined, and their role in biological membranes explored.

  19. Biomolecular Structure Information from High-Speed Quantum Mechanical Electronic Spectra Calculation.

    Science.gov (United States)

    Seibert, Jakob; Bannwarth, Christoph; Grimme, Stefan

    2017-08-30

    A fully quantum mechanical (QM) treatment to calculate electronic absorption (UV-vis) and circular dichroism (CD) spectra of typical biomolecules with thousands of atoms is presented. With our highly efficient sTDA-xTB method, spectra averaged along structures from molecular dynamics (MD) simulations can be computed in a reasonable time frame on standard desktop computers. This way, nonequilibrium structure and conformational, as well as purely quantum mechanical effects like charge-transfer or exciton-coupling, are included. Different from other contemporary approaches, the entire system is treated quantum mechanically and neither fragmentation nor system-specific adjustment is necessary. Among the systems considered are a large DNA fragment, oligopeptides, and even entire proteins in an implicit solvent. We propose the method in tandem with experimental spectroscopy or X-ray studies for the elucidation of complex (bio)molecular structures including metallo-proteins like myoglobin.

  20. A Decade-Long European-Scale Convection-Resolving Climate Simulation on GPUs

    Science.gov (United States)

    Leutwyler, D.; Fuhrer, O.; Ban, N.; Lapillonne, X.; Lüthi, D.; Schar, C.

    2016-12-01

    Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that involve conventional multi-core CPUs and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the size of the simulation domain to areas spanning continents and the time period up to one decade. We present results from a decade-long, convection-resolving climate simulation over Europe using the GPU-enabled COSMO version on a computational domain with 1536x1536x60 gridpoints. The simulation is driven by the ERA-interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss some of the advantages and prospects of using GPUs, and focus on the performance of the convection-resolving modeling approach on the European scale. Specifically, we investigate the organization of convective clouds and validate hourly rainfall distributions against various high-resolution data sets.

  1. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D.

    Science.gov (United States)

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H

    2017-08-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by the addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.

  2. Simulation of FRET dyes allows quantitative comparison against experimental data

    Science.gov (United States)

    Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander

    2018-03-01

    Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only a few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies which quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other leading to new insights into biomolecular dynamics and function.
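
    The step from simulated structures to FRET efficiencies rests on the Förster relation; a minimal sketch of a trajectory average follows, with placeholder distances and Förster radius (a full comparison would also account for dye dynamics relative to the fluorescence lifetime):

      import numpy as np

      # Instantaneous FRET efficiency from the donor-acceptor distance r:
      # E(r) = 1 / (1 + (r / R0)**6); the trajectory mean is compared with
      # the measured transfer efficiency. r and R0 below are placeholders.
      rng = np.random.default_rng(2)
      r = rng.normal(5.0, 0.8, size=10_000)   # dye-dye distances [nm]
      R0 = 5.4                                # Foerster radius [nm]
      E = 1.0 / (1.0 + (r / R0) ** 6)
      sem = E.std(ddof=1) / np.sqrt(E.size)
      print(f"<E> = {E.mean():.3f} +/- {sem:.3f}")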

  3. Anaerobic Digestion and Biogas Potential: Simulation of Lab and Industrial-Scale Processes

    OpenAIRE

    Ihsan Hamawand; Craig Baillie

    2015-01-01

    In this study, a simulation was carried out using BioWin 3.1 to test the capability of the software to predict the biogas potential for two different anaerobic systems. The two scenarios included: (1) a laboratory-scale batch reactor; and (2) an industrial-scale anaerobic continuous lagoon digester. The measured data related to the operating conditions, the reactor design parameters and the chemical properties of influent wastewater were entered into BioWin. A sensitivity analysis was carried...

  4. Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.

    Science.gov (United States)

    Serebrinsky, Santiago A

    2011-03-01

    We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
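
    For the rejection-free (BKL) variant, the physical time increment per event is the standard exponential waiting time; a toy sketch with placeholder rates:

      import numpy as np

      # Rejection-free (BKL) kinetic Monte Carlo: pick an event with
      # probability proportional to its rate, then advance physical time by
      # dt = -ln(u) / R_total, u uniform in (0, 1], R_total the total rate
      # of the current state. The three rates here are placeholders.
      rng = np.random.default_rng(3)
      rates = np.array([1.0e3, 2.5e2, 5.0e1])   # event rates [1/s]
      t = 0.0
      counts = np.zeros(rates.size)
      for _ in range(100_000):
          R = rates.sum()
          i = np.searchsorted(np.cumsum(rates), rng.uniform(0.0, R))
          counts[i] += 1
          t += -np.log(1.0 - rng.random()) / R  # physical time increment
      print(f"t = {t:.4f} s, event fractions = {counts / counts.sum()}")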

  5. The mechanical design and simulation of a scaled H⁻ Penning ion source

    Energy Technology Data Exchange (ETDEWEB)

    Rutter, T., E-mail: theo.rutter@stfc.ac.uk; Faircloth, D.; Turner, D.; Lawrie, S. [Rutherford Appleton Laboratory, Didcot OX110QX (United Kingdom)

    2016-02-15

    The existing ISIS Penning H⁻ source is unable to produce the beam parameters required for the front end test stand and so a new, high duty factor, high brightness scaled source is being developed. This paper details first the development of an electrically biased aperture plate for the existing ISIS source and, second, the design, simulation, and development of a prototype scaled source.

  6. The gyro-radius scaling of ion thermal transport from global numerical simulations of ITG turbulence

    International Nuclear Information System (INIS)

    Ottaviani, M.; Manfredi, G.

    1998-12-01

    A three-dimensional, fluid code is used to study the scaling of ion thermal transport caused by Ion-Temperature-Gradient-Driven (ITG) turbulence. The code includes toroidal effects and is capable of simulating the whole torus. It is found that both close to the ITG threshold and well above threshold, the thermal transport and the turbulence structures exhibit a gyro-Bohm scaling, at least for plasmas with moderate poloidal flow. (author)
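
    For orientation, the gyro-Bohm scaling named above relates the thermal diffusivity to the Bohm value through the normalized gyroradius; in standard notation (quoted as the usual definitions, not reproduced from the paper itself),

      \chi_{\mathrm{B}} \sim \frac{T}{16\,eB}, \qquad
      \chi_{\mathrm{gB}} = \rho_* \, \chi_{\mathrm{B}}, \qquad
      \rho_* = \frac{\rho_i}{a},

    so transport whose turbulent structures scale with the gyroradius rather than with the device size follows the gyro-Bohm rather than the Bohm scaling.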

  7. Numerical simulation of small-scale mixing processes in the upper ocean and atmospheric boundary layer

    International Nuclear Information System (INIS)

    Druzhinin, O; Troitskaya, Yu; Zilitinkevich, S

    2016-01-01

    The processes of turbulent mixing and momentum and heat exchange occur in the upper ocean at depths up to several dozens of meters, and in the atmospheric boundary layer on scales from millimeters to dozens of meters, and cannot be resolved by known large-scale climate models. Thus small-scale processes need to be parameterized with respect to large-scale fields. This parameterization involves the so-called bulk coefficients, which relate turbulent fluxes to the gradients of the large-scale fields. The bulk coefficients depend on the properties of the small-scale mixing processes, which are affected by the upper-ocean stratification and the characteristics of surface and internal waves. These dependencies are not well understood at present and need to be clarified. We employ Direct Numerical Simulation (DNS) as a research tool which resolves all relevant flow scales and does not require the closure assumptions typical of Large-Eddy and Reynolds-Averaged Navier-Stokes simulations (LES and RANS). Thus DNS provides a solid ground for correct parameterization of small-scale mixing processes and can also be used for improving LES and RANS closure models. In particular, we discuss the problems of the interaction between small-scale turbulence and internal gravity waves propagating in the pycnocline in the upper ocean, as well as the impact of surface waves on the properties of the atmospheric boundary layer over a wavy water surface. (paper)

  8. [The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].

    Science.gov (United States)

    Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang

    2009-08-01

    Computer simulation is based on computer graphics to generate a realistic 3D structure scene of vegetation and to simulate the canopy radiation regime using the radiosity method. In the present paper, the authors extend the computer simulation model to simulate forest canopy bidirectional reflectance at pixel scale. Trees, however, are complex structures: they are tall and have many branches, so hundreds of thousands or even millions of facets are needed to build up a realistic structure scene for a forest, more than the radiosity method can compute directly. To make the radiosity method applicable to forest scenes at pixel scale, the authors simplified the structure of the forest crowns by abstracting them as ellipsoids, and assigned the optical properties of the ellipsoid surface facets based on the optical characteristics of the tree components and the internal photon transport within a real crown. Following the idea of geometrical-optics models, a gap model is used to obtain the forest canopy bidirectional reflectance at pixel scale. The computer simulation results agree with the GOMS model and with Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing data (BRF), although some problems remain to be solved. The authors conclude that the study has important value for the application of multi-angle remote sensing and the inversion of vegetation canopy structure parameters.
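
    At its core the radiosity method reduces to one linear system over the scene's facets, B = E + rho F B; a minimal sketch with a random stand-in form-factor matrix (real form factors come from the scene geometry):

      import numpy as np

      # Radiosity B of m facets satisfies B = E + rho * F @ B, with F the
      # form-factor matrix, rho the facet reflectance, and E the unscattered
      # source term. F, rho, and E below are random placeholders.
      m = 200
      rng = np.random.default_rng(4)
      F = rng.uniform(0.0, 1.0, (m, m))
      np.fill_diagonal(F, 0.0)                 # a facet does not see itself
      F /= F.sum(axis=1, keepdims=True)        # rows sum to 1 (closed scene)
      rho = np.full(m, 0.45)                   # leaf reflectance, illustrative
      E = rng.uniform(0.0, 1.0, m)             # direct (unscattered) term
      B = np.linalg.solve(np.eye(m) - rho[:, None] * F, E)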

  9. Experiment-scale molecular simulation study of liquid crystal thin films

    Science.gov (United States)

    Nguyen, Trung Dac; Carrillo, Jan-Michael Y.; Matheson, Michael A.; Brown, W. Michael

    2014-03-01

    Supercomputers have now reached a performance level adequate for studying thin films with molecular detail at the relevant scales. By exploiting the power of GPU accelerators on Titan, we have been able to perform simulations of characteristic liquid crystal films that provide remarkable qualitative agreement with experimental images. We have demonstrated that key features of spinodal instability can only be observed with sufficiently large system sizes, which were not accessible with previous simulation studies. Our study emphasizes the capability and significance of petascale simulations in providing molecular-level insights in thin film systems as well as other interfacial phenomena.

  10. Large-Scale Brain Simulation and Disorders of Consciousness. Mapping Technical and Conceptual Issues

    Directory of Open Access Journals (Sweden)

    Michele Farisco

    2018-04-01

    Full Text Available Modeling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain’s operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain with the aim of overcoming the actual fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders of consciousness (DOCs, e.g., Coma, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State. Notwithstanding their technical limitations, we suggest that simulation technologies may offer new solutions to old practical problems, particularly in clinical contexts. We take DOCs as an illustrative case, arguing that the simulation of neural correlates of consciousness is potentially useful for improving treatments of patients with DOCs.

  11. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    Directory of Open Access Journals (Sweden)

    C. M. R. Mateo

    2017-10-01

    Full Text Available Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash–Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.

  12. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    Science.gov (United States)

    Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan

    2017-10-01

    Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash-Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.
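
    The Nash–Sutcliffe efficiency quoted in both records is a one-line skill score (1 is a perfect match; 0 means no better than predicting the observed mean); a minimal sketch with placeholder discharge series:

      import numpy as np

      # NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2)
      def nse(sim, obs):
          sim, obs = np.asarray(sim, float), np.asarray(obs, float)
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      obs = [120.0, 340.0, 810.0, 560.0, 300.0]   # observed discharge [m^3/s]
      sim = [100.0, 360.0, 700.0, 600.0, 280.0]   # simulated discharge
      print(f"NSE = {nse(sim, obs):.2f}")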

  13. Sensitivities of simulated satellite views of clouds to subgrid-scale overlap and condensate heterogeneity

    Energy Technology Data Exchange (ETDEWEB)

    Hillman, Benjamin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marchand, Roger T. [Univ. of Washington, Seattle, WA (United States); Ackerman, Thomas P. [Univ. of Washington, Seattle, WA (United States)

    2017-08-01

    Satellite simulators are often used to account for limitations in satellite retrievals of cloud properties in comparisons between models and satellite observations. The purpose of the simulator framework is to enable more robust evaluation of model cloud properties, so that differences between models and observations can more confidently be attributed to model errors. However, these simulators are subject to uncertainties themselves. A fundamental uncertainty exists in connecting the spatial scales at which cloud properties are retrieved with those at which clouds are simulated in global models. In this study, we create a series of sensitivity tests using 4 km global model output from the Multiscale Modeling Framework to evaluate the sensitivity of simulated satellite retrievals when applied to climate models whose grid spacing is many tens to hundreds of kilometers. In particular, we examine the impact of cloud and precipitation overlap and of condensate spatial variability. We find the simulated retrievals are sensitive to these assumptions. Specifically, using maximum-random overlap with homogeneous cloud and precipitation condensate, which is often used in global climate models, leads to large errors in MISR and ISCCP-simulated cloud cover and in CloudSat-simulated radar reflectivity. To correct for these errors, an improved treatment of unresolved clouds and precipitation is implemented for use with the simulator framework and is shown to substantially reduce the identified errors.
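
    The maximum-random overlap assumption mentioned above has a standard closed form (in the manner of Geleyn and Hollingsworth) for the total cloud cover seen from above; a sketch with placeholder layer fractions:

      # Total cloud cover under maximum-random overlap: adjacent cloudy
      # layers overlap maximally, layers separated by clear air overlap
      # randomly. Layer cloud fractions below are placeholders.
      def total_cover_max_random(fractions):
          c_prev, clear = 0.0, 1.0
          for c in fractions:
              clear *= (1.0 - max(c, c_prev)) / (1.0 - min(c_prev, 0.999999))
              c_prev = c
          return 1.0 - clear

      layers = [0.2, 0.5, 0.0, 0.3]   # cloud fraction per layer, top-down
      print(f"total cloud cover = {total_cover_max_random(layers):.3f}")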

  14. Multi-scale Modeling of Compressible Single-phase Flow in Porous Media using Molecular Simulation

    KAUST Repository

    Saad, Ahmed Mohamed

    2016-05-01

    In this study, an efficient coupling between Monte Carlo (MC) molecular simulation and Darcy-scale flow in porous media is presented. The cell-centered finite difference method with a non-uniform rectangular mesh was used to discretize the simulation domain and solve the governing equations. To speed up the MC simulations, we implemented a recently developed scheme that quickly generates MC Markov chains out of pre-computed ones, based on the reweighting and reconstruction algorithm. This method reduces the computational time required by the MC simulations from hours to seconds. In addition, the reweighting and reconstruction scheme, which was originally designed to work with the LJ potential model, is extended to work with a potential model that accounts for the molecular quadrupole moment of fluids with non-spherical molecules such as CO2. The potential model was used to simulate the thermodynamic equilibrium properties of single-phase and two-phase systems using the canonical ensemble and the Gibbs ensemble, respectively. Comparing the simulation results with experimental data showed that the implemented model gives an excellent fit, outperforming the standard LJ model. To demonstrate the strength of the proposed coupling in terms of computational time efficiency and numerical accuracy in fluid properties, various numerical experiments covering different compressible single-phase flow scenarios were conducted. The novelty of the introduced scheme lies in allowing an efficient coupling of the molecular scale and the Darcy scale in reservoir simulators. This leads to an accurate description of the thermodynamic behavior of the simulated reservoir fluids, consequently enhancing the confidence in the flow predictions in porous media.
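
    The generic idea behind reusing pre-computed Markov chains is Boltzmann reweighting: expectation values at new conditions are estimated from configurations sampled at old ones. The sketch below shows only that core idea with synthetic energies, not the paper's full reconstruction scheme:

      import numpy as np

      # Reweight configurations sampled at beta_old to a new inverse
      # temperature beta_new: w_i = exp(-(beta_new - beta_old) * U_i).
      # Energies below are synthetic placeholders.
      rng = np.random.default_rng(5)
      U = rng.normal(-500.0, 10.0, size=20_000)   # sampled energies
      beta_old, beta_new = 1.00, 1.02
      logw = -(beta_new - beta_old) * U
      logw -= logw.max()                          # avoid overflow
      w = np.exp(logw)
      U_new = np.sum(w * U) / np.sum(w)           # reweighted <U>
      print(f"<U> at beta_new ~ {U_new:.1f}")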

  15. Large-scale simulations of plastic neural networks on neuromorphic hardware

    Directory of Open Access Journals (Sweden)

    James Courtney Knight

    2016-04-01

    Full Text Available SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses considerably more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
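
    In simplified form, the BCPNN weight is the log odds of coactivity estimated from low-pass-filtered spike traces, w = log(p_ij / (p_i p_j)). The sketch below uses a single trace stage with illustrative constants; the actual rule (and its fixed-point SpiNNaker implementation) uses a cascade of traces:

      import numpy as np

      # Single-stage illustration of BCPNN: exponential traces of pre-,
      # post-, and co-activity feed the weight w = log(p_ij / (p_i * p_j)).
      # Rates, time constants, and the floor eps are illustrative.
      rng = np.random.default_rng(6)
      dt, tau, eps = 1.0, 1000.0, 0.01      # ms, trace constant, floor
      pi = pj = pij = eps
      for step in range(10_000):
          si = rng.random() < 0.02          # pre spike this step?
          sj = rng.random() < 0.02          # post spike this step?
          k = dt / tau
          pi += k * (si - pi)
          pj += k * (sj - pj)
          pij += k * ((si & sj) - pij)
          w = np.log(max(pij, eps * eps) / (pi * pj))
      print(f"final weight ~ {w:.3f} (about 0 for independent spiking)")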

  16. Large-scale agent-based social simulation : A study on epidemic prediction and control

    NARCIS (Netherlands)

    Zhang, M.

    2016-01-01

    Large-scale agent-based social simulation is gradually proving to be a versatile methodological approach for studying human societies, which could make contributions from policy making in social science, to distributed artificial intelligence and agent technology in computer science, and to theory

  17. A simulation study provided sample size guidance for differential item functioning (DIF) studies using short scales

    DEFF Research Database (Denmark)

    Scott, Neil W.; Fayers, Peter M.; Bottomley, Andrew

    2009-01-01

    Differential item functioning (DIF) analyses are increasingly used to evaluate health-related quality of life (HRQoL) instruments, which often include relatively short subscales. Computer simulations were used to explore how various factors, including scale length, affect analysis of DIF by ordinal logistic regression.

  18. Role of cardiolipins in the inner mitochondrial membrane: insight gained through atom-scale simulations

    DEFF Research Database (Denmark)

    Róg, Tomasz; Martinez-Seara, Hector; Munck, Nana

    2009-01-01

    The exceptional nature of cardiolipins is characterized by their small charged head group connected to typically four hydrocarbon chains. In this work, we present atomic-scale molecular dynamics simulations of the inner mitochondrial membrane modeled as a mixture of cardiolipins (CLs) and phosphatidylcholines (PCs)...

  19. Design of full scale wave simulator for testing Power Take Off systems for wave energy converters

    DEFF Research Database (Denmark)

    Pedersen, H. C.; Hansen, R. H.; Hansen, Anders Hedegaard

    2016-01-01

    The focus is therefore on the design and commissioning of a full-scale wave simulator for testing PTO systems for point absorbers. The challenge is to design a system which mimics the behavior of a wave when interacting with a given PTO system – especially when considering discrete-type PTO...

  20. How well do terrestrial biosphere models simulate coarse-scale runoff in the contiguous United States?

    Science.gov (United States)

    C.R. Schwalm; D.N. Huntzinger; R.B. Cook; Y. Wei; I.T. Baker; R.P. Neilson; B. Poulter; Peter Caldwell; G. Sun; H.Q. Tian; N. Zeng

    2015-01-01

    Significant changes in the water cycle are expected under current global environmental change. Robust assessment of present-day water cycle dynamics at continental to global scales is confounded by shortcomings in the observed record. Modeled assessments also yield conflicting results which are linked to differences in model structure and simulation protocol. Here we...

  1. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2018-05-01

    Full Text Available Computing speed is a significant issue for large-scale flood simulations when responding in real time for disaster prevention and mitigation. Even today, most large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based high-performance computing method using OpenACC was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transport between the GPU and the Central Processing Unit (CPU) with minimum overhead, and then both computation and data were offloaded from the CPU to the GPU, which exploited the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas and, thus, has bright application prospects for dynamic inundation risk identification and disaster assessment.

  2. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. Large-scale simulations here mean simulations whose scale variety and physical complexity are such that corresponding experiments and/or theories do not exist. In the nuclear field, it is indispensable to estimate real phenomena from simulations in order to improve the safety and security of nuclear power plants. Here, the analysis of uncertainty included in simulations is needed to reveal the sensitivity of uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge, and to establish a degree of certainty through verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)' as a set of computing processes with deductive and inductive approaches, by reference to human reasoning processes. Our idea is to execute deductive and inductive simulations contrasted with deductive and inductive approaches. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective in reducing the uncertainty and obtaining a degree of certainty. (author)

  3. A new synoptic scale resolving global climate simulation using the Community Earth System Model

    Science.gov (United States)

    Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana

    2014-12-01

    High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was at 0.25° grid spacing, and ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El-Niño Southern Oscillation variability were well simulated compared to standard resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and Tropical Cyclones. Associated single component runs and standard resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, costing 250 thousand processor-hours per simulated year and made about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."

  4. Simulating Nationwide Pandemics: Applying the Multi-scale Epidemiologic Simulation and Analysis System to Human Infectious Diseases

    Energy Technology Data Exchange (ETDEWEB)

    Dombroski, M; Melius, C; Edmunds, T; Banks, L E; Bates, T; Wheeler, R

    2008-09-24

    This study uses the Multi-scale Epidemiologic Simulation and Analysis (MESA) system developed for foreign animal diseases to assess consequences of nationwide human infectious disease outbreaks. A literature review identified the state of the art in both small-scale regional models and large-scale nationwide models and characterized key aspects of a nationwide epidemiological model. The MESA system offers computational advantages over existing epidemiological models and enables a broader array of stochastic analyses of model runs to be conducted because of those computational advantages. However, it has only been demonstrated on foreign animal diseases. This paper applied the MESA modeling methodology to human epidemiology. The methodology divided 2000 US Census data at the census tract level into school-bound children, work-bound workers, elderly, and stay at home individuals. The model simulated mixing among these groups by incorporating schools, workplaces, households, and long-distance travel via airports. A baseline scenario with fixed input parameters was run for a nationwide influenza outbreak using relatively simple social distancing countermeasures. Analysis from the baseline scenario showed one of three possible results: (1) the outbreak burned itself out before it had a chance to spread regionally, (2) the outbreak spread regionally and lasted a relatively long time, although constrained geography enabled it to eventually be contained without affecting a disproportionately large number of people, or (3) the outbreak spread through air travel and lasted a long time with unconstrained geography, becoming a nationwide pandemic. These results are consistent with empirical influenza outbreak data. The results showed that simply scaling up a regional small-scale model is unlikely to account for all the complex variables and their interactions involved in a nationwide outbreak. There are several limitations of the methodology that should be explored in future

  5. Small Scale Mixing Demonstration Batch Transfer and Sampling Performance of Simulated HLW - 12307

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, Jesse; Townson, Paul; Vanatta, Matt [EnergySolutions, Engineering and Technology Group, Richland, WA, 99354 (United States)

    2012-07-01

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment Plant (WTP) has been recognized as a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. At the end of 2009 DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), awarded a contract to EnergySolutions to design, fabricate and operate a demonstration platform called the Small Scale Mixing Demonstration (SSMD) to establish pre-transfer sampling capacity, and batch transfer performance data at two different scales. These data will be used to examine the baseline capacity for a tank mixed via rotational jet mixers to transfer consistent or bounding batches, and provide scale-up information to predict full-scale operational performance. This information will then in turn be used to define the baseline capacity of such a system to transfer and sample batches sent to WTP. The Small Scale Mixing Demonstration (SSMD) platform consists of 43'' and 120'' diameter clear acrylic test vessels, each equipped with two scaled jet mixer pump assemblies, and all supporting vessels, controls, services, and simulant make-up facilities. All tank internals have been modeled including the air lift circulators (ALCs), the steam heating coil, and the radius between the wall and floor. The test vessels are set up to simulate the transfer of HLW out of a mixed tank, and collect a pre-transfer sample in a manner similar to the proposed baseline configuration. The collected material is submitted to an NQA-1 laboratory for chemical analysis. Previous work has been done to assess tank mixing performance at both scales. This work involved a combination of unique instruments to understand the three dimensional distribution of solids using a combination of Coriolis meter measurements, in situ chord length distribution

  6. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
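
    A generic example of the structure described above, written here for illustration rather than as the authors' exact closure, combines a dissipative eddy-viscosity term with a nondissipative, tensorially nonlinear term:

      \tau_{ij}^{\mathrm{mod}} = -2\,\nu_e\,\bar{S}_{ij}
        + \mu\,(\bar{S}_{ik}\bar{\Omega}_{kj} - \bar{\Omega}_{ik}\bar{S}_{kj}),

    where S̄ and Ω̄ are the resolved rate-of-strain and rate-of-rotation tensors. The first term drains energy; the second is traceless and, since its contraction with S̄ vanishes, redistributes energy without net dissipation, which is the property needed to capture rotation-induced transport.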

  7. Atomistic Simulations of Small-scale Materials Tests of Nuclear Materials

    International Nuclear Information System (INIS)

    Shin, Chan Sun; Jin, Hyung Ha; Kwon, Jun Hyun

    2012-01-01

    Degradation of materials properties under neutron irradiation is one of the key issues affecting the lifetime of nuclear reactors. Evaluating the property changes of materials due to irradiation and understanding the role of microstructural changes in mechanical properties are required for ensuring reliable and safe operation of a nuclear reactor. However, high-dose neutron irradiation capabilities are rather limited, and it is difficult to discriminate the various factors affecting the property changes of materials. Ion beam irradiation can be used to investigate radiation damage to materials in a controlled way, but has the main limitation of a small penetration depth, on the length scale of micrometers. Over the past decade, interest in the investigation of size-dependent mechanical properties has promoted the development of various small-scale materials tests, e.g. nanoindentation and micro/nano-pillar compression tests. Small-scale materials tests can address the issue of the limited penetration depth of ion irradiation. In this paper, we present small-scale materials tests (experiments and simulation) which are applied to study size and irradiation effects on mechanical properties. We have performed molecular dynamics simulations of nanoindentation and nanopillar compression tests. These atomistic simulations are expected to contribute significantly to the investigation of the fundamental deformation mechanisms of small-scale irradiated materials

  8. Large-scale simulation of ductile fracture process of microstructured materials

    International Nuclear Information System (INIS)

    Tian Rong; Wang Chaowei

    2011-01-01

    The promise of computational science in the extreme-scale computing era is to reduce and decompose macroscopic complexities into microscopic simplicities at the expense of high spatial and temporal resolution of computing. In materials science and engineering, the direct combination of 3D microstructure data sets and 3D large-scale simulations provides a unique opportunity for developing a comprehensive understanding of nano/microstructure-property relationships in order to systematically design materials with specific desired properties. In this paper, we present a framework for simulating the ductile fracture process zone in microstructural detail. An experimentally reconstructed microstructural data set is directly embedded into a FE mesh model to improve the simulation fidelity of microstructure effects on fracture toughness. To the best of our knowledge, this is the first time that fracture toughness has been linked directly to multiscale microstructures in a realistic 3D numerical model. (author)

  9. Atomic scale simulations of arsenic ion implantation and annealing in silicon

    International Nuclear Information System (INIS)

    Caturla, M.J.; Diaz de la Rubia, T.; Jaraiz, M.

    1995-01-01

    We present results of multiple-time-scale simulations of 5, 10 and 15 keV low-temperature ion implantation of arsenic on silicon (100), followed by high-temperature anneals. The simulations start with a molecular dynamics (MD) calculation of the primary state of damage after 10 ps. The results are then coupled to a kinetic Monte Carlo (MC) simulation of bulk defect diffusion and clustering. Dose accumulation is achieved by assuming that at low temperatures the damage produced in the lattice is stable. After the desired dose is accumulated, the system is annealed at 800 degrees C for several seconds. The results provide information on the evolution of the damage microstructure over macroscopic length and time scales and afford direct comparison to experimental results. We discuss the database of inputs to the MC model and how it affects the diffusion process

  10. Dynamical properties of fractal networks: Scaling, numerical simulations, and physical realizations

    International Nuclear Information System (INIS)

    Nakayama, T.; Yakubo, K.; Orbach, R.L.

    1994-01-01

    This article describes the advances that have been made over the past ten years on the problem of fracton excitations in fractal structures. The systems relevant to this subject are so numerous that the focus is limited to a specific structure, the percolating network. Recent progress has followed three directions: scaling, numerical simulations, and experiment. In a happy coincidence, large-scale computations, especially those involving array processors, have become possible in recent years. Experimental techniques such as light- and neutron-scattering experiments have also been developed. Together, they form the basis for a review article useful as a guide to understanding these developments and for charting future research directions. In addition, new numerical simulation results for the dynamical properties of diluted antiferromagnets are presented and interpreted in terms of scaling arguments. The authors hope this article will bring the major advances and future issues facing this field into clearer focus, and will stimulate further research on the dynamical properties of random systems.

  11. Gyrokinetic Simulations of Solar Wind Turbulence from Ion to Electron Scales

    International Nuclear Information System (INIS)

    Howes, G. G.; TenBarge, J. M.; Dorland, W.; Numata, R.; Quataert, E.; Schekochihin, A. A.; Tatsuno, T.

    2011-01-01

    A three-dimensional, nonlinear gyrokinetic simulation of plasma turbulence resolving scales from the ion to the electron gyroradius with a realistic mass ratio is presented, where all damping is provided by resolved physical mechanisms. The resulting energy spectra are quantitatively consistent with a magnetic power spectrum scaling of k^-2.8, as observed in in situ spacecraft measurements of the 'dissipation range' of solar wind turbulence. Despite the strongly nonlinear nature of the turbulence, the linear kinetic Alfven wave mode quantitatively describes the polarization of the turbulent fluctuations. The collisional ion heating is measured at sub-ion-Larmor-radius scales, which provides evidence of the ion entropy cascade in an electromagnetic turbulence simulation.

  12. Toward multi-scale simulation of reconnection phenomena in space plasma

    Science.gov (United States)

    Den, M.; Horiuchi, R.; Usami, S.; Tanaka, T.; Ogawa, T.; Ohtani, H.

    2013-12-01

    Magnetic reconnection is considered to play an important role in space phenomena such as substorms in the Earth's magnetosphere. It is well known that magnetic reconnection is controlled by microscopic kinetic mechanisms. The frozen-in condition is broken due to particle kinetic effects, and collisionless reconnection is triggered when the current sheet is compressed to a thickness of the order of the ion kinetic scales under the influence of an external driving flow. On the other hand, the configuration of the magnetic field leading to the formation of the diffusion region is determined on the macroscopic scale, and the topological change after reconnection is also expressed on the macroscopic scale. Thus, magnetic reconnection is a typical multi-scale phenomenon in which microscopic and macroscopic physics are strongly coupled. Recently, Horiuchi et al. developed an effective resistivity model based on particle-in-cell (PIC) simulation results obtained in a study of collisionless driven reconnection and applied it to a global magnetohydrodynamics (MHD) simulation of a substorm in the Earth's magnetosphere. They reproduced global substorm behavior, such as dipolarization and flux rope formation, in a global three-dimensional MHD simulation. Usami et al. developed a multi-hierarchy simulation model in which macroscopic and microscopic physics are solved self-consistently and simultaneously. Based on the domain decomposition method, this model consists of three parts: an MHD algorithm for macroscopic global dynamics, a PIC algorithm for microscopic kinetic physics, and an interface algorithm to interlock the macro and micro hierarchies. They verified the interface algorithm by simulating a plasma injection flow. In their latest work, this model was applied to collisionless reconnection in an open system, and magnetic reconnection was successfully observed. In this paper, we describe our approach to clarifying such multi-scale phenomena and report its current status, including our recent study on the extension of the MHD domain to the global system.

  13. Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ; Tang, Y.; Liu, H.; Yoon, Hongkyu; Kang, Qinjun; Joekar Niasar, Vahid; Balhoff, Matthew; Dewers, T.; Tartakovsky, Guzel D.; Leist, Emily AE; Hess, Nancy J.; Perkins, William A.; Rakowski, Cynthia L.; Richmond, Marshall C.; Serkowski, John A.; Werth, Charles J.; Valocchi, Albert J.; Wietsma, Thomas W.; Zhang, Changyong

    2016-08-01

    Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow-focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two were based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The learning experiments were used by the PN models to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used these experiments to appropriately discretize the grid representations. The continuum model used published nonlinear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.
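
    The nonlinear dependence of the dispersion coefficient on the Peclet number reported above is conventionally condensed into a power law, D_L/D_m = a Pe^b, fitted in log-log space; a continuum model needs exactly such a relation to set its dispersivity input. A hedged sketch of that reduction, with placeholder data standing in for the benchmark measurements:

        import numpy as np

        def peclet(velocity, grain_diameter, d_m):
            """Pe = v * d / D_m for mean pore velocity v and grain diameter d."""
            return velocity * grain_diameter / d_m

        def fit_dispersion_power_law(pe, dl_over_dm):
            """Fit D_L / D_m = a * Pe^b by linear least squares in log-log space."""
            b, log_a = np.polyfit(np.log(pe), np.log(dl_over_dm), 1)
            return np.exp(log_a), b

        # placeholder values, not the micromodel results
        pe = np.array([1.0, 10.0, 50.0, 200.0])
        dl_over_dm = np.array([1.2, 6.0, 22.0, 70.0])
        a, b = fit_dispersion_power_law(pe, dl_over_dm)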

  14. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balance

    Science.gov (United States)

    Tang, G.; Bartlein, P. J.

    2012-01-01

    Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the conterminous United States (US). Toward these ends, we first introduce the "LPJ-Hydrology" (LH) model, developed by incorporating satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of simulating them dynamically. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variations of monthly actual ET (R2 = 0.61), soil moisture (R2 > 0.46) and surface runoff (R2 > 0.52) relative to observed values over the years 1982-2006. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH better simulates monthly stream flow in winter and early spring by incorporating the effect of solar radiation on snowmelt. Overall, this study demonstrates the feasibility of incorporating satellite-based land covers into a DGVM for simulating large spatial scale land surface water balance. The LH model developed in this study should be a useful tool for studying the effects of climate and land cover change on land surface hydrology at large spatial scales.

  15. The development of an industrial-scale fed-batch fermentation simulation.

    Science.gov (United States)

    Goldrick, Stephen; Ştefan, Andrei; Lovett, David; Montague, Gary; Lennox, Barry

    2015-01-10

    This paper describes a simulation of an industrial-scale fed-batch fermentation that can be used as a benchmark in process systems analysis and control studies. The simulation was developed using a mechanistic model and validated using historical data collected from an industrial-scale penicillin fermentation process. Each batch was carried out in a 100,000 L bioreactor that used an industrial strain of Penicillium chrysogenum. The manipulated variables recorded during each batch were used as inputs to the simulator, and the predicted outputs were then compared with the on-line and off-line measurements recorded in the real process. The simulator adapted a previously published structured model to describe the penicillin fermentation and extended it to include the main environmental effects of dissolved oxygen, viscosity, temperature, pH and dissolved carbon dioxide. In addition, the effects of nitrogen and phenylacetic acid concentrations on the biomass and penicillin production rates were included. The simulated model predictions of all the on-line and off-line process measurements, including the off-gas analysis, were in good agreement with the batch records. The simulator and industrial process data are available to download at www.industrialpenicillinsimulation.com and can be used to evaluate, study and improve on the current control strategy implemented on this facility.
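
    For readers who want a feel for the mechanistic-model structure behind such a benchmark, a generic fed-batch skeleton with Monod growth and growth-independent product formation is sketched below. It is deliberately not the validated industrial penicillin model of the paper (which adds dissolved oxygen, viscosity, temperature, pH and CO2 effects); every parameter value is an illustrative assumption:

        import numpy as np
        from scipy.integrate import solve_ivp

        def fed_batch(t, y, f_in, s_feed, mu_max, k_s, y_xs, q_p):
            """States: biomass X, substrate S, product P, broth volume V.
            Monod growth plus constant-rate product formation; the feed
            dilutes X and P while adding substrate."""
            X, S, P, V = y
            mu = mu_max * S / (k_s + S)
            dX = mu * X - (f_in / V) * X
            dS = -(mu / y_xs) * X + (f_in / V) * (s_feed - S)
            dP = q_p * X - (f_in / V) * P
            dV = f_in
            return [dX, dS, dP, dV]

        # illustrative parameters: feed rate, feed conc., mu_max, K_s, yield, q_p
        args = (0.05, 400.0, 0.12, 0.2, 0.45, 0.005)
        sol = solve_ivp(fed_batch, (0.0, 200.0), [1.0, 10.0, 0.0, 60.0], args=args)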

  16. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    Energy Technology Data Exchange (ETDEWEB)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento, Trento (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento, Trento (Italy)

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as many propensity updates as possible. In this paper, we analyze the performance of this algorithm in detail and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and form trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure, when needed, is performed collectively in a single operation across the simulations. By exploiting the rejection-based mechanism, the trajectories generated by SRSSA are exact and independent of one another. We test our improvements on real biological systems with a wide range of reaction networks to demonstrate their applicability and efficiency.
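
    The core rejection idea is compact enough to sketch: propensities are bracketed by lower and upper bounds computed from a fluctuation interval around the current state, candidate firings are drawn against the upper bounds, and the exact propensity is evaluated only when the cheap lower-bound test fails. The sketch below assumes monotone (mass-action) propensities and omits all of the authors' data-structure optimizations:

        import numpy as np

        def rssa(x0, nu, prop, t_end, delta=0.1, seed=0):
            """Minimal rejection-based SSA. nu: (M, N) stoichiometry matrix,
            prop(j, state): propensity of reaction j, assumed monotone in the
            state so that prop(j, lo) <= prop(j, x) <= prop(j, hi)."""
            rng = np.random.default_rng(seed)
            x, t, m = x0.astype(float).copy(), 0.0, nu.shape[0]
            while t < t_end:
                lo = np.floor(x * (1.0 - delta))          # fluctuation interval
                hi = np.ceil(x * (1.0 + delta))
                a_lo = np.array([prop(j, lo) for j in range(m)])
                a_hi = np.array([prop(j, hi) for j in range(m)])
                a0_hi = a_hi.sum()
                if a0_hi <= 0.0:
                    return x, t_end                       # nothing can fire
                while np.all(lo <= x) and np.all(x <= hi):
                    t += rng.exponential(1.0 / a0_hi)     # candidate firing time
                    if t >= t_end:
                        return x, t_end
                    j = np.searchsorted(np.cumsum(a_hi), rng.uniform(0.0, a0_hi))
                    u = rng.uniform()
                    # accept cheaply via the lower bound, else evaluate exactly
                    if u * a_hi[j] <= a_lo[j] or u * a_hi[j] <= prop(j, x):
                        x = x + nu[j]                     # fire reaction j
            return x, t

        # toy network A -> B at rate 0.5 * A
        nu = np.array([[-1.0, 1.0]])
        state, t = rssa(np.array([100.0, 0.0]), nu, lambda j, s: 0.5 * s[0], t_end=5.0)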

  17. A Novel CPU/GPU Simulation Environment for Large-Scale Biologically-Realistic Neural Modeling

    Directory of Open Access Journals (Sweden)

    Roger V Hoang

    2013-10-01

    Computational Neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high-performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi-real time by distributing data across heterogeneous clusters of CPUs and GPUs.
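
    The built-in LIF model mentioned above reduces to a one-line state update per time step, which is what makes it so amenable to vectorized CPU/GPU execution. A minimal numpy sketch with generic textbook parameters (not NCS6 defaults):

        import numpy as np

        def lif_step(v, i_syn, dt=1e-4, tau=0.02, v_rest=-0.065,
                     v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
            """One leaky-integrate-and-fire update for a whole population:
            dv/dt = (v_rest - v + R_m * I) / tau, with spike and reset at
            threshold. Units are SI (volts, seconds, amperes, ohms)."""
            v = v + dt * (v_rest - v + r_m * i_syn) / tau
            spiked = v >= v_thresh
            v[spiked] = v_reset
            return v, spiked

        # usage: 1000 neurons driven by noisy synaptic current
        rng = np.random.default_rng(0)
        v = np.full(1000, -0.065)
        for _ in range(100):
            v, spikes = lif_step(v, i_syn=rng.normal(2e-9, 5e-10, size=1000))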

  18. Scanning probe and optical tweezer investigations of biomolecular interactions

    International Nuclear Information System (INIS)

    Rigby-Singleton, Shellie

    2002-01-01

    A complex array of intermolecular forces controls the interactions between and within biological molecules. The desire to explore these fundamental forces empirically has led to the development of several biophysical techniques. Of these, the atomic force microscope (AFM) and the optical tweezers have been employed throughout this thesis to monitor the intermolecular forces involved in biomolecular interactions. The AFM is a well-established force-sensing technique capable of measuring biomolecular interactions at a single-molecule level. However, its versatility had not previously been extended to the investigation of a drug-enzyme complex. The energy landscape for the force-induced dissociation of the DHFR-methotrexate complex was studied, revealing an energy barrier to dissociation located ∼0.3 nm from the bound state. Unfortunately, the AFM has a limited range of accessible loading rates, and in order to profile the complete energy landscape alternative force-sensing instrumentation should be considered, for example the BFP and optical tweezers. Thus, this thesis outlines the development and construction of an optical trap capable of measuring intermolecular forces between biomolecules at the single-molecule level. To demonstrate the force-sensing abilities of the optical setup, proof-of-principle measurements were performed which investigated the interactions between proteins and polymer surfaces subjected to varying degrees of argon plasma treatment. Complementary data were gained from measurements performed independently by the AFM. Changes in polymer resistance to proteins in response to changes in polymer surface chemistry were detected using both AFM and optical tweezers measurements. Finally, the AFM and optical tweezers were employed as ultrasensitive biosensors. Single-molecule investigations of the antibody-antigen interaction between the cardiac troponin I marker and its complementary antibody reveal the impact therapeutic concentrations of heparin have

  19. Instrumental biosensors: new perspectives for the analysis of biomolecular interactions.

    Science.gov (United States)

    Nice, E C; Catimel, B

    1999-04-01

    The use of instrumental biosensors in basic research to measure biomolecular interactions in real time is increasing exponentially. Applications include protein-protein, protein-peptide, DNA-protein, DNA-DNA, and lipid-protein interactions. Such techniques have been applied to, for example, antibody-antigen, receptor-ligand, signal transduction, and nuclear receptor studies. This review outlines the principles of two of the most commonly used instruments and highlights specific operating parameters that will assist in optimising experimental design, data generation, and analysis.

  20. Simulation and scaling analysis of a spherical particle-laden blast wave

    Science.gov (United States)

    Ling, Y.; Balachandar, S.

    2018-05-01

    A spherical particle-laden blast wave, generated by a sudden release of a sphere of compressed gas-particle mixture, is investigated by numerical simulation. The present problem is a multiphase extension of the classic finite-source spherical blast-wave problem. The gas-particle flow can be fully determined by the initial radius of the spherical mixture and the properties of gas and particles. In many applications, the key dimensionless parameters, such as the initial pressure and density ratios between the compressed gas and the ambient air, can vary over a wide range. Parametric studies are thus performed to investigate the effects of these parameters on the characteristic time and spatial scales of the particle-laden blast wave, such as the maximum radius the contact discontinuity can reach and the time when the particle front crosses the contact discontinuity. A scaling analysis is conducted to establish a scaling relation between the characteristic scales and the controlling parameters. A length scale that incorporates the initial pressure ratio is proposed, which is able to approximately collapse the simulation results for the gas flow for a wide range of initial pressure ratios. This indicates that an approximate similarity solution for a spherical blast wave exists, which is independent of the initial pressure ratio. The approximate scaling is also valid for the particle front if the particles are small and closely follow the surrounding gas.
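
    To make the flavor of such a scaling concrete: a characteristic length that folds the initial pressure ratio into the source radius can be built as below. The cube-root exponent is the classical energy-based choice for a finite-source blast; the specific form proposed in the paper may differ, so treat this purely as a hedged sketch:

        def scaled_length(r0, pressure_ratio, exponent=1.0 / 3.0):
            """Characteristic blast length from the source radius r0 and the
            initial pressure ratio p_compressed / p_ambient. The 1/3 exponent
            is the classical point-source (energy-based) choice, assumed here;
            the paper's proposed scale may use a different form."""
            return r0 * pressure_ratio ** exponent

        # usage: a 0.1 m sphere released at 100x ambient pressure -> ~0.46 m
        l_c = scaled_length(0.1, 100.0)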

  2. Energy loss of a high charge bunched electron beam in plasma: Simulations, scaling, and accelerating wakefields

    Directory of Open Access Journals (Sweden)

    J. B. Rosenzweig

    2004-06-01

    The energy loss and gain of a beam in the nonlinear, "blowout" regime of the plasma wakefield accelerator, which features ultrahigh accelerating fields, linear transverse focusing forces, and nonlinear plasma motion, has been asserted, through previous observations in simulations, to scale linearly with beam charge. Additionally, from a recent analysis by Barov et al., it has been concluded that for an infinitesimally short beam the energy loss is indeed predicted to scale linearly with beam charge for arbitrarily large beam charge. This scaling is predicted to hold despite the onset of a relativistic, nonlinear response by the plasma, when the number of beam particles occupying a cubic plasma skin depth exceeds that of plasma electrons within the same volume. This paper explores the deviations from linear energy loss that arise in the case of experimentally relevant finite-length beams, using 2D particle-in-cell simulations. The peak accelerating field in the plasma wave excited behind the finite-length beam is also examined, with the artifact of wave spiking adding to the apparent persistence of linear scaling of the peak field amplitude into the nonlinear regime. At large enough normalized charge, the linear scaling of both decelerating and accelerating fields collapses, with serious consequences for plasma wave excitation efficiency. Using the results of parametric particle-in-cell studies, the implications of these results for observing severe deviations from linear scaling in present and planned experiments are discussed.
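
    The "normalized charge" criterion quoted above, beam particles per cubic plasma skin depth compared with the plasma electrons in the same volume, is a one-line computation. A sketch with SI constants (the example numbers are illustrative, not those of a specific experiment):

        import numpy as np

        E_CHARGE = 1.602176634e-19   # C
        EPS0 = 8.8541878128e-12      # F/m
        M_E = 9.1093837015e-31       # kg
        C_LIGHT = 2.99792458e8       # m/s

        def normalized_charge(n_beam_particles, n0):
            """Ratio of beam particles to plasma electrons within one cubic
            plasma skin depth (c / omega_p)^3; values well above unity mark
            the strongly nonlinear blowout response discussed above."""
            omega_p = np.sqrt(n0 * E_CHARGE**2 / (EPS0 * M_E))
            skin_depth = C_LIGHT / omega_p
            return n_beam_particles / (n0 * skin_depth**3)

        # e.g. 2e10 electrons in a 1e23 m^-3 plasma -> strongly nonlinear (~40)
        q_tilde = normalized_charge(2e10, 1e23)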

  3. Scale-adaptive simulation of a hot jet in cross flow

    Energy Technology Data Exchange (ETDEWEB)

    Duda, B M; Esteve, M-J [AIRBUS Operations S.A.S., Toulouse (France); Menter, F R; Hansen, T, E-mail: benjamin.duda@airbus.com [ANSYS Germany GmbH, Otterfing (Germany)

    2011-12-22

    The simulation of a hot jet in cross flow is of crucial interest for the aircraft industry as it directly impacts aircraft safety and global performance. Due to the highly transient and turbulent character of this flow, simulation strategies are necessary that resolve at least a part of the turbulence spectrum. The high Reynolds numbers for realistic aircraft applications do not permit the use of pure Large Eddy Simulations as the spatial and temporal resolution requirements for wall bounded flows are prohibitive in an industrial design process. For this reason, the hybrid approach of the Scale-Adaptive Simulation is employed, which retains attached boundary layers in well-established RANS regime and allows the resolution of turbulent fluctuations in areas with sufficient flow instabilities and grid refinement. To evaluate the influence of the underlying numerical grid, three meshing strategies are investigated and the results are validated against experimental data.

  5. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment is constructed to simulate the state transition and propagation of individual motions. By modeling the virtual environment in which virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion-path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. Large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  6. Simulation of unsaturated flow and nonreactive solute transport in a heterogeneous soil at the field scale

    International Nuclear Information System (INIS)

    Rockhold, M.L.

    1993-02-01

    A field-scale, unsaturated flow and solute transport experiment at the Las Cruces trench site in New Mexico was simulated as part of a "blind" modeling exercise to demonstrate the ability or inability of uncalibrated models to predict unsaturated flow and solute transport in spatially variable porous media. Simulations were conducted using a recently developed multiphase flow and transport simulator. Uniform and heterogeneous soil models were tested, and data from a previous experiment at the site were used with an inverse procedure to estimate water retention parameters. A spatial moment analysis was used to provide a quantitative basis for comparing the mean observed and simulated flow and transport behavior. The results of this study suggest that defensible predictions of waste migration and fate at low-level waste sites will ultimately require site-specific data for model calibration.
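
    The spatial moment analysis mentioned above amounts to computing the total mass, centroid and second central moments of the measured and simulated plumes, so that mean displacement and spreading can be compared directly. A minimal sketch on a uniform 2-D grid (the synthetic plume is a placeholder for real data):

        import numpy as np

        def plume_moments(conc, x, z):
            """Zeroth moment (mass proxy), centroid and second central
            moments of a 2-D concentration field on a uniform grid."""
            xx, zz = np.meshgrid(x, z, indexing="ij")
            m0 = conc.sum()
            xc, zc = (conc * xx).sum() / m0, (conc * zz).sum() / m0
            sxx = (conc * (xx - xc) ** 2).sum() / m0
            szz = (conc * (zz - zc) ** 2).sum() / m0
            return m0, (xc, zc), (sxx, szz)

        # usage with a synthetic Gaussian plume on a 1 m x 1 m grid
        x = z = np.linspace(0.0, 1.0, 101)
        xx, zz = np.meshgrid(x, z, indexing="ij")
        conc = np.exp(-((xx - 0.4) ** 2 + (zz - 0.6) ** 2) / 0.01)
        m0, centroid, spread = plume_moments(conc, x, z)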

  7. A new solver for granular avalanche simulation: Indoor experiment verification and field scale case study

    Science.gov (United States)

    Wang, XiaoLiang; Li, JiaChun

    2017-12-01

    A new solver based on the high-resolution scheme with novel treatments of source terms and interface capture for the Savage-Hutter model is developed to simulate granular avalanche flows. The capability to simulate flow spread and deposit processes is verified through indoor experiments of a two-dimensional granular avalanche. Parameter studies show that reduction in bed friction enhances runout efficiency, and that lower earth pressure restraints enlarge the deposit spread. The April 9, 2000, Yigong avalanche in Tibet, China, is simulated as a case study by this new solver. The predicted results, including evolution process, deposit spread, and hazard impacts, generally agree with site observations. It is concluded that the new solver for the Savage-Hutter equation provides a comprehensive software platform for granular avalanche simulation at both experimental and field scales. In particular, the solver can be a valuable tool for providing necessary information for hazard forecasts, disaster mitigation, and countermeasure decisions in mountainous areas.

  8. Multi-scale properties of large eddy simulations: correlations between resolved-scale velocity-field increments and subgrid-scale quantities

    Science.gov (United States)

    Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca

    2018-06-01

    We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial-range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a priori filtered data generated from direct numerical simulations (DNS). We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there must be room to improve the SGS modelling to further extend the inertial-range properties for any fixed LES resolution.
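
    As a reminder of the basic object in play, the velocity structure function of order p at separation r is the moment <|u(x + r) - u(x)|^p>; the hierarchy above is built from such quantities plus SGS correlation terms. A minimal estimator for a 1-D periodic slice (the synthetic signal stands in for an LES field):

        import numpy as np

        def structure_functions(u, orders, dx=1.0):
            """Structure functions S_p(r) = <|u(x+r) - u(x)|^p> along one
            periodic direction of a sampled velocity slice u(x)."""
            seps = np.arange(1, u.size // 2)
            s_p = {p: np.empty(seps.size) for p in orders}
            for i, s in enumerate(seps):
                du = np.abs(np.roll(u, -s) - u)   # increments at separation s*dx
                for p in orders:
                    s_p[p][i] = np.mean(du ** p)
            return seps * dx, s_p

        # usage on a rough synthetic signal (not turbulence data)
        rng = np.random.default_rng(0)
        u = np.cumsum(rng.normal(size=1024))
        r, s_p = structure_functions(u, orders=(2, 4))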

  9. Multi-scale simulation of single crystal hollow turbine blade manufactured by liquid metal cooling process

    Directory of Open Access Journals (Sweden)

    Xuewei Yan

    2018-02-01

    The liquid metal cooling (LMC) process, a powerful directional solidification (DS) technique, is prospectively used to manufacture single crystal (SC) turbine blades. An understanding of the temperature distribution and microstructure evolution in the LMC process is required in order to improve the properties of the blades. For this reason, a multi-scale model coupling the temperature field, grain growth and solute diffusion was established. The temperature distribution and mushy zone evolution of the hollow blade were simulated and discussed. According to the simulation results, the mushy zone may be convex and ahead of the ceramic beads at a lower withdrawal rate, concave and lagging at a higher withdrawal rate, and uniform and horizontal at a medium withdrawal rate. Grain growth of the blade at different withdrawal rates was also investigated. Single crystal structures were selected out at all three withdrawal rates. Moreover, the misorientation of the grains at 8 mm/min reached ~30°, while it was ~5° and ~15° at 10 mm/min and 12 mm/min, respectively. The model for predicting dendritic morphology was verified by a corresponding experiment. The 2D dendritic distribution over whole sections was investigated by experiment and simulation, and the two showed good agreement with each other. Keywords: Hollow blade, Single crystal, Multi-scale simulation, Liquid metal cooling

  10. An Improved Scale-Adaptive Simulation Model for Massively Separated Flows

    Directory of Open Access Journals (Sweden)

    Yue Liu

    2018-01-01

    A new hybrid modelling method termed improved scale-adaptive simulation (ISAS) is proposed by introducing the von Karman operator into the dissipation term of the turbulence scale equation; its derivation and constant calibration are presented, and the typical circular cylinder flow at Re = 3900 is selected for validation. As expected, the proposed ISAS approach, with its scale-adaptive concept, is more efficient than the original SAS method in obtaining a convergent resolution, while remaining comparable with DES in visually capturing the fine-scale unsteadiness. Furthermore, the grid-sensitivity issue of DES is encouragingly remedied thanks to the locally adjusted limiter. The ISAS simulation represents well the development of the shear layers and the flow profiles of the recirculation region, and thus the key statistical quantities, such as the recirculation length and drag coefficient, are closer to the available measurements than the DES and SAS outputs. In general, the new modelling method, combining the features of the DES and SAS concepts, is capable of simulating turbulent structures down to the grid limit in a simple and effective way, which is practically valuable for engineering flows.

  11. Transport simulations TFTR: Theoretically-based transport models and current scaling

    International Nuclear Information System (INIS)

    Redi, M.H.; Cummings, J.C.; Bush, C.E.; Fredrickson, E.; Grek, B.; Hahm, T.S.; Hill, K.W.; Johnson, D.W.; Mansfield, D.K.; Park, H.; Scott, S.D.; Stratton, B.C.; Synakowski, E.J.; Tang, W.M.; Taylor, G.

    1991-12-01

    In order to study the microscopic physics underlying observed L-mode current scaling, the 1-1/2-d BALDUR code has been used to simulate density and temperature profiles for high- and low-current, neutral-beam-heated discharges on TFTR with several semi-empirical, theoretically based models previously compared for TFTR, including several versions of trapped-electron drift-wave-driven transport. Experiments at TFTR, JET and D3-D show that the I_p scaling of τ_E does not arise from edge modes as previously thought, and is most likely to arise from nonlocal processes or from the I_p-dependence of local plasma core transport. Consistent with this, it is found that strong current scaling does not arise from any of several edge models of resistive ballooning. Simulations with the profile-consistent drift wave model and with a new model for toroidal collisionless trapped-electron-mode core transport in a multimode formalism lead to strong current scaling of τ_E for the L-mode cases on TFTR. None of the theoretically based models succeeded in simulating the measured temperature and density profiles for both high- and low-current experiments.

  12. Evaluation of Surface Runoff Generation Processes Using a Rainfall Simulator: A Small Scale Laboratory Experiment

    Science.gov (United States)

    Danáčová, Michaela; Valent, Peter; Výleta, Roman

    2017-12-01

    A simulated rainfall intensity of 5 mm/min was used to irrigate a disturbed soil sample. The experiment was undertaken for several different slopes, under the condition of no vegetation cover. The results of the rainfall simulation experiment complied with the expectation of a strong relationship between the slope gradient and the amount of surface runoff generated. The experiments with higher slope gradients were characterised by larger volumes of surface runoff and by shorter times to its onset. Experiments with rainfall simulators, in both laboratory and field conditions, play an important role in better understanding runoff generation processes. The results of such small-scale experiments could be used to estimate some of the parameters of complex hydrological models, which are used to model rainfall-runoff and erosion processes at the catchment scale.

  13. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balances

    Science.gov (United States)

    Tang, G.; Bartlein, P. J.

    2012-08-01

    Satellite-based data, such as vegetation type and fractional vegetation cover, are widely used in hydrologic models to prescribe the vegetation state in a study region. Dynamic global vegetation models (DGVMs) simulate land surface hydrology. Incorporation of satellite-based data into a DGVM may enhance the model's ability to simulate land surface hydrology by reducing the task of model parameterization and providing distributed information on land characteristics. The objectives of this study are to (i) modify a DGVM for simulating land surface water balances; (ii) evaluate the modified model in simulating actual evapotranspiration (ET), soil moisture, and surface runoff at regional or watershed scales; and (iii) gain insight into the ability of both the original and modified model to simulate large spatial scale land surface hydrology. To achieve these objectives, we introduce the "LPJ-hydrology" (LH) model, which incorporates satellite-based data into the Lund-Potsdam-Jena (LPJ) DGVM. To evaluate the model we ran LH using historical (1981-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells for the conterminous US, and for the entire world using coarser climate and land cover data. We evaluated the simulated ET, soil moisture, and surface runoff using a set of observed or simulated data at different spatial scales. Our results demonstrate that the spatial patterns of LH-simulated annual ET and surface runoff are in accordance with previously published data for the US; LH-modeled monthly stream flow for 12 major rivers in the US was consistent with observed values during the years 1981-2006 (R2 > 0.46). The modeled mean annual discharges for 10 major rivers worldwide also agreed well with observations. Compared to the degree-day method for snowmelt computation, the addition of the solar radiation effect on snowmelt enabled LH to better simulate monthly stream flow in winter and early spring for rivers located at mid-to-high latitudes.

  14. Tests of peak flow scaling in simulated self-similar river networks

    Science.gov (United States)

    Menabde, M.; Veitzer, S.; Gupta, V.; Sivapalan, M.

    2001-01-01

    The effect of linear flow routing incorporating attenuation and network topology on peak flow scaling exponent is investigated for an instantaneously applied uniform runoff on simulated deterministic and random self-similar channel networks. The flow routing is modelled by a linear mass conservation equation for a discrete set of channel links connected in parallel and series, and having the same topology as the channel network. A quasi-analytical solution for the unit hydrograph is obtained in terms of recursion relations. The analysis of this solution shows that the peak flow has an asymptotically scaling dependence on the drainage area for deterministic Mandelbrot-Vicsek (MV) and Peano networks, as well as for a subclass of random self-similar channel networks. However, the scaling exponent is shown to be different from that predicted by the scaling properties of the maxima of the width functions.

  15. Nonlocal strain gradient theory calibration using molecular dynamics simulation based on small scale vibration of nanotubes

    Energy Technology Data Exchange (ETDEWEB)

    Mehralian, Fahimeh [Mechanical Engineering Department, Shahrekord University, Shahrekord (Iran, Islamic Republic of); Tadi Beni, Yaghoub, E-mail: tadi@eng.sku.ac.ir [Faculty of Engineering, Shahrekord University, Shahrekord (Iran, Islamic Republic of); Karimi Zeverdejani, Mehran [Mechanical Engineering Department, Shahrekord University, Shahrekord (Iran, Islamic Republic of)

    2017-06-01

    Featuring two small length-scale parameters, nonlocal strain gradient theory is utilized to investigate the free vibration of nanotubes. A new size-dependent shell model formulation is developed using the first-order shear deformation theory. The governing equations and boundary conditions are obtained using Hamilton's principle and solved for the simply supported boundary condition. As the main purpose of this study, since the values of the two small length-scale parameters are still unknown, they are calibrated by means of molecular dynamics (MD) simulations. Then, the influences of different parameters such as the nonlocal parameter, scale factor, length and thickness on the vibration characteristics of nanotubes are studied. It is also shown that increases in thickness and decreases in length intensify the effects of the nonlocal parameter and scale factor.
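
    The calibration step described above is, in essence, a least-squares fit of the two length-scale parameters to MD mode frequencies. The sketch below uses the commonly quoted nonlocal-strain-gradient Euler-Bernoulli dispersion as a stand-in for the paper's shear-deformable shell model, and mock data in place of the MD results, so both the model form and the numbers are assumptions:

        import numpy as np
        from scipy.optimize import curve_fit

        def nsgt_frequency(k, e0a, l, ei_over_rho_a=1.0):
            """omega(k) = k^2 sqrt(EI/(rho A)) sqrt((1 + l^2 k^2)/(1 + (e0a)^2 k^2)),
            a commonly quoted simply supported Euler-Bernoulli result under
            nonlocal strain gradient theory (assumed here as a stand-in)."""
            return k**2 * np.sqrt(ei_over_rho_a * (1 + (l * k) ** 2) / (1 + (e0a * k) ** 2))

        # wavenumbers n*pi/L of the first five modes, beam length L = 10 (reduced units)
        k_modes = np.pi * np.arange(1, 6) / 10.0
        omega_md = nsgt_frequency(k_modes, 0.8, 0.3) * 1.01   # mock "MD" frequencies
        (e0a_fit, l_fit), _ = curve_fit(nsgt_frequency, k_modes, omega_md, p0=(0.5, 0.5))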

  16. Evaluation of scalar mixing and time scale models in PDF simulations of a turbulent premixed flame

    Energy Technology Data Exchange (ETDEWEB)

    Stoellinger, Michael; Heinz, Stefan [Department of Mathematics, University of Wyoming, Laramie, WY (United States)

    2010-09-15

    Numerical simulation results obtained with a transported scalar probability density function (PDF) method are presented for a piloted turbulent premixed flame. The accuracy of the PDF method depends on the scalar mixing model and the scalar time scale model. Three widely used scalar mixing models are evaluated: the interaction by exchange with the mean (IEM) model, the modified Curl's coalescence/dispersion (CD) model and the Euclidean minimum spanning tree (EMST) model. The three scalar mixing models are combined with a simple model for the scalar time scale which assumes a constant value C_phi = 12. A comparison of the simulation results with available measurements shows that only the EMST model accurately calculates the mean and variance of the reaction progress variable. An evaluation of the structure of the PDFs of the reaction progress variable predicted by the three scalar mixing models confirms this conclusion: the IEM and CD models predict an unrealistic shape of the PDF. Simulations using various C_phi values ranging from 2 to 50, combined with the three scalar mixing models, have been performed. The observed deficiencies of the IEM and CD models persisted for all C_phi values considered. The value C_phi = 12 combined with the EMST model was found to be an optimal choice. To avoid the ad hoc choice of C_phi, more sophisticated models for the scalar time scale have been used in simulations using the EMST model. A new model for the scalar time scale, based on a linear blending between a model for flamelet combustion and a model for distributed combustion, is developed. The new model has proven very promising as a scalar time scale model applicable from flamelet to distributed combustion. (author)
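
    The IEM model singled out above is simple enough to state in two lines: every notional particle relaxes toward the ensemble mean at rate C_phi / (2 tau), which decays the scalar variance but, as the study observes, leaves the shape of the PDF unchanged. A minimal sketch (time step, time scale and the bimodal initial condition are illustrative):

        import numpy as np

        def iem_update(phi, dt, c_phi, tau_t):
            """Exact integration of the IEM model d(phi)/dt =
            -(c_phi / (2 tau_t)) * (phi - <phi>) over one time step."""
            mean = phi.mean()
            return mean + (phi - mean) * np.exp(-0.5 * c_phi * dt / tau_t)

        # bimodal reaction-progress samples relaxing toward the mean
        rng = np.random.default_rng(0)
        phi = rng.choice([0.0, 1.0], size=10_000).astype(float)
        for _ in range(50):
            phi = iem_update(phi, dt=1e-4, c_phi=12.0, tau_t=5e-3)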

  17. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both the computational algorithms and the computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from the intensive computational requirements of detailed modeling investigations of real-world reservoirs. This paper presents the application of a massively parallel computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating the flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of future flow conditions at the site, aiding in the assessment of proposed repository performance.

  18. Three-dimensional simulations of MHD disk winds to hundred AU scale from the protostar

    Directory of Open Access Journals (Sweden)

    Staff Jan

    2014-01-01

    We present the results of four large-scale, three-dimensional magnetohydrodynamics simulations of jets launched from a Keplerian accretion disk. The jets are followed from the source out to 90 AU, a scale that covers several pixels of HST images of nearby protostellar jets. The four simulations analyzed correspond to four different initial magnetic field configurations threading the surface of the accretion disk, with varying degrees of openness of the field lines. Our simulations show that jets are heated along their length by many shocks, and we compute the line emission that is produced. We find excellent agreement with the observations and use these diagnostics to discriminate between the different magnetic field configurations. A two-component jet emerges in simulations with less open field lines along the disk surface. The two components are physically and dynamically separated, with an inner fast and rotating jet and an outer slow jet. The second component weakens, and eventually a one-component jet (i.e. only the inner jet) is obtained, for the most open field configurations. In all of our simulations we find that the faster inner component inherits the Keplerian profile and preserves it to large distances from the source. The outer component, on the other hand, is associated with velocity gradients mimicking rotation.

  19. The German VR Simulation Realism Scale--psychometric construction for virtual reality applications with virtual humans.

    Science.gov (United States)

    Poeschl, Sandra; Doering, Nicola

    2013-01-01

    Virtual training applications with high levels of immersion or fidelity (for example, for social phobia treatment) produce high levels of presence and therefore belong to the most successful Virtual Reality developments. Whereas display and interaction fidelity (as sub-dimensions of immersion) and their influence on presence are well researched, the realism of the displayed simulation depends on the specific application and is therefore difficult to measure. We propose to measure simulation realism using a self-report questionnaire. The German VR Simulation Realism Scale for VR training applications was developed based on a translation of the scene realism items from the Witmer-Singer Presence Questionnaire, supplemented with items for the realism of virtual humans (for example, for social phobia training applications). A sample of N = 151 students rated the simulation realism of a Fear of Public Speaking application. Four factors were derived by item and principal component analysis (Varimax rotation), representing Scene Realism, Audience Behavior, Audience Appearance and Sound Realism. The scale developed can be used as a starting point for future research on, and measurement of, simulation realism for applications including virtual humans.

  20. Durango: Scalable Synthetic Workload Generation for Extreme-Scale Application Performance Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Carothers, Christopher D. [Rensselaer Polytechnic Institute (RPI); Meredith, Jeremy S. [ORNL; Blanco, Marc [Rensselaer Polytechnic Institute (RPI); Vetter, Jeffrey S. [ORNL; Mubarak, Misbah [Argonne National Laboratory; LaPre, Justin [Rensselaer Polytechnic Institute (RPI); Moore, Shirley V. [ORNL

    2017-05-01

    Performance modeling of extreme-scale applications on accurate representations of potential architectures is critical for designing next-generation supercomputing systems, because it is impractical to construct prototype systems at scale with new network hardware in order to explore designs and policies. However, these simulations often rely on static application traces that can be difficult to work with because of their size and their lack of flexibility to extend or scale up without rerunning the original application. To address this problem, we have created a new technique for generating scalable, flexible workloads from real applications, and have implemented a prototype, called Durango, that combines a proven analytical performance modeling language, Aspen, with the massively parallel HPC network modeling capabilities of the CODES framework. Our models are compact, parameterized and representative of real applications with computation events. They are not resource intensive to create and are portable across simulator environments. We demonstrate the utility of Durango by simulating the LULESH application in the CODES simulation environment on several topologies and show that Durango is practical to use for simulation without loss of fidelity, as quantified by simulation metrics. During our validation of Durango's generated communication model of LULESH, we found that the original LULESH miniapp code had a latent bug in which the MPI_Waitall operation was used incorrectly. This finding underscores the potential need for a tool such as Durango, beyond its benefits for flexible workload generation and modeling. Additionally, we demonstrate the efficacy of Durango's direct integration approach, which links Aspen into CODES as part of the running network simulation model. Here, Aspen generates the application-level computation timing events, which in turn drive the start of a network communication phase. Results show that Durango's performance scales well when

  1. Molecular-scale simulation of electroluminescence in a multilayer white organic light-emitting diode

    DEFF Research Database (Denmark)

    Mesta, Murat; Carvelli, Marco; de Vries, Rein J

    2013-01-01

    In multilayer white organic light-emitting diodes the electronic processes in the various layers--injection and motion of charges as well as generation, diffusion and radiative decay of excitons--should be concerted such that efficient, stable and colour-balanced electroluminescence can occur. Here we show that it is feasible to carry out Monte Carlo simulations including all of these molecular-scale processes for a hybrid multilayer organic light-emitting diode combining red and green phosphorescent layers with a blue fluorescent layer. The simulated current density and emission profile ...

  2. Bridging scales from molecular simulations to classical thermodynamics: density functional theory of capillary condensation in nanopores

    International Nuclear Information System (INIS)

    Neimark, Alexander V; Ravikovitch, Peter I; Vishnyakov, Aleksey

    2003-01-01

    With the example of the capillary condensation of a Lennard-Jones fluid in nanopores ranging from 1 to 10 nm, we show that non-local density functional theory (NLDFT) with properly chosen parameters of intermolecular interactions bridges the scale gap from molecular simulations to macroscopic thermodynamics. On the one hand, NLDFT correctly approximates the results of Monte Carlo simulations (shift of vapour-liquid equilibrium, spinodals, density profiles, adsorption isotherms) for pores wider than about 2 nm. On the other hand, NLDFT smoothly merges (above 7-10 nm) with the Derjaguin-Broekhoff-de Boer equations, which represent augmented Laplace-Kelvin equations of capillary condensation and desorption.
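
    The macroscopic end of that bridge is the classical Kelvin equation, ln(p/p0) = -2 gamma V_m / (r_m R T), which NLDFT corrects in narrow pores. A sketch with rough liquid-nitrogen values at 77 K (the default parameter values are illustrative assumptions):

        import numpy as np

        def kelvin_condensation_pressure(r_m, gamma=8.88e-3, v_m=3.47e-5, temp=77.4):
            """Relative pressure p/p0 at which a meniscus of radius r_m (m)
            condenses, per the classical Kelvin equation. Defaults are rough
            nitrogen-at-77-K values; NLDFT corrects this picture below ~7-10 nm."""
            R_GAS = 8.314  # J/(mol K)
            return np.exp(-2.0 * gamma * v_m / (r_m * R_GAS * temp))

        # usage: condensation pressure for a 5 nm meniscus radius
        p_rel = kelvin_condensation_pressure(5e-9)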

  3. Simulations of mixing in Inertial Confinement Fusion with front tracking and sub-grid scale models

    Science.gov (United States)

    Rana, Verinder; Lim, Hyunkyung; Melvin, Jeremy; Cheng, Baolian; Glimm, James; Sharp, David

    2015-11-01

    We present two related results. The first discusses the Richtmyer-Meshkov instability (RMI) and the Rayleigh-Taylor instability (RTI) and their evolution in Inertial Confinement Fusion simulations. We show the evolution of the RMI into the late-time RTI under transport effects and front tracking. The role of the sub-grid scales helps capture the interaction of turbulence with diffusive processes. The second assesses the effects of concentration on the physics model and examines the mixing properties in the low-Reynolds-number hot spot. We discuss the effect of concentration on the Schmidt number. The simulation results are produced using the University of Chicago code FLASH and Stony Brook University's front tracking algorithm.

  4. Atomic-scale simulations of the mechanical deformation of nanocrystalline metals

    DEFF Research Database (Denmark)

    Schiøtz, Jakob; Vegge, Tejs; Di Tolla, Francesco

    1999-01-01

    Nanocrystalline metals, i.e., metals in which the grain size is in the nanometer range, have a range of technologically interesting properties including increased hardness and yield strength. We present atomic-scale simulations of the plastic behavior of nanocrystalline copper. The simulations show that the main deformation mode is sliding in the grain boundaries through a large number of uncorrelated events, where a few atoms (or a few tens of atoms) slide with respect to each other. Little dislocation activity is seen in the grain interiors. The localization of the deformation to the grain boundaries ...

  5. Power Take-Off Simulation for Scale Model Testing of Wave Energy Converters

    Directory of Open Access Journals (Sweden)

    Scott Beatty

    2017-07-01

    Small-scale testing in controlled environments is a key stage in the development of potential wave energy conversion technology. Furthermore, it is well known that the physical design and operational quality of the power take-off (PTO) used on the small-scale model can have large effects on the tank testing results. Passive mechanical elements such as friction brakes and air dampers or oil-filled dashpots are fraught with nonlinear behaviors such as static friction, temperature dependency, and backlash, the effects of which propagate into the wave energy converter (WEC) power production data, causing very high uncertainty in the extrapolation of the tank test results to the meaningful full ocean scale. The lack of quality in PTO simulators is an identified barrier to the development of WECs worldwide. A solution to this problem is to use actively controlled actuators for PTO simulation on small-scale model wave energy converters. This can be done using force- (or torque-) controlled feedback systems with suitable instrumentation, enabling the PTO to exert any desired time- and/or state-dependent reaction force. In this paper, two working experimental PTO simulators on two different wave energy converters are described. The first implementation is on a 1:25 scale self-reacting point absorber wave energy converter with optimum reactive control. The real-time control system, described in detail, is implemented in LabVIEW. The second implementation is on a 1:20 scale single-body point absorber under model-predictive control, implemented with a real-time controller in MATLAB/Simulink. Details on the physical hardware, software, and feedback control methods, as well as results, are described for each PTO. Lastly, both sets of real-time control code are to be web-hosted, free for other researchers and WEC developers to download, modify and use.
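
    For flavor, the reactive control law used on the first device reduces to commanding a force with a damping term (which extracts power) plus a spring-like reactive term (which retunes the device toward resonance); the force-controlled actuator then realizes that command each tick. The gains below are illustrative, not the reported settings, and the sketch is in Python rather than the LabVIEW/Simulink implementations described:

        def pto_force(velocity, position, b_pto, k_pto):
            """Reactive PTO command F = -B*v - K*x: the damping term extracts
            power, the (possibly negative) stiffness term retunes resonance."""
            return -b_pto * velocity - k_pto * position

        # one tick of a hypothetical 1 kHz loop: v, x come from instrumentation
        f_cmd = pto_force(velocity=0.2, position=0.05, b_pto=50.0, k_pto=-120.0)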

  6. Synthetic Approach to biomolecular science by cyborg supramolecular chemistry.

    Science.gov (United States)

    Kurihara, Kensuke; Matsuo, Muneyuki; Yamaguchi, Takumi; Sato, Sota

    2018-02-01

    Imitating the essence of living systems through synthetic chemistry approaches has long been attempted. With the progress in supramolecular chemistry, it has become possible to synthesize molecules of a size and complexity close to those of biomacromolecules. Recently, the combination of precisely designed supramolecules with biomolecules has generated structural platforms for designing and creating unique molecular systems. Bridging synthetic chemistry and biomolecular science is also producing methodologies for the creation of artificial cellular systems. This paper provides an overview of the recently expanding interdisciplinary research that fuses artificial molecules with biomolecules and can thereby deepen our understanding of the dynamical ordering of biomolecules. Bottom-up approaches based on the precise chemical design, synthesis and hybridization of artificial molecules with biological materials have enabled the construction of sophisticated platforms having the fundamental functions of living systems. These effective hybrid, "molecular cyborg", approaches enable not only the establishment of dynamic systems mimicking nature, and thus well-defined models for biophysical understanding, but also the creation of systems with highly advanced, integrated functions. This article is part of a Special Issue entitled "Biophysical Exploration of Dynamical Ordering of Biomolecular Systems" edited by Dr. Koichi Kato.

  7. Selected topics in solution-phase biomolecular NMR spectroscopy

    Science.gov (United States)

    Kay, Lewis E.; Frydman, Lucio

    2017-05-01

    Solution bio-NMR spectroscopy continues to enjoy a preeminent role as an important tool in elucidating the structure and dynamics of a range of important biomolecules and in relating these to function. Equally impressive is how NMR continues to 'reinvent' itself through the efforts of many brilliant practitioners who ask increasingly demanding and increasingly biologically relevant questions. The ability to manipulate spin Hamiltonians - almost at will - to dissect the information of interest contributes to the success of the endeavor and ensures that the NMR technology will be well poised to contribute to as yet unknown frontiers in the future. As a tribute to the versatility of solution NMR in biomolecular studies and to the continued rapid advances in the field we present a Virtual Special Issue (VSI) that includes over 40 articles on various aspects of solution-state biomolecular NMR that have been published in the Journal of Magnetic Resonance in the past 7 years. These, in total, help celebrate the achievements of this vibrant field.

  8. Biomolecular logic systems: applications to biosensors and bioactuators

    Science.gov (United States)

    Katz, Evgeny

    2014-05-01

    The paper presents an overview of recent advances in biosensors and bioactuators based on the biocomputing concept. Novel biosensors digitally process multiple biochemical signals through Boolean logic networks of coupled biomolecular reactions and produce output in the form of a YES/NO response. Compared to traditional single-analyte sensing devices, the biocomputing approach enables high-fidelity multi-analyte biosensing, which is particularly beneficial for biomedical applications. Multi-signal digital biosensors thus promise advances in rapid diagnosis and treatment of diseases by processing complex patterns of physiological biomarkers. Specifically, they can provide timely detection of and alerts to medical emergencies, along with an immediate therapeutic intervention. Application of the biocomputing concept has been successfully demonstrated for systems performing logic analysis of biomarkers corresponding to different injuries, particularly exemplified for liver injury. Wide-ranging applications of multi-analyte digital biosensors in medicine, environmental monitoring and homeland security are anticipated. "Smart" bioactuators, for example for signal-triggered drug release, were designed by interfacing switchable electrodes and biocomputing systems. Integration of novel biosensing and bioactuating systems with biomolecular information processing systems holds promise for further scientific advances and numerous practical applications.

  9. Role of biomolecular logic systems in biosensors and bioactuators

    Science.gov (United States)

    Mailloux, Shay; Katz, Evgeny

    2014-09-01

    An overview of recent advances in biosensors and bioactuators based on biocomputing systems is presented. Biosensors digitally process multiple biochemical signals through Boolean logic networks of coupled biomolecular reactions and produce an output in the form of a YES/NO response. Compared to traditional single-analyte sensing devices, the biocomputing approach enables high-fidelity multianalyte biosensing, which is particularly beneficial for biomedical applications. Multisignal digital biosensors thus promise advances in rapid diagnosis and treatment of diseases by processing complex patterns of physiological biomarkers. Specifically, they can provide timely detection and alert medical personnel of medical emergencies together with immediate therapeutic intervention. Application of the biocomputing concept has been successfully demonstrated for systems performing logic analysis of biomarkers corresponding to different injuries, particularly as exemplified for liver injury. Wide-ranging applications of multianalyte digital biosensors in medicine, environmental monitoring, and homeland security are anticipated. "Smart" bioactuators, for signal-triggered drug release, for example, were designed by interfacing switchable electrodes with biocomputing systems. Integration of biosensing and bioactuating systems with biomolecular information processing systems advances the potential for further scientific innovations and various practical applications.

  10. An Overview of Biomolecular Event Extraction from Scientific Documents.

    Science.gov (United States)

    Vanegas, Jorge A; Matos, Sérgio; González, Fabio; Oliveira, José L

    2015-01-01

    This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed.

  11. An Overview of Biomolecular Event Extraction from Scientific Documents

    Directory of Open Access Journals (Sweden)

    Jorge A. Vanegas

    2015-01-01

    Full Text Available This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed.

  12. Ion induced fragmentation of biomolecular systems at low collision energies

    International Nuclear Information System (INIS)

    Bernigaud, V; Adoui, L; Chesnel, J Y; Rangama, J; Huber, B A; Manil, B; Alvarado, F; Bari, S; Hoekstra, R; Postma, J; Schlathoelter, T

    2009-01-01

    In this paper, we present results of different collision experiments between multiply charged ions at low collision energies (in the keV region) and biomolecular systems. This kind of interaction makes it possible to remove electrons from the biomolecule without transferring a large amount of vibrational excitation energy. Nevertheless, following the ionization of the target, fragmentation of biomolecular species may occur. The main objective of this work is to study the physical processes involved in the dissociation of highly electronically excited systems. In order to elucidate the intrinsic properties of certain biomolecules (porphyrins and amino acids), we have performed experiments in the gas phase with isolated systems. The obtained results demonstrate the high stability of porphyrins after electron removal. Furthermore, a dependence of the fragmentation pattern produced by multiply charged ions on the isomeric structure of the alanine molecule has been shown. By considering the presence of other surrounding biomolecules (clusters of nucleobases), a strong influence of the biomolecule's environment on the fragmentation channels and their modification has been clearly demonstrated. In the case of thymine and uracil, this result is explained by the formation of hydrogen bonds between O and H atoms, which is known to favor planar cluster geometries.

  13. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations.

    Science.gov (United States)

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

    One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to make several important choices manually, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software package that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.
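
    For reference, the nonlinear Poisson-Boltzmann equation that such solvers discretize can be written, for a symmetric 1:1 electrolyte and with the potential in reduced units, roughly as follows; the exact prefactors and unit conventions vary between implementations (this form is quoted here for orientation, not taken from the MPBEC paper):

```latex
\nabla \cdot \left[ \epsilon(\mathbf{r}) \, \nabla \phi(\mathbf{r}) \right]
  \;-\; \bar{\kappa}^{2}(\mathbf{r}) \, \sinh\phi(\mathbf{r})
  \;=\; -4\pi \, \rho_{f}(\mathbf{r}),
```

    where ε(r) is the position-dependent dielectric coefficient, κ̄(r) the ion-accessibility-modified screening parameter, φ the electrostatic potential in units of k_BT/e, and ρ_f the fixed partial-charge density of the biomolecule.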

  14. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations

    Science.gov (United States)

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

    One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to make several important choices manually, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software package that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.

  15. Multi-scale simulation of droplet-droplet interactions and coalescence

    CSIR Research Space (South Africa)

    Musehane, Ndivhuwo M

    2016-10-01

    Full Text Available Presented at the Conference on Computational and Applied Mechanics, Potchefstroom, 3–5 October 2016, by Ndivhuwo M. Musehane, Oliver F. Oxtoby, and Daya B. Reddy (Aeronautic Systems, Council...). ... topology changes that result when droplets interact. This work endeavours to eliminate the need to use empirical correlations based on phenomenological models by developing a multi-scale model that predicts the outcome of a collision between droplets from...

  16. PREFACE: Radiation Damage in Biomolecular Systems (RADAM07)

    Science.gov (United States)

    McGuigan, Kevin G.

    2008-03-01

    The annual meeting of the COST P9 Action `Radiation damage in biomolecular systems' took place from 19-22 June 2007 in the Royal College of Surgeons in Ireland, in Dublin. The conference was structured into 5 Working Group sessions: Electrons and biomolecular interactions; Ions and biomolecular interactions; Radiation in physiological environments; Theoretical developments for radiation damage; and Track structure in cells. Each of the five working groups presented two sessions of invited talks. Professor Ron Chesser of Texas Tech University, USA gave a riveting plenary talk on `Mechanisms of Adaptive Radiation Responses in Mammals at Chernobyl' and the implications his work has for the Linear-No-Threshold model of radiation damage. In addition, this was the first RADAM meeting to take place after the Alexander Litvinenko affair, and we were fortunate to have one of the leading scientists involved in the European response, Professor Herwig Paretzke of GSF-Institut für Strahlenschutz, Neuherberg, Germany, available to speak. The remaining contributions were presented in the poster session. A total of 72 scientific contributions (32 oral, 40 poster), presented by 97 participants from 22 different countries, gave an overview of the current progress in the 5 different subfields. A 1-day pre-conference `Early Researcher Tutorial Workshop' on the same topic kicked off on 19 June, attended by more than 40 postgrads, postdocs and senior researchers. Twenty papers, based on these reports, are included in this volume of Journal of Physics: Conference Series. All the contributions in this volume were fully refereed, and they represent a sample of the courses, invited talks and contributed talks presented during RADAM07. The interdisciplinary RADAM07 conference brought together researchers from a variety of different fields with a common interest in biomolecular radiation damage. This is reflected by the disparate backgrounds of the authors of the papers presented in these proceedings.

  17. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set are then quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there is also considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more than the chosen reference data. In aggregate, the simulations of land-surface latent and

  18. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    Science.gov (United States)

    He, Qing; Li, Hong

    Belt conveyors are among the most important devices for transporting bulk-solid material over long distances. Dynamic analysis is key to deciding whether a design is technically rational, safe and reliable in operation, and economically feasible. It is very important to study dynamic properties in order to improve efficiency and productivity and to guarantee safe, reliable, and stable conveyor operation. The dynamic research and applications of large-scale belt conveyors are discussed. The main research topics and the state of the art of dynamic research on belt conveyors are analyzed. The main future work will focus on dynamic analysis, modeling, and simulation of the main components and the whole system, as well as nonlinear modeling, simulation, and vibration analysis of large-scale conveyor systems.

  19. Data for Figures and Tables in "Impacts of Different Characterizations of Large-Scale Background on Simulated Regional-Scale Ozone Over the Continental U.S."

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset contains the data used in the Figures and Tables of the manuscript "Impacts of Different Characterizations of Large-Scale Background on Simulated...

  20. An effective hierarchical model for the biomolecular covalent bond: an approach integrating artificial chemistry and an actual terrestrial life system.

    Science.gov (United States)

    Oohashi, Tsutomu; Ueno, Osamu; Maekawa, Tadao; Kawai, Norie; Nishina, Emi; Honda, Manabu

    2009-01-01

    Under the AChem paradigm and the programmed self-decomposition (PSD) model, we propose a hierarchical model for the biomolecular covalent bond (HBCB model). This model assumes that terrestrial organisms arrange their biomolecules in a hierarchical structure according to the energy strength of their covalent bonds. It also assumes that they have evolutionarily selected the PSD mechanism of turning biological polymers (BPs) into biological monomers (BMs) as an efficient biomolecular recycling strategy. We have examined the validity and effectiveness of the HBCB model by coordinating two complementary approaches: biological experiments using extant terrestrial life, and simulation experiments using an AChem system. Biological experiments have shown that terrestrial life possesses a PSD mechanism as an endergonic, genetically regulated process and that hydrolysis, which decomposes a BP into BMs, is one of the main processes of such a mechanism. In simulation experiments, we compared different virtual self-decomposition processes. The virtual species in which the self-decomposition process mainly involved covalent-bond cleavage from a BP to BMs showed evolutionary superiority over other species in which the self-decomposition process involved cleavage from BP to classes lower than BM. These converging findings strongly support the existence of PSD and the validity and effectiveness of the HBCB model.

  1. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    Full Text Available We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity at large scale. We use a Shuttle Radar Topography Mission-based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion for a body falling along an inclined plane. We assume that an inelastic collision occurs at every junction of two river branches, to describe the dynamics of the merged flow velocity.
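
    A minimal sketch of the inelastic-collision assumption at a river junction: mass-momentum conservation gives the merged velocity as a discharge-weighted average of the two branch velocities (function and variable names are illustrative, not taken from the paper):

```python
def merged_velocity(q1, v1, q2, v2):
    """Velocity after a perfectly inelastic merge of two river branches.

    q1, q2: discharges (mass fluxes, or volumetric at constant density)
    v1, v2: flow velocities of the two branches
    Momentum conservation: (q1 + q2) * v = q1*v1 + q2*v2.
    """
    return (q1 * v1 + q2 * v2) / (q1 + q2)

print(merged_velocity(3.0, 1.2, 1.0, 0.4))  # -> 1.0
```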

  2. Resistance scaling function for two-dimensional superconductors and Monte Carlo vortex-fluctuation simulations

    International Nuclear Information System (INIS)

    Minnhagen, P.; Weber, H.

    1985-01-01

    A Monte Carlo simulation of the Ginzburg-Landau Coulomb-gas model for vortex fluctuations is described and compared to the measured resistance scaling function for two-dimensional superconductors. This constitutes a new, more direct way of confirming the vortex-fluctuation explanation for the resistive tail of high-sheet-resistance superconducting films. The Monte Carlo data obtained indicate striking agreement between theory and experiment.

  3. The effects of large scale processing on caesium leaching from cemented simulant sodium nitrate waste

    International Nuclear Information System (INIS)

    Lee, D.J.; Brown, D.J.

    1982-01-01

    The effects of large scale processing on the properties of cemented simulant sodium nitrate waste have been investigated. Leach tests have been performed on full-size drums, cores and laboratory samples of cement formulations containing Ordinary Portland Cement (OPC), Sulphate Resisting Portland Cement (SRPC) and a blended cement (90% ground granulated blast furnace slag/10% OPC). In addition, development of the cement hydration exotherms with time and the temperature distribution in 220 dm³ samples have been followed. (author)

  4. PARSEC-SCALE FARADAY ROTATION MEASURES FROM GENERAL RELATIVISTIC MAGNETOHYDRODYNAMIC SIMULATIONS OF ACTIVE GALACTIC NUCLEUS JETS

    International Nuclear Information System (INIS)

    Broderick, Avery E.; McKinney, Jonathan C.

    2010-01-01

    It is now possible to compare global three-dimensional general relativistic magnetohydrodynamic (GRMHD) jet formation simulations directly to multi-wavelength polarized VLBI observations of the pc-scale structure of active galactic nucleus (AGN) jets. Unlike the jet emission, which requires post hoc modeling of the nonthermal electrons, the Faraday rotation measures (RMs) depend primarily upon simulated quantities and thus provide a direct way to confront simulations with observations. We compute RM distributions of a three-dimensional global GRMHD jet formation simulation, extrapolated in a self-consistent manner to ∼10 pc scales, and explore the dependence upon model and observational parameters, emphasizing the signatures of structures generic to the theory of MHD jets. With typical parameters, we find that it is possible to reproduce the observed magnitudes and many of the structures found in AGN jet RMs, including the presence of transverse RM gradients. In our simulations, the RMs are generated in the circum-jet material, hydrodynamically a smooth extension of the jet itself, containing ordered toroidally dominated magnetic fields. This results in a particular bilateral morphology that is unlikely to arise due to Faraday rotation in distant foreground clouds. However, critical to efforts to probe the Faraday screen will be resolving the transverse jet structure. Therefore, the RMs of radio cores may not be reliable indicators of the properties of the rotating medium. Finally, we are able to constrain the particle content of the jet, finding that at pc scales AGN jets are electromagnetically dominated, with roughly 2% of the comoving energy in nonthermal leptons and much less in baryons.
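
    For context, the rotation measure integrated along each line of sight follows the standard expression

```latex
\mathrm{RM} \;=\; 0.81 \int n_{e} \, B_{\parallel} \, dl \;\;\; \mathrm{rad\,m^{-2}},
```

    with the electron density n_e in cm⁻³, the line-of-sight magnetic field B_∥ in microgauss, and the path length dl in parsecs; in the simulations described above, n_e and B_∥ are taken directly from the GRMHD solution in the circum-jet material.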

  5. PARSEC-SCALE FARADAY ROTATION MEASURES FROM GENERAL RELATIVISTIC MAGNETOHYDRODYNAMIC SIMULATIONS OF ACTIVE GALACTIC NUCLEUS JETS

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, Avery E [Canadian Institute for Theoretical Astrophysics, 60 St. George St., Toronto, ON M5S 3H8 (Canada); McKinney, Jonathan C., E-mail: aeb@cita.utoronto.c, E-mail: jmckinne@stanford.ed [Department of Physics and Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, CA 94305-4060 (United States)

    2010-12-10

    It is now possible to compare global three-dimensional general relativistic magnetohydrodynamic (GRMHD) jet formation simulations directly to multi-wavelength polarized VLBI observations of the pc-scale structure of active galactic nucleus (AGN) jets. Unlike the jet emission, which requires post hoc modeling of the nonthermal electrons, the Faraday rotation measures (RMs) depend primarily upon simulated quantities and thus provide a direct way to confront simulations with observations. We compute RM distributions of a three-dimensional global GRMHD jet formation simulation, extrapolated in a self-consistent manner to ∼10 pc scales, and explore the dependence upon model and observational parameters, emphasizing the signatures of structures generic to the theory of MHD jets. With typical parameters, we find that it is possible to reproduce the observed magnitudes and many of the structures found in AGN jet RMs, including the presence of transverse RM gradients. In our simulations, the RMs are generated in the circum-jet material, hydrodynamically a smooth extension of the jet itself, containing ordered toroidally dominated magnetic fields. This results in a particular bilateral morphology that is unlikely to arise due to Faraday rotation in distant foreground clouds. However, critical to efforts to probe the Faraday screen will be resolving the transverse jet structure. Therefore, the RMs of radio cores may not be reliable indicators of the properties of the rotating medium. Finally, we are able to constrain the particle content of the jet, finding that at pc scales AGN jets are electromagnetically dominated, with roughly 2% of the comoving energy in nonthermal leptons and much less in baryons.

  6. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    Science.gov (United States)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10⁴, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Ro_t = -0.0909 to Ro_t = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
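
    For reference, the static Smagorinsky closure mentioned above models the subgrid-scale eddy viscosity as

```latex
\nu_{t} = (c_{s}\,\Delta)^{2} \, |\bar{S}|, \qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
```

    where Δ is the filter width, S̄_ij the resolved strain-rate tensor, and c_s = 0.1 the constant quoted in the abstract; the dynamic model instead estimates c_s locally from the resolved scales, which is one reason the two variants can behave differently as the rotation ratio (and hence the anisotropy of the fluctuations) increases.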

  7. Large eddy simulation of new subgrid scale model for three-dimensional bundle flows

    International Nuclear Information System (INIS)

    Barsamian, H.R.; Hassan, Y.A.

    2004-01-01

    Having led to increased inefficiencies and power plant shutdowns, fluid-flow-induced vibrations within heat exchangers are of great concern due to tube fretting-wear or fatigue failures. Historically, experimental analysis encountered scaling-law and measurement-accuracy problems at considerable effort and expense. However, supercomputers and accurate numerical methods have provided reliable results and a substantial decrease in cost. In this investigation, Large Eddy Simulation has been successfully used to simulate turbulent flow by the numerical solution of the incompressible, isothermal, single-phase Navier-Stokes equations. The eddy viscosity model and a new subgrid scale model have been utilized to model the smaller eddies in the flow domain. A triangular-array flow field was considered, and numerical simulations were performed in two- and three-dimensional fields and compared to experimental findings. Results show good agreement of the numerical findings with the experimental ones, and solutions obtained with the new subgrid scale model represent better energy dissipation for the smaller eddies. (author)

  8. Anaerobic Digestion and Biogas Potential: Simulation of Lab and Industrial-Scale Processes

    Directory of Open Access Journals (Sweden)

    Ihsan Hamawand

    2015-01-01

    Full Text Available In this study, a simulation was carried out using BioWin 3.1 to test the capability of the software to predict the biogas potential of two different anaerobic systems. The two scenarios included: (1) a laboratory-scale batch reactor; and (2) an industrial-scale anaerobic continuous lagoon digester. The measured data related to the operating conditions, the reactor design parameters and the chemical properties of the influent wastewater were entered into BioWin. A sensitivity analysis was carried out to identify the sensitivity of the most important default parameters in the software's models. BioWin was then calibrated by matching the predicted data with measured data and used to simulate other parameters that were unmeasured or deemed uncertain. In addition, statistical analyses were carried out using evaluation indices, such as the coefficient of determination (R-squared), the correlation coefficient (r) and its significance (p-value), the general standard deviation (SD) and the Willmott index of agreement, to evaluate the agreement between the software predictions and the measured data. The results have shown that, after calibration, BioWin can be used reliably to simulate both small-scale batch reactors and industrial-scale digesters, with a mean absolute percentage error (MAPE) of less than 10% and very good values of the indices. Furthermore, changing the default parameters in BioWin, which is also a way of calibrating the models in the software, may provide information about the performance of the digester. The results of this study also showed that the biogas generated from industrial-scale digesters may be overestimated. More sophisticated analytical devices may be required for reliable measurements of biogas quality and quantity.
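
    The evaluation indices named above are simple aggregate statistics; a minimal sketch of their computation (generic textbook formulas, not BioWin's internal code) might look like:

```python
import numpy as np

def evaluation_indices(observed, predicted):
    """Agreement statistics between measured and simulated series."""
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)
    mape = 100.0 * np.mean(np.abs((o - p) / o))   # mean absolute % error
    r = np.corrcoef(o, p)[0, 1]                   # correlation coefficient
    r2 = r ** 2                                   # coefficient of determination
    # Willmott index of agreement, d in [0, 1]; 1 means perfect agreement.
    d = 1.0 - np.sum((o - p) ** 2) / np.sum(
        (np.abs(p - o.mean()) + np.abs(o - o.mean())) ** 2)
    return {"MAPE_%": mape, "r": r, "R2": r2, "Willmott_d": d}

print(evaluation_indices([10.0, 12.0, 15.0], [9.5, 12.5, 14.0]))
```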

  9. Large-Scale Covariability Between Aerosol and Precipitation Over the 7-SEAS Region: Observations and Simulations

    Science.gov (United States)

    Huang, Jingfeng; Hsu, N. Christina; Tsay, Si-Chee; Zhang, Chidong; Jeong, Myeong Jae; Gautam, Ritesh; Bettenhausen, Corey; Sayer, Andrew M.; Hansell, Richard A.; Liu, Xiaohong; hide

    2012-01-01

    One of the seven scientific areas of interest of the 7-SEAS field campaign is to evaluate the impact of aerosol on cloud and precipitation (http://7-seas.gsfc.nasa.gov). However, large-scale covariability between aerosol, cloud and precipitation is complicated not only by the ambient environment and a variety of aerosol effects, but also by effects from rain washout and climate factors. This study characterizes large-scale aerosol-cloud-precipitation covariability through the synergy of long-term multi-sensor satellite observations with model simulations over the 7-SEAS region [10S-30N, 95E-130E]. Results show that climate factors such as ENSO significantly modulate aerosol and precipitation over the region simultaneously. After removal of climate-factor effects, aerosol and precipitation are significantly anti-correlated over the southern part of the region, where high aerosol loading is associated with overall reduced total precipitation with intensified rain rates and decreased rain frequency, decreased tropospheric latent heating, suppressed cloud-top height and increased outgoing longwave radiation, and enhanced clear-sky shortwave TOA flux but reduced all-sky shortwave TOA flux in deep convective regimes; such covariability becomes less notable over the northern part of the region, where low-level stratus are found. Using CO as a proxy of biomass-burning aerosols to minimize the washout effect, large-scale covariability between CO and precipitation was also investigated, and similar large-scale covariability was observed. Model simulations with NCAR CAM5 were found to show effects similar to the observations in their spatio-temporal patterns. Results from both observations and simulations are valuable for improving our understanding of this region's meteorological system and the roles of aerosol within it. Key words: aerosol; precipitation; large-scale covariability; aerosol effects; washout; climate factors; 7-SEAS; CO; CAM5

  10. Real-world-time simulation of memory consolidation in a large-scale cerebellar model

    Directory of Open Access Journals (Sweden)

    Masato eGosui

    2016-03-01

    Full Text Available We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve real-time simulation, in which computer simulation of cerebellar activity for 1 sec completes within 1 sec of real-world time, with a temporal resolution of 1 msec. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out a computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days, aimed at studying the neural mechanisms of post-training memory consolidation. The simulation results are consistent with animal experiments and our theory of post-training memory consolidation. These results suggest that real-time computing provides a useful means to study a very slow neural process such as memory consolidation in the brain.

  11. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    Science.gov (United States)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
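
    For background on the winning generator: an additive lagged Fibonacci generator produces each value as the sum of two earlier values in its state, x[n] = (x[n-j] + x[n-k]) mod 2^m. A minimal software sketch follows; the lags and word size are common textbook choices, not the specific configuration evaluated on FPGA in the paper.

```python
class AdditiveLFG:
    """Additive lagged Fibonacci generator: x[n] = (x[n-j] + x[n-k]) mod 2**m."""

    def __init__(self, seed_state, j=24, k=55, m=32):
        assert len(seed_state) == k and 0 < j < k
        self.state = list(seed_state)        # the most recent k values
        self.j, self.k = j, k
        self.mask = (1 << m) - 1             # reduction mod 2**m

    def next(self):
        new = (self.state[-self.j] + self.state[-self.k]) & self.mask
        self.state = self.state[1:] + [new]  # slide the lag window
        return new

rng = AdditiveLFG(seed_state=list(range(1, 56)))  # toy seed; use a real one
print([rng.next() for _ in range(3)])
```

    Part of the appeal of this recurrence in hardware is that each output needs only one addition and a short shift-register state, which plausibly makes parallel instances cheap on FPGA.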

  12. Validation of a power-law noise model for simulating small-scale breast tissue

    International Nuclear Information System (INIS)

    Reiser, I; Edwards, A; Nishikawa, R M

    2013-01-01

    We have validated a small-scale breast tissue model based on power-law noise. A set of 110 patient images served as truth. The statistical model parameters were determined by matching the radially averaged power spectrum of the projected simulated tissue with that of the central tomosynthesis patient breast projections. Observer performance in a signal-known-exactly detection task in simulated and actual breast backgrounds was compared. Observers included human readers, a pre-whitening observer model and a channelized Hotelling observer model. For all observers, good agreement between performance in the simulated and actual backgrounds was found, both in the tomosynthesis central projections and the reconstructed images. This tissue model can be used for breast x-ray imaging system optimization. The complete statistical description of the model is provided. (paper)
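
    A common way to generate such power-law backgrounds is spectral shaping: give a random-phase field an amplitude spectrum that falls as f^(-beta/2), so that its power spectrum falls as f^(-beta). The sketch below is a generic illustration of this technique; the exponent and image size are placeholders, not the parameters fitted to the patient images in the paper.

```python
import numpy as np

def power_law_noise_2d(n=256, beta=3.0, seed=0):
    """2D noise field whose radially averaged power spectrum ~ 1/f**beta."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)
    f = np.hypot(*np.meshgrid(fx, fx))   # radial spatial frequency
    f[0, 0] = f[0, 1]                    # avoid division by zero at DC
    amplitude = f ** (-beta / 2.0)       # power ~ amplitude**2 ~ f**-beta
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    field = np.fft.ifft2(amplitude * phase).real  # real part keeps the slope
    return (field - field.mean()) / field.std()   # zero mean, unit variance

img = power_law_noise_2d()
print(img.shape, round(float(img.std()), 3))
```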

  13. Plasmonic resonances of nanoparticles from large-scale quantum mechanical simulations

    Science.gov (United States)

    Zhang, Xu; Xiang, Hongping; Zhang, Mingliang; Lu, Gang

    2017-09-01

    Plasmonic resonance of metallic nanoparticles results from the coherent motion of their conduction electrons, driven by incident light. For nanoparticles less than 10 nm in diameter, localized surface plasmonic resonances become sensitive to the quantum nature of the conduction electrons. Unfortunately, quantum mechanical simulations based on time-dependent Kohn-Sham density functional theory are computationally too expensive to tackle metal particles larger than 2 nm. Herein, we introduce the recently developed time-dependent orbital-free density functional theory (TD-OFDFT) approach, which enables large-scale quantum mechanical simulations of the plasmonic responses of metallic nanostructures. Using TD-OFDFT, we have performed quantum mechanical simulations to understand the size-dependent plasmonic response of Na nanoparticles and the plasmonic responses of Na nanoparticle dimers and trimers. An outlook on future development of the TD-OFDFT method is also presented.

  14. Simulation of hydrogen release and combustion in large scale geometries: models and methods

    International Nuclear Information System (INIS)

    Beccantini, A.; Dabbene, F.; Kudriakov, S.; Magnaud, J.P.; Paillere, H.; Studer, E.

    2003-01-01

    The simulation of H2 distribution and combustion in confined geometries such as nuclear reactor containments is a challenging task from the point of view of numerical simulation, as it involves quite disparate length and time scales, which need to be resolved appropriately and efficiently. CEA is involved in the development and validation of codes to model such problems, for external clients such as IRSN (TONUS code) and Technicatome (NAUTILUS code), and for its own safety studies. This paper provides an overview of the physical and numerical models developed for such applications, as well as some insight into the current research topics which are being pursued. Examples of H2 mixing and combustion simulations are given. (authors)

  15. Properties important to mixing and simulant recommendations for WTP full-scale vessel testing

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, M. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Martino, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-12-01

    Full Scale Vessel Testing (FSVT) is being planned by Bechtel National, Inc., to demonstrate the ability of the standard high solids vessel design (SHSVD) to meet mixing requirements over the range of fluid properties planned for processing in the Pretreatment Facility (PTF) of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. WTP personnel requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in FSVT. Among the tasks assigned to SRNL was to develop a list of waste properties that are important to pulse-jet mixer (PJM) performance in WTP vessels with elevated concentrations of solids.

  16. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time
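
    The locality-preserving sort described above can be sketched as a periodic reordering of the particle arrays by spatial cell index, so that particles that are adjacent in space are also adjacent in memory (a generic illustration in array form; names and the nominal sorting interval are assumptions, not the original implementation):

```python
import numpy as np

def sort_particles_by_cell(x, y, attrs, cell_size, nx):
    """Reorder particle arrays by spatial cell so that physically adjacent
    particles occupy neighboring array indices, reducing random accesses
    to slow (paged) memory during charge accumulation and particle pushing."""
    ix = (x / cell_size).astype(int)
    iy = (y / cell_size).astype(int)
    order = np.argsort(iy * nx + ix, kind="stable")  # row-major cell index
    return x[order], y[order], [a[order] for a in attrs]

# Example: a nominal re-sort applied every few hundred time steps.
x, y = np.random.rand(2, 10_000) * 64.0
vx, vy = np.random.randn(2, 10_000)
x, y, (vx, vy) = sort_particles_by_cell(x, y, (vx, vy), cell_size=1.0, nx=64)
```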

  17. Pore-Scale Simulation for Predicting Material Transport Through Porous Media

    International Nuclear Information System (INIS)

    Goichi Itoh; Jinya Nakamura; Koji Kono; Tadashi Watanabe; Hirotada Ohashi; Yu Chen; Shinya Nagasaki

    2002-01-01

    Microscopic models of the real-coded lattice gas automata (RLG) method with a special boundary condition and the lattice Boltzmann method (LBM) are developed for simulating three-dimensional fluid dynamics in complex geometry. These models enable us to simulate the pore-scale fluid dynamics that is essential for precisely predicting material transport in porous media. For large-scale simulation of porous media with high resolution, the RLG and LBM programs are designed for parallel computation. Simulation results for porous media flow by the LBM under different pressure-gradient conditions show quantitative agreement with the macroscopic relations of Darcy's law and the Kozeny-Carman equation. As for the efficiency of parallel computing, standard parallel computation using MPI (Message Passing Interface) is compared with a hybrid MPI/node-parallel computation. The benchmark tests conclude that when a large number of computing nodes is used, parallel performance declines due to the increase of data communication between nodes, and that the hybrid parallel computation overall shows better performance than the standard parallel computation. (authors)
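
    For orientation, the two macroscopic relations used for validation can be written in a common form (standard textbook expressions, not necessarily the exact notation of the paper):

```latex
\mathbf{q} = -\frac{k}{\mu}\,\nabla p, \qquad
k = \frac{\varepsilon^{3} d_{p}^{2}}{180\,(1-\varepsilon)^{2}},
```

    where q is the Darcy flux, k the permeability, μ the dynamic viscosity, ∇p the applied pressure gradient, ε the porosity, and d_p a characteristic grain diameter; the Kozeny-Carman prefactor (here 180, for packed spheres) varies between formulations.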

  18. Modeling ramp compression experiments using large-scale molecular dynamics simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Mattsson, Thomas Kjell Rene; Desjarlais, Michael Paul; Grest, Gary Stephen; Templeton, Jeremy Alan; Thompson, Aidan Patrick; Jones, Reese E.; Zimmerman, Jonathan A.; Baskes, Michael I. (University of California, San Diego); Winey, J. Michael (Washington State University); Gupta, Yogendra Mohan (Washington State University); Lane, J. Matthew D.; Ditmire, Todd (University of Texas at Austin); Quevedo, Hernan J. (University of Texas at Austin)

    2011-10-01

    Molecular dynamics simulation (MD) is an invaluable tool for studying problems sensitive to atom-scale physics such as structural transitions, discontinuous interfaces, non-equilibrium dynamics, and elastic-plastic deformation. In order to apply this method to the modeling of ramp-compression experiments, several challenges must be overcome: accuracy of interatomic potentials, length- and time-scales, and extraction of continuum quantities. We have completed a 3-year LDRD project with the goal of developing molecular dynamics simulation capabilities for modeling the response of materials to ramp compression. The techniques we have developed fall into three categories: (i) molecular dynamics methods, (ii) interatomic potentials, and (iii) calculation of continuum variables. Highlights include the development of an accurate interatomic potential describing shock melting of beryllium, a scaling technique for modeling slow ramp-compression experiments using fast-ramp MD simulations, and a technique for extracting plastic strain from MD simulations. All of these methods have been implemented in Sandia's LAMMPS MD code, ensuring their widespread availability to dynamic materials research at Sandia and elsewhere.

  19. Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations

    Science.gov (United States)

    Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara

    2018-05-01

    Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  20. Large-scale tropospheric transport in the Chemistry–Climate Model Initiative (CCMI) simulations

    Directory of Open Access Journals (Sweden)

    C. Orbe

    2018-05-01

    Full Text Available Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry–Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  1. Multi-scale approach in numerical reservoir simulation; Uma abordagem multiescala na simulacao numerica de reservatorios

    Energy Technology Data Exchange (ETDEWEB)

    Guedes, Solange da Silva

    1998-07-01

    Advances in petroleum reservoir descriptions have provided an amount of data that cannot be handled directly during numerical simulations. This detailed geological information must be incorporated into a coarser model during multiphase fluid flow simulations by means of some upscaling technique. The most common approach is the use of pseudo relative permeabilities, and the most widely used method is that of Kyte and Berry (1975). In this work, a multi-scale computational model for multiphase flow is proposed that treats the upscaling implicitly, without using pseudo functions. By solving a sequence of local problems on subdomains of the refined scale, it is possible to achieve results on a coarser grid without the expensive computations of a fine-grid model. The main advantage of this new procedure is that the upscaling step is treated implicitly in the solution process, overcoming some practical difficulties related to the use of traditional pseudo functions. Results of two-dimensional two-phase flow simulations considering homogeneous porous media are presented. Some examples compare the results of this approach with those of the commercial upscaling program PSEUDO, a module of the reservoir simulation software ECLIPSE. (author)

  2. Anatomically detailed and large-scale simulations studying synapse loss and synchrony using NeuroBox

    Directory of Open Access Journals (Sweden)

    Markus eBreit

    2016-02-01

    Full Text Available The morphology of neurons and networks plays an important role in processing electrical and biochemical signals. Based on neuronal reconstructions, which are becoming abundantly available through databases such as NeuroMorpho.org, numerical simulations of Hodgkin-Huxley-type equations, coupled to biochemical models, can be performed in order to systematically investigate the influence of cellular morphology and of the connectivity pattern in networks on the underlying function. Development in the area of synthetic neural network generation and morphology reconstruction from microscopy data has brought forth the software tool NeuGen. Coupling this morphology data (from databases, synthetic generation, or reconstruction) to the simulation platform UG 4 (which harbors a neuroscientific portfolio) and to VRL-Studio has brought forth the extendible toolbox NeuroBox. NeuroBox allows users to perform numerical simulations on hybrid-dimensional morphology representations. The code basis is designed in a modular way, such that e.g. new channel or synapse types can be added to the library. Workflows can be specified through scripts or through the VRL-Studio graphical workflow representation. Third-party tools, such as ImageJ, can be added to NeuroBox workflows. In this paper, NeuroBox is used to study the electrical and biochemical effects of synapse loss vs. synchrony in neurons, to investigate large morphology data sets within detailed biophysical simulations, and to demonstrate the capability of utilizing high-performance computing infrastructure for large-scale network simulations. Using new synapse distribution methods and Finite Volume based numerical solvers for compartment-type models, our results demonstrate how an increase in synaptic synchronization can compensate for synapse loss at the electrical and calcium levels, and how detailed neuronal morphology can be integrated in large-scale network simulations.

  3. Simulations of ecosystem hydrological processes using a unified multi-scale model

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiaofan; Liu, Chongxuan; Fang, Yilin; Hinkle, Ross; Li, Hong-Yi; Bailey, Vanessa; Bond-Lamberty, Ben

    2015-01-01

    This paper presents a unified multi-scale model (UMSM) that we developed to simulate hydrological processes in an ecosystem containing both surface water and groundwater. The UMSM approach modifies the Navier–Stokes equation by adding a Darcy force term to formulate a single set of equations describing fluid momentum, and uses a generalized equation to describe fluid mass balance. The advantage of the approach is that this single set of equations can describe hydrological processes in both surface water and groundwater, where different models are traditionally required to simulate fluid flow. This feature of the UMSM significantly facilitates the modelling of hydrological processes in ecosystems, especially at locations where soil/sediment may be frequently inundated and drained in response to precipitation and to regional hydrological and climate changes. In this paper, the UMSM was benchmarked against WASH123D, a model commonly used for simulating coupled surface water and groundwater flow. The Disney Wilderness Preserve (DWP) site at Kissimmee, Florida, where active field monitoring and measurements are ongoing to understand hydrological and biogeochemical processes, was then used as an example to illustrate the UMSM modelling approach. The simulation results demonstrated that the DWP site is subject to frequent changes in soil saturation, the geometry and volume of surface water bodies, and groundwater and surface water exchange. All the hydrological phenomena in the surface water and groundwater components, including inundation and draining, river bank flow, groundwater table change, soil saturation, hydrological interactions between groundwater and surface water, and the migration of surface water and groundwater interfaces, can be simultaneously simulated using the UMSM. Overall, the UMSM offers a cross-scale approach that is particularly suitable for simulating coupled surface and ground water flow in ecosystems with strong surface water and groundwater interactions.
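
    A sketch of the unified momentum balance described above, as we read it (the paper's exact formulation may differ): the incompressible Navier-Stokes momentum equation is augmented with a Darcy drag term that vanishes in open water and dominates in the porous subsurface,

```latex
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p
  + \nu\,\nabla^{2}\mathbf{u}
  - \frac{\nu}{k(\mathbf{r})}\,\mathbf{u}
  + \mathbf{g},
```

    where k(r) is the local permeability: k → ∞ recovers free surface-water flow, while small k makes the Darcy term dominant and reduces the equation to Darcy-type groundwater flow.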

  4. Commercial applications of large-scale Research and Development computer simulation technologies

    International Nuclear Information System (INIS)

    Kuok Mee Ling; Pascal Chen; Wen Ho Lee

    1998-01-01

    The potential commercial applications of two large-scale R and D computer simulation technologies are presented. One such technology is based on the numerical solution of the hydrodynamics equations, and is embodied in the two-dimensional Eulerian code EULE2D, which solves the hydrodynamic equations with various models for the equation of state (EOS), constitutive relations and fracture mechanics. EULE2D is an R and D code originally developed to design and analyze conventional munitions for anti-armor penetration such as shaped charges, explosively formed projectiles, and kinetic energy rods. Simulated results agree very well with actual experiments. A commercial application presented here is the design and simulation of shaped charges for oil and gas well bore perforation. The other R and D simulation technology is based on the numerical solution of Maxwell's partial differential equations of electromagnetics in space and time, and is implemented in the three-dimensional code FDTD-SPICE, which solves Maxwell's equations in the time domain with finite differences in the three spatial dimensions and calls SPICE for information when nonlinear active devices are involved. The FDTD method has been used in the radar cross-section modeling of military aircraft and many other electromagnetic phenomena. The coupling of the FDTD method with SPICE, a popular circuit and device simulation program, provides a powerful tool for the simulation and design of microwave and millimeter-wave circuits containing nonlinear active semiconductor devices. A commercial application of FDTD-SPICE presented here is the simulation of a two-element active antenna system. The simulation results and the experimental measurements are in excellent agreement. (Author)

  5. A priori analysis of differential diffusion for model development for scale-resolving simulations

    Science.gov (United States)

    Hunger, Franziska; Dietzsch, Felix; Gauding, Michael; Hasse, Christian

    2018-01-01

    The present study analyzes differential diffusion and the mechanisms responsible for it with regard to the turbulent/nonturbulent interface (TNTI) with special focus on model development for scale-resolving simulations. In order to analyze differences between resolved and subfilter phenomena, direct numerical simulation (DNS) data are compared with explicitly filtered data. The DNS database stems from a temporally evolving turbulent plane jet transporting two passive scalars with Schmidt numbers of unity and 0.25 presented by Hunger et al. [F. Hunger et al., J. Fluid Mech. 802, R5 (2016), 10.1017/jfm.2016.471]. The objective of this research is twofold: (i) to compare the position of the turbulent-nonturbulent interface between the original DNS data and the filtered data and (ii) to analyze differential diffusion and the impact of the TNTI with regard to scale resolution in the filtered DNS data. For the latter, differential diffusion quantities are studied, clearly showing the decrease of differential diffusion at the resolved scales with increasing filter width. A transport equation for the scalar differences is evaluated. Finally, the existence of large scalar gradients, gradient alignment, and the diffusive fluxes being the physical mechanisms responsible for the separation of the two scalars are compared between the resolved and subfilter scales.
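
    The a priori methodology amounts to explicitly filtering the DNS fields and comparing resolved quantities across filter widths; a minimal sketch (top-hat filter on periodic 1-D fields is assumed here purely for brevity):

        import numpy as np
        from scipy.ndimage import uniform_filter1d

        def resolved_scalar_difference(phi1, phi2, width):
            """Top-hat-filter two passive-scalar fields and return the resolved
            difference; its variance shrinks as the filter width grows, i.e.
            differential diffusion migrates to the subfilter scales."""
            f1 = uniform_filter1d(phi1, size=width, mode="wrap")
            f2 = uniform_filter1d(phi2, size=width, mode="wrap")
            return f1 - f2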

  6. Cross-Scale Baroclinic Simulation of the Effect of Channel Dredging in an Estuarine Setting

    Directory of Open Access Journals (Sweden)

    Fei Ye

    2018-02-01

    Full Text Available Holistic simulation approaches are often required to assess human impacts on a river-estuary-coastal system, due to the intrinsically linked processes of contrasting spatial scales. In this paper, a Semi-implicit Cross-scale Hydroscience Integrated System Model (SCHISM) is applied in quantifying the impact of a proposed hydraulic engineering project on the estuarine hydrodynamics. The project involves channel dredging and land expansion that traverse several spatial scales on an ocean-estuary-river-tributary axis. SCHISM is suitable for this undertaking due to its flexible horizontal and vertical grid design and, more importantly, its efficient high-order implicit schemes applied in both the momentum and transport calculations. These techniques and their advantages are briefly described along with the model setup. The model features a mixed horizontal grid with quadrangles following the shipping channels and triangles resolving complex geometries elsewhere. The grid resolution ranges from ~6.3 km in the coastal ocean to 15 m in the project area. Even with this kind of extreme scale contrast, the baroclinic model still runs stably and accurately at a time step of 2 min, courtesy of the implicit schemes. We highlight that the implicit transport solver alone reduces the total computational cost by 82%, as compared to its explicit counterpart. The base model is shown to be well calibrated; it is then applied in simulating the proposed project scenario. The project-induced modifications on salinity intrusion, gravitational circulation, and transient events are quantified and analyzed.
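
    The efficiency claim rests on implicit time stepping, which is not constrained by the fine-grid stability limit. A minimal 1-D backward-Euler diffusion step illustrates the principle (schematic only, not SCHISM's actual solver; boundary treatment is an assumption):

        import numpy as np
        from scipy.linalg import solve_banded

        def backward_euler_diffusion(c, kappa, dx, dt):
            """One implicit (backward-Euler) step of 1-D diffusion. The scheme is
            unconditionally stable, so dt is not limited by dx**2/(2*kappa) --
            the property that lets implicit transport take ~minute time steps
            even where the grid is refined to ~15 m."""
            n = c.size
            r = kappa * dt / dx**2
            ab = np.zeros((3, n))               # banded matrix for solve_banded
            ab[0, 1:] = -r                      # superdiagonal
            ab[1, :] = 1.0 + 2.0 * r            # main diagonal
            ab[2, :-1] = -r                     # subdiagonal
            ab[1, 0] = ab[1, -1] = 1.0 + r      # crude no-flux boundaries
            return solve_banded((1, 1), ab, c)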

  7. A method of orbital analysis for large-scale first-principles simulations

    International Nuclear Information System (INIS)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-01-01

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4).
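
    The core numerical operation, diagonalizing a truncated block of the one-particle density matrix in a local non-orthogonal basis, can be sketched as follows; full NBO analysis involves further steps (NAO construction, bond-orbital search) that this kernel omits:

        import numpy as np

        def local_natural_orbitals(dm, s):
            """Occupations and orbitals from a truncated density-matrix block dm
            and the overlap s of the local (non-orthogonal) basis: Lowdin
            orthogonalize, diagonalize, and back-transform to the AO basis."""
            w, u = np.linalg.eigh(s)
            s_half = u @ np.diag(np.sqrt(w)) @ u.T          # S^(1/2)
            s_mhalf = u @ np.diag(1.0 / np.sqrt(w)) @ u.T   # S^(-1/2)
            occ, vec = np.linalg.eigh(s_half @ dm @ s_half)
            order = np.argsort(occ)[::-1]                   # largest occupation first
            return occ[order], (s_mhalf @ vec)[:, order]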

  8. Simulation of particle diffusion in a spectrum of electrostatic turbulence. Low frequency Bohm or percolation scaling

    International Nuclear Information System (INIS)

    Reuss, J.D.; Misguich, J.H.

    1996-02-01

    An important point for turbulent transport consists in determining the scaling law for the diffusion coefficient D due to electrostatic turbulence. It is well known that for weak amplitudes or large frequencies, the reduced diffusion coefficient has a quasi-linear-like (or gyro-Bohm-like) scaling, while for large amplitudes or small frequencies it has traditionally been believed that the scaling is Bohm-like. The aim of this work is to test this prediction for a given realistic model. This problem is studied by direct simulation of particle trajectories. Guiding-centre diffusion in a spectrum of electrostatic turbulence is computed for test particles in a model spectrum, by means of a new parallelized code, RADIGUET 2. The results indicate a continuous transition for large amplitudes toward a value which is compatible with the Isichenko percolation prediction. (author)
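
    The underlying test-particle integration is an E x B drift in a frozen mode spectrum; a schematic of one drift evaluation (the RADIGUET 2 spectrum and integrator details are not reproduced here):

        import numpy as np

        def exb_drift(x, y, t, amp, kx, ky, om, phase0, b0=1.0):
            """E x B drift velocity of a guiding centre in an electrostatic
            spectrum phi = sum_k amp*cos(kx*x + ky*y - om*t + phase0),
            with B = b0 along z: vx = -dphi/dy / b0, vy = +dphi/dx / b0."""
            s = np.sin(kx * x + ky * y - om * t + phase0)
            return (amp * ky * s).sum() / b0, -(amp * kx * s).sum() / b0

        # The diffusion coefficient is then estimated from an ensemble of
        # trajectories as D = <(x(t) - x(0))**2> / (2*t) in the long-time limit.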

  9. Copy of Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet.

    Energy Technology Data Exchange (ETDEWEB)

    Adalsteinsson, Helgi; Armstrong, Robert C.; Chiang, Ken; Gentile, Ann C.; Lloyd, Levi; Minnich, Ronald G.; Vanderveen, Keith; Van Randwyk, Jamie A; Rudish, Don W.

    2008-10-01

    We report on the work done in the late-start LDRD "Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet". We describe the creation of a research platform that emulates many thousands of machines to be used for the study of large-scale internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.

  10. Simulated Nano scale Peeling Process of Monolayer Graphene Sheet: Effect of Edge Structure and Lifting Position

    International Nuclear Information System (INIS)

    Sasaki, N.; Okamoto, H.; Masuda, S.; Itamura, N.; Miura, K.

    2010-01-01

    The nanoscale peeling of a graphene sheet on a graphite surface is numerically studied by molecular mechanics simulation. For the center-lifting case, successive partial peelings of the graphene around the lifting center appear as discrete jumps in the force curve, which induce an arched deformation of the graphene sheet. For the edge-lifting case, marked atomic-scale friction of the graphene sheet during the nanoscale peeling process is found. During surface contact, the graphene sheet exhibits atomic-scale sliding motion, and the period of the peeling force curve decreases to the lattice period of graphite. During line contact, the graphene sheet also exhibits stick-slip sliding motion. These findings indicate the possibility of not only the direct observation of the atomic-scale friction of the graphene sheet at the tip/surface interface but also the identification of the lattice orientation and the edge structure of the graphene sheet.

  11. Bridging the scales in atmospheric composition simulations using a nudging technique

    Science.gov (United States)

    D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco

    2010-05-01

    Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where the impact of these sources is studied. Describing all the processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial, though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain, run A) and fine (0.1°, Central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as boundary condition for the fine one. Then another coarse-resolution run (run C) is performed, in which the high-resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas for O3 and PM are computed. It is observed that although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of variance from the run B fields. Mean concentrations show some differences depending on species: in general mean
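
    The nudging itself can be written as a Newtonian relaxation of the coarse-grid field toward the remapped high-resolution field; this is a generic form (the abstract does not give BOLCHEM's exact operator, so the relaxation timescale tau is an assumption):

        import numpy as np

        def nudge_concentration(coarse, remapped_fine, dt, tau, mask):
            """Relax coarse-grid concentrations toward the remapped fine-grid
            field over timescale tau, only inside the masked area (e.g. the
            Po Valley box); elsewhere the coarse field is left untouched."""
            return np.where(mask,
                            coarse + (dt / tau) * (remapped_fine - coarse),
                            coarse)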

  12. Simulating movement of tRNA through the ribosome during hybrid-state formation.

    Science.gov (United States)

    Whitford, Paul C; Sanbonmatsu, Karissa Y

    2013-09-28

    Biomolecular simulations provide a means for exploring the relationship between flexibility, energetics, structure, and function. With the availability of atomic models from X-ray crystallography and cryoelectron microscopy (cryo-EM), and rapid increases in computing capacity, it is now possible to apply molecular dynamics (MD) simulations to large biomolecular machines, and systematically partition the factors that contribute to function. A large biomolecular complex for which atomic models are available is the ribosome. In the cell, the ribosome reads messenger RNA (mRNA) in order to synthesize proteins. During this essential process, the ribosome undergoes a wide range of conformational rearrangements. One of the most poorly understood transitions is translocation: the process by which transfer RNA (tRNA) molecules move between binding sites inside of the ribosome. The first step of translocation is the adoption of a "hybrid" configuration by the tRNAs, which is accompanied by large-scale rotations in the ribosomal subunits. To illuminate the relationship between these rearrangements, we apply MD simulations using a multi-basin structure-based (SMOG) model, together with targeted molecular dynamics protocols. From 120 simulated transitions, we demonstrate the viability of a particular route during P/E hybrid-state formation, where there is asynchronous movement along rotation and tRNA coordinates. These simulations not only suggest an ordering of events, but they highlight atomic interactions that may influence the kinetics of hybrid-state formation. From these simulations, we also identify steric features (H74 and surrounding residues) encountered during the hybrid transition, and observe that flexibility of the single-stranded 3'-CCA tail is essential for it to reach the endpoint. Together, these simulations provide a set of structural and energetic signatures that suggest strategies for modulating the physical-chemical properties of protein synthesis by the
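
    The multi-basin structure-based (SMOG) idea can be illustrated schematically. The real model mixes whole contact maps from the endpoint structures, so the two-minimum contact below is only a toy, though the 12-10 well is the standard contact form in this model class:

        import numpy as np

        def contact_12_10(r, r0, eps=1.0):
            """Standard 12-10 contact potential of structure-based models:
            a single minimum of depth eps at the native distance r0."""
            x = r0 / np.asarray(r, float)
            return eps * (5.0 * x**12 - 6.0 * x**10)

        def two_basin_contact(r, r0_a, r0_b, eps=1.0):
            """Toy multi-basin contact: metastable minima at the contact
            distances of both endpoint configurations (e.g. classical and
            hybrid tRNA states)."""
            return np.minimum(contact_12_10(r, r0_a, eps),
                              contact_12_10(r, r0_b, eps))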

  13. Large-scale introduction of wind power stations in the Swedish grid: a simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, L

    1978-08-01

    This report describes a simulation study on the factors to be considered if wind power were to be introduced into the south Swedish power grid on a large scale. The simulations are based upon a heuristic power generation planning model developed for the purpose. The heuristic technique reflects the actual running strategies of a big power company with suitable accuracy. All simulations refer to certain typical days in 1976, to which all wind data and system characteristics are related. The installed amount of wind power was not subject to optimization. All differences between planned and actual wind power generation are equalized by regulation of the hydro power. The simulations made differ according to how the installed amount of wind power is handled in the power generation planning. The simulations indicate that the power system examined could well bear an introduction of wind power up to a level of 20% of the total installed power. This result is of course valid only for the days examined and does not necessarily apply to the present-day structure of the system.

  14. Simulation for Supporting Scale-Up of a Fluidized Bed Reactor for Advanced Water Oxidation

    Directory of Open Access Journals (Sweden)

    Farhana Tisa

    2014-01-01

    Full Text Available Simulation of a fluidized bed reactor (FBR) was accomplished for treating wastewater using the Fenton reaction, an advanced oxidation process (AOP). The simulation was performed to determine characteristics of FBR performance, the concentration profile of the contaminants, and various prominent hydrodynamic properties (e.g., Reynolds number, velocity, and pressure) in the reactor. Simulation was implemented for a 2.8 L working volume using hydrodynamic correlations, the continuity equation, and simplified kinetic information for phenol degradation as a model. The simulation shows that, by using Fe3+ and Fe2+ mixtures as catalyst, TOC degradation up to 45% was achieved for a contaminant range of 40–90 mg/L within 60 min. The concentration profiles and hydrodynamic characteristics were also generated. A subsequent scale-up study was also conducted using the similitude method. The analysis shows that the models developed are applicable up to a 10 L working volume. The study proves that, using appropriate modeling and simulation, data can be predicted for designing and operating an FBR for wastewater treatment.
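
    The "simplified kinetic information" for contaminant disappearance is commonly a pseudo-first-order law; under that assumption (the paper's exact rate law is not given in the abstract) the concentration profile is simply:

        import numpy as np

        def pseudo_first_order(c0, k, t):
            """Pseudo-first-order degradation C(t) = C0*exp(-k*t), a common
            simplification for phenol/TOC decay in Fenton systems; c0 in
            mg/L, k in 1/min, t in min."""
            return c0 * np.exp(-k * np.asarray(t, float))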

  15. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever-larger problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  16. Cosmological Simulations with Scale-Free Initial Conditions. I. Adiabatic Hydrodynamics

    International Nuclear Information System (INIS)

    Owen, J.M.; Weinberg, D.H.; Evrard, A.E.; Hernquist, L.; Katz, N.

    1998-01-01

    We analyze hierarchical structure formation based on scale-free initial conditions in an Einstein-de Sitter universe, including a baryonic component with Ω_bary = 0.05. We present three independent, smoothed particle hydrodynamics (SPH) simulations, performed at two resolutions (32^3 and 64^3 dark matter and baryonic particles) and with two different SPH codes (TreeSPH and P3MSPH). Each simulation is based on identical initial conditions, which consist of Gaussian-distributed initial density fluctuations that have a power spectrum P(k) ∝ k^-1. The baryonic material is modeled as an ideal gas subject only to shock heating and adiabatic heating and cooling; radiative cooling and photoionization heating are not included. The evolution is expected to be self-similar in time, and under certain restrictions we identify the expected scalings for many properties of the distribution of collapsed objects in all three realizations. The distributions of dark matter masses, baryon masses, and mass- and emission-weighted temperatures scale quite reliably. However, the density estimates in the central regions of these structures are determined by the degree of numerical resolution. As a result, mean gas densities and Bremsstrahlung luminosities obey the expected scalings only when calculated within a limited dynamic range in density contrast. The temperatures and luminosities of the groups show tight correlations with the baryon masses, which we find can be well represented by power laws. The Press-Schechter (PS) approximation predicts the distribution of group dark matter and baryon masses fairly well, though it tends to overestimate the baryon masses. Combining the PS mass distribution with the measured relations for T(M) and L(M) predicts the temperature and luminosity distributions fairly accurately, though there are some discrepancies at high temperatures/luminosities. In general the three simulations agree well for the properties of resolved groups, where a group
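
    For reference, the Press-Schechter approximation used in these comparisons takes the standard form

        n(M)\,dM = \sqrt{\frac{2}{\pi}}\,\frac{\bar{\rho}}{M^{2}}\,\frac{\delta_{c}}{\sigma(M)}\,\left|\frac{\mathrm{d}\ln\sigma}{\mathrm{d}\ln M}\right|\,\exp\!\left(-\frac{\delta_{c}^{2}}{2\sigma^{2}(M)}\right) dM,

    where σ(M) is the rms linear density fluctuation smoothed on mass scale M and δ_c ≈ 1.686 is the critical collapse overdensity.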

  17. A new framework for the analysis of continental-scale convection-resolving climate simulations

    Science.gov (United States)

    Leutwyler, D.; Charpilloz, C.; Arteaga, A.; Ban, N.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Schulthess, T. C.; Christoph, S.

    2017-12-01

    High-resolution climate simulations at horizontal resolutions of O(1-4 km) allow explicit treatment of deep convection (thunderstorms and rain showers). Explicitly treating convection by the governing equations reduces uncertainties associated with parametrization schemes and allows a model formulation closer to physical first principles [1,2]. But kilometer-scale climate simulations with long integration periods and large computational domains are expensive, and data storage becomes unbearably voluminous. Hence new approaches to performing analysis are required. In the crCLIM project we propose a new climate modeling framework that allows scientists to conduct analysis at high spatial and temporal resolution. We tackle the computational cost by using the largest available supercomputers, such as hybrid CPU-GPU architectures, to which the COSMO model has been adapted [2]. We then alleviate the I/O bottleneck by employing a simulation data-virtualizer (SDaVi) that allows storage (space) to be traded off against computational effort (time). This is achieved by caching the simulation outputs and efficiently launching re-simulations in case of cache misses, all transparently to the analysis applications [3]. For the re-runs this approach requires a bit-reproducible version of COSMO, that is, a model that produces identical results on different architectures, to ensure coherent recomputation of the requested data [4]. In this contribution we present a version of SDaVi, a first performance model, and a strategy to obtain bit-reproducibility across hardware architectures. [1] N. Ban, J. Schmidli, C. Schär. Evaluation of the convection-resolving regional climate modeling approach in decade-long simulations. J. Geophys. Res. Atmos., 7889-7907, 2014. [2] D. Leutwyler, O. Fuhrer, X. Lapillonne, D. Lüthi, C. Schär. Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19. Geosci. Model Dev, 3393
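
    Stripped to its core, the data-virtualizer concept is a transparent cache in front of a deterministic re-simulation; SDaVi's real interface and storage policy are not specified in the abstract, so this sketch is only the general pattern:

        class SimulationDataVirtualizer:
            """Serve an analysis request from cache if the output chunk exists;
            otherwise launch a (bit-reproducible) re-simulation to regenerate
            it. Trades storage space against recomputation time."""

            def __init__(self, resimulate):
                self.resimulate = resimulate   # callable: chunk key -> field data
                self.store = {}

            def request(self, key):
                if key not in self.store:      # cache miss -> recompute the chunk
                    self.store[key] = self.resimulate(key)
                return self.store[key]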

  18. Comparison of Waste Feed Delivery Small Scale Mixing Demonstration Simulant to Hanford Waste

    Energy Technology Data Exchange (ETDEWEB)

    Wells, Beric E.; Gauglitz, Phillip A.; Rector, David R.

    2012-07-10

    The Hanford double-shell tank (DST) system provides the staging location for waste that will be transferred to the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Specific WTP acceptance criteria for waste feed delivery describe the physical and chemical characteristics of the waste that must be met before the waste is transferred from the DSTs to the WTP. One of the more challenging requirements relates to the sampling and characterization of the undissolved solids (UDS) in a waste feed DST, because the waste contains solid particles that settle, and their concentration and relative proportion can change during the transfer of the waste in individual batches. A key uncertainty in the waste feed delivery system is the potential variation in UDS transferred in individual batches in comparison to an initial sample used for evaluating the acceptance criteria. To address this uncertainty, a number of small-scale mixing tests have been conducted as part of Washington River Protection Solutions' Small Scale Mixing Demonstration (SSMD) project to determine the performance of the DST mixing and sampling systems. A series of these tests has used a five-part simulant composed of particles of different size and density and designed to be as challenging as or more challenging than AY-102 waste. This five-part simulant, however, has not been compared with the broad range of Hanford waste, and thus there is an additional uncertainty that this simulant may not be as challenging as the most difficult Hanford waste. The purpose of this study is to quantify how the current five-part simulant compares to all of the Hanford sludge waste, and to suggest alternate simulants that could be tested to reduce the uncertainty in applying the current testing results to potentially more challenging wastes.

  19. Micromagnetic computer simulations of spin waves in nanometre-scale patterned magnetic elements

    International Nuclear Information System (INIS)

    Kim, Sang-Koog

    2010-01-01

    Current needs for further advances in the nanotechnologies of information-storage and -processing devices have attracted a great deal of interest in spin (magnetization) dynamics in nanometre-scale patterned magnetic elements. For instance, the unique dynamic characteristics of non-uniform magnetic microstructures such as various types of domain walls, magnetic vortices and antivortices, as well as spin wave dynamics in laterally restricted thin-film geometries, have been at the centre of extensive and intensive research. Understanding the fundamentals of their unique spin structure as well as their robust and novel dynamic properties allows us to implement new functionalities into existing or future devices. Although experimental tools and theoretical approaches are effective means of understanding the fundamentals of spin dynamics and of gaining new insights into them, the limitations of those same tools and approaches have left gaps of unresolved questions in the pertinent physics. As an alternative, however, micromagnetic modelling and numerical simulation has recently emerged as a powerful tool for the study of a variety of phenomena related to the spin dynamics of nanometre-scale magnetic elements. In this review paper, I summarize the recent results of simulations of the excitation and propagation and other novel wave characteristics of spin waves, highlighting how the micromagnetic computer simulation approach contributes to an understanding of the spin dynamics of nanomagnetism and considering some of the merits of numerical simulation studies. Many examples of micromagnetic modelling for numerical calculations, employing various dimensions and shapes of patterned magnetic elements, are given. The current limitations of continuum micromagnetic modelling and of simulations based on the Landau-Lifshitz-Gilbert equation of motion of magnetization are also discussed, along with further research directions for spin-wave studies.
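
    For reference, the Landau-Lifshitz-Gilbert equation that such micromagnetic simulations integrate is, in the standard Gilbert form,

        \frac{\partial\mathbf{M}}{\partial t} = -\gamma\,\mathbf{M}\times\mathbf{H}_{\mathrm{eff}} + \frac{\alpha}{M_{s}}\,\mathbf{M}\times\frac{\partial\mathbf{M}}{\partial t},

    with γ the gyromagnetic ratio, α the Gilbert damping constant, M_s the saturation magnetization, and H_eff the effective field collecting exchange, anisotropy, demagnetizing, and Zeeman contributions.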

  20. Micromagnetic computer simulations of spin waves in nanometre-scale patterned magnetic elements

    Science.gov (United States)

    Kim, Sang-Koog

    2010-07-01

    Current needs for further advances in the nanotechnologies of information-storage and -processing devices have attracted a great deal of interest in spin (magnetization) dynamics in nanometre-scale patterned magnetic elements. For instance, the unique dynamic characteristics of non-uniform magnetic microstructures such as various types of domain walls, magnetic vortices and antivortices, as well as spin wave dynamics in laterally restricted thin-film geometries, have been at the centre of extensive and intensive research. Understanding the fundamentals of their unique spin structure as well as their robust and novel dynamic properties allows us to implement new functionalities into existing or future devices. Although experimental tools and theoretical approaches are effective means of understanding the fundamentals of spin dynamics and of gaining new insights into them, the limitations of those same tools and approaches have left gaps of unresolved questions in the pertinent physics. As an alternative, however, micromagnetic modelling and numerical simulation has recently emerged as a powerful tool for the study of a variety of phenomena related to the spin dynamics of nanometre-scale magnetic elements. In this review paper, I summarize the recent results of simulations of the excitation and propagation and other novel wave characteristics of spin waves, highlighting how the micromagnetic computer simulation approach contributes to an understanding of the spin dynamics of nanomagnetism and considering some of the merits of numerical simulation studies. Many examples of micromagnetic modelling for numerical calculations, employing various dimensions and shapes of patterned magnetic elements, are given. The current limitations of continuum micromagnetic modelling and of simulations based on the Landau-Lifshitz-Gilbert equation of motion of magnetization are also discussed, along with further research directions for spin-wave studies.

  1. The reliability and validity of three questionnaires: The Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire.

    Science.gov (United States)

    Unver, Vesile; Basak, Tulay; Watts, Penni; Gaioso, Vanessa; Moss, Jacqueline; Tastan, Sevinc; Iyigun, Emine; Tosun, Nuran

    2017-02-01

    The purpose of this study was to adapt the "Student Satisfaction and Self-Confidence in Learning Scale" (SCLS), "Simulation Design Scale" (SDS), and "Educational Practices Questionnaire" (EPQ) developed by Jeffries and Rizzolo into Turkish and to establish the reliability and validity of the translated scales. A sample of 87 nursing students participated in this study. The scales were cross-culturally adapted through a process including translation, comparison with the original version, back-translation, and pretesting. Construct validity was evaluated by factor analysis, and criterion validity was evaluated using the Perceived Learning Scale, the Patient Intervention Self-confidence/Competency Scale, and the Educational Belief Scale. Cronbach's alpha values were 0.77-0.85 for the SCLS, 0.73-0.86 for the SDS, and 0.61-0.86 for the EPQ. The results of this study show that the Turkish versions of all three scales are valid and reliable measurement tools.

  2. Review of MEMS differential scanning calorimetry for biomolecular study

    Science.gov (United States)

    Yu, Shifeng; Wang, Shuyu; Lu, Ming; Zuo, Lei

    2017-12-01

    Differential scanning calorimetry (DSC) is one of the few techniques that allow direct determination of enthalpy values for binding reactions and conformational transitions in biomolecules. It provides the thermodynamic information of biomolecules (Gibbs free energy, enthalpy and entropy) in a straightforward manner, enabling a deep understanding of structure-function relationships in biomolecules, such as the folding/unfolding of proteins and DNA, and ligand binding. This review provides an up-to-date overview of the applications of DSC in biomolecular studies, such as the denaturation of bovine serum albumin and the relationship between the melting point of lysozyme and the scanning rate. We also introduce recent advances in the development of micro-electro-mechanical-system (MEMS) based DSCs.
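
    The thermodynamic link DSC exploits is the standard two-state relation (neglecting the heat-capacity change ΔCp between the folded and unfolded states):

        \Delta G(T) = \Delta H - T\,\Delta S, \qquad \Delta G(T_{m}) = 0 \;\Rightarrow\; T_{m} = \Delta H/\Delta S,

    so the enthalpy ΔH integrated from the excess heat-capacity peak, together with the measured melting temperature T_m, fixes the entropy of the transition.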

  3. Techniques of biomolecular quantification through AMS detection of radiocarbon

    International Nuclear Information System (INIS)

    Vogel, S.J.; Turteltaub, K.W.; Frantz, C.; Felton, J.S.; Gledhill, B.L.

    1992-01-01

    Accelerator mass spectrometry offers a large gain over scintillation counting in sensitivity for detecting radiocarbon in biomolecular tracing. Application of this sensitivity requires new considerations of procedures to extract or isolate the carbon fraction to be quantified, to inventory all carbon in the sample, to prepare graphite from the sample for use in the spectrometer, and to derive a meaningful quantification from the measured isotope ratio. These procedures need to be accomplished without contaminating the sample with radiocarbon, which may be ubiquitous in laboratories and on equipment previously used for higher-dose scintillation experiments. Disposable equipment, materials and surfaces are used to control these contaminations. Quantification of attomole amounts of labeled substances is possible through these techniques.
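
    The final step, turning a measured isotope ratio into an amount of labeled compound, reduces to simple bookkeeping; a sketch under stated assumptions (one 14C label per molecule, and a completed carbon inventory of the sample):

        def carbon14_attomoles(ratio_sample, ratio_background, carbon_mass_mg):
            """Convert an AMS 14C/12C ratio into attomoles of labeled compound.
            Assumes one 14C atom per labeled molecule and that carbon_mass_mg
            is the total carbon inventoried in the sample."""
            moles_c = carbon_mass_mg * 1e-3 / 12.011    # total carbon, mol
            excess = ratio_sample - ratio_background    # label-derived 14C fraction
            return excess * moles_c * 1e18              # 1 mol = 1e18 attomol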

  4. Hybrid organic semiconductor lasers for bio-molecular sensing.

    Science.gov (United States)

    Haughey, Anne-Marie; Foucher, Caroline; Guilhabert, Benoit; Kanibolotsky, Alexander L; Skabara, Peter J; Burley, Glenn; Dawson, Martin D; Laurand, Nicolas

    2014-01-01

    Bio-functionalised luminescent organic semiconductors are attractive for biophotonics because they can act as efficient laser materials while simultaneously interacting with molecules. In this paper, we present and discuss a laser biosensor platform that utilises a gain layer made of such an organic semiconductor material. The simple structure of the sensor and its operation principle are described. Nanolayer detection is shown experimentally and analysed theoretically in order to assess the potential and the limits of the biosensor. The advantage conferred by the organic semiconductor is explained, and comparisons to laser sensors using alternative dye-doped materials are made. Specific biomolecular sensing is demonstrated, and routes to functionalisation with nucleic acid probes, and future developments opened up by this achievement, are highlighted. Finally, attractive formats for sensing applications are mentioned, as well as colloidal quantum dots, which in the future could be used in conjunction with organic semiconductors.

  5. Design rules for biomolecular adhesion: lessons from force measurements.

    Science.gov (United States)

    Leckband, Deborah

    2010-01-01

    Cell adhesion to matrix, other cells, or pathogens plays a pivotal role in many processes in biomolecular engineering. Early macroscopic methods of quantifying adhesion led to the development of quantitative models of cell adhesion and migration. The more recent use of sensitive probes to quantify the forces that alter or manipulate adhesion proteins has revealed much greater functional diversity than was apparent from population average measurements of cell adhesion. This review highlights theoretical and experimental methods that identified force-dependent molecular properties that are central to the biological activity of adhesion proteins. Experimental and theoretical methods emphasized in this review include the surface force apparatus, atomic force microscopy, and vesicle-based probes. Specific examples given illustrate how these tools have revealed unique properties of adhesion proteins and their structural origins.

  6. Integration of biomolecular logic gates with field-effect transducers

    Energy Technology Data Exchange (ETDEWEB)

    Poghossian, A., E-mail: a.poghossian@fz-juelich.de [Institute of Nano- and Biotechnologies, Aachen University of Applied Sciences, Campus Juelich, Heinrich-Mussmann-Str. 1, D-52428 Juelich (Germany); Institute of Bio- and Nanosystems, Research Centre Juelich GmbH, D-52425 Juelich (Germany); Malzahn, K. [Institute of Nano- and Biotechnologies, Aachen University of Applied Sciences, Campus Juelich, Heinrich-Mussmann-Str. 1, D-52428 Juelich (Germany); Abouzar, M.H. [Institute of Nano- and Biotechnologies, Aachen University of Applied Sciences, Campus Juelich, Heinrich-Mussmann-Str. 1, D-52428 Juelich (Germany); Institute of Bio- and Nanosystems, Research Centre Juelich GmbH, D-52425 Juelich (Germany); Mehndiratta, P. [Institute of Nano- and Biotechnologies, Aachen University of Applied Sciences, Campus Juelich, Heinrich-Mussmann-Str. 1, D-52428 Juelich (Germany); Katz, E. [Department of Chemistry and Biomolecular Science, NanoBio Laboratory (NABLAB), Clarkson University, Potsdam, NY 13699-5810 (United States); Schoening, M.J. [Institute of Nano- and Biotechnologies, Aachen University of Applied Sciences, Campus Juelich, Heinrich-Mussmann-Str. 1, D-52428 Juelich (Germany); Institute of Bio- and Nanosystems, Research Centre Juelich GmbH, D-52425 Juelich (Germany)

    2011-11-01

    Highlights: > Enzyme-based AND/OR logic gates are integrated with a capacitive field-effect sensor. > The AND/OR logic gates are composed of multi-enzyme systems immobilised on the sensor surface. > Logic gates were activated by different combinations of chemical inputs (analytes). > The logic output (pH change) produced by the enzymes was read out by the sensor. - Abstract: The integration of biomolecular logic gates with field-effect devices - the basic element of conventional electronic logic gates and computing - is one of the most attractive and promising approaches for the transformation of biomolecular logic principles into macroscopically useable electrical output signals. In this work, capacitive field-effect EIS (electrolyte-insulator-semiconductor) sensors based on a p-Si-SiO2-Ta2O5 structure modified with a multi-enzyme membrane have been used for electronic transduction of biochemical signals processed by enzyme-based OR and AND logic gates. The realised OR logic gate comprises two enzymes (glucose oxidase and esterase) and was activated by ethyl butyrate or/and glucose. The AND logic gate comprises three enzymes (invertase, mutarotase and glucose oxidase) and was activated by two chemical input signals: sucrose and dissolved oxygen. The developed integrated enzyme logic gates produce local pH changes at the EIS sensor surface as a result of biochemical reactions activated by different combinations of chemical input signals, while the pH value of the bulk solution remains unchanged. The pH-induced charge changes at the gate-insulator (Ta2O5) surface of the EIS transducer result in an electronic signal corresponding to the logic output produced by the immobilised enzymes. The logic output signals have been read out by means of a constant-capacitance method.

  7. Integration of biomolecular logic gates with field-effect transducers

    International Nuclear Information System (INIS)

    Poghossian, A.; Malzahn, K.; Abouzar, M.H.; Mehndiratta, P.; Katz, E.; Schoening, M.J.

    2011-01-01

    Highlights: → Enzyme-based AND/OR logic gates are integrated with a capacitive field-effect sensor. → The AND/OR logic gates are composed of multi-enzyme systems immobilised on the sensor surface. → Logic gates were activated by different combinations of chemical inputs (analytes). → The logic output (pH change) produced by the enzymes was read out by the sensor. - Abstract: The integration of biomolecular logic gates with field-effect devices - the basic element of conventional electronic logic gates and computing - is one of the most attractive and promising approaches for the transformation of biomolecular logic principles into macroscopically useable electrical output signals. In this work, capacitive field-effect EIS (electrolyte-insulator-semiconductor) sensors based on a p-Si-SiO2-Ta2O5 structure modified with a multi-enzyme membrane have been used for electronic transduction of biochemical signals processed by enzyme-based OR and AND logic gates. The realised OR logic gate comprises two enzymes (glucose oxidase and esterase) and was activated by ethyl butyrate or/and glucose. The AND logic gate comprises three enzymes (invertase, mutarotase and glucose oxidase) and was activated by two chemical input signals: sucrose and dissolved oxygen. The developed integrated enzyme logic gates produce local pH changes at the EIS sensor surface as a result of biochemical reactions activated by different combinations of chemical input signals, while the pH value of the bulk solution remains unchanged. The pH-induced charge changes at the gate-insulator (Ta2O5) surface of the EIS transducer result in an electronic signal corresponding to the logic output produced by the immobilised enzymes. The logic output signals have been read out by means of a constant-capacitance method.
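
    Functionally, the readout reduces to Boolean logic on chemical inputs; a toy truth-table model of the two gates described above (the threshold behavior and pH-output mapping are schematic assumptions, not the measured sensor response):

        def or_gate(ethyl_butyrate: bool, glucose: bool) -> bool:
            """Glucose oxidase + esterase gate: either input acidifies the
            sensor surface, so the pH output fires on OR of the inputs."""
            return ethyl_butyrate or glucose

        def and_gate(sucrose: bool, dissolved_o2: bool) -> bool:
            """Invertase -> mutarotase -> glucose oxidase cascade: the final
            oxidation needs both sucrose-derived glucose and O2, i.e. AND."""
            return sucrose and dissolved_o2

        for a in (False, True):
            for b in (False, True):
                print(a, b, "OR:", or_gate(a, b), "AND:", and_gate(a, b))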

  8. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), which simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than corrected. Fault-tolerant methods can overcome this problem, provided that the single-qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10^-6. For Gaussian-distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology, i
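
    The threshold logic can be illustrated with a far smaller toy than JUMPIQCS: a Monte Carlo estimate of the logical error rate of the 3-qubit bit-flip repetition code under independent bit-flip noise (the thesis's depolarizing-channel, 7-qubit-code setting is analogous but much larger):

        import numpy as np

        def logical_error_rate(p, trials=100_000, seed=0):
            """3-qubit repetition code under independent bit-flip noise with
            physical error rate p: majority-vote decoding fails when 2 or 3
            qubits flip, so the logical rate is ~3*p**2 for small p."""
            rng = np.random.default_rng(seed)
            flips = rng.random((trials, 3)) < p
            return (flips.sum(axis=1) >= 2).mean()

        for p in (0.01, 0.05, 0.1, 0.3):
            # encoding beats the bare qubit whenever the printed rate is below p
            print(p, logical_error_rate(p))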

  9. Large-scale simulations of error-prone quantum computation devices

    Energy Technology Data Exchange (ETDEWEB)

    Trieu, Doan Binh

    2009-07-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), which simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than corrected. Fault-tolerant methods can overcome this problem, provided that the single-qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10^-6. For Gaussian-distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced

  10. Atomistic simulations of materials: Methods for accurate potentials and realistic time scales

    Science.gov (United States)

    Tiwary, Pratyush

    This thesis deals with achieving more realistic atomistic simulations of materials, by developing accurate and robust force-fields, and algorithms for practical time scales. I develop a formalism for generating interatomic potentials for simulating atomistic phenomena occurring at energy scales ranging from lattice vibrations to crystal defects to high-energy collisions. This is done by fitting against an extensive database of ab initio results, as well as to experimental measurements for mixed oxide nuclear fuels. The applicability of these interactions to a variety of mixed environments beyond the fitting domain is also assessed. The employed formalism makes these potentials applicable across all interatomic distances without the need for any ambiguous splining to the well-established short-range Ziegler-Biersack-Littmark universal pair potential. We expect these to be reliable potentials for carrying out damage simulations (and molecular dynamics simulations in general) in nuclear fuels of varying compositions for all relevant atomic collision energies. A hybrid stochastic and deterministic algorithm is proposed that while maintaining fully atomistic resolution, allows one to achieve milliseconds and longer time scales for several thousands of atoms. The method exploits the rare event nature of the dynamics like other such methods, but goes beyond them by (i) not having to pick a scheme for biasing the energy landscape, (ii) providing control on the accuracy of the boosted time scale, (iii) not assuming any harmonic transition state theory (HTST), and (iv) not having to identify collective coordinates or interesting degrees of freedom. The method is validated by calculating diffusion constants for vacancy-mediated diffusion in iron metal at low temperatures, and comparing against brute-force high temperature molecular dynamics. We also calculate diffusion constants for vacancy diffusion in tantalum metal, where we compare against low-temperature HTST as well
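
    For context, the harmonic transition-state-theory rate that the method avoids assuming (and is benchmarked against) is the standard Vineyard expression

        k_{\mathrm{HTST}} = \frac{\prod_{i=1}^{3N}\nu_{i}^{\mathrm{min}}}{\prod_{i=1}^{3N-1}\nu_{i}^{\mathrm{sad}}}\,\exp\!\left(-\frac{\Delta E}{k_{B}T}\right),

    where the ν are the normal-mode frequencies at the minimum and at the saddle point and ΔE is the static energy barrier.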

  11. High-resolution, regional-scale crop yield simulations for the Southwestern United States

    Science.gov (United States)

    Stack, D. H.; Kafatos, M.; Medvigy, D.; El-Askary, H. M.; Hatzopoulos, N.; Kim, J.; Kim, S.; Prasad, A. K.; Tremback, C.; Walko, R. L.; Asrar, G. R.

    2012-12-01

    Over the past few decades, there have been many process-based crop models developed with the goal of better understanding the impacts of climate, soils, and management decisions on crop yields. These models simulate the growth and development of crops in response to environmental drivers. Traditionally, process-based crop models have been run at the individual farm level for yield optimization and management scenario testing. Few previous studies have used these models over broader geographic regions, largely due to the lack of gridded high-resolution meteorological and soil datasets required as inputs for these data intensive process-based models. In particular, assessment of regional-scale yield variability due to climate change requires high-resolution, regional-scale, climate projections, and such projections have been unavailable until recently. The goal of this study was to create a framework for extending the Agricultural Production Systems sIMulator (APSIM) crop model for use at regional scales and analyze spatial and temporal yield changes in the Southwestern United States (CA, AZ, and NV). Using the scripting language Python, an automated pipeline was developed to link Regional Climate Model (RCM) output with the APSIM crop model, thus creating a one-way nested modeling framework. This framework was used to combine climate, soil, land use, and agricultural management datasets in order to better understand the relationship between climate variability and crop yield at the regional-scale. Three different RCMs were used to drive APSIM: OLAM, RAMS, and WRF. Preliminary results suggest that, depending on the model inputs, there is some variability between simulated RCM driven maize yields and historical yields obtained from the United States Department of Agriculture (USDA). Furthermore, these simulations showed strong non-linear correlations between yield and meteorological drivers, with critical threshold values for some of the inputs (e.g. minimum and
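
    The abstract describes a Python pipeline linking RCM output to APSIM; a hypothetical sketch of such glue code is given below. Reading the RCM output as NetCDF and all file and variable names ("radn", "tmax", "tmin", "rain") are assumptions for illustration, not the study's actual interface:

        import csv
        import netCDF4  # RCM output assumed to be NetCDF

        def rcm_to_apsim_met(nc_path, met_path, iy, ix):
            """Extract one grid cell's daily series from RCM output and write
            an APSIM-style weather table (column names are assumptions)."""
            ds = netCDF4.Dataset(nc_path)
            with open(met_path, "w", newline="") as f:
                w = csv.writer(f, delimiter=" ")
                w.writerow(["day", "radn", "maxt", "mint", "rain"])
                for d in range(ds.dimensions["time"].size):
                    w.writerow([d + 1,
                                float(ds["radn"][d, iy, ix]),
                                float(ds["tmax"][d, iy, ix]),
                                float(ds["tmin"][d, iy, ix]),
                                float(ds["rain"][d, iy, ix])])

    Running one such extraction per grid cell, then launching APSIM on each generated weather file, yields the regional yield maps described above.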

  12. Modeling and Simulation of Multi-scale Environmental Systems with Generalized Hybrid Petri Nets

    Directory of Open Access Journals (Sweden)

    Mostafa eHerajy

    2015-07-01

    Full Text Available Predicting and studying the dynamics and properties of environmental systems necessitates the construction and simulation of mathematical models entailing different levels of complexity. Such computational experiments often require the combination of discrete and continuous variables, as well as processes operating at different time scales. Furthermore, the iterative steps of constructing and analyzing environmental models might involve researchers with different backgrounds. Hybrid Petri nets may contribute to overcoming such challenges, as they facilitate the implementation of systems integrating discrete and continuous dynamics. Additionally, the visual depiction of model components will inevitably help to bridge the gap between scientists with distinct expertise working on the same problem. Thus, modeling environmental systems with hybrid Petri nets enables the construction of complex processes while keeping the models comprehensible for researchers working on the same project with significantly divergent educational paths. In this paper we propose the utilization of a special class of hybrid Petri nets, Generalized Hybrid Petri Nets (GHPN), to model and simulate environmental systems exhibiting processes that interact at different time scales. GHPN integrate stochastic and deterministic semantics as well as other types of special basic events. Moreover, a case study is presented to illustrate the use of GHPN in constructing and simulating multi-timescale environmental scenarios.

  13. Developments in regional scale simulation: modelling ecologically sustainable development in the Northern Territory

    International Nuclear Information System (INIS)

    Moffatt, I.

    1992-01-01

    This paper outlines one way in which researchers can make a positive methodological contribution to the debate on ecologically sustainable development (ESD) by integrating dynamic modelling and geographical information systems to form the basis for regional-scale simulations. Some of the orthodox uses of Geographic Information Systems (GIS) are described, and it is argued that most applications do not incorporate process-based causal models. A description of a pilot study into developing a process-based model of ESD in the Northern Territory is given. This dynamic, process-based simulation model consists of two regions, namely the 'Top End' and the 'Central' district. Each region consists of ten sub-sectors, and the pattern of land use represents a sector common to both regions. The role of environmental defence expenditure, including environmental rehabilitation of uranium mines, in the model is noted. Similarly, it is hypothesized that exogenous changes such as the greenhouse effect and global economic fluctuations can have a differential impact on the behaviour of several sectors of the model. Some of the problems associated with calibrating and testing the model are reviewed. Finally, it is suggested that further refinement of this model can be achieved with the pooling of data sets and the development of PC-based transputers for more detailed and accurate regional-scale simulations. When fully developed, it is anticipated that this pilot model can be of service to environmental managers and other groups involved in promoting ESD in the Northern Territory. 54 refs., 6 figs

  14. Large-Eddy Simulation of Waked Turbines in a Scaled Wind Farm Facility

    Science.gov (United States)

    Wang, J.; McLean, D.; Campagnolo, F.; Yu, T.; Bottasso, C. L.

    2017-05-01

    The aim of this paper is to present the numerical simulation of waked scaled wind turbines operating in a boundary layer wind tunnel. The simulation uses a LES-lifting-line numerical model. An immersed boundary method in conjunction with an adequate wall model is used to represent the effects of both the wind turbine nacelle and tower, which are shown to have a considerable effect on the wake behavior. Multi-airfoil data calibrated at different Reynolds numbers are used to account for the lift and drag characteristics at the low and varying Reynolds conditions encountered in the experiments. The present study focuses on low turbulence inflow conditions and inflow non-uniformity due to wind tunnel characteristics, while higher turbulence conditions are considered in a separate study. The numerical model is validated by using experimental data obtained during test campaigns conducted with the scaled wind farm facility. The simulation and experimental results are compared in terms of power capture, rotor thrust, downstream velocity profiles and turbulence intensity.

  15. Multi-Subband Ensemble Monte Carlo simulations of scaled GAA MOSFETs

    Science.gov (United States)

    Donetti, L.; Sampedro, C.; Ruiz, F. G.; Godoy, A.; Gamiz, F.

    2018-05-01

    We developed a Multi-Subband Ensemble Monte Carlo simulator for non-planar devices, taking into account two-dimensional quantum confinement. It couples self-consistently the solution of the 3D Poisson equation, the 2D Schrödinger equation, and the 1D Boltzmann transport equation with the Ensemble Monte Carlo method. This simulator was employed to study MOS devices based on ultra-scaled Gate-All-Around Si nanowires with diameters in the range from 4 nm to 8 nm and gate lengths from 8 nm to 14 nm. We studied the output and transfer characteristics, interpreting the behavior in the sub-threshold region and in the ON state in terms of the spatial charge distribution and the mobility computed with the same simulator. We analyzed the results, highlighting the contribution of different valleys and subbands and the effect of the gate bias on the energy and velocity profiles. Finally, the scaling behavior was studied, showing that only the devices with D = 4 nm maintain good control of the short-channel effects down to a gate length of 8 nm.

  16. Immobilization of simulated high-level radioactive waste in borosilicate glass: Pilot scale demonstrations

    International Nuclear Information System (INIS)

    Ritter, J.A.; Hutson, N.D.; Zamecnik, J.R.; Carter, J.T.

    1991-01-01

    The Integrated DWPF Melter System (IDMS), operated by the Savannah River Laboratory, is a pilot scale facility used in support of the start-up and operation of the Department of Energy's Defense Waste Processing Facility. The IDMS has successfully demonstrated, on an engineering scale (one-fifth), that simulated high level radioactive waste (HLW) sludge can be chemically treated with formic acid to adjust both its chemical and physical properties, and then blended with simulated precipitate hydrolysis aqueous (PHA) product and borosilicate glass frit to produce a melter feed which can be processed into a durable glass product. The simulated sludge, PHA and frit were blended, based on a product composition program, to optimize the loading of the waste glass as well as to minimize those components which can cause melter processing and/or glass durability problems. During all the IDMS demonstrations completed thus far, the melter feed and the resulting glass that has been produced met all the required specifications, which is very encouraging to future DWPF operations. The IDMS operations also demonstrated that the volatile components of the melter feed (e.g., mercury, nitrogen and carbon, and, to a lesser extent, chlorine, fluorine and sulfur) did not adversely affect the melter performance or the glass product

  17. Kinetic turbulence simulations at extreme scale on leadership-class systems

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Bei [Princeton Univ., Princeton, NJ (United States); Ethier, Stephane [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Tang, William [Princeton Univ., Princeton, NJ (United States); Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Williams, Timothy [Argonne National Lab. (ANL), Argonne, IL (United States); Ibrahim, Khaled Z. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Madduri, Kamesh [The Pennsylvania State Univ., University Park, PA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-01-01

    Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20-billion-dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q, on the 786,432 cores of Mira at ALCF and recently on the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low-memory-per-core systems by enabling routine simulations at unprecedented size (130 million grid points, ITER-scale) and resolution (65 billion particles).

  18. Simulating urban-scale air pollutants and their predicting capabilities over the Seoul metropolitan area.

    Science.gov (United States)

    Park, Il-Soo; Lee, Suk-Jo; Kim, Cheol-Hee; Yoo, Chul; Lee, Yong-Hee

    2004-06-01

    Urban-scale air pollutants (sulfur dioxide, nitrogen dioxide, particulate matter with aerodynamic diameter < or = 10 μm, and ozone (O3)) were simulated over the Seoul metropolitan area, Korea, during the period of July 2-11, 2002, and their predicting capabilities were discussed. The Air Pollution Model (TAPM) was applied together with the highly disaggregated anthropogenic and biogenic gridded emissions (1 km x 1 km) recently prepared by the Korean Ministry of Environment. Wind fields with observational nudging in the prognostic meteorological model TAPM are optionally adopted to comparatively examine the meteorological impact on the prediction capabilities for urban-scale air pollutants. The results show that the simulated concentrations of secondary air pollutants largely agree with observed levels, with an index of agreement (IOA) of >0.6, whereas IOAs of approximately 0.4 are found for most primary pollutants in the major cities, reflecting the quality of emission data in the urban area. The observationally nudged wind fields with higher IOAs have little effect on the prediction of both primary and secondary air pollutants, implying that a detailed wind field does not consistently improve urban air pollution model performance if emissions are not well specified. However, the robust highest concentrations are brought closer to observations by imposing observational nudging, suggesting the importance of wind fields for the prediction of extreme concentrations such as robust highest concentrations, maximum levels, and >90th percentiles of concentrations for both primary and secondary urban-scale air pollutants.
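
    The index of agreement (IOA) quoted above is not defined in the record; assuming the common Willmott form, it can be computed as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def index_of_agreement(obs, sim):
    """Willmott's index of agreement d: 1 = perfect match, 0 = none."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    obar = obs.mean()
    num = np.sum((obs - sim) ** 2)
    den = np.sum((np.abs(sim - obar) + np.abs(obs - obar)) ** 2)
    return 1.0 - num / den
```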

  19. Performance of a pilot-scale constructed wetland system for treating simulated ash basin water.

    Science.gov (United States)

    Dorman, Lane; Castle, James W; Rodgers, John H

    2009-05-01

    A pilot-scale constructed wetland treatment system (CWTS) was designed and built to decrease the concentration and toxicity of constituents of concern in ash basin water from coal-burning power plants. The CWTS was designed to promote the following treatment processes for metals and metalloids: precipitation as non-bioavailable sulfides, co-precipitation with iron oxyhydroxides, and adsorption onto iron oxides. Concentrations of Zn, Cr, Hg, As, and Se in simulated ash basin water were reduced by the CWTS to less than USEPA-recommended water quality criteria. The removal efficiency (defined as the percent concentration decrease from influent to effluent) was dependent on the influent concentration of the constituent, while the extent of removal (defined as the concentration of a constituent of concern in the CWTS effluent) was independent of the influent concentration. Results from toxicity experiments illustrated that the CWTS eliminated influent toxicity with regard to survival and reduced influent toxicity with regard to reproduction. Reduction in potential for scale formation and biofouling was achieved through treatment of the simulated ash basin water by the pilot-scale CWTS.
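
    The distinction drawn between removal efficiency and extent of removal reduces to a one-line formula; a trivial sketch (names are ours):

```python
def removal_efficiency(c_influent, c_effluent):
    """Percent concentration decrease from influent to effluent."""
    return 100.0 * (c_influent - c_effluent) / c_influent

# The "extent of removal" is simply c_effluent itself, compared directly
# against the USEPA-recommended water quality criterion for that constituent.
# e.g. removal_efficiency(50.0, 2.0) -> 96.0 (percent)
```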

  20. The cavitation erosion of ultrasonic sonotrode during large-scale metallic casting: Experiment and simulation.

    Science.gov (United States)

    Tian, Yang; Liu, Zhilin; Li, Xiaoqian; Zhang, Lihua; Li, Ruiqing; Jiang, Ripeng; Dong, Fang

    2018-05-01

    Ultrasonic sonotrodes play an essential role in transmitting power ultrasound into large-scale metallic castings. However, cavitation erosion considerably impairs the in-service performance of ultrasonic sonotrodes, leading to marginal microstructural refinement. In this work, the cavitation erosion behaviour of ultrasonic sonotrodes in large-scale castings was explored in industry-scale experiments on Al alloy cylindrical ingots (630 mm in diameter and 6000 mm in length). When power ultrasound was introduced, severe cavitation erosion was found to occur reproducibly at specific positions on the ultrasonic sonotrodes, whereas no cavitation erosion was present on sonotrodes that were not driven by the electric generator. Vibratory examination showed that cavitation erosion depended on the vibration state of the ultrasonic sonotrodes. Moreover, a finite element (FE) model was developed to simulate the evolution and distribution of acoustic pressure in the 3-D solidification volume. The FE simulation results confirmed that significant dynamic interaction between sonotrodes and melts occurred only at the specific positions corresponding to severe cavitation erosion. This work will allow more advanced, cavitation-erosion-resistant ultrasonic sonotrodes to be developed, in particular for large-scale castings, from the perspectives of ultrasonic physics and mechanical design.

  1. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balances

    Directory of Open Access Journals (Sweden)

    G. Tang

    2012-08-01

    Satellite-based data, such as vegetation type and fractional vegetation cover, are widely used in hydrologic models to prescribe the vegetation state in a study region, and dynamic global vegetation models (DGVMs) simulate land surface hydrology. Incorporating satellite-based data into a DGVM may enhance the model's ability to simulate land surface hydrology by reducing the task of model parameterization and providing distributed information on land characteristics. The objectives of this study are to (i) modify a DGVM for simulating land surface water balances; (ii) evaluate the modified model in simulating actual evapotranspiration (ET), soil moisture, and surface runoff at regional or watershed scales; and (iii) gain insight into the ability of both the original and the modified model to simulate large spatial scale land surface hydrology. To achieve these objectives, we introduce the "LPJ-hydrology" (LH) model, which incorporates satellite-based data into the Lund-Potsdam-Jena (LPJ) DGVM. To evaluate the model we ran LH using historical (1981–2006) climate data and satellite-based land covers at 2.5 arc-min grid cells for the conterminous US, and for the entire world using coarser climate and land cover data. We evaluated the simulated ET, soil moisture, and surface runoff against a set of observed or simulated data at different spatial scales. Our results demonstrate that the spatial patterns of LH-simulated annual ET and surface runoff are in accordance with previously published data for the US; LH-modeled monthly stream flow for 12 major rivers in the US was consistent with observed values during the years 1981–2006 (R2 > 0.46, p < 0.01; Nash-Sutcliffe coefficient > 0.52). The modeled mean annual discharges for 10 major rivers worldwide also agreed well (differences < 15%) with observed values for these rivers. Compared to a degree-day method for snowmelt computation, the addition of the solar radiation effect on snowmelt
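
    As a reference point for the skill scores quoted above, the Nash-Sutcliffe coefficient can be computed as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; <= 0 means the model is no
    better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```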

  2. Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation

    Energy Technology Data Exchange (ETDEWEB)

    Hoshi, T [Department of Applied Mathematics and Physics, Tottori University, Tottori 680-8550 (Japan); Fujiwara, T [Core Research for Evolutional Science and Technology, Japan Science and Technology Agency (CREST-JST) (Japan)

    2009-02-11

    An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown to demonstrate that the present code can handle systems with more than one atomic species. Several future aspects are also discussed.

  3. Simulation for scale-up of a confined jet mixer for continuous hydrothermal flow synthesis of nanomaterials

    OpenAIRE

    Ma, CY; Liu, JJ; Zhang, Y; Wang, XZ

    2015-01-01

    Reactor performance of confined jet mixers for continuous hydrothermal flow synthesis of nanomaterials is investigated for the purpose of scale-up from laboratory scale to pilot-plant scale. Computational fluid dynamics (CFD) models were applied to simulate hydrothermal fluid flow, mixing and heat transfer behaviours in the reactors at different volumetric scale-up ratios (up to 26 times). The distributions of flow and heat transfer variables were obtained using ANSYS Fluent with the tracer c...

  4. Hydrological simulations driven by RCM climate scenarios at basin scale in the Po River, Italy

    Directory of Open Access Journals (Sweden)

    R. Vezzoli

    2014-09-01

    River discharges are the main expression of the hydrological cycle and result from natural climate variability. The signal of climate change raises the question of how it will impact river flows and their extreme manifestations: floods and droughts. This question can be addressed through numerical simulations spanning from the past (1971) to the future (2100) under different climate change scenarios. This work addresses the capability of a modelling chain to reproduce the observed discharge of the Po River over the period 1971–2000. The modelling chain includes climate and hydrological/hydraulic models, and its performance is evaluated through indices based on the flow duration curve. The climate datasets used for the 1971–2000 period are (a) a high resolution observed climate dataset, and COSMO-CLM regional climate model outputs with (b) perfect boundary conditions, the ERA40 Reanalysis, and (c) suboptimal boundary conditions provided by the global climate model CMCC-CM. The aim of the different simulations is to evaluate how the uncertainties introduced by the choice of the regional and/or global climate models propagate into the simulated discharges. This point is relevant for interpreting the simulated discharges when scenarios for the future are considered. The hydrological/hydraulic components are simulated through a physically-based distributed model (TOPKAPI) and a water balance model at the basin scale (RIBASIM). The aim of these first simulations is to quantify the uncertainties introduced by each component of the modelling chain and their propagation. Estimating the overall uncertainty is relevant to correctly understanding future river flow regimes. The results show how bias correction algorithms can help reduce the overall uncertainty associated with the different stages of the modelling chain.
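
    The flow duration curve on which the evaluation indices are based is simple to construct from a daily discharge series; a minimal sketch (the Weibull plotting position is our assumption, and the paper's exact indices are not given):

```python
import numpy as np

def flow_duration_curve(q):
    """Exceedance probability (%) versus discharge, sorted high to low."""
    q = np.sort(np.asarray(q, float))[::-1]
    p = 100.0 * np.arange(1, q.size + 1) / (q.size + 1)  # Weibull position
    return p, q
```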

  5. Study of elemental mercury re-emission through a lab-scale simulated scrubber

    Energy Technology Data Exchange (ETDEWEB)

    Cheng-Li Wu; Yan Cao; Cheng-Chun He; Zhong-Bing Dong; Wei-Ping Pan [Western Kentucky University, KY (United States). Institute for Combustion Science and Environmental Technology

    2010-08-15

    This paper describes a lab-scale simulated scrubber that was designed and built in the laboratory at Western Kentucky University's Institute for Combustion Science and Environmental Technology. A series of tests on slurries of CaO, CaSO₃, CaSO₄/CaSO₃ and Na₂SO₃ were carried out to simulate recirculating slurries in different oxidation modes. Elemental mercury (Hg⁰) re-emission was replicated through the simulated scrubber. The relationship between the oxidation-reduction potential (ORP) of the slurries and the Hg⁰ re-emissions was evaluated. Elemental mercury re-emission occurred when Hg²⁺ that was absorbed in the simulated scrubber was converted to Hg⁰; then, Hg⁰ was emitted from the slurry together with the carrier gas. The effects of both the reagents and the operational conditions (including the temperature, pH, and oxygen concentrations in the carrier gas) on the Hg⁰ re-emission rates in the simulated scrubber were investigated. The results indicated that as the operational temperature of the scrubber and the pH value of the slurry increased, the Hg⁰ concentrations that were emitted from the simulated scrubber increased. The Hg⁰ re-emission rates decreased as the O₂ concentration in the carrier gas increased. In addition, the effects of additives to suppress Hg⁰ re-emission were evaluated in this paper. Sodium tetrasulfide, TMT 15, NaHS and HI were added to the slurry, while Hg²⁺, which was absorbed in the slurry, was retained in the slurry as mercury precipitates. Therefore, there was a significant capacity for the additives to suppress Hg⁰ re-emission. 11 refs., 11 figs., 5 tabs.

  6. On the scale similarity in large eddy simulation. A proposal of a new model

    International Nuclear Information System (INIS)

    Pasero, E.; Cannata, G.; Gallerano, F.

    2004-01-01

    Among the most common LES models in the literature are the eddy viscosity-type models, in which the subgrid scale (SGS) stress tensor is related to the resolved strain rate tensor through a scalar eddy viscosity coefficient. These models are affected by three fundamental drawbacks: they are purely dissipative, i.e. they cannot account for backscatter; they assume that the principal axes of the resolved strain rate tensor and the SGS stress tensor are aligned; and they assume that a local balance exists between SGS turbulent kinetic energy production and its dissipation. Scale similarity models (SSM) were created to overcome these drawbacks. SSM models, such as that of Bardina et al. and that of Liu et al., assume that scales adjacent in wave number space present similar hydrodynamic features. This similarity makes it possible to relate the unresolved scales, represented by the modified cross tensor and the modified Reynolds tensor, to the smallest resolved scales, represented by the modified Leonard tensor or by a term obtained through multiple filtering operations at different scales. The models of Bardina et al. and Liu et al. are affected, however, by a fundamental drawback: they are not dissipative enough, i.e. they cannot ensure a sufficient energy drain from the resolved scales of motion to the unresolved ones. In this paper it is shown that this drawback is due to the fact that such models do not take into account the smallest unresolved scales, where most of the dissipation of turbulent SGS energy takes place. A new scale similarity LES model that grants an adequate drain of energy from the resolved scales to the unresolved ones is presented. The SGS stress tensor is aligned with the modified Leonard tensor, with the coefficient of proportionality expressed in terms of the trace of the modified Leonard tensor and of the SGS kinetic energy (computed by solving its balance equation).
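
    The modified Leonard tensor that anchors the proposed model is straightforward to form from the resolved field; a minimal 1D sketch with a top-hat test filter (filter choice and width are illustrative assumptions):

```python
import numpy as np

def test_filter(f, w=3):
    """Top-hat test filter of width w on a periodic 1-D field."""
    kernel = np.ones(w) / w
    n = f.size
    return np.convolve(np.tile(f, 3), kernel, mode="same")[n:2 * n]

def modified_leonard(u, v, w=3):
    """L_ij = bar(u_i u_j) - bar(u_i) bar(u_j), from resolved velocities."""
    return test_filter(u * v, w) - test_filter(u, w) * test_filter(v, w)
```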

  7. Proceedings of the international advisory committee on 'biomolecular dynamics instrument DNA' and the workshop on 'biomolecular dynamics backscattering spectrometers'

    International Nuclear Information System (INIS)

    Arai, Masatoshi; Aizawa, Kazuya; Nakajima, Kenji; Shibata, Kaoru; Takahashi, Nobuaki

    2008-08-01

    A workshop entitled 'Biomolecular Dynamics Backscattering Spectrometers' was held on February 27th-29th, 2008 at the J-PARC Center, Japan Atomic Energy Agency. The workshop was planned with the aim of realizing an innovative neutron backscattering instrument, DNA, in the MLF, and four leading scientists in the field of neutron backscattering instruments were therefore invited to serve as the International Advisory Committee for DNA (IAC members: Dr. Dan Neumann (Chair); Prof. Ferenc Mezei; Dr. Hannu Mutka; Dr. Philip Tregenna-Piggott), drawn from institutes in the United States, France and Switzerland where backscattering instruments are in service. The meeting accordingly opened with lectures and concluded with the committee session. This report includes the executive summary of the IAC and the materials presented in the IAC meeting and the workshop. (author)

  8. Numerical atomic scale simulations of the microstructural evolution of ferritic alloys under irradiation

    International Nuclear Information System (INIS)

    Vincent, E.

    2006-12-01

    In this work, we have developed a model of point defect (vacancy and interstitial) diffusion whose aim is to simulate, by kinetic Monte Carlo (KMC), the formation of the solute-rich clusters observed experimentally in irradiated FeCuNiMnSi model alloys and in pressure vessel steels. Electronic structure calculations have been used to characterize the interactions between point defects and the different solute atoms. Each of these solute atoms establishes an attractive bond with the vacancy; Mn, the element with the weakest vacancy bond, instead establishes more favourable bonds with interstitials. Binding energies, migration energies and other atomic scale properties determined by ab initio calculations have led to a parameter set for the KMC code. These parameters were first calibrated against thermal ageing experiments on the FeCu binary alloy and on complex alloys described in the literature. The vacancy diffusion thermal annealing simulations show that when a vacancy is available, all the solutes migrate and form clusters, in agreement with the observed experimental tendencies. Secondly, to simulate the microstructural evolution under irradiation, we introduced interstitials into the KMC code. Their presence leads to a more efficient transport of Mn. The first simulations of electron and neutron irradiations show that the model results are globally and qualitatively coherent with the experimentally observed tendencies. (author)
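
    The KMC machinery referred to above follows the standard residence-time algorithm; a generic sketch (not the authors' code; the rate catalogue is left abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmc_step(rates, t):
    """One residence-time KMC step: choose event i with probability
    rates[i]/sum(rates), then advance time by an exponential waiting time."""
    rates = np.asarray(rates, float)
    total = rates.sum()
    i = int(np.searchsorted(np.cumsum(rates), rng.random() * total))
    t += -np.log(rng.random()) / total
    return i, t

# Here, rates would be vacancy-jump frequencies nu0 * exp(-E_mig / kT) toward
# each neighbouring site, with E_mig taken from the ab initio parameter set.
```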

  9. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response planning, and the emergency evacuation of large commercial shopping areas, as typical service systems, is one of the hot research topics. A systematic methodology based on Cellular Automata (CA) with a Dynamic Floor Field, combined with an event-driven model, has been proposed, and the methodology has been examined in a case study involving evacuation of a commercial shopping mall. Pedestrian movement is based on the CA and the event-driven model; the simulation process is divided into a normal situation and an emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. In simulating the movement routes of pedestrians, the model takes into account the purchase intentions of customers and the density of pedestrians. Based on the evacuation model of Cellular Automata with a Dynamic Floor Field and the event-driven model, the behavior characteristics of customers and clerks can be reflected in both normal and emergency situations. The distribution of individual evacuation time as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model combining Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
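
    The floor-field transition rule that drives such CA pedestrian models can be sketched compactly; the exp(ks*S + kd*D) form below is the common convention in the floor-field literature, and all parameter values and the stay-put weight are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def choose_move(i, j, static_ff, dyn_ff, occupied, ks=2.0, kd=1.0):
    """Pick a target cell with probability ~ exp(ks*S + kd*D); S is the
    static field (e.g. negative distance to exit), D the dynamic field."""
    cells, weights = [(i, j)], [1.0]          # staying put is allowed
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if (0 <= ni < static_ff.shape[0] and 0 <= nj < static_ff.shape[1]
                and not occupied[ni, nj]):
            cells.append((ni, nj))
            weights.append(np.exp(ks * static_ff[ni, nj] + kd * dyn_ff[ni, nj]))
    p = np.array(weights) / np.sum(weights)
    return cells[rng.choice(len(cells), p=p)]
```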

  10. The UP modelling system for large scale hydrology: simulation of the Arkansas-Red River basin

    Directory of Open Access Journals (Sweden)

    C. G. Kilsby

    1999-01-01

    The UP (Upscaled Physically-based) hydrological modelling system, applied here to the Arkansas-Red River basin (USA), is designed for macro-scale simulations of land surface processes; it aims for a physical basis and avoids the use of discharge records in the direct calibration of parameters. This is achieved in a two-stage process: in the first stage, parametrizations are derived from detailed modelling of selected representative small areas, and these are then used in a second stage in which a simple distributed model is used to simulate the dynamic behaviour of the whole basin. The first stage of the process is described in a companion paper (Ewen et al., this issue); the second stage is described here. The model operated at an hourly time-step on 17-km grid squares for a two-year simulation period, and represents all the important hydrological processes including regional aquifer recharge, groundwater discharge, infiltration- and saturation-excess runoff, evapotranspiration, snowmelt, and overland and channel flow. Outputs from the model are discussed, and include river discharge at gauging stations and space-time fields of evaporation and soil moisture. Whilst the model efficiency assessed by comparison of simulated and observed discharge records is not as good as could be achieved with a model calibrated against discharge, there are considerable advantages in retaining a physical basis in applications to ungauged river basins and assessments of the impacts of land use or climate change.

  11. Simulating Fine-Scale Marine Pollution Plumes for Autonomous Robotic Environmental Monitoring

    Directory of Open Access Journals (Sweden)

    Muhammad Fahad

    2018-05-01

    Marine plumes exhibit characteristics such as intermittency, sinuous structure, shape and flow field coherency, and a time-varying concentration profile. Due to the lack of experimental quantification of these characteristics for marine plumes, existing work often assumes that marine plumes behave like aerial plumes, which are commonly modeled by filament-based Lagrangian models. Our previous field experiments with Rhodamine dye plumes at Makai Research Pier at Oahu, Hawaii revealed that marine plumes are qualitatively similar to aerial plumes, but quantitatively they are disparate. Based on the field data collected, this paper presents a calibrated Eulerian plume model that reproduces the qualitative and quantitative characteristics exhibited by experimentally generated marine plumes. We propose a modified model with an intermittent source and implement it in a Robot Operating System (ROS) based simulator. Concentration time series at stationary sampling points and at dynamic sampling points across cross-sections and plume fronts are collected and analyzed for statistical parameters of the simulated plume. These parameters are then compared with statistical parameters from experimentally generated plumes. The comparison validates that the simulated plumes exhibit fine-scale qualitative and quantitative characteristics similar to experimental plumes. The ROS plume simulator facilitates future evaluations of environmental monitoring strategies by marine robots, and is made available for community use.
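
    A minimal Eulerian advection-diffusion step with an intermittent source, in the spirit of (but much simpler than) the calibrated model described above, might look as follows; the first-order upwind scheme, periodic boundaries, and all parameters are our assumptions:

```python
import numpy as np

def plume_step(c, u, v, D, src, t, dt, dx, period=10.0, duty=0.5):
    """One explicit step of dc/dt + u dc/dx + v dc/dy = D lap(c) + source,
    first-order upwind (assumes u, v >= 0), periodic boundaries."""
    adv = (u * (c - np.roll(c, 1, axis=0)) / dx
           + v * (c - np.roll(c, 1, axis=1)) / dx)
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) + np.roll(c, 1, 1)
           + np.roll(c, -1, 1) - 4.0 * c) / dx**2
    c = c + dt * (D * lap - adv)
    if (t % period) < duty * period:     # intermittent dye release
        c[src] += dt
    return c
```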

  12. GENASIS Mathematics : Object-oriented manifolds, operations, and solvers for large-scale physics simulations

    Science.gov (United States)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2018-01-01

    The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
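
    The finite-volume idea described above, cell updates driven by face fluxes, can be illustrated with the simplest possible case, 1D linear advection (a sketch in Python rather than the package's Fortran 2003):

```python
import numpy as np

def fv_advect_step(q, u, dx, dt):
    """One finite-volume update of dq/dt + d(u q)/dx = 0 on a periodic mesh.
    With u > 0, the upwind face flux is just u*q of the cell to the left."""
    flux_right = u * q                  # flux through each cell's right face
    flux_left = np.roll(flux_right, 1)  # flux through its left face
    return q - (dt / dx) * (flux_right - flux_left)
```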

  13. Large Scale Beam-beam Simulations for the CERN LHC using Distributed Computing

    CERN Document Server

    Herr, Werner; McIntosh, E; Schmidt, F

    2006-01-01

    We report on a large scale simulation of beam-beam effects for the CERN Large Hadron Collider (LHC). The stability of particles which experience head-on and long-range beam-beam effects was investigated for different optical configurations and machine imperfections. To cover the interesting parameter space required computing resources not available at CERN. The necessary resources were available in the LHC@home project, based on the BOINC platform. At present, this project makes more than 60000 hosts available for distributed computing. We shall discuss our experience using this system during a simulation campaign of more than six months and describe the tools and procedures necessary to ensure consistent results. The results from this extended study are presented and future plans are discussed.

  14. A Pore Scale Flow Simulation of Reconstructed Model Based on the Micro Seepage Experiment

    Directory of Open Access Journals (Sweden)

    Jianjun Liu

    2017-01-01

    Research on microscopic seepage mechanisms and fine description of reservoir pore structure plays an important role in the effective development of low and ultralow permeability reservoirs. In this paper, a typical micro pore structure model was established in two ways: by the conventional model reconstruction method and by the built-in graphics function method of Comsol®. A pore-scale flow simulation was conducted on the reconstructed models using the creeping flow interface and the Brinkman equation interface, respectively. The results showed that the simulations of the two models agreed well in the distributions of velocity, pressure, Reynolds number, and so on. This verified the feasibility of the direct reconstruction method from graphic file to geometric model, which provides a new way to diversify the numerical study of micro seepage mechanisms.

  15. Verification of frequency scaling laws for capacitive radio-frequency discharges using two-dimensional simulations

    International Nuclear Information System (INIS)

    Vahedi, V.; Birdsall, C.K.; Lieberman, M.A.; DiPeso, G.; Rognlien, T.D.

    1993-01-01

    Weakly ionized processing plasmas are studied in two dimensions using a bounded particle-in-cell (PIC) simulation code with a Monte Carlo collision (MCC) package. The MCC package models the collisions between charged and neutral particles, which are needed to obtain a self-sustained plasma and the proper electron and ion energy loss mechanisms. A two-dimensional capacitive radio-frequency (rf) discharge is investigated in detail. Simple frequency scaling laws for predicting the behavior of some plasma parameters are derived and then compared with simulation results, finding good agreement. It is found that as the drive frequency increases, the sheath width decreases, and the bulk plasma becomes more uniform, leading to a reduction of the ion angular spread at the target and an improvement of ion dose uniformity at the driven electrode.

  16. Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation

    Directory of Open Access Journals (Sweden)

    Shunli Wang

    2016-01-01

    We introduce an automatic optimization approach for the simulation of large-scale coastal water. To overcome the singularities of water waves obtained with the traditional model, a hybrid deep-shallow-water model is established using an automatic coupling algorithm; it can handle arbitrary water depths and different underwater terrain. The coastline, a characteristic feature of coastal terrain, is detected with collision detection technology. Unnecessary water grid cells are then simplified by an automatic simplification algorithm according to depth. Finally, the model is computed on the Central Processing Unit (CPU) and the simulation is rendered on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results that achieve real-time rendering on a consumer-level computer.

  17. Simulations of nanocrystals under pressure: Combining electronic enthalpy and linear-scaling density-functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Corsini, Niccolò R. C., E-mail: niccolo.corsini@imperial.ac.uk; Greco, Andrea; Haynes, Peter D. [Department of Physics and Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Hine, Nicholas D. M. [Department of Physics and Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Cavendish Laboratory, J. J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Molteni, Carla [Department of Physics, King's College London, Strand, London WC2R 2LS (United Kingdom)

    2013-08-28

    We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005)], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures, and their persistence on depressurization is assessed.
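
    For context, the electronic-enthalpy functional being minimized can be stated compactly; the following is our paraphrase of the cited construction, not an excerpt from the paper:

```latex
\min_{n}\; H[n] \;=\; E_{\mathrm{DFT}}[n] \;+\; P\,V[n],
\qquad
V[n] \;=\; \int \theta\!\big(n(\mathbf{r}) - n_{\mathrm{iso}}\big)\,\mathrm{d}\mathbf{r}
```

    Here $\theta$ is the Heaviside step function (smoothed in practice), so that $V[n]$ is the volume enclosed by the density isosurface at threshold $n_{\mathrm{iso}}$, the quantity whose defining parameters the authors calibrate.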

  18. Mechanical properties of granular materials: A variational approach to grain-scale simulations

    Energy Technology Data Exchange (ETDEWEB)

    Holtzman, R.; Silin, D.B.; Patzek, T.W.

    2009-01-15

    The mechanical properties of cohesionless granular materials are evaluated from grain-scale simulations. A three-dimensional pack of spherical grains is loaded by incremental displacements of its boundaries. The deformation is described as a sequence of equilibrium configurations. Each configuration is characterized by a minimum of the total potential energy. This minimum is computed using a modification of the conjugate gradient algorithm. Our simulations capture the nonlinear, path-dependent behavior of granular materials observed in experiments. Micromechanical analysis provides valuable insight into phenomena such as hysteresis, strain hardening and stress-induced anisotropy. Estimates of the effective bulk modulus, obtained with no adjustment of material parameters, are in agreement with published experimental data. The model is applied to evaluate the effects of hydrate dissociation in marine sediments. Weakening of the sediment is quantified as a reduction in the effective elastic moduli.
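
    The quasi-static scheme described above, where each load increment is followed by an energy minimization, has a simple toy analogue; the following sketch (made-up Hertz-like pair energy and parameters) shows only the conjugate-gradient minimization step:

```python
import numpy as np
from scipy.optimize import minimize

def pack_energy(xf, radii, k=1.0):
    """Total potential energy of a small pack of elastic spheres."""
    x = xf.reshape(-1, 3)
    e = 0.0
    for a in range(len(x)):
        for b in range(a + 1, len(x)):
            overlap = radii[a] + radii[b] - np.linalg.norm(x[a] - x[b])
            if overlap > 0.0:
                e += k * overlap ** 2.5   # Hertzian contact scaling
    return e

radii = np.ones(3)
x0 = np.array([[0.0, 0, 0], [1.9, 0, 0], [0.95, 1.6, 0]]).ravel()
eq = minimize(pack_energy, x0, args=(radii,), method="CG")
# In the real model, prescribed boundary displacements confine the pack,
# so the minimum is a loaded equilibrium rather than separated grains.
```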

  19. Particle physics and polyhedra proximity calculation for hazard simulations in large-scale industrial plants

    Science.gov (United States)

    Plebe, Alice; Grasso, Giorgio

    2016-12-01

    This paper describes a system developed for simulating flames inside the open-source 3D computer graphics software Blender, with the aim of analyzing, in virtual reality, hazard scenarios in large-scale industrial plants. The advantages of Blender are its high-resolution rendering of the very complex structure of large industrial plants and its embedded physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distance, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results for a real oil and gas refinery are presented.

  20. Modeling Coronal Mass Ejections with the Multi-Scale Fluid-Kinetic Simulation Suite

    International Nuclear Information System (INIS)

    Pogorelov, N. V.; Borovikov, S. N.; Wu, S. T.; Yalim, M. S.; Kryukov, I. A.; Colella, P. C.; Van Straalen, B.

    2017-01-01

    Solar eruptions and interacting solar wind streams are key drivers of geomagnetic storms and various related space weather disturbances that may have hazardous effects on space-borne and ground-based technological systems, as well as on human health. Coronal mass ejections (CMEs) and their interplanetary counterparts, interplanetary CMEs (ICMEs), are among the strongest disturbances and are therefore of great importance for space weather prediction. In this paper we show a few examples of how adaptive mesh refinement makes it possible to resolve the complex CME structure and its evolution in time as a CME propagates from the inner boundary to Earth. Simulations are performed with the Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS). (paper)

  1. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    Science.gov (United States)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). The addition of SUMMA to LIS will enable meaningful simulations with large multi-model ensembles and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant to water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple runtime environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all its dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that previously took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation and present examples of its use.

  2. Unsteady aerodynamics simulation of a full-scale horizontal axis wind turbine using CFD methodology

    International Nuclear Information System (INIS)

    Cai, Xin; Gu, Rongrong; Pan, Pan; Zhu, Jie

    2016-01-01

    Highlights: • A full-scale HAWT is simulated under operational conditions of wind shear and yaw. • The CFD method and sliding mesh are adopted to complete the calculation. • Thrust and torque of the blades reach their peaks and valleys at the same time under wind shear. • The wind turbine produces a yaw moment during the whole revolution in the yaw case. • The torques and thrusts of the three blades present cyclical changes. - Abstract: The aerodynamic performance of wind turbines is significantly influenced by the unsteady flow around the rotor blades. The unsteady aerodynamics of Horizontal Axis Wind Turbines (HAWTs) remains poorly understood because of the complex flow physics. In this study, the unsteady aerodynamic configuration of a full-scale HAWT is simulated with consideration of wind shear, tower shadow and yaw motion. The modeled wind turbine, which comprises a tapered tower, rotor overhang and tilted rotor shaft, is constructed with reference to successfully operated commercial wind turbines designed by NEG Micon and Vestas. A validated CFD method is utilized to analyze the unsteady aerodynamic characteristics that affect the performance of such a full-scale HAWT. A sliding-mesh approach is used to treat the interface between the static and moving parts of the flow field. The annual average wind velocity and the wind profile in the atmospheric boundary layer are applied as boundary conditions. Considering the effects of wind shear and tower shadow, the simulation results show that each blade reaches its maximum and minimum aerodynamic loads almost at the same time during the rotation cycle. The blade-tower interaction imposes a great impact on the power output performance. The wind turbine produces a yaw moment during the whole revolution, and the maximum aerodynamic loads appear at the upwind azimuth in the yaw computation case.

  3. Use of a large-scale rainfall simulator reveals novel insights into stemflow generation

    Science.gov (United States)

    Levia, D. F., Jr.; Iida, S. I.; Nanko, K.; Sun, X.; Shinohara, Y.; Sakai, N.

    2017-12-01

    Detailed knowledge of stemflow generation and its effects on both hydrological and biogeochemical cycling is important to achieve a holistic understanding of forest ecosystems. Field studies and a smaller set of experiments performed under laboratory conditions have increased our process-based knowledge of stemflow production. Building upon these earlier works, a large-scale rainfall simulator was employed to deepen our understanding of stemflow generation processes. The large-scale rainfall simulator provides a unique opportunity to examine a range of rainfall intensities under constant conditions, which is difficult in the field due to the variable nature of natural rainfall intensities. Stemflow generation and production were examined for three species - Cryptomeria japonica D. Don (Japanese cedar), Chamaecyparis obtusa (Siebold & Zucc.) Endl. (Japanese cypress), and Zelkova serrata Thunb. (Japanese zelkova) - under both leafed and leafless conditions at several rainfall intensities (15, 20, 30, 40, 50, and 100 mm h-1) using the large-scale rainfall simulator at the National Research Institute for Earth Science and Disaster Resilience (Tsukuba, Japan). Stemflow production, stemflow rates and funneling ratios were examined in relation to both rainfall intensity and canopy structure. Preliminary results indicate a dynamic and complex response of the funneling ratios of individual trees to different rainfall intensities among the species examined. This is partly the result of different canopy structures, hydrophobicity of vegetative surfaces, and differential wet-up processes across species and rainfall intensities. This presentation delves into these differences and attempts to distill them into generalizable patterns, which can advance our theories of stemflow generation processes and ultimately permit better stewardship of forest resources.
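
    The funneling ratio mentioned above has a standard definition (stemflow volume relative to the rain delivered to the trunk basal area alone); a small sketch, with unit handling as the only subtlety:

```python
def funneling_ratio(stemflow_litres, rainfall_mm, basal_area_m2):
    """Funneling ratio: stemflow volume relative to the rain that would have
    fallen on the trunk basal area alone (1 mm over 1 m^2 = 1 litre)."""
    return stemflow_litres / (rainfall_mm * basal_area_m2)

# funneling_ratio(30.0, 15.0, 0.05) -> 40.0: the canopy funnels 40x the
# rain that the trunk basal area alone would intercept.
```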

  4. Mixing of Process Heels, Process Solutions and Recycle Streams: Small-Scale Simulant

    International Nuclear Information System (INIS)

    Kaplan, D.I.

    2001-01-01

    The overall objective of this small-scale simulant mixing study was to identify the processes within the Hanford Site River Protection Project - Waste Treatment Plant (RPP-WTP) that may generate precipitates and to identify the types of precipitates formed. This information can be used to identify where mixtures of various solutions will cause precipitation of solids, potentially causing operational problems such as fouling equipment or increasing the amount of High Level Waste glass produced. Having this information will help guide protocols for flushing or draining tanks, mixing internal recycle streams, and mixing waste tank supernates. This report contains the discussion and thermodynamic chemical speciation modeling of the raw data.

  5. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    Science.gov (United States)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, it can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation.
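
    The Cournot equilibria that such models compute can be illustrated with the textbook two-firm case; this sketch is a deliberately simplified stand-in, not the dissertation's mixed-LCP formulation:

```python
def cournot_two_firms(a=100.0, b=1.0, c1=10.0, c2=20.0, iters=200):
    """Equilibrium of two Cournot firms facing linear inverse demand
    P = a - b*(q1 + q2) with constant marginal costs c1, c2, found by
    iterating the best-response functions to a fixed point."""
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = max(0.0, (a - c1 - b * q2) / (2.0 * b))
        q2 = max(0.0, (a - c2 - b * q1) / (2.0 * b))
    return q1, q2, a - b * (q1 + q2)      # quantities and clearing price

# Analytic check: q1 = (a - 2*c1 + c2)/(3b) = 33.33, q2 = 23.33, P = 43.33.
```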

  6. Simulating Rayleigh-Taylor (RT) instability using PPM hydrodynamics @scale on Roadrunner (u)

    Energy Technology Data Exchange (ETDEWEB)

    Woodward, Paul R [Los Alamos National Laboratory]; Dimonte, Guy [Los Alamos National Laboratory]; Rockefeller, Gabriel M [Los Alamos National Laboratory]; Fryer, Christopher L [Los Alamos National Laboratory]; Dai, W [Los Alamos National Laboratory]; Kares, R. J. [Los Alamos National Laboratory]

    2011-01-05

    The effect of initial conditions on the self-similar growth of the RT instability is investigated using a hydrodynamics code based on the piecewise-parabolic method (PPM). The PPM code was converted to the hybrid architecture of Roadrunner in order to perform the simulations at extremely high speed and spatial resolution. This paper describes the code conversion to the Cell processor, the scaling studies up to 12 CUs on Roadrunner, and results on the dependence of the RT growth rate on initial conditions. The relevance of the Roadrunner implementation of this PPM code to other existing and anticipated computer architectures is also discussed.

  7. Symplectic integrators for large scale molecular dynamics simulations: A comparison of several explicit methods

    International Nuclear Information System (INIS)

    Gray, S.K.; Noid, D.W.; Sumpter, B.G.

    1994-01-01

    We test the suitability of a variety of explicit symplectic integrators for molecular dynamics calculations on Hamiltonian systems. These integrators are extremely simple algorithms with low memory requirements, and appear to be well suited for large scale simulations. We first apply all the methods to a simple test case using the ideas of Berendsen and van Gunsteren. We then use the integrators to generate long time trajectories of a 1000 unit polyethylene chain. Calculations are also performed with two popular but nonsymplectic integrators. The most efficient integrators of the set investigated are deduced. We also discuss certain variations on the basic symplectic integration technique.
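
    Of the integrators such comparisons cover, velocity Verlet is the canonical explicit symplectic example; a generic sketch (the paper's test systems and exact variants are not reproduced here):

```python
def velocity_verlet(x, v, force, m, dt, nsteps):
    """Velocity Verlet: the simplest widely used explicit symplectic
    integrator for Hamiltonian dynamics."""
    f = force(x)
    for _ in range(nsteps):
        v_half = v + 0.5 * dt * f / m   # half-kick
        x = x + dt * v_half             # drift
        f = force(x)
        v = v_half + 0.5 * dt * f / m   # half-kick
    return x, v

# e.g. a harmonic bond: velocity_verlet(1.0, 0.0, lambda x: -x, 1.0, 0.01, 1000)
```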

  8. Hanford Waste Vitrification Plant full-scale feed preparation testing with water and process simulant slurries

    International Nuclear Information System (INIS)

    Gaskill, J.R.; Larson, D.E.; Abrigo, G.P.

    1996-03-01

    The Hanford Waste Vitrification Plant was intended to convert selected, pretreated defense high-level waste and transuranic waste from the Hanford Site into a borosilicate glass. A full-scale testing program was conducted with nonradioactive waste simulants to develop information for process and equipment design of the feed-preparation system. The equipment systems tested included the Slurry Receipt and Adjustment Tank, Slurry Mix Evaporator, and Melter-Feed Tank. The areas of data generation included heat transfer (boiling, heating, and cooling), slurry mixing, slurry pumping and transport, slurry sampling, and process chemistry. 13 refs., 129 figs., 68 tabs.

  9. Pilot scale processing of simulated Savannah River Site high level radioactive waste

    International Nuclear Information System (INIS)

    Hutson, N.D.; Zamecnik, J.R.; Ritter, J.A.; Carter, J.T.

    1991-01-01

    The Savannah River Laboratory operates the Integrated DWPF Melter System (IDMS), which is a pilot-scale test facility used in support of the start-up and operation of the US Department of Energy's Defense Waste Processing Facility (DWPF). Specifically, the IDMS is used in the evaluation of the DWPF melter and its associated feed preparation and offgas treatment systems. This article provides a general overview of some of the test work that has been conducted in the IDMS facility. The chemistry associated with the chemical treatment of the sludge (via formic acid adjustment) is discussed. Operating experiences with simulated sludge containing high levels of nitrite, mercury, and noble metals are summarized.

  10. Aggregated Representation of Distribution Networks for Large-Scale Transmission Network Simulations

    DEFF Research Database (Denmark)

    Göksu, Ömer; Altin, Müfit; Sørensen, Poul Ejnar

    2014-01-01

    As a common practice in large-scale transmission network analysis, distribution networks have been represented as aggregated loads. However, with the increasing share of distributed generation, especially wind and solar power, in distribution networks, it has become necessary to include the distributed generation within those analyses. In this paper a practical methodology for obtaining the aggregated behaviour of the distributed generation is proposed. The methodology, which is based on the use of the IEC standard wind turbine models, is applied to a benchmark distribution network via simulations.

  11. Local-Scale Simulations of Nucleate Boiling on Micrometer Featured Surfaces: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Sitaraman, Hariswaran [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Moreno, Gilberto [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Narumanchi, Sreekant V [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dede, Ercan M. [Toyota Research Institute of North America; Joshi, Shailesh N. [Toyota Research Institute of North America; Zhou, Feng [Toyota Research Institute of North America

    2017-08-03

    A high-fidelity computational fluid dynamics (CFD)-based model for bubble nucleation of the refrigerant HFE7100 on micrometer-featured surfaces is presented in this work. The single-fluid incompressible Navier-Stokes equations, along with energy transport and natural convection effects are solved on a featured surface resolved grid. An a priori cavity detection method is employed to convert raw profilometer data of a surface into well-defined cavities. The cavity information and surface morphology are represented in the CFD model by geometric mesh deformations. Surface morphology is observed to initiate buoyancy-driven convection in the liquid phase, which in turn results in faster nucleation of cavities. Simulations pertaining to a generic rough surface show a trend where smaller size cavities nucleate with higher wall superheat. This local-scale model will serve as a self-consistent connection to larger device scale continuum models where local feature representation is not possible.

  12. Local-Scale Simulations of Nucleate Boiling on Micrometer-Featured Surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Sitaraman, Hariswaran [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Moreno, Gilberto [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Narumanchi, Sreekant V [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dede, Ercan M. [Toyota Research Institute of North America; Joshi, Shailesh N. [Toyota Research Institute of North America; Zhou, Feng [Toyota Research Institute of North America

    2017-07-12

    A high-fidelity computational fluid dynamics (CFD)-based model for bubble nucleation of the refrigerant HFE7100 on micrometer-featured surfaces is presented in this work. The single-fluid incompressible Navier-Stokes equations, along with energy transport and natural convection effects are solved on a featured surface resolved grid. An a priori cavity detection method is employed to convert raw profilometer data of a surface into well-defined cavities. The cavity information and surface morphology are represented in the CFD model by geometric mesh deformations. Surface morphology is observed to initiate buoyancy-driven convection in the liquid phase, which in turn results in faster nucleation of cavities. Simulations pertaining to a generic rough surface show a trend where smaller size cavities nucleate with higher wall superheat. This local-scale model will serve as a self-consistent connection to larger device scale continuum models where local feature representation is not possible.

  13. Full-scale borehole sealing test in salt under simulated downhole conditions. Volume 2

    International Nuclear Information System (INIS)

    Scheetz, B.E.; Licastro, P.H.; Roy, D.M.

    1986-05-01

    Large-scale testing of the brine permeability of a salt/grout sample designed to simulate a borehole plug was conducted. The results of these tests showed that a quantity of fluid equivalent to a permeability of 3 microdarcys was collected during the course of the test. This flow rate was used to estimate the smooth-bore aperture. Details of this test were presented in Volume 1 of this report. This report, Volume 2, covers post-test characterization, including a detailed study of the salt/grout interface, as well as determination of the physical/mechanical properties of grout samples molded at Terra Tek, Inc. at the time of the large-scale test. Additional studies include heat of hydration, radial stress, and longitudinal volume changes for an equivalent grout mixture.

  14. Nano-scale structure in membranes in relation to enzyme action - computer simulation vs. experiment

    DEFF Research Database (Denmark)

    Høyrup, P.; Jørgensen, Kent; Mouritsen, O.G.

    2002-01-01

    There is increasing theoretical and experimental evidence indicating that small-scale domain structure and dynamical heterogeneity develop in lipid membranes as a consequence of the the underlying phase transitions and the associated density and composition fluctuations. The relevant coherence...... lengths are in the nano-meter range. The nano-scale structure is believed to be important for controlling the activity of enzymes, specifically phospholipases, which act at bilayer membranes. We propose here a lattice-gas statistical mechanical model with appropriate dynamics to account for the non......-equilibrium action of the enzyme phospholipase A(2) which hydrolyses lipid-bilayer substrates. The resulting product molecules are assumed to induce local variations in the membrane interfacial pressure. Monte Carlo simulations of the non-equilibrium properties of the model for one-component as well as binary lipid...

  15. Particle simulation of pedestal buildup and study of pedestal scaling law in a quiescent plasma edge

    International Nuclear Information System (INIS)

    Chang, C.S.; Ku, S.; Weitzner, H.; Groebner, R.; Osborne, T.

    2005-01-01

    A discrete guiding-center particle code, XGC (X-point included Guiding Center code), is used to study pedestal buildup and sheared $E_r$ formation in the quiescent plasma edge of a diverted tokamak. A neoclassical pedestal scaling law has been deduced, which shows that the density pedestal width scales as $\Delta \propto T_i^{1/2} M^{1/2} / B_t$, where $T_i$ is the ion temperature, $M$ is the ion mass, and $B_t$ is the toroidal magnetic field. Dependence on the pedestal density or the poloidal magnetic field is found to be much weaker. The ion temperature pedestal is not as well defined as the density pedestal. The neoclassical electron transport rate, including the collisional heat exchange rate with ions, is too slow to be considered on the time scale of the simulation ($\sim 10$ ms). (author)

  16. Simulation research on the process of large scale ship plane segmentation intelligent workshop

    Science.gov (United States)

    Xu, Peng; Liao, Liangchuang; Zhou, Chao; Xue, Rui; Fu, Wei

    2017-04-01

    The large-scale ship plane segmentation intelligent workshop is a new concept, with no prior research in related fields in China or abroad. The existing mode of production, at industry 2.0 or partially industry 3.0, must be transformed from "human brain analysis and judgment + machine manufacturing" to "machine analysis and judgment + machine manufacturing". In this transformation a great number of decisions must be made on the management and technology sides, such as the evolution of the workshop structure, the development of intelligent equipment, and changes in the business model; with them comes the reformation of the whole workshop. The process simulation in this project verifies the general layout and process flow of the large-scale ship plane segmentation intelligent workshop and analyzes the workshop's working efficiency, which is significant for the next step of the transformation.

  17. Scaling up watershed model parameters--Flow and load simulations of the Edisto River Basin

    Science.gov (United States)

    Feaster, Toby D.; Benedict, Stephen T.; Clark, Jimmy M.; Bradley, Paul M.; Conrads, Paul

    2014-01-01

    The Edisto River is the longest and largest river system completely contained in South Carolina and is one of the longest free-flowing blackwater rivers in the United States. The Edisto River basin also has fish-tissue mercury concentrations that are some of the highest recorded in the United States. As part of an effort by the U.S. Geological Survey to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River basin, analyses and simulations of the hydrology of the Edisto River basin were made with the topography-based hydrological model (TOPMODEL). The potential for scaling up a previous application of TOPMODEL for the McTier Creek watershed, a small headwater catchment of the Edisto River basin, was assessed. Scaling up was done in a step-wise process, beginning with applying the calibration parameters, meteorological data, and topographic wetness index data from the McTier Creek TOPMODEL to the Edisto River TOPMODEL. Additional changes were made in subsequent simulations, culminating in the best simulation, which included meteorological and topographic wetness index data from the Edisto River basin and updated values for some of the TOPMODEL calibration parameters. Comparison of goodness-of-fit statistics between measured and simulated daily mean streamflow for the two models showed that, with calibration, the Edisto River TOPMODEL produced slightly better results than the McTier Creek model, despite the significant difference in drainage-area size at the outlet locations of the two models (30.7 and 2,725 square miles, respectively). Along with the TOPMODEL hydrologic simulations, a visualization tool (the Edisto River Data Viewer) was developed to help assess trends and influencing variables in the stream ecosystem. Incorporated into the visualization tool were the water-quality load models TOPLOAD, TOPLOAD-H, and LOADEST.
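
    The topographic wetness index transferred between the two models is the standard TOPMODEL quantity ln(a / tan(beta)); a minimal sketch (array layout and units are our assumptions):

```python
import numpy as np

def topographic_wetness_index(upslope_area_m2, slope_rad, cell_size_m):
    """TOPMODEL wetness index ln(a / tan(beta)): a is the specific upslope
    area per unit contour length, beta the local slope (clipped so tan > 0)."""
    a = upslope_area_m2 / cell_size_m
    beta = np.clip(slope_rad, 1e-6, None)
    return np.log(a / np.tan(beta))
```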

  18. Simulation of pesticide dissipation in soil at the catchment scale over 23 years

    Science.gov (United States)

    Queyrel, Wilfried; Florence, Habets; Hélène, Blanchoud; Céline, Schott; Laurine, Nicola

    2014-05-01

    Pesticide applications lead to contamination risks of environmental compartments, causing harmful effects on water resources used for drinking water. Pesticide fate modeling is assumed to be a relevant approach to study pesticide dissipation at the catchment scale. Simulations of five herbicides (atrazine, simazine, isoproturon, chlortoluron, metolachlor) and one metabolite (DEA) were carried out with the crop model STICS over a 23-year period (1990-2012). The model application was performed using real agricultural practices over a small rural catchment (104 km²) located 60 km east of Paris (France). Model applications were established for two crops: wheat and maize. The objectives of the study were i) to highlight the main processes involved in pesticide fate and transfer over the long term; ii) to assess the influence of the dynamics of the remaining mass of pesticide in soil on transfer; iii) to determine the most sensitive parameters related to pesticide losses by leaching over a 23-year period. The simulated data related to crop yield, water transfer, nitrate and pesticide concentrations were first compared to observations over the 23-year period, where measurements were available at the catchment scale. Then, the evaluation of the main processes related to pesticide fate and transfer was performed using long-term simulations at a yearly time step and monthly average variations. Analyses of the monthly average variations focused on the impact of pesticide application, water transfer and pesticide transformation on pesticide leaching. The evolution of the remaining mass of pesticide in soil, including the mobile (liquid) phase and the non-mobile phase (adsorbed at equilibrium and non-equilibrium), was studied to evaluate the impact of pesticide stored in soil on the fraction available for leaching. Finally, a sensitivity test was performed to identify the most sensitive parameters with respect to the remaining mass of pesticide in soil and leaching. The findings of the
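
    At its simplest, the remaining-mass dynamics described above can be approximated by first-order degradation combined with linear equilibrium sorption. The sketch below illustrates only that textbook approximation (it is not the STICS formulation, and all parameter values are hypothetical):

        import math

        def remaining_mass(m0, half_life_days, days):
            """First-order dissipation: m(t) = m0 * exp(-k t), k = ln2 / t_half."""
            k = math.log(2.0) / half_life_days
            return m0 * math.exp(-k * days)

        def dissolved_fraction(kd, theta, rho_b):
            """Fraction of soil-phase mass in solution under linear sorption
            (Kd in L/kg, volumetric water content theta, bulk density rho_b in kg/L)."""
            return theta / (theta + rho_b * kd)

        m = remaining_mass(m0=1000.0, half_life_days=60.0, days=365.0)  # g/ha applied
        f = dissolved_fraction(kd=0.5, theta=0.3, rho_b=1.4)
        print(f"mass after 1 year: {m:.1f} g/ha, leachable fraction: {f:.2f}")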

  19. The Study of Non-Linear Acceleration of Particles during Substorms Using Multi-Scale Simulations

    International Nuclear Information System (INIS)

    Ashour-Abdalla, Maha

    2011-01-01

    To understand particle acceleration during magnetospheric substorms we must consider the problem on multiple scales, ranging from large-scale changes in the entire magnetosphere to the microphysics of wave-particle interactions. In this paper we present two examples that demonstrate the complexity of substorm particle acceleration and its multi-scale nature. The first substorm provided us with an excellent example of ion acceleration. On March 1, 2008 four THEMIS spacecraft were in a line extending from 8 R_E to 23 R_E in the magnetotail during a very large substorm during which ions were accelerated to >500 keV. We used a combination of global magnetohydrodynamic and large-scale kinetic simulations to model the ion acceleration and found that the ions gained energy through non-adiabatic trajectories across the substorm electric field in a narrow region extending across the magnetotail between x = -10 R_E and x = -15 R_E. In this strip, called the 'wall region', the ions move rapidly in azimuth and gain hundreds of keV. In the second example we studied the acceleration of electrons associated with a pair of dipolarization fronts during a substorm on February 15, 2008. During this substorm three THEMIS spacecraft were grouped in the near-Earth magnetotail (x ∼ -10 R_E) and observed electron acceleration of >100 keV accompanied by intense plasma waves. We used the MHD simulations and analytic theory to show that adiabatic motion (betatron and Fermi acceleration) was insufficient to account for the electron acceleration and that kinetic processes associated with the plasma waves were important.

  20. Cosmological hydrodynamical simulations of galaxy clusters: X-ray scaling relations and their evolution

    Science.gov (United States)

    Truong, N.; Rasia, E.; Mazzotta, P.; Planelles, S.; Biffi, V.; Fabjan, D.; Beck, A. M.; Borgani, S.; Dolag, K.; Gaspari, M.; Granato, G. L.; Murante, G.; Ragone-Figueroa, C.; Steinborn, L. K.

    2018-03-01

    We analyse cosmological hydrodynamical simulations of galaxy clusters to study the X-ray scaling relations between total masses and observable quantities such as X-ray luminosity, gas mass, X-ray temperature, and Y_X. Three sets of simulations are performed with an improved version of the smoothed particle hydrodynamics GADGET-3 code. These consider the following: non-radiative gas, star formation and stellar feedback, and the addition of feedback by active galactic nuclei (AGN). We select clusters with M_500 > 10^14 M_⊙ E(z)^-1, mimicking the typical selection of Sunyaev-Zeldovich samples. This permits a mass range large enough to enable robust fitting of the relations even at z ∼ 2. The results of the analysis show a general agreement with observations. The values of the slope of the mass-gas mass and mass-temperature relations at z = 2 are 10 per cent lower than at z = 0, due to the applied mass selection in the former case and to the effect of early mergers in the latter. We investigate the impact of the slope variation on the study of the evolution of the normalization. We conclude that cosmological studies through scaling relations should be limited to the redshift range z = 0-1, where we find that the slope, the scatter, and the covariance matrix of the relations are stable. The scaling between mass and Y_X is confirmed to be the most robust relation, being almost independent of the gas physics. At higher redshifts, the scaling relations are sensitive to the inclusion of AGNs, which influences low-mass systems. The detailed study of these objects will be crucial to evaluate the AGN effect on the ICM.
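
    Scaling relations of this kind are usually fit as power laws in log space, M ∝ A X^α, so that the evolution of the slope can be tracked across redshift bins. A minimal sketch of such a fit (illustrative only; the paper's actual fitting procedure, scatter model, and covariance treatment are more involved, and all sample values below are hypothetical):

        import numpy as np

        def fit_scaling_relation(x, m):
            """Least-squares power-law fit log10(M) = log10(A) + alpha*log10(X).
            Returns (normalization A, slope alpha)."""
            logx, logm = np.log10(x), np.log10(m)
            alpha, logA = np.polyfit(logx, logm, deg=1)
            return 10.0 ** logA, alpha

        # Hypothetical cluster sample: X-ray temperature (keV) vs M500 (1e14 Msun)
        kT   = np.array([2.1, 3.5, 4.8, 6.2, 8.0])
        m500 = np.array([1.2, 2.6, 4.1, 6.3, 9.5])
        A, alpha = fit_scaling_relation(kT, m500)
        print(f"M500 ~ {A:.2f} * kT^{alpha:.2f}")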

  1. Scale-dependent performances of CMIP5 earth system models in simulating terrestrial vegetation carbon

    Science.gov (United States)

    Jiang, L.; Luo, Y.; Yan, Y.; Hararuk, O.

    2013-12-01

    Mitigation of global changes will depend on reliable projections of future conditions. As the major tools to predict future climate, Earth System Models (ESMs) used in the Coupled Model Intercomparison Project Phase 5 (CMIP5) for the IPCC Fifth Assessment Report have incorporated carbon cycle components, which account for the important fluxes of carbon between the ocean, atmosphere, and terrestrial biosphere carbon reservoirs, and are therefore expected to provide more detailed and more certain projections. However, ESMs are never perfect; evaluating them can help us identify uncertainties in prediction and set priorities for model development. In this study, we benchmarked carbon in live vegetation in the terrestrial ecosystems simulated by 19 ESMs from CMIP5 against an observationally estimated data set of the global vegetation carbon pool, 'Olson's Major World Ecosystem Complexes Ranked by Carbon in Live Vegetation: An Updated Database Using the GLC2000 Land Cover Product' by Gibbs (2006). Our aim is to evaluate the ability of ESMs to reproduce the global vegetation carbon pool at different scales and to identify possible causes of the biases. We found that the performance of the CMIP5 ESMs is very scale-dependent. CESM1-BGC, CESM1-CAM5, CESM1-FASTCHEM and CESM1-WACCM, together with NorESM1-M and NorESM1-ME (which share the same model structure), produce global sums very similar to the observational data but usually perform poorly at the grid-cell and biome scales. In contrast, MIROC-ESM and MIROC-ESM-CHEM simulate the grid-cell and biome scales best but show larger differences in the global sums than the others. Our results will help improve CMIP5 ESMs for more reliable prediction.
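
    The scale dependence reported here amounts to comparing models at two aggregation levels: the global sum can match while per-grid-cell errors stay large. A minimal sketch of that two-level comparison (array shapes and values are illustrative, not the CMIP5 data):

        import numpy as np

        def benchmark(model, obs):
            """Compare a vegetation-carbon field against observations at two scales:
            relative error of the global sum, and grid-cell RMSE."""
            global_err = (model.sum() - obs.sum()) / obs.sum()
            cell_rmse = np.sqrt(np.mean((model - obs) ** 2))
            return global_err, cell_rmse

        rng = np.random.default_rng(0)
        obs = rng.uniform(0.0, 20.0, size=(180, 360))   # kgC/m^2, hypothetical
        model = obs[::-1, :]                             # right total, wrong map
        g, r = benchmark(model, obs)
        print(f"global-sum error: {g:+.1%}, grid-cell RMSE: {r:.2f} kgC/m^2")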

  2. INVESTIGATING SUSPENSION OF MST, CST, AND SIMULATED SLUDGE SLURRIES IN A PILOT-SCALE WASTE TANK

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, M.; Qureshi, Z.; Restivo, M.; Steeper, T.; Williams, M.

    2011-05-24

    The Small Column Ion Exchange (SCIX) process is being developed to remove cesium, strontium, and actinides from Savannah River Site (SRS) Liquid Waste using an existing waste tank (i.e., Tank 41H) to house the process. Savannah River National Laboratory (SRNL) is conducting pilot-scale mixing tests to determine the pump requirements for suspending and resuspending monosodium titanate (MST), crystalline silicotitanate (CST), and simulated sludge. The purpose of this pilot-scale testing is to confirm that the pumps can resuspend the settled MST, CST, and simulated sludge particles so that they can be removed from the tank, and suspend the MST so it can contact strontium and actinides. The pilot-scale tank is a 1/10.85 linear-scale model of Tank 41H. The tank diameter, tank liquid level, pump nozzle diameter, pump elevation, and cooling coil diameter are all 1/10.85 of their dimensions in Tank 41H. The pump locations correspond to the locations proposed for Tank 41H by the SCIX program (Risers B5, B3, and B1). Previous testing showed that three Submersible Mixer Pumps (SMPs) will provide sufficient power to initially suspend MST in an SRS waste tank, and to resuspend MST that has settled in a waste tank at a nominal 45 °C for four weeks. The conclusions from this analysis are: (1) Three SMPs will be able to resuspend more than 99.9% of the MST and CST that has settled for four weeks at a nominal 45 °C; the testing shows the required pump discharge velocity is 84% of the maximum discharge velocity of the pump. (2) Three SMPs will be able to resuspend more than 99.9% of the MST, CST, and simulated sludge that has settled for four weeks at a nominal 45 °C; the testing shows the required pump discharge velocity is 82% of the maximum discharge velocity of the pump. (3) A contact time of 6-12 hours is needed for strontium sorption by MST in a jet-mixed tank with cooling coils, which is consistent with bench-scale testing and actinide removal process (ARP) operation.

  3. Electron Debye scale Kelvin-Helmholtz instability: Electrostatic particle-in-cell simulations

    International Nuclear Information System (INIS)

    Lee, Sang-Yun; Lee, Ensang; Kim, Khan-Hyuk; Lee, Dong-Hun; Seon, Jongho; Jin, Ho

    2015-01-01

    In this paper, we investigated the electron Debye scale Kelvin-Helmholtz (KH) instability using two-dimensional electrostatic particle-in-cell simulations. We introduced a velocity shear layer with a thickness comparable to the electron Debye length and examined the generation of the KH instability. The KH instability occurs in a similar manner as observed for KH instabilities at fluid or ion scales, producing surface waves and rolled-up vortices. The strength and growth rate of the electron Debye scale KH instability are affected by the structure of the velocity shear layer: the strength depends on the magnitude of the velocity, and the growth rate on the velocity gradient of the shear layer. However, the development of the electron Debye scale KH instability is mainly determined by the electric field generated by charge separation. Significant mixing of electrons occurs across the shear layer, and a fraction of electrons can penetrate deeply into the opposite side, fairly far from the vortices across the shear layer.

  4. Simulating flow around scaled model of a hypersonic vehicle in wind tunnel

    Science.gov (United States)

    Markova, T. V.; Aksenov, A. A.; Zhluktov, S. V.; Savitsky, D. V.; Gavrilov, A. D.; Son, E. E.; Prokhorov, A. N.

    2016-11-01

    A prospective hypersonic HEXAFLY aircraft is considered in the present paper. In order to obtain the aerodynamic characteristics of a new construction design of the aircraft, experiments with a scaled model have been carried out in a wind tunnel under different conditions. The runs were performed at different angles of attack, with and without hydrogen combustion in the scaled propulsion engine. However, the measured physical quantities do not provide all the information about the flowfield. Numerical simulation can complement the experimental data as well as reduce the number of wind tunnel experiments. Besides that, reliable CFD software can be used to calculate the aerodynamic characteristics of any possible design of the full-scale aircraft under different operating conditions. The reliability of the numerical predictions must be confirmed in a verification study of the software. The present work is aimed at numerical investigation of the flowfield around and inside the scaled model of the HEXAFLY-CIAM module under wind tunnel conditions. A cold run (without combustion) was selected for this study. The calculations are performed in the FlowVision CFD software. The flow characteristics are compared against the available experimental data. The verification study carried out confirms the capability of the FlowVision CFD software to calculate the flows discussed.

  5. Scaling analysis and instantons for thermally assisted tunneling and quantum Monte Carlo simulations

    Science.gov (United States)

    Jiang, Zhang; Smelyanskiy, Vadim N.; Isakov, Sergei V.; Boixo, Sergio; Mazzola, Guglielmo; Troyer, Matthias; Neven, Hartmut

    2017-01-01

    We develop an instantonic calculus to derive an analytical expression for the thermally assisted tunneling decay rate of a metastable state in a fully connected quantum spin model. The tunneling decay problem can be mapped onto the Kramers escape problem of a classical random dynamical field. This dynamical field is simulated efficiently by path-integral quantum Monte Carlo (QMC). We show analytically that the exponential scaling with the number of spins of the thermally assisted quantum tunneling rate and the escape rate of the QMC process are identical. We relate this effect to the existence of a dominant instantonic tunneling path. The instanton trajectory is described by nonlinear dynamical mean-field theory equations for a single-site magnetization vector, which we solve exactly. Finally, we derive scaling relations for the "spiky" barrier shape when the spin tunneling and QMC rates scale polynomially with the number of spins N while a purely classical over-the-barrier activation rate scales exponentially with N .
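
    The headline analytical result can be stated compactly (a hedged paraphrase; the paper derives the common decay constant from the action of the dominant instanton path):

        % Thermally assisted tunneling rate and QMC escape rate share
        % the same exponential scaling in the number of spins N
        \Gamma_{\mathrm{tunnel}} \sim e^{-aN}, \qquad \Gamma_{\mathrm{QMC}} \sim e^{-aN}

    so the QMC escape rate tracks the physical tunneling rate up to a prefactor, while a purely classical over-the-barrier rate decays with a different exponent.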

  6. Hydraulic and thermal conduction phenomena in soils at the particle-scale: Towards realistic FEM simulations

    International Nuclear Information System (INIS)

    Narsilio, G A; Yun, T S; Kress, J; Evans, T M

    2010-01-01

    This paper summarizes a method to characterize conduction properties in soils at the particle scale. The method sets the basis for an alternative way to estimate conduction parameters such as thermal conductivity and hydraulic conductivity, with potential application to hard-to-obtain samples, where traditional experimental testing on large enough specimens becomes much more expensive. The technique is exemplified using 3D synthetic grain packings generated with discrete element methods, from which 3D granular images are constructed. The images are then imported into finite element analyses to solve the corresponding governing partial differential equations of hydraulic and thermal conduction. High performance computing is employed to meet the demands of the 3D numerical calculations over the complex geometrical domains. The effects of void ratio and inter-particle contacts on hydraulic and thermal conduction are explored. Laboratory measurements support the numerically obtained results and validate the viability of the new methods used herein. The integration of imaging with rigorous numerical simulations at the pore scale also enables fundamental observation of particle-scale mechanisms of macro-scale manifestation.

  7. Versatile single-molecule multi-color excitation and detection fluorescence setup for studying biomolecular dynamics

    KAUST Repository

    Sobhy, M. A.; Elshenawy, M. M.; Takahashi, Masateru; Whitman, B. H.; Walter, N. G.; Hamdan, S. M.

    2011-01-01

    Single-molecule fluorescence imaging is at the forefront of tools applied to study biomolecular dynamics both in vitro and in vivo. The ability of the single-molecule fluorescence microscope to conduct simultaneous multi-color excitation

  8. Large scale statistics for computational verification of grain growth simulations with experiments

    International Nuclear Information System (INIS)

    Demirel, Melik C.; Kuprat, Andrew P.; George, Denise C.; Straub, G.K.; Misra, Amit; Alexander, Kathleen B.; Rollett, Anthony D.

    2002-01-01

    It is known that by controlling microstructural development, desirable properties of materials can be achieved. The main objective of our research is to understand and control interface-dominated material properties, and finally, to verify experimental results with computer simulations. We have previously shown a strong similarity between small-scale grain growth experiments and anisotropic three-dimensional simulations obtained from Electron Backscattered Diffraction (EBSD) measurements. Using the same technique, we obtained 5170-grain data from an aluminum film (120 µm thick) with a columnar grain structure. The experimentally obtained starting microstructure and grain boundary properties are the input for the three-dimensional grain growth simulation. In the computational model, minimization of the interface energy is the driving force for grain boundary motion. The computed evolved microstructure is compared with the final experimental microstructure after annealing at 550 °C. Characterization of the structures and properties of grain boundary networks (GBN) to produce desirable microstructures is one of the fundamental problems in interface science. There is ongoing research into the development of new experimental and analytical techniques to obtain and synthesize information related to GBN. The grain boundary energy and mobility data were characterized by the Electron Backscattered Diffraction (EBSD) technique and Atomic Force Microscopy (AFM) observations (i.e., for ceramic MgO and for the metal Al). Grain boundary energies are extracted from triple junction (TJ) geometry considering the local equilibrium condition at TJs. Relative boundary mobilities were also extracted from TJs through a statistical/multiscale analysis. Additionally, there are recent theoretical developments of grain boundary evolution in microstructures. In this paper, a new technique for three-dimensional grain growth simulations was used to simulate interface migration.

  9. Improving plot- and regional-scale crop models for simulating impacts of climate variability and extremes

    Science.gov (United States)

    Tao, F.; Rötter, R.

    2013-12-01

    Many studies on global climate report that climate variability is increasing, with more frequent and intense extreme events [1]. There are quite large uncertainties in both the plot- and regional-scale models in simulating impacts of climate variability and extremes on crop development, growth and productivity [2,3]. One key to reducing the uncertainties is better exploitation of experimental data to eliminate crop model deficiencies and develop better algorithms that more adequately capture the impacts of extreme events, such as high temperature and drought, on crop performance [4,5]. In the present study, in a first step, the inter-annual variability in wheat yield and climate from 1971 to 2012 in Finland was investigated. Using statistical approaches, the impacts of climate variability and extremes on wheat growth and productivity were quantified. In a second step, a plot-scale model, WOFOST [6], and a regional-scale crop model, MCWLA [7], were calibrated and validated, and applied to simulate wheat growth and yield variability from 1971 to 2012. Next, the impacts of high temperature stress, cold damage, and drought stress on crop growth and productivity estimated with the statistical approaches and with the crop simulation models WOFOST and MCWLA were compared. Then, the impact mechanisms of climate extremes on crop growth and productivity in the WOFOST and MCWLA models were identified, and subsequently the various algorithms and impact functions were fitted against the long-term crop trial data. Finally, the impact mechanisms, algorithms and functions in the WOFOST and MCWLA models were improved to better simulate the impacts of climate variability and extremes, particularly high temperature stress, cold damage and drought stress, for location-specific and large-area climate impact assessments. Our studies provide a good example of how to improve, in parallel, the plot- and regional-scale models for simulating impacts of climate variability and extremes, as needed for

  10. Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales

    Data.gov (United States)

    National Aeronautics and Space Administration — Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales A move is currently...

  11. Production of lightning NOx and its vertical distribution calculated from three-dimensional cloud-scale chemical transport model simulations

    KAUST Repository

    Ott, Lesley E.; Pickering, Kenneth E.; Stenchikov, Georgiy L.; Allen, Dale J.; DeCaria, Alex J.; Ridley, Brian; Lin, Ruei-Fong; Lang, Stephen; Tao, Wei-Kuo

    2010-01-01

    A three-dimensional (3-D) cloud-scale chemical transport model that includes a parameterized source of lightning NOx on the basis of observed flash rates has been used to simulate six midlatitude and subtropical thunderstorms observed during four

  12. Parallelization of a beam dynamics code and first large scale radio frequency quadrupole simulations

    Directory of Open Access Journals (Sweden)

    J. Xu

    2007-01-01

    The design and operation support of hadron (proton and heavy-ion) linear accelerators require substantial use of beam dynamics simulation tools. The beam dynamics code TRACK was originally developed at Argonne National Laboratory (ANL) to fulfill the special requirements of the rare isotope accelerator (RIA) accelerator systems. From the beginning, the code has been developed to be useful in the three stages of a linear accelerator project, namely, the design, commissioning, and operation of the machine. To realize this concept, the code has unique features such as end-to-end simulations from the ion source to the final beam destination and automatic procedures for tuning of a multiple-charge-state heavy-ion beam. The TRACK code has become a general beam dynamics code for hadron linacs and has found wide applications worldwide. Until recently, the code remained serial except for a simple parallelization used for the simulation of multiple seeds to study machine errors. To speed up computation, the TRACK Poisson solver has been parallelized. This paper discusses different parallel models for solving the Poisson equation, with the primary goal of extending the scalability of the code onto 1024 and more processors of the new generation of supercomputers known as BlueGene (BG/L). Domain decomposition techniques have been adapted and incorporated into the parallel version of the TRACK code. To demonstrate the new capabilities of the parallelized TRACK code, the dynamics of a 45 mA proton beam represented by 10^{8} particles has been simulated through the 325 MHz radio frequency quadrupole and initial accelerator section of the proposed FNAL proton driver. The results show the benefits and advantages of large-scale parallel computing in beam dynamics simulations.
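
    Domain decomposition of a Poisson solve, as adopted in the parallel TRACK code, splits the grid among processes that exchange boundary (halo) values each iteration. A schematic serial emulation of the idea in one dimension (illustrative only; TRACK's actual solver and decomposition are far more elaborate):

        import numpy as np

        def jacobi_poisson_1d(rho, h, n_domains=4, iters=20000):
            """Solve u'' = -rho on [0,1] with u(0)=u(1)=0 by Jacobi iteration,
            sweeping subdomains separately and reading halo values from the
            previous iterate, mimicking a domain-decomposed parallel solver."""
            n = len(rho)
            u = np.zeros(n)
            bounds = np.linspace(0, n, n_domains + 1, dtype=int)
            for _ in range(iters):
                new = u.copy()
                for d in range(n_domains):
                    lo, hi = max(bounds[d], 1), min(bounds[d + 1], n - 1)
                    # interior update; u[lo-1] and u[hi] act as halo values
                    new[lo:hi] = 0.5 * (u[lo - 1:hi - 1] + u[lo + 1:hi + 1]
                                        + h * h * rho[lo:hi])
                u = new
            return u

        n = 65
        x = np.linspace(0.0, 1.0, n)
        u = jacobi_poisson_1d(np.ones(n), h=x[1] - x[0])
        print(f"max u = {u.max():.4f} (analytic maximum x(1-x)/2 = 0.1250)")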

  13. Simulations of dislocations dynamics at a mesoscopic scale: a study of plastic flow

    International Nuclear Information System (INIS)

    Devincre, Benoit

    1993-01-01

    This work is concerned with the numerical modelling of the plastic flow of crystalline materials. A new simulation technique is proposed to simulate dislocation dynamics in two and three dimensions, in an isotropic elastic continuum. The space and time scales used (≈10⁻⁶ m and 10⁻⁹ s) allow the elementary properties of dislocations, their short- and long-range interactions, their collective properties, and the slip geometry to be taken into account. This original method is able to reproduce the inherent heterogeneity of plastic flow, the self-organization properties of the dislocation microstructures, and the corresponding mechanical properties. In two dimensions, the simulations of cyclic deformation lead to the formation of periodic arrays of dipolar dislocation walls. These configurations are examined and discussed. A phenomenological model is proposed which predicts their characteristic wavelength as a function of the applied stress and dislocation density. A striking resemblance between the simulated behaviour and experimental data is emphasized. In three dimensions, the simulations are more realistic and can be compared directly with experimental data. They are, however, restricted to small plastic strains, of the order of 10⁻³. The properties examined and discussed concern the forest model, the internal stress, which is shown to contribute about 20% of the flow stress, and the mechanisms of strain hardening in relation to the models of Friedel-Saada and Kocks. The investigation of the dislocation microstructures focuses on two essential ingredients for the occurrence of self-organization: the internal stress and the intersections of non-coplanar dislocations. These results suggest that, to understand the strain hardening properties as well as the formation of dislocation cells during multiple slip, one must take into account the influence of local internal stresses and cross-slip on the mechanisms of areal glide. (author)

  14. Challenges in analysing and visualizing large-scale molecular dynamics simulations: domain and defect formation in lung surfactant monolayers

    International Nuclear Information System (INIS)

    Mendez-Villuendas, E; Baoukina, S; Tieleman, D P

    2012-01-01

    Molecular dynamics simulations have rapidly grown in size and complexity as computers have become more powerful and molecular dynamics software more efficient. Using coarse-grained models like MARTINI, system sizes of the order of 50 nm × 50 nm × 50 nm can be simulated on commodity clusters on microsecond time scales. For simulations of biological membranes and monolayers mimicking lung surfactant, this enables large-scale transformations and complex mixtures of lipids and proteins. Here we use a simulation of a monolayer with three phospholipid components, cholesterol, lung surfactant proteins, water, and ions on a ten-microsecond time scale to illustrate some current challenges in analysis. In the simulation, phase separation occurs, followed by the formation of a bilayer fold in which lipids and lung surfactant protein form a highly curved structure in the aqueous phase. We use Voronoi analysis to obtain detailed physical properties of the different components and phases, and calculate local mean and Gaussian curvatures of the bilayer fold.

  15. hPDB – Haskell library for processing atomic biomolecular structures in protein data bank format

    OpenAIRE

    Gajda, Michał Jan

    2013-01-01

    Background: The Protein Data Bank file format is used for the majority of biomolecular data available today. Haskell is a lazy functional language that enjoys a high-level class-based type system, a growing collection of useful libraries, and a reputation for efficiency. Findings: I present a fast library for processing biomolecular data in the Protein Data Bank format. I present benchmarks indicating that this library is faster than other frequently used Protein Data Bank parsing programs. The propo...

  16. A compact hard X-ray source for medical imaging and biomolecular studies

    International Nuclear Information System (INIS)

    Cline, D.B.; Green, M.A.; Kolonko, J.

    1995-01-01

    There are a large number of synchrotron light sources in the world. However, these sources are designed for physics, chemistry, and engineering studies. To our knowledge, none have been optimized for either medical imaging or biomolecular studies. There are special needs for these applications. We present here a preliminary design of a very compact source, small enough for a hospital or a biomolecular laboratory, that is suitable for these applications. (orig.)

  17. On the predictivity of pore-scale simulations: estimating uncertainties with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo

    2016-02-08

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can, in fact, be hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy, and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity, and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise from the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [2015. https://bitbucket.org/micardi/porescalemc.], which includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers
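
    Multilevel Monte Carlo reduces cost by combining many cheap coarse-level samples with a few expensive fine-level corrections; the estimator telescopes over levels. A minimal generic sketch (the pore-scale quantity of interest is mocked by a placeholder; level costs, sample counts, and the bias model are illustrative, not those of the cited code):

        import numpy as np

        rng = np.random.default_rng(42)

        def quantity_of_interest(level, sample_seed):
            """Placeholder for an expensive pore-scale solve at a given mesh level.
            Finer levels (larger `level`) are more accurate: bias ~ 2**-level."""
            noise = np.random.default_rng(sample_seed).normal(0.0, 0.1)
            return 1.0 + 2.0 ** -level + noise

        def mlmc_estimate(n_levels=4, samples_per_level=(4000, 1000, 250, 60)):
            """Telescoping MLMC: E[Q_L] ~= E[Q_0] + sum_l E[Q_l - Q_{l-1}].
            The same seed couples the fine and coarse evaluations of a sample."""
            total = 0.0
            for level in range(n_levels):
                seeds = rng.integers(0, 2**31, size=samples_per_level[level])
                if level == 0:
                    total += np.mean([quantity_of_interest(0, s) for s in seeds])
                else:
                    diffs = [quantity_of_interest(level, s)
                             - quantity_of_interest(level - 1, s) for s in seeds]
                    total += np.mean(diffs)
            return total

        print(f"MLMC estimate: {mlmc_estimate():.3f} (true mean at level 3: 1.125)")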

  18. Comparison of contact stresses of the test tyres used by the one third scale model mobile load simulator (MMLS3) and the full-scale test tyres of the Heavy Vehicle Simulator (HVS) - a summary

    CSIR Research Space (South Africa)

    De Beer, Morris

    2007-07-01

    This paper summarises the results of a study in which the maximum vertical contact stresses of the one third scale test tyres of the Model Mobile Load Simulator (MMLS3) were compared with those measured for three types of full-scale test tyres...

  19. DEM GPU studies of industrial scale particle simulations for granular flow civil engineering applications

    Science.gov (United States)

    Pizette, Patrick; Govender, Nicolin; Wilke, Daniel N.; Abriak, Nor-Edine

    2017-06-01

    The use of the Discrete Element Method (DEM) for industrial civil engineering applications is currently limited due to the computational demands when large numbers of particles are considered. The graphics processing unit (GPU), with its highly parallelized hardware architecture, shows potential to enable solution of civil engineering problems using discrete granular approaches. We demonstrate in this study the practical utility of a validated GPU-enabled DEM modeling environment to simulate industrial-scale granular problems. As illustration, the flow discharge of storage silos using 8 and 17 million particles is considered. DEM simulations have been performed to investigate the influence of particle size (equivalent size for the 20/40-mesh gravel) and induced shear stress for two hopper shapes. The preliminary results indicate that the shape of the hopper significantly influences the discharge rates for the same material. Specifically, this work shows that GPU-enabled DEM modeling environments can model industrial-scale problems on a single portable computer within a day for 30 seconds of process time.

  20. Simulating multi-scale oceanic processes around Taiwan on unstructured grids

    Science.gov (United States)

    Yu, Hao-Cheng; Zhang, Yinglong J.; Yu, Jason C. S.; Terng, C.; Sun, Weiling; Ye, Fei; Wang, Harry V.; Wang, Zhengui; Huang, Hai

    2017-11-01

    We validate a 3D unstructured-grid (UG) model for simulating multi-scale processes as occurred in Northwestern Pacific around Taiwan using recently developed new techniques (Zhang et al., Ocean Modeling, 102, 64-81, 2016) that require no bathymetry smoothing even for this region with prevalent steep bottom slopes and many islands. The focus is on short-term forecast for several months instead of long-term variability. Compared with satellite products, the errors for the simulated Sea-surface Height (SSH) and Sea-surface Temperature (SST) are similar to a reference data-assimilated global model. In the nearshore region, comparison with 34 tide gauges located around Taiwan indicates an average RMSE of 13 cm for the tidal elevation. The average RMSE for SST at 6 coastal buoys is 1.2 °C. The mean transport and eddy kinetic energy compare reasonably with previously published values and the reference model used to provide boundary and initial conditions. The model suggests ∼2-day interruption of Kuroshio east of Taiwan during a typhoon period. The effect of tidal mixing is shown to be significant nearshore. The multi-scale model is easily extendable to target regions of interest due to its UG framework and a flexible vertical gridding system, which is shown to be superior to terrain-following coordinates.

  1. Simulating flow in karst aquifers at laboratory and sub-regional scales using MODFLOW-CFP

    Science.gov (United States)

    Gallegos, Josue Jacob; Hu, Bill X.; Davis, Hal

    2013-12-01

    Groundwater flow in a well-developed karst aquifer dominantly occurs through bedding planes, fractures, conduits, and caves created and/or enlarged by dissolution. Conventional groundwater modeling methods assume that groundwater flow is described by Darcian principles, where primary porosity (i.e., matrix porosity) and laminar flow are dominant. However, in well-developed karst aquifers, the assumption of Darcian flow can be questionable. While Darcian flow generally occurs in the matrix portion of the karst aquifer, flow through conduits can be non-laminar, where the relation between specific discharge and hydraulic gradient is non-linear. MODFLOW-CFP is a relatively new modeling program that accounts for non-laminar and laminar flow in pipes, like karst caves, within an aquifer. In this study, results from MODFLOW-CFP are compared to those from MODFLOW-2000/2005, a numerical code based on Darcy's law, to evaluate the accuracy that CFP can achieve when modeling flows in karst aquifers at laboratory and sub-regional (Woodville Karst Plain, Florida, USA) scales. In comparison with laboratory experiments, simulation results from MODFLOW-CFP are more accurate than those from MODFLOW-2005. At the sub-regional scale, MODFLOW-CFP was more accurate than MODFLOW-2000 for simulating field measurements of peak flow at one spring and total discharges at two springs for an observed storm event.
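
    The distinction underlying these comparisons can be written schematically: matrix flow is linear in the hydraulic gradient (Darcy), while turbulent conduit flow is not. A hedged sketch of the two regimes, with q the specific discharge, K the hydraulic conductivity, and ∇h the hydraulic gradient (the exact conduit law applied by MODFLOW-CFP depends on the flow regime):

        % Laminar matrix flow (Darcy):
        q = -K\,\nabla h
        % Fully turbulent conduit flow (Darcy-Weisbach type):
        q \propto |\nabla h|^{1/2}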

  2. A detailed model for simulation of catchment scale subsurface hydrologic processes

    Science.gov (United States)

    Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    A catchment scale numerical model is developed based on the three-dimensional transient Richards equation describing fluid flow in variably saturated porous media. The model is designed to take advantage of digital elevation data bases and of information extracted from these data bases by topographic analysis. The practical application of the model is demonstrated in simulations of a small subcatchment of the Konza Prairie reserve near Manhattan, Kansas. In a preliminary investigation of computational issues related to model resolution, we obtain satisfactory numerical results using large aspect ratios, suggesting that horizontal grid dimensions may not be unreasonably constrained by the typically much smaller vertical length scale of a catchment and by vertical discretization requirements. Additional tests are needed to examine the effects of numerical constraints and parameter heterogeneity in determining acceptable grid aspect ratios. In other simulations we attempt to match the observed streamflow response of the catchment, and we point out the small contribution of the streamflow component to the overall water balance of the catchment.

  3. Self-Adaptive Event-Driven Simulation of Multi-Scale Plasma Systems

    Science.gov (United States)

    Omelchenko, Yuri; Karimabadi, Homayoun

    2005-10-01

    Multi-scale plasmas pose a formidable computational challenge. Explicit time-stepping models suffer from the global CFL restriction. Efficient application of adaptive mesh refinement (AMR) to systems with irregular dynamics (e.g. turbulence, diffusion-convection-reaction, particle acceleration, etc.) may be problematic. To address these issues, we developed an alternative approach to time stepping: self-adaptive discrete-event simulation (DES). DES has its origins in operations research, war games and telecommunications. We combine finite-difference and particle-in-cell techniques with this methodology by adopting two premises: (1) a local time increment dt for a discrete quantity f can be expressed in terms of a physically meaningful quantum value df; (2) f is considered to be modified only when its change exceeds df. Event-driven time integration is self-adaptive as it makes use of causality rules rather than parametric time dependencies. This technique enables asynchronous, flux-conservative update of the solution in accordance with local temporal scales, removes the curse of the global CFL condition, eliminates unnecessary computation in inactive spatial regions, and results in robust and fast parallelizable codes. It can be naturally combined with various mesh refinement techniques. We discuss applications of this novel technology to diffusion-convection-reaction systems and hybrid simulations of magnetosonic shocks.
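
    The quantum-threshold update rule described here maps naturally onto a priority queue of cell events ordered by local update times. A toy sketch of asynchronous event-driven integration for 1D diffusion (illustrative only, not the authors' code; it omits the flux-conservative bookkeeping a production DES would need):

        import heapq

        def event_driven_diffusion(u, d=0.1, dq=0.01, t_end=1.0):
            """Toy asynchronous integration of du/dt = d*laplacian(u) on a ring.
            A cell advances only when its value would change by the quantum dq;
            each update reschedules the cell and wakes its neighbours."""
            n = len(u)
            rate = lambda i: d * (u[(i - 1) % n] - 2.0 * u[i] + u[(i + 1) % n])
            due = lambda i, now: now + dq / max(abs(rate(i)), 1e-12)
            stamp = [due(i, 0.0) for i in range(n)]   # currently valid event times
            heap = [(stamp[i], i) for i in range(n)]
            heapq.heapify(heap)
            while heap:
                t, i = heapq.heappop(heap)
                if t > t_end:
                    break
                if t != stamp[i]:                     # stale event, superseded
                    continue
                u[i] += dq if rate(i) > 0 else -dq    # apply one quantum of change
                for j in (i - 1, i, i + 1):           # reschedule cell + neighbours
                    j %= n
                    stamp[j] = due(j, t)
                    heapq.heappush(heap, (stamp[j], j))
            return u

        print(event_driven_diffusion([0.0, 0.0, 1.0, 0.0, 0.0]))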

  4. Multi-scale modelling and numerical simulation of electronic kinetic transport

    International Nuclear Information System (INIS)

    Duclous, R.

    2009-11-01

    This research thesis, which sits at the interface between numerical analysis, plasma physics and applied mathematics, deals with the kinetic modelling and numerical simulation of electron energy transport and deposition in laser-produced plasmas, with a view to the processes of fuel assembly to the temperature and density conditions necessary to ignite fusion reactions. After a brief review of the processes at play in the collisional kinetic theory of plasmas, with a focus on basic models and methods to implement, couple and validate them, the author focuses on the collective aspects related to the free-streaming electron transport equation in the non-relativistic limit as well as in the relativistic regime. He discusses the numerical development and analysis of the scheme for the Vlasov-Maxwell system, and the selection of a validation procedure and numerical tests. He then investigates more specific aspects of collective transport: multi-species transport subject to phase-space discontinuities. Addressing the multi-scale physics of electron transport with collision source terms, he validates the accuracy of a fast Monte Carlo multi-grid solver for the Fokker-Planck-Landau electron-electron collision operator. He reports realistic simulations of kinetic electron transport in the frame of the shock ignition scheme, and the development and validation of a reduced electron transport angular model. He finally explores the relative importance of the processes involving electron-electron collisions at high energy by means of a multi-scale reduced model with relativistic Boltzmann terms.

  5. Testing of Large-Scale ICV Glasses with Hanford LAW Simulant

    Energy Technology Data Exchange (ETDEWEB)

    Hrma, Pavel R.; Kim, Dong-Sang; Vienna, John D.; Matyas, Josef; Smith, Donald E.; Schweiger, Michael J.; Yeager, John D.

    2005-03-01

    Preliminary glass compositions for immobilizing Hanford low-activity waste (LAW) by the in-container vitrification (ICV) process were initially fabricated at crucible and engineering scales, including simulants and actual (radioactive) LAW. Glasses were characterized for vapor hydration test (VHT) and product consistency test (PCT) responses and crystallinity (both quenched and slow-cooled samples). Selected glasses were tested for toxicity characteristic leach procedure (TCLP) responses, viscosity, and electrical conductivity. This testing showed that glasses with a LAW loading of 20 mass% can be made readily and meet all product constraints by a wide margin. Glasses with over 22 mass% Na2O can be made to meet all other product quality and process constraints. Large-scale testing was performed at the AMEC, Geomelt Division facility in Richland. Three tests were conducted using simulated LAW with increasing loadings of 12, 17, and 20 mass% Na2O. Glass samples were taken from the test products in a manner that represents the full expected range of product performance. These samples were characterized for composition, density, crystalline and non-crystalline phase assemblage, and durability using the VHT, PCT, and TCLP tests. The results, presented in this report, show that the AMEC ICV product meets all waste form requirements by a large margin. These results provide strong evidence that the Hanford LAW can be successfully vitrified by the ICV technology and can meet all the constraints related to product quality. The economic feasibility of the ICV technology can be further enhanced by subsequent optimization.

  6. Large-scale conformational changes of Trypanosoma cruzi proline racemase predicted by accelerated molecular dynamics simulation.

    Directory of Open Access Journals (Sweden)

    César Augusto F de Oliveira

    2011-10-01

    Chagas' disease, caused by the protozoan parasite Trypanosoma cruzi (T. cruzi), is a life-threatening illness affecting 11-18 million people. Currently available treatments are limited, with unacceptable efficacy and safety profiles. Recent studies have revealed an essential T. cruzi proline racemase enzyme (TcPR) as an attractive candidate for improved chemotherapeutic intervention. Conformational changes associated with substrate binding to TcPR are believed to expose critical residues that elicit a host mitogenic B-cell response, a process contributing to parasite persistence and immune system evasion. Characterization of the conformational states of TcPR requires access to long-time-scale motions that are currently inaccessible by standard molecular dynamics simulations. Here we describe advanced accelerated molecular dynamics that extend the effective simulation time and capture large-scale motions of functional relevance. Conservation and fragment mapping analyses identified potential conformational epitopes located in the vicinity of newly identified transient binding pockets. The newly identified open TcPR conformations revealed by this study, along with knowledge of the closed-to-open interconversion mechanism, advance our understanding of TcPR function. The results and the strategy adopted in this work constitute an important step toward the rationalization of the molecular basis behind the mitogenic B-cell response of TcPR and provide new insights for future structure-based drug discovery.

  7. Pore scale simulations for the extension of the Darcy-Forchheimer law to shear thinning fluids

    Science.gov (United States)

    Tosco, Tiziana; Marchisio, Daniele; Lince, Federica; Boccardo, Gianluca; Sethi, Rajandrea

    2014-05-01

    Flow of non-Newtonian fluids through porous media at high Reynolds numbers is often encountered in chemical, pharmaceutical and food as well as petroleum and groundwater engineering, and in many other industrial applications (1, 2). In particular, the use of shear-thinning polymeric solutions has recently been proposed to improve the colloidal stability of micro- and nanoscale zerovalent iron particles (MZVI and NZVI) for groundwater remediation. In all the abovementioned applications, it is of paramount importance to correctly predict the pressure drop resulting from non-Newtonian fluid flow through the porous medium. For small Reynolds numbers, usually up to 1, typical of laboratory column tests, the extended Darcy law is known to be applicable also to non-Newtonian fluids, provided that all non-Newtonian effects are lumped together into a proper viscosity parameter (1, 3). For higher Reynolds numbers (e.g., close to injection wells) non-linearities between pressure drop and flow rate arise, and the Darcy-Forchheimer law holds for Newtonian fluids, while for non-Newtonian fluids it has been demonstrated that, at least for simple rheological models (e.g., power-law fluids), a generalized Forchheimer law can be applied, even if the determination of the flow parameters (permeability K, inertial coefficient β, and equivalent viscosity) is not straightforward. This work (co-funded by European Union project AQUAREHAB FP7 - Grant Agreement Nr. 226565) aims at proposing an extended formulation of the Darcy-Forchheimer law also for shear-thinning fluids, and validating it against results of pore-scale simulations via computational fluid dynamics (4). Flow simulations were performed using Fluent 12.0 on four different 2D porous domains for Newtonian and non-Newtonian fluids (Cross, Ellis and Carreau models). The micro-scale flow simulation results are analyzed in terms of 'macroscale' pressure drop between inlet and outlet of the model domain as a function of flow rate. The
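
    For reference, the Newtonian Darcy-Forchheimer law that the authors set out to extend can be written with q the superficial velocity, K the permeability, β the inertial coefficient, μ the viscosity and ρ the density (in the shear-thinning extension, μ is replaced by an equivalent viscosity derived from the rheological model):

        -\nabla p \;=\; \frac{\mu}{K}\,q \;+\; \beta\,\rho\,|q|\,q

    The first term recovers Darcy's law at low Reynolds numbers; the quadratic term captures the non-linearity between pressure drop and flow rate that emerges near injection wells.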

  8. Linearly scaling and almost Hamiltonian dielectric continuum molecular dynamics simulations through fast multipole expansions

    Energy Technology Data Exchange (ETDEWEB)

    Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul, E-mail: tavan@physik.uni-muenchen.de [Lehrstuhl für BioMolekulare Optik, Ludwig-Maximilians-Universität München, Oettingenstr. 67, 80538 München (Germany)

    2015-11-14

    Hamiltonian Dielectric Solvent (HADES) is a recent method [S. Bauer et al., J. Chem. Phys. 140, 104103 (2014)] which enables atomistic Hamiltonian molecular dynamics (MD) simulations of peptides and proteins in dielectric solvent continua. Such simulations become rapidly impractical for large proteins, because the computational effort of HADES scales quadratically with the number N of atoms. If one tries to achieve linear scaling by applying a fast multipole method (FMM) to the computation of the HADES electrostatics, the Hamiltonian character (conservation of total energy, linear, and angular momenta) may get lost. Here, we show that the Hamiltonian character of HADES can be almost completely preserved, if the structure-adapted fast multipole method (SAMM) as recently redesigned by Lorenzen et al. [J. Chem. Theory Comput. 10, 3244-3259 (2014)] is suitably extended and is chosen as the FMM module. By this extension, the HADES/SAMM forces become exact gradients of the HADES/SAMM energy. Their translational and rotational invariance then guarantees (within the limits of numerical accuracy) the exact conservation of the linear and angular momenta. Also, the total energy is essentially conserved—up to residual algorithmic noise, which is caused by the periodically repeated SAMM interaction list updates. These updates entail very small temporal discontinuities of the force description, because the employed SAMM approximations represent deliberately balanced compromises between accuracy and efficiency. The energy-gradient corrected version of SAMM can also be applied, of course, to MD simulations of all-atom solvent-solute systems enclosed by periodic boundary conditions. However, as we demonstrate in passing, this choice does not offer any serious advantages.

  9. Simulation of water-energy fluxes through small-scale reservoir systems under limited data availability

    Science.gov (United States)

    Papoulakos, Konstantinos; Pollakis, Giorgos; Moustakis, Yiannis; Markopoulos, Apostolis; Iliopoulou, Theano; Dimitriadis, Panayiotis; Koutsoyiannis, Demetris; Efstratiadis, Andreas

    2017-04-01

    Small islands are regarded as promising areas for developing hybrid water-energy systems that combine multiple sources of renewable energy with pumped-storage facilities. An essential element of such systems is the water storage component (reservoir), which implements both flow and energy regulation. Evidently, the representation of the overall water-energy management problem requires the simulation of the operation of the reservoir system, which in turn requires a faithful estimation of water inflows and of water and energy demands. Yet, in small-scale reservoir systems, this task is far from straightforward, since both the availability and the accuracy of the associated information are generally very poor. In contrast to large-scale reservoir systems, for which systematic and reliable hydrological data are relatively easy to find, in small systems such data may be scarce or even totally missing. The stochastic approach is the only means to account for input data uncertainties within the combined water-energy management problem. Using as an example the Livadi reservoir, the pumped-storage component of the small Aegean island of Astypalaia, Greece, we provide a simulation framework comprising: (a) a stochastic model for generating synthetic rainfall and temperature time series; (b) a stochastic rainfall-runoff model, whose parameters cannot be inferred through calibration and are thus represented as correlated random variables; (c) a stochastic model for estimating water supply and irrigation demands, based on simulated temperature and soil moisture; and (d) a daily operation model of the reservoir system, providing stochastic forecasts of water and energy outflows. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students
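
    Component (d) of such a framework reduces, at its core, to a daily water balance over the reservoir driven by the stochastic inflow and demand series from components (a)-(c). A minimal sketch of that balance (variable names, limits, and the synthetic series are illustrative, not those of the Livadi system):

        import numpy as np

        def simulate_reservoir(inflow, demand, capacity, s0=0.5):
            """Daily reservoir mass balance: storage bounded by [0, capacity];
            spill is lost, deficits record unmet demand. Units: capacity fractions."""
            storage, deficit, spill = s0 * capacity, 0.0, 0.0
            trace = []
            for q_in, q_dem in zip(inflow, demand):
                storage += q_in
                spill += max(storage - capacity, 0.0)
                storage = min(storage, capacity)
                release = min(q_dem, storage)
                deficit += q_dem - release
                storage -= release
                trace.append(storage)
            return np.array(trace), deficit, spill

        rng = np.random.default_rng(1)
        inflow = rng.gamma(shape=0.5, scale=0.02, size=365)  # synthetic daily inflow
        demand = np.full(365, 0.01)                          # constant daily demand
        trace, deficit, spill = simulate_reservoir(inflow, demand, capacity=1.0)
        print(f"final storage {trace[-1]:.2f}, deficit {deficit:.2f}, spill {spill:.2f}")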

  10. Post-Newtonian Dynamical Modeling of Supermassive Black Holes in Galactic-scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Rantala, Antti; Pihajoki, Pauli; Johansson, Peter H.; Lahén, Natalia; Sawala, Till [Department of Physics, University of Helsinki, Gustaf Hällströmin katu 2a (Finland); Naab, Thorsten, E-mail: antti.rantala@helsinki.fi [Max-Planck-Insitut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85748, Garching (Germany)

    2017-05-01

    We present KETJU, a new extension of the widely used smoothed particle hydrodynamics simulation code GADGET-3. The key feature of the code is the inclusion of algorithmically regularized regions around every supermassive black hole (SMBH). This allows for simultaneously following global galactic-scale dynamical and astrophysical processes, while solving the dynamics of SMBHs, SMBH binaries, and surrounding stellar systems at subparsec scales. The KETJU code includes post-Newtonian terms in the equations of motion of the SMBHs, which enables a new SMBH merger criterion based on the gravitational wave coalescence timescale, pushing the merger separation of SMBHs down to ∼0.005 pc. We test the performance of our code by comparison to NBODY7 and rVINE. We set up dynamically stable multicomponent merger progenitor galaxies to study the SMBH binary evolution during galaxy mergers. In our simulation sample the SMBH binaries do not suffer from the final-parsec problem, which we attribute to the nonspherical shape of the merger remnants. For bulge-only models, the hardening rate decreases with increasing resolution, whereas for models that in addition include massive dark matter halos, the SMBH binary hardening rate becomes practically independent of the mass resolution of the stellar bulge. The SMBHs coalesce on average 200 Myr after the formation of the SMBH binary. However, small differences in the initial SMBH binary eccentricities can result in large differences in the SMBH coalescence times. Finally, we discuss the future prospects of KETJU, which allows for a straightforward inclusion of gas physics in the simulations.
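
    Gravitational-wave merger criteria of this kind are conventionally built on the Peters (1964) coalescence timescale for a circular binary of masses m_1, m_2 at separation a (a standard result, quoted here for orientation rather than from the paper itself):

        t_{\mathrm{GW}} \;=\; \frac{5}{256}\,\frac{c^5\, a^4}{G^3\, m_1 m_2 (m_1 + m_2)}

    The steep a^4 dependence is why resolving subparsec separations, as KETJU does, matters for predicting realistic coalescence times.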

  11. Finger Thickening during Extra-Heavy Oil Waterflooding: Simulation and Interpretation Using Pore-Scale Modelling.

    Directory of Open Access Journals (Sweden)

    Mohamed Regaieg

    Although thermal methods have been popular and successfully applied in heavy oil recovery, they are often found to be uneconomic or impractical. Therefore, alternative production protocols are being actively pursued, and interesting options include water injection and polymer flooding. Indeed, such techniques have been successfully tested in recent laboratory investigations, where X-ray scans performed on homogeneous rock slabs during water flooding experiments have shown evidence of an interesting new phenomenon: post-breakthrough, highly dendritic water fingers have been observed to thicken and coalesce, forming braided water channels that improve sweep efficiency. However, these experimental studies involve displacement mechanisms that are still poorly understood, and so the optimization of this process for eventual field application is still somewhat problematic. Ideally, a combination of two-phase flow experiments and simulations should be put in place to help understand this process more fully. To this end, a fully dynamic network model is described and used to investigate finger thickening during water flooding of extra-heavy oils. The displacement physics has been implemented at the pore scale, and this is followed by a successful benchmarking exercise of the numerical simulations against the groundbreaking micromodel experiments reported by Lenormand and co-workers in the 1980s. A range of slab-scale simulations has also been carried out and compared with the corresponding experimental observations. We show that the model is able to replicate finger architectures similar to those observed in the experiments and go on to reproduce and interpret, for the first time to our knowledge, finger thickening following water breakthrough. We note that this phenomenon has been observed here in homogeneous (i.e. un-fractured) media: the presence of fractures could be expected to exacerbate such fingering still further. Finally, we examine the impact of

  12. A small-scale dynamo in feedback-dominated galaxies - III. Cosmological simulations

    Science.gov (United States)

    Rieder, Michael; Teyssier, Romain

    2017-12-01

    Magnetic fields are widely observed in the Universe in virtually all astrophysical objects, from individual stars to entire galaxies, even in the intergalactic medium, but their specific genesis has long been debated. Due to the development of more realistic models of galaxy formation, viable scenarios are emerging to explain cosmic magnetism, thanks to both deeper observations and more efficient and accurate computer simulations. We present here a new cosmological high-resolution zoom-in magnetohydrodynamic (MHD) simulation, using the adaptive mesh refinement technique, of a dwarf galaxy with an initially weak and uniform magnetic seed field that is amplified by a small-scale dynamo (SSD) driven by supernova-induced turbulence. As first structures form from the gravitational collapse of small density fluctuations, the frozen-in magnetic field separates from the cosmic expansion and grows through compression. In a second step, star formation sets in and establishes a strong galactic fountain, self-regulated by supernova explosions. Inside the galaxy, the interstellar medium becomes highly turbulent, dominated by strong supersonic shocks, as demonstrated by the spectral analysis of the gas kinetic energy. In this turbulent environment, the magnetic field is quickly amplified via a SSD process and is finally carried out into the circumgalactic medium by a galactic wind. This realistic cosmological simulation explains how initially weak magnetic seed fields can be amplified quickly in early, feedback-dominated galaxies, and predicts, as a consequence of the SSD process, that high-redshift magnetic fields are likely to be dominated by their small-scale components.

  13. A New Approach to Adaptive Control of Multiple Scales in Plasma Simulations

    Science.gov (United States)

    Omelchenko, Yuri

    2007-04-01

    A new approach to temporal refinement of kinetic (Particle-in-Cell, Vlasov) and fluid (MHD, two-fluid) simulations of plasmas is presented: Discrete-Event Simulation (DES). DES adaptively distributes CPU resources in accordance with local time scales and enables asynchronous integration of inhomogeneous nonlinear systems with multiple time scales on meshes of arbitrary topologies. This removes computational penalties usually incurred in explicit codes due to the global Courant-Friedrichs-Lewy (CFL) restriction on the time-step size. DES stands apart from multiple time-stepping algorithms in that it requires neither selecting a global synchronization time step nor pre-determining a sequence of time-integration operations for individual parts of the system (local time increments need not bear any integer multiple relations). Instead, elements of a mesh-distributed solution self-adaptively predict and synchronize their temporal trajectories by directly enforcing local causality (accuracy) constraints, which are formulated in terms of incremental changes to the evolving solution. Together with flux-conservative propagation of information, this new paradigm ensures stable and fast asynchronous runs, where idle computation is automatically eliminated. DES is parallelized via a novel Preemptive Event Processing (PEP) technique, which automatically synchronizes elements with similar update rates. In this mode, events with close execution times are projected onto time levels, which are adaptively determined by the program. PEP allows reuse of standard message-passing algorithms on distributed architectures. For optimum accuracy, DES can be combined with adaptive mesh refinement (AMR) techniques for structured and unstructured meshes. Current examples of event-driven models range from electrostatic, hybrid particle-in-cell plasma systems to reactive fluid dynamics simulations. They demonstrate the superior performance of DES in terms of accuracy, speed and robustness.
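
    As a concrete, heavily simplified illustration of event-driven time integration, the sketch below advances a 1-D diffusion problem asynchronously: each cell sits in a priority queue keyed by its own next update time, chosen from a local accuracy bound rather than a global CFL step. All names and the accuracy heuristic are invented for this toy; a production DES code of the kind described would also handle neighbour causality, conservative fluxes, preemptive event processing and final synchronization to the output time, all of which this sketch omits.

```python
import heapq

def des_diffuse(u, kappa=1.0, dx=1.0, t_end=1.0, eps=0.05):
    """Asynchronous, event-driven explicit integration of 1-D diffusion.

    Each cell carries its own clock and is queued by the time of its
    next update; that time comes from a local accuracy bound (change
    per event <= du_max), not from a global CFL-limited step.
    """
    n = len(u)
    t_loc = [0.0] * n                         # per-cell local time
    du_max = eps * max(abs(v) for v in u) or eps

    def rate(i):                              # explicit diffusion stencil,
        left = u[i - 1] if i > 0 else u[i]    # zero-flux boundaries
        right = u[i + 1] if i < n - 1 else u[i]
        return kappa * (left - 2.0 * u[i] + right) / dx ** 2

    def next_dt(i):                           # local causality/accuracy bound
        r = abs(rate(i))
        return min(du_max / r if r > 0 else t_end, 0.4 * dx ** 2 / kappa)

    events = [(next_dt(i), i) for i in range(n)]
    heapq.heapify(events)
    while events:
        t_ev, i = heapq.heappop(events)
        if t_ev > t_end:                      # cell retires past the horizon;
            continue                          # idle work is never rescheduled
        u[i] += rate(i) * (t_ev - t_loc[i])   # advance this one cell alone
        t_loc[i] = t_ev
        heapq.heappush(events, (t_ev + next_dt(i), i))
    return u

if __name__ == "__main__":
    field = [0.0] * 21
    field[10] = 1.0                           # initial hot spot
    print(" ".join("%.2f" % v for v in des_diffuse(field)))
```

    Note how quiet cells schedule large local steps and drop out of the queue early, while active cells near the hot spot update often: this is the "CPU resources follow local time scales" behaviour that the abstract claims, in miniature.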

  14. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Science.gov (United States)

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems depend on reducing the power consumed by their data centers to keep them sustainable and economically profitable. Efficient management of computing resources is at the heart of any energy-aware data center, and the adaptation of its performance to workload is of special relevance. Compute-intensive applications in diverse areas of science generate complex workloads called workflows, whose energy-aware management is still in its early stages. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering features such as task clustering and failure policies. In this work, a power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design that allows new energy-saving management strategies to be optimized, taking into account computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent across different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented.
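
    To make the DVFS trade-off concrete, here is a minimal, self-contained sketch of the classic CMOS power model that such governors reason about (dynamic power roughly C_eff * V^2 * f, plus a static term). The P-state table and all constants are invented for illustration and are not taken from WorkflowSim or the paper; the point is only that lowering frequency stretches runtime, so static energy can erase dynamic-power savings.

```python
from dataclasses import dataclass

@dataclass
class PState:
    """One DVFS performance state: frequency (GHz) and core voltage (V)."""
    freq: float
    volt: float

# Illustrative P-states (made-up numbers, not from the paper).
P_STATES = [PState(1.0, 0.9), PState(1.6, 1.0), PState(2.4, 1.2)]

def dynamic_power(state, c_eff=10.0):
    # Classic CMOS switching model: P_dyn = C_eff * V^2 * f
    return c_eff * state.volt ** 2 * state.freq

def task_energy(cycles_g, state, p_static=2.0):
    """Energy (J) to run a task of `cycles_g` giga-cycles at one P-state."""
    runtime = cycles_g / state.freq            # seconds
    return (p_static + dynamic_power(state)) * runtime

if __name__ == "__main__":
    for s in P_STATES:
        print(f"{s.freq} GHz -> {task_energy(100.0, s):7.1f} J")
```

    With these made-up numbers the lowest P-state wins on energy; raise p_static and the ordering flips, which is exactly the balance a DVFS governor has to strike between performance, quality of service and energy saving.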