WorldWideScience

Sample records for large-scale simulations performed

  1. Simulation of buoyancy induced gas mixing tests performed in a large scale containment facility using GOTHIC code

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Z.; Chin, Y.S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)]

    2014-07-01

    This paper compares GOTHIC containment thermal-hydraulics simulations against a set of large-scale buoyancy-induced helium-air-steam mixing experiments performed previously at AECL's Chalk River Laboratories. A number of typical post-accident containment phenomena were covered, including thermal/gas stratification, natural convection, cool-air entrainment, steam condensation on concrete walls, and an active local air cooler. The results provide useful insights into hydrogen gas mixing behaviour following a loss-of-coolant accident and demonstrate GOTHIC's capability in simulating these phenomena. (author)

  2. Simulation of buoyancy induced gas mixing tests performed in a large scale containment facility using GOTHIC code

    International Nuclear Information System (INIS)

    Liang, Z.; Chin, Y.S.

    2014-01-01

    This paper compares GOTHIC containment thermal-hydraulics simulations against a set of large-scale buoyancy-induced helium-air-steam mixing experiments performed previously at AECL's Chalk River Laboratories. A number of typical post-accident containment phenomena were covered, including thermal/gas stratification, natural convection, cool-air entrainment, steam condensation on concrete walls, and an active local air cooler. The results provide useful insights into hydrogen gas mixing behaviour following a loss-of-coolant accident and demonstrate GOTHIC's capability in simulating these phenomena. (author)

  3. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    Recent trends in large-scale simulations of fusion and processing plasmas are briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now being applied to the analysis of processing plasmas. (author)

  4. A concurrent visualization system for large-scale unsteady simulations. Parallel vector performance on an NEC SX-4

    International Nuclear Information System (INIS)

    Takei, Toshifumi; Doi, Shun; Matsumoto, Hideki; Muramatsu, Kazuhiro

    2000-01-01

    We have developed a concurrent visualization system, RVSLIB (Real-time Visual Simulation Library). This paper shows the effectiveness of the system on high-performance parallel vector supercomputers when it is applied to large-scale unsteady simulations, for which the conventional post-processing approach may no longer work. The system performs almost all of the visualization tasks on the computation server and uses compressed visualized image data for efficient communication between the server and the user terminal. We have introduced several techniques, including vectorization and parallelization, to minimize the computational cost of the visualization tools. The performance of RVSLIB was evaluated using an actual CFD code on an NEC SX-4. The increase in computational time due to the concurrent visualization was at most 3% for a smaller grid (1.6 million points) and less than 1% for a larger one (6.2 million points). (author)

  5. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  6. Large-scale Intelligent Transportation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and Traffic Management Centers (TMC). The TMC has probe-vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles) and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like a real vehicle. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  7. Sensitivity technologies for large scale simulation

    International Nuclear Information System (INIS)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large-scale optimization, uncertainty quantification, reduced-order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces that facilitate the implementation of sensitivity-type analysis in existing codes and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time-domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady-state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady-state internal flows subject to convection-diffusion. Real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time-domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
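    The division of labor between the two approaches can be stated compactly. The relations below are a generic sketch in our own notation (not taken from the report), for a steady-state residual R(u,p) = 0 with state u, parameters p, and objective J(u,p):

    ```latex
    % Direct: one linear solve per parameter
    R_u\, u_p = -R_p, \qquad \frac{dJ}{dp} = J_u\, u_p + J_p
    % Adjoint: one linear solve per objective, independent of the number of parameters
    R_u^{T} \lambda = -J_u^{T}, \qquad \frac{dJ}{dp} = J_p + \lambda^{T} R_p
    ```

    This cost asymmetry is why adjoint methods suit the many-parameter inversion and initial-condition reconstruction problems described above.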

  8. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry; Etienne, Vincent; Gashawbeza, Ewenet; Curiel, Ernesto Sandoval; Khan, Azizur; Feki, Saber; Kortas, Samuel

    2017-01-01

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km². Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less than

  9. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km². Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less
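    To illustrate the staggered-grid velocity-pressure formulation of acoustodynamics mentioned in both records above, here is a minimal 1D analogue in Python (our sketch only; the actual solver is 3D and massively parallel, and all grid and source values below are made up):

    ```python
    import numpy as np

    nx, dx, dt, nt = 2000, 10.0, 1e-3, 4000   # grid points, spacing [m], time step [s], steps
    c = np.full(nx, 2500.0)                   # P-wave velocity model [m/s]
    rho = 2000.0                              # constant density [kg/m^3]
    kappa = rho * c**2                        # bulk modulus

    p = np.zeros(nx)                          # pressure at integer grid points
    v = np.zeros(nx - 1)                      # particle velocity at half-points (staggered)

    src, f0 = nx // 2, 25.0                   # source location and peak frequency [Hz]
    for it in range(nt):
        # velocity update from the pressure gradient (leapfrog in time)
        v += (dt / (rho * dx)) * (p[1:] - p[:-1])
        # pressure update from the velocity divergence
        p[1:-1] += (dt / dx) * kappa[1:-1] * (v[1:] - v[:-1])
        # inject a Ricker wavelet source
        a = (np.pi * f0 * (it * dt - 0.04)) ** 2
        p[src] += (1.0 - 2.0 * a) * np.exp(-a)
    ```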

  10. Large scale molecular simulations of nanotoxicity.

    Science.gov (United States)

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular in how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some recent studies on this important subject, especially from the point of view of large-scale molecular simulations. Carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussion of quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence showing that nanotoxicity can have implications for the de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ has been shown to inhibit tumor growth and metastasis by inhibiting the enzyme MMP-9, and graphene has been shown to disrupt bacterial cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggest therapeutic potential in using the cytotoxicity of nanoparticles against cancer or bacterial cells. © 2014 Wiley Periodicals, Inc.

  11. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, real-time simulation of large-scale floods is very important in flood-prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
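    A minimal 1D sketch of the two ingredients named above, a Godunov-type finite volume update and a wet/dry depth threshold, is given below (illustrative Python with a Rusanov flux; the paper's model is 2D, unstructured, and uses its own wet/dry and adaptive methods):

    ```python
    import numpy as np

    g, eps = 9.81, 1e-6                      # gravity [m/s^2]; wet/dry depth threshold [m]

    def rusanov_flux(hL, huL, hR, huR):
        """First-order Rusanov (local Lax-Friedrichs) flux for 1D shallow water."""
        uL = huL / hL if hL > eps else 0.0
        uR = huR / hR if hR > eps else 0.0
        fL = np.array([huL, huL * uL + 0.5 * g * hL**2])
        fR = np.array([huR, huR * uR + 0.5 * g * hR**2])
        smax = max(abs(uL) + np.sqrt(g * hL), abs(uR) + np.sqrt(g * hR))
        return 0.5 * (fL + fR) - 0.5 * smax * (np.array([hR, huR]) - np.array([hL, huL]))

    # dam-break initial condition: wet on the left, dry on the right
    nx, dx, dt = 400, 25.0, 0.5
    h = np.where(np.arange(nx) < nx // 2, 2.0, eps)
    hu = np.zeros(nx)
    for _ in range(600):
        F = np.array([rusanov_flux(h[i], hu[i], h[i + 1], hu[i + 1]) for i in range(nx - 1)])
        h[1:-1] -= dt / dx * (F[1:, 0] - F[:-1, 0])
        hu[1:-1] -= dt / dx * (F[1:, 1] - F[:-1, 1])
        hu[h <= eps] = 0.0                   # simple wet/dry fix: no momentum in dry cells
    ```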

  12. Hybrid simulation methods to perform grid integration studies for large scale offshore wind power connected through VSC-HVDC

    NARCIS (Netherlands)

    Meer, van der A.A.; Hendriks, R.L.; Gibescu, M.; Ferreira, J.A.; Kling, W.L.

    2011-01-01

    This paper deals with the inclusion of VSC-HVdc transmission schemes into stability-type simulations by hybrid methods. These methods allow selected parts of the network to be simulated in detail by including electro-magnetic behaviour of devices and network elements whereas the remainder of the

  13. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    A new package for accelerating large-scale phase-field simulations was developed by using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using algorithms implemented in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interactions were each solved with the GPU implementation to test the performance of the package. Comparing the results of the solver executed on a single CPU with those on a GPU shows that the GPU version is up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
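    For reference, the semi-implicit Fourier idea underlying the package, treating the stiff gradient term implicitly in Fourier space and the nonlinear term explicitly, can be sketched for the Allen-Cahn equation in a few lines of NumPy (CPU sketch with made-up parameters; the package itself runs on GPUs via CUDA):

    ```python
    import numpy as np

    N, dx, dt, M, kappa = 256, 1.0, 0.1, 1.0, 2.0   # grid, spacing, step, mobility, gradient coeff.
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    k2 = k[:, None]**2 + k[None, :]**2              # squared wavenumber on the 2D grid

    phi = 0.01 * np.random.randn(N, N)              # small random initial order parameter
    for _ in range(1000):
        fprime = phi**3 - phi                       # derivative of the double-well potential
        # explicit nonlinear term, implicit (stiff) gradient term
        phi_hat = (np.fft.fft2(phi) - dt * M * np.fft.fft2(fprime)) / (1 + dt * M * kappa * k2)
        phi = np.fft.ifft2(phi_hat).real
    ```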

  14. Performance regression manager for large scale systems

    Science.gov (United States)

    Faraj, Daniel A.

    2017-08-01

    System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
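    Stripped of patent language, the claimed operation is: parse a performance metric from two output files written in a predefined format, compare the values, and report the result. A hypothetical sketch (the JSON format, file names, and regression rule are our assumptions, not the patent's):

    ```python
    import json

    def compare_metric(first_file, second_file, metric, tolerance=0.05):
        """Report whether `metric` regressed between two runs of a command.
        Assumes higher is worse (e.g. elapsed time) and JSON-formatted output files."""
        with open(first_file) as f1, open(second_file) as f2:
            v1 = json.load(f1)[metric]     # first execution instance
            v2 = json.load(f2)[metric]     # second (baseline) execution instance
        change = (v1 - v2) / v2
        status = "REGRESSION" if change > tolerance else "OK"
        print(f"{metric}: {v2:.3f} -> {v1:.3f} ({change:+.1%}) [{status}]")
        return status

    # e.g. compare_metric("run_new.json", "run_base.json", "elapsed_seconds")
    ```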

  15. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting to summarize the collaboration activities for FY2003 on the Large Scale Computer Simulation Research was held January 15-16, 2004 at Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  16. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred; Douglas, Craig C.; Haase, Gundolf; Horváth, Zoltán

    2010-01-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one

  17. Large Scale Simulation Platform for NODES Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Sotorrio, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Qin, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Min, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-27

    This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator, and includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate scalability under a scenario of 33% RPS in California, with an additional 17% of DERs coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating more than 10k individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/VAR control.

  18. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulation applications. The intention is to identify new research directions in this field and

  19. Experimental simulation of microinteractions in large scale explosions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety

    1998-01-01

    This paper presents data and analysis from recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large-scale explosions. Specifically, the fragmentation behavior of a high-temperature molten steel drop under high-pressure (beyond critical) conditions is investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea and the ESPROSE.m prediction of the fragmentation rate. (author)

  20. GPU-Accelerated Sparse Matrix Solvers for Large-Scale Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Many large-scale numerical simulations can be broken down into common mathematical routines. While the applications may differ, the need to perform functions such as...

  1. Simulation of fatigue crack growth under large scale yielding conditions

    Science.gov (United States)

    Schweizer, Christoph; Seifert, Thomas; Riedel, Hermann

    2010-07-01

    A simple mechanism-based model for fatigue crack growth assumes a linear correlation between the cyclic crack-tip opening displacement (ΔCTOD) and the crack growth increment (da/dN). The objective of this work is to compare analytical estimates of ΔCTOD with the results of numerical calculations under large scale yielding conditions, and to verify the physical basis of the model by comparing the predicted and measured evolution of the crack length in a 10%-chromium steel. The material is described by a rate-independent cyclic plasticity model with power-law hardening and Masing behavior. During the tension-going part of the cycle, nodes at the crack tip are released such that the crack growth increment corresponds approximately to the crack-tip opening. The finite element analysis, performed in ABAQUS, is continued for as many cycles as needed to reach a stabilized value of ΔCTOD. The analytical model contains an interpolation formula for the J-integral, which is generalized to account for cyclic loading and crack closure. The simulated and estimated ΔCTOD are reasonably consistent. The predicted crack length evolution is found to be in good agreement with the behavior of microcracks observed in a 10%-chromium steel.
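    The model's central relations can be written compactly. The growth law is as stated in the abstract; the ΔCTOD estimate from the cyclic J-integral is our hedged reading of the interpolation-formula approach, with β and d_n as generic constants:

    ```latex
    \frac{da}{dN} = \beta\,\Delta\mathrm{CTOD},
    \qquad
    \Delta\mathrm{CTOD} \approx d_n\,\frac{\Delta J_{\mathrm{eff}}}{\sigma_{\mathrm{cy}}}
    ```

    Here ΔJ_eff is the effective (closure-corrected) cyclic J-integral and σ_cy is the cyclic yield stress.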

  2. SIMON: Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Sugawara, Akihiro; Kishimoto, Yasuaki

    2003-01-01

    Development of the SIMON (SImulation MONitoring) system is described. SIMON aims to investigate many physical phenomena of tokamak-type nuclear fusion plasmas by simulation, and to exchange information and carry out joint research with scientists around the world via the internet. The characteristics of SIMON are as follows: 1) reduced simulation load through a trigger-sending method; 2) visualization of simulation results and a hierarchical structure of analysis; 3) fewer licenses required, by invoking software from the command line; 4) improved support for networked use of simulation output data through HTML (Hyper Text Markup Language); 5) avoidance of complex built-in work in the client part; and 6) small and portable software. The visualization method for large-scale simulation, the HTML-based remote collaboration system, the trigger-sending method, the hierarchical analysis method, the introduction into a three-dimensional electromagnetic transport code, and the technologies of the SIMON system are explained. (S.Y.)

  3. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using models other than Newton's: a lower-order model, Broyden's method, and a higher-order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian, or that have an inaccurate Jacobian, to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
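    For concreteness, the secant update at the heart of Broyden's method looks as follows in a dense, minimal form (our illustrative Python; the report's limited-memory variant stores update vectors instead of the full matrix and is not reproduced here):

    ```python
    import numpy as np

    def broyden_good(F, x0, tol=1e-10, maxit=100):
        """Minimal dense 'good' Broyden solver for F(x) = 0."""
        x = np.asarray(x0, dtype=float)
        J = np.eye(x.size)                    # initial Jacobian approximation
        f = F(x)
        for _ in range(maxit):
            dx = np.linalg.solve(J, -f)       # quasi-Newton step
            x_new = x + dx
            f_new = F(x_new)
            if np.linalg.norm(f_new) < tol:
                return x_new
            df = f_new - f
            # rank-one secant update enforcing J_{k+1} dx = df
            J += np.outer(df - J @ dx, dx) / (dx @ dx)
            x, f = x_new, f_new
        return x

    # e.g. broyden_good(lambda x: np.array([x[0]**2 + x[1] - 2.0, x[1]**3 - 1.0]), [1.0, 0.0])
    ```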

  4. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred

    2010-08-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one billion elements. We investigate communication protocols for the GPU cluster to compensate for the slow Gigabit Ethernet network between the GPU compute nodes and to maintain overall efficiency. A diesel engine intake-port and a nozzle, meshed in different resolutions, give good real world examples for the scalability tests on the GPU cluster. © 2010 IEEE.

  5. Large-scale ground motion simulation using GPGPU

    Science.gov (United States)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumptions of the source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced the use of GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation traditionally conducted on the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the functions for GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in the two horizontal directions and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong scaling test using a model with about 22 million grid points and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs, respectively. Next, we examined a weak scaling test where the model sizes (number of grid points) are increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grid points using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number

  6. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that, with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random accesses to slow memory, increase the efficiency of the I/O system, and hence reduce the required computing time. (orig.)
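    The sorting idea is simple to state in code: order the particle arrays by cell index so that neighbors in space become neighbors in memory. A minimal NumPy sketch (the array names and row-major cell indexing are our assumptions):

    ```python
    import numpy as np

    def sort_particles_by_cell(x, y, vx, vy, dx, nx):
        """Reorder particle arrays by the cell each particle occupies."""
        cell = (x / dx).astype(int) + nx * (y / dx).astype(int)   # row-major cell index
        order = np.argsort(cell, kind="stable")
        return x[order], y[order], vx[order], vy[order]
    ```

    Applied every few dozen time steps, such a pass keeps the charge-accumulation sweep walking nearly sequentially through the paged address space instead of thrashing it.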

  7. Lightweight computational steering of very large scale molecular dynamics simulations

    International Nuclear Information System (INIS)

    Beazley, D.M.

    1996-01-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages

  8. Believability in simplifications of large scale physically based simulation

    KAUST Repository

    Han, Donghui; Hsu, Shu-wei; McNamara, Ann; Keyser, John

    2013-01-01

    We verify two hypotheses which are assumed to be true only intuitively in many rigid body simulations. I: In large-scale rigid body simulation, viewers may not be able to perceive distortion incurred by an approximated simulation method. II: Fixing objects under a pile of objects does not affect the visual plausibility. The visual plausibility of scenarios simulated with these hypotheses assumed true is measured using subjective ratings from viewers. As expected, analysis of the results supports the truthfulness of the hypotheses under certain simulation environments. However, our analysis discovered four factors which may affect the authenticity of these hypotheses: the number of collisions simulated simultaneously, the homogeneity of colliding object pairs, the distance from the scene under simulation to the camera position, and the simulation method used. We also tried to find an objective metric of visual plausibility from eye-tracking data collected from viewers. Analysis of these results indicates that eye-tracking does not present a suitable proxy for measuring plausibility or distinguishing between types of simulations. © 2013 ACM.

  9. Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Kishimoto, Yasuaki; Sugahara, Akihiro; Li, J.Q.

    2008-01-01

    Large-scale simulation using supercomputers, which generally requires long CPU times and produces large amounts of data, has been extensively studied as a third pillar of various advanced science fields, in parallel with theory and experiment. Such simulations are expected to lead to new scientific discoveries through the elucidation of complex phenomena that can hardly be identified by conventional theoretical and experimental approaches alone. In order to assist such large simulation studies, in which many collaborators working at geographically distant places participate and contribute, we have developed a unique remote collaboration system, referred to as SIMON (simulation monitoring system), which is based on client-server control and introduces the idea of update processing, in contrast to the widely used post-processing. As a key ingredient, we have developed a trigger method, which transmits requests for update processing from the simulation (client) running on a supercomputer to a workstation (server). That is, the simulation running on the supercomputer actively controls the timing of the update processing. The server, having received requests from the ongoing simulation for data transfer, data analyses, visualizations, etc., starts the corresponding operations during the simulation. The server makes the latest results available to web browsers, so that collaborators can monitor the results at any place and time in the world. By applying the system to a specific simulation project of laser-matter interaction, we have confirmed that the system works well and plays an important role as a collaboration platform on which many collaborators work with one another

  10. Performance Health Monitoring of Large-Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rajamony, Ram [IBM Research, Austin, TX (United States)

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main components. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  11. Large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Alexandrov; Kotov, V.; Mineev, M.; Roumiantsev, V.; Wolters, H.; Amorim, A.; Pedro, L.; Ribeiro, A.; Badescu, E.; Caprini, M.; Burckhart-Chromek, D.; Dobson, M.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Lucio, L.; Mapelli, L.; Nassiakou, M.; Schweiger, D.; Soloviev, I.; Hart, R.; Ryabov, Y.; Moneta, L.

    2001-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system. Feedback is received and returned into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs, a configuration approaching the final size. Large-scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from the other Trigger/DAQ sub-systems was emulated. The authors present a brief overview of the online system structure, its components, and the large-scale integration tests and their results

  12. Contextual Compression of Large-Scale Wind Turbine Array Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clyne, John [National Center for Atmospheric Research (NCAR)]

    2017-12-04

    Data sizes are becoming a critical issue, particularly for HPC applications. We have developed a user-driven, lossy, wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed to be the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while giving the user control over where data loss, and thus reduced accuracy, occurs in the analysis. We argue that this reduced but contextualized representation is a valid approach that encourages contextual data management.
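    The block-by-block idea can be illustrated with a small PyWavelets sketch: detail coefficients overlapping a user-defined salient band are kept exactly, while the rest are thresholded. This is our simplified stand-in for the storage model, not NREL's implementation; the salient band, wavelet, and quantile threshold are all assumptions:

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def contextual_compress(field, salient_rows, wavelet="db4", level=3, q=0.95):
        """Zero small detail coefficients outside a salient row band [r0, r1)."""
        coeffs = pywt.wavedec2(field, wavelet, level=level)
        out = [coeffs[0]]                                  # keep the approximation intact
        for lvl, details in enumerate(coeffs[1:], start=1):
            scale = 2 ** (level - lvl + 1)                 # input rows per coefficient row
            kept = []
            for d in details:
                mask = np.zeros(d.shape, dtype=bool)
                mask[salient_rows[0] // scale : salient_rows[1] // scale, :] = True
                thr = np.quantile(np.abs(d[~mask]), q) if (~mask).any() else 0.0
                kept.append(np.where(mask | (np.abs(d) >= thr), d, 0.0))
            out.append(tuple(kept))
        return out

    # reconstruct with pywt.waverec2(compressed, "db4")
    ```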

  13. Enabling parallel simulation of large-scale HPC network systems

    International Nuclear Information System (INIS)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip

    2016-01-01

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations

  14. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah; Carns, Philip; Ross, Robert; Li, Jianping Kelvin; Ma, Kwan-Liu

    2016-11-13

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a

  15. Initial condition effects on large scale structure in numerical simulations of plane mixing layers

    Science.gov (United States)

    McMullan, W. A.; Garrett, S. J.

    2016-01-01

    In this paper, Large Eddy Simulations are performed on the spatially developing plane turbulent mixing layer. The simulated mixing layers originate from initially laminar conditions. The focus of this research is on the effect of the nature of the imposed fluctuations on the large-scale spanwise and streamwise structures in the flow. Two simulations are performed: one with low-level three-dimensional inflow fluctuations obtained from pseudo-random numbers, the other with physically correlated fluctuations of the same magnitude obtained from an inflow generation technique. Where white-noise fluctuations provide the inflow disturbances, no spatially stationary streamwise vortex structure is observed, and the large-scale spanwise turbulent vortical structures grow continuously and linearly. These structures are observed to have a three-dimensional internal geometry with branches and dislocations. Where physically correlated fluctuations provide the inflow disturbances, a "streaky" streamwise structure that is spatially stationary is observed, with the large-scale turbulent vortical structures growing with the square root of time. These large-scale structures are quasi-two-dimensional, with the secondary structure riding on top of them. The simulation results are discussed in the context of the varying interpretations of mixing layer growth that have been postulated. Recommendations are made concerning the data required from experiments in order to produce accurate numerical simulation recreations of real flows.

  16. Large Scale Monte Carlo Simulation of Neutrino Interactions Using the Open Science Grid and Commercial Clouds

    International Nuclear Information System (INIS)

    Norman, A.; Boyd, J.; Davies, G.; Flumerfelt, E.; Herner, K.; Mayer, N.; Mhashilhar, P.; Tamsett, M.; Timm, S.

    2015-01-01

    Modern long-baseline neutrino experiments, like the NOvA experiment at Fermilab, require large-scale, compute-intensive simulations of their neutrino beam fluxes and of backgrounds induced by cosmic rays. The amount of simulation required to keep the systematic uncertainties in the simulation from dominating the final physics results is often 10x to 100x that of the actual detector exposure. For the first physics results from NOvA, this has meant the simulation of more than 2 billion cosmic ray events in the far detector and more than 200 million NuMI beam spill simulations. Performing simulation at these high statistics levels has been made possible for NOvA through the use of the Open Science Grid and through large-scale runs on commercial clouds like Amazon EC2. We detail the challenges in performing large-scale simulation in these environments and how the computing infrastructure for the NOvA experiment has been adapted to seamlessly support the running of different simulation and data processing tasks on these resources. (paper)

  17. Enabling High Performance Large Scale Dense Problems through KBLAS

    KAUST Repository

    Abdelfattah, Ahmad

    2014-05-04

    KBLAS (KAUST BLAS) is a small library that provides highly optimized BLAS routines on systems accelerated with GPUs. KBLAS is entirely written in CUDA C, and targets NVIDIA GPUs with compute capability 2.0 (Fermi) or higher. The current focus is on level-2 BLAS routines, namely the general matrix-vector multiplication (GEMV) kernel and the symmetric/hermitian matrix-vector multiplication (SYMV/HEMV) kernel. KBLAS provides these two kernels in all four precisions (s, d, c, and z), with support for multi-GPU systems. Through advanced optimization techniques that target latency hiding and push memory bandwidth to the limit, KBLAS outperforms state-of-the-art kernels by 20-90%. Competitors include CUBLAS-5.5, MAGMABLAS-1.4.0, and CULA R17. The SYMV/HEMV kernel from KBLAS has been adopted by NVIDIA and should appear in CUBLAS-6.0. KBLAS has been used in large-scale simulations of multi-object adaptive optics.

  18. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2018-05-01

    Computing speed is a significant issue in large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computation necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based high-performance computing method using OpenACC was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transfer between the GPU and CPU (Central Processing Unit) with minimal overhead, and then both computation and data were offloaded from the CPU to the GPU, exploiting the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas, and thus holds strong promise for dynamic inundation risk identification and disaster assessment.

  19. Copy of Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet.

    Energy Technology Data Exchange (ETDEWEB)

    Adalsteinsson, Helgi; Armstrong, Robert C.; Chiang, Ken; Gentile, Ann C.; Lloyd, Levi; Minnich, Ronald G.; Vanderveen, Keith; Van Randwyk, Jamie A; Rudish, Don W.

    2008-10-01

    We report on the work done in the late-start LDRD "Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet". We describe the creation of a research platform that emulates many thousands of machines to be used for the study of large-scale internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and a further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.

  20. Manufacturing Process Simulation of Large-Scale Cryotanks

    Science.gov (United States)

    Babai, Majid; Phillips, Steven; Griffin, Brian

    2003-01-01

    NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting between the various manufacturing approaches. By creating 3-D manufacturing process simulations, the various approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI. As part of the SLI, The Boeing Company was awarded a basic period contract to research and propose options for both a metallic and a composite cryotank. Boeing then entered into a task agreement with the Marshall Space Flight Center to provide manufacturing

  1. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-01-01

    Earthquakes are one of the most destructive natural hazards on our planet Earth. Huge earthquakes striking offshore may cause devastating tsunamis, as evidenced by the 11 March 2011 Japan (moment magnitude Mw9.0) and the 26 December 2004 Sumatra (Mw9.1) earthquakes. Earthquake prediction (in terms of the precise time, place, and magnitude of a coming earthquake) is arguably unfeasible in the foreseeable future. To mitigate seismic hazards from future earthquakes in earthquake-prone areas, such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular, ground motion simulations for past and future (possible) significant earthquakes have been performed to understand factors that affect ground shaking in populated areas, and to provide ground shaking characteristics and synthetic seismograms for emergency preparation and design of earthquake-resistant structures. These simulation results can guide the development of more rational seismic provisions, leading to safer, more efficient, and economical structures in earthquake-prone regions.

  2. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in an L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and the correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
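    Our reading of the bias construction, in generic notation (the specific functional form below is an assumption inferred from the three-parameter description in the abstract, not quoted from the paper):

    ```latex
    \delta_z(\mathbf{k}) = b(k)\,\delta_m(\mathbf{k}),
    \qquad
    b(k) = \frac{b_0}{\left(1 + k/k_0\right)^{\alpha}},
    \qquad
    \delta_z(\mathbf{x}) \equiv \frac{z_{\mathrm{re}}(\mathbf{x}) - \bar{z}}{1 + \bar{z}}
    ```

    Filtering a large-scale density field δ_m with b(k) then yields the spatially varying reionization-redshift field z_re(x).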

  3. The effects of large scale processing on caesium leaching from cemented simulant sodium nitrate waste

    International Nuclear Information System (INIS)

    Lee, D.J.; Brown, D.J.

    1982-01-01

    The effects of large scale processing on the properties of cemented simulant sodium nitrate waste have been investigated. Leach tests have been performed on full-size drums, cores and laboratory samples of cement formulations containing Ordinary Portland Cement (OPC), Sulphate Resisting Portland Cement (SRPC) and a blended cement (90% ground granulated blast furnace slag/10% OPC). In addition, development of the cement hydration exotherms with time and the temperature distribution in 220 dm³ samples have been followed. (author)

  4. Large-scale micromagnetics simulations with dipolar interaction using all-to-all communications

    Directory of Open Access Journals (Sweden)

    Hiroshi Tsukahara

    2016-05-01

    We implement in our micromagnetics simulator low-complexity parallel fast-Fourier-transform algorithms, which reduce the frequency of all-to-all communications from six to two times. Almost all of the computation time of a micromagnetics simulation is taken up by the calculation of the magnetostatic field, which can be computed using the fast Fourier transform method. The results show that the simulation time is decreased with good scalability, even when the micromagnetics simulation is performed using 8192 physical cores. This high parallelization efficiency enables large-scale micromagnetics simulations using over one billion cells to be performed. Because massively parallel computing is needed to simulate the magnetization dynamics of real permanent magnets composed of many micron-sized grains, it is expected that our simulator will reveal how magnetization dynamics influences the coercivity of the permanent magnet.
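    The magnetostatic (dipolar) field that dominates the runtime is a convolution, which is exactly what the FFT evaluates cheaply. A serial NumPy sketch of the periodic-grid version (our illustration; the paper's contribution is the parallel FFT with fewer all-to-all exchanges, which is not shown here):

    ```python
    import numpy as np

    def demag_field(M, dx):
        """Magnetostatic field on a periodic grid: H(k) = -k (k.M(k)) / |k|^2.
        M has shape (3, nx, ny, nz); returns H of the same shape."""
        ks = [2 * np.pi * np.fft.fftfreq(n, d=dx) for n in M.shape[1:]]
        kx, ky, kz = np.meshgrid(*ks, indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = 1.0                     # avoid division by zero at k = 0
        Mk = np.fft.fftn(M, axes=(1, 2, 3))
        kdotM = kx * Mk[0] + ky * Mk[1] + kz * Mk[2]
        Hk = -np.stack([kx, ky, kz]) * kdotM / k2
        Hk[:, 0, 0, 0] = 0.0                  # drop the undefined uniform (k = 0) mode
        return np.fft.ifftn(Hk, axes=(1, 2, 3)).real
    ```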

  5. Simulating large-scale spiking neuronal networks with NEST

    OpenAIRE

    Schücker, Jannis; Eppler, Jochen Martin

    2014-01-01

    The Neural Simulation Tool NEST [1, www.nest-simulator.org] is the simulator for spiking neural network models of the HBP that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. Its simulation kernel is written in C++ and it runs on computing hardware ranging from simple laptops to clusters and supercomputers with thousands of processor cores. The development of NEST is coordinated by the NEST Initiative [www.nest-initiative.org].
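
    For orientation, a minimal PyNEST session might look like the sketch below, assuming NEST 3's Python interface is installed; the model, rate and weight choices are illustrative and not taken from the abstract:

```python
import nest  # PyNEST, the Python interface to the NEST kernel

nest.ResetKernel()

# 100 leaky integrate-and-fire neurons driven by Poisson noise (toy parameters)
neurons = nest.Create("iaf_psc_alpha", 100)
noise = nest.Create("poisson_generator", params={"rate": 80000.0})
recorder = nest.Create("spike_recorder")

nest.Connect(noise, neurons, syn_spec={"weight": 1.2})
nest.Connect(neurons, recorder)

nest.Simulate(1000.0)  # simulated time in milliseconds
print(recorder.get("n_events"), "spikes recorded")
```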

  6. Numerical simulations of a large scale oxy-coal burner

    Energy Technology Data Exchange (ETDEWEB)

    Chae, Taeyoung [Korea Institute of Industrial Technology, Cheonan (Korea, Republic of). Energy System R and D Group; Sungkyunkwan Univ., Suwon (Korea, Republic of). School of Mechanical Engineering; Park, Sanghyun; Ryu, Changkook [Sungkyunkwan Univ., Suwon (Korea, Republic of). School of Mechanical Engineering; Yang, Won [Korea Institute of Industrial Technology, Cheonan (Korea, Republic of). Energy System R and D Group

    2013-07-01

    Oxy-coal combustion is one of the promising carbon dioxide capture and storage (CCS) technologies; it uses oxygen and recirculated CO{sub 2} as an oxidizer instead of air. Due to the differences in physical properties between CO{sub 2} and N{sub 2}, oxy-coal combustion requires burner and boiler development based on a fundamental understanding of the flame shape, temperature, radiation and heat flux. For the design of a new oxy-coal combustion system, computational fluid dynamics (CFD) is an essential tool to evaluate detailed combustion characteristics and supplement experimental results. In this study, CFD analysis was performed to understand the combustion characteristics inside a tangential vane swirl type 30 MW coal burner for air-mode and oxy-mode operations. In oxy-mode operations, various compositions of primary and secondary oxidizers were assessed, depending on the recirculation ratio of the flue gas. For the simulations, devolatilization of coal and char burnout by O{sub 2}, CO{sub 2} and H{sub 2}O were predicted with a Lagrangian particle tracking method considering the size distribution of pulverized coal and turbulent dispersion. The radiative heat transfer was solved by employing the discrete ordinate method with the weighted sum of gray gases model (WSGGM) optimized for oxy-coal combustion. In the simulation results for oxy-mode operation, the reduced swirl strength of the secondary oxidizer increased the flame length due to the lower specific volume of CO{sub 2} compared with N{sub 2}. The flame length was also sensitive to the flow rate of the primary oxidizer. Because the oxidizer contains no N{sub 2}, thermal NO{sub x} formation is suppressed, making NO{sub x} emissions lower in oxy-mode than in air-mode. The predicted results showed trends similar to the measured temperature profiles for various oxidizer compositions. Further numerical investigations are required to improve the burner design combined with more detailed experimental results.

  7. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    Science.gov (United States)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4·10⁴, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Ro_t = −0.0909 to Ro_t = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, “over-damped” LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential of using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
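
    The static Smagorinsky model mentioned above closes the subgrid stresses with an eddy viscosity ν_t = (c_s Δ)² |S̄|, where |S̄| = (2 S̄_ij S̄_ij)^{1/2} is the resolved strain-rate magnitude. A minimal sketch on a uniform grid follows; the toy velocity field and grid are illustrative only:

```python
import numpy as np

def smagorinsky_nu_t(u, v, w, dx, cs=0.1):
    """Static Smagorinsky eddy viscosity nu_t = (cs*dx)^2 * |S| on a uniform
    grid; |S| = sqrt(2 S_ij S_ij) with S_ij the resolved strain-rate tensor."""
    grads = [np.gradient(f, dx, dx, dx) for f in (u, v, w)]  # grads[i][j] = du_i/dx_j
    s2 = 0.0
    for i in range(3):
        for j in range(3):
            sij = 0.5 * (grads[i][j] + grads[j][i])          # strain-rate component
            s2 += 2.0 * sij**2
    return (cs * dx) ** 2 * np.sqrt(s2)

# usage on a toy velocity field (simple shear in y)
n, dx = 32, 0.1
x = np.arange(n) * dx
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u, v, w = np.sin(Y), np.zeros_like(X), np.zeros_like(X)
print(smagorinsky_nu_t(u, v, w, dx).mean())
```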

  8. Large-scale derived flood frequency analysis based on continuous simulation

    Science.gov (United States)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km²), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into the catchment models. A long-term simulation of this combined system enables very long discharge series to be derived at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km². 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes the several

  9. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu; Duan, Benchun; Taylor, Valerie

    2011-01-01

    , such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over past several decades. In particular

  10. Simulation test of PIUS-type reactor with large scale experimental apparatus

    International Nuclear Information System (INIS)

    Tamaki, M.; Tsuji, Y.; Ito, T.; Tasaka, K.; Kukita, Yutaka

    1995-01-01

    A large-scale experimental apparatus for simulating the PIUS-type reactor has been constructed, keeping the volumetric scaling ratio faithful to a realistic reactor model. Fundamental experiments such as steady-state operation and a pump-trip simulation were performed. Experimental results were compared with those obtained with the small-scale apparatus at JAERI. We have already reported the effectiveness of feedback control of the primary loop pump speed (PI control) for stable operation. In this paper, this feedback system is modified and PID control is introduced. The new system worked well for the operation of the PIUS-type reactor even under rapid transient conditions. (author)
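
    The PID control introduced in this record combines proportional, integral, and derivative action on the speed error. A minimal discrete sketch follows; the first-order "pump" plant and all gains are illustrative, not the experimental settings:

```python
# Minimal discrete PID loop of the kind described above; the first-order
# plant and the gains kp, ki, kd are illustrative placeholders.
def pid_step(error, state, kp, ki, kd, dt):
    integral, prev_error = state
    integral += error * dt                       # integral (I) term accumulator
    derivative = (error - prev_error) / dt       # derivative (D) term
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

setpoint, speed, dt = 1.0, 0.0, 0.01
state = (0.0, 0.0)
for _ in range(1000):
    u, state = pid_step(setpoint - speed, state, kp=2.0, ki=1.0, kd=0.05, dt=dt)
    speed += dt * (u - speed)                    # first-order plant response
print(f"final speed: {speed:.3f}")               # converges to the setpoint
```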

  11. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.
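
    The synchronization technique described here is commonly known as common random numbers: reusing the same random number string across configurations so that their difference is not swamped by sampling noise. A minimal sketch with an invented toy model:

```python
import numpy as np

def run_model(mean_service, seed):
    """Toy simulation: average of noisy service times. Reusing the seed
    synchronizes the random number string across configurations."""
    rng = np.random.default_rng(seed)
    return (mean_service + rng.normal(0, 1, 10_000)).mean()

# Same seed -> the difference between configurations is nearly noise-free
a = run_model(5.0, seed=42)
b = run_model(5.5, seed=42)
print(f"synchronized difference:   {b - a:.4f}")   # ~0.5000 exactly

# Different seeds -> the comparison inherits both runs' sampling noise
c = run_model(5.5, seed=7)
print(f"unsynchronized difference: {c - a:.4f}")
```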

  12. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    OpenAIRE

    Qiang Liu; Yi Qin; Guodong Li

    2018-01-01

    Computing speed is a significant issue for large-scale flood simulations intended for real-time response in disaster prevention and mitigation. Even today, most large-scale flood simulations are run on supercomputers due to the massive amounts of data and computation necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...

  13. The cavitation erosion of ultrasonic sonotrode during large-scale metallic casting: Experiment and simulation.

    Science.gov (United States)

    Tian, Yang; Liu, Zhilin; Li, Xiaoqian; Zhang, Lihua; Li, Ruiqing; Jiang, Ripeng; Dong, Fang

    2018-05-01

    Ultrasonic sonotrodes play an essential role in transmitting power ultrasound into large-scale metallic castings. However, cavitation erosion considerably impairs the in-service performance of ultrasonic sonotrodes, leading to marginal microstructural refinement. In this work, the cavitation erosion behaviour of ultrasonic sonotrodes in large-scale castings was explored using industry-level experiments on Al alloy cylindrical ingots (i.e. 630 mm in diameter and 6000 mm in length). When introducing power ultrasound, severe cavitation erosion was found to reproducibly occur at some specific positions on the ultrasonic sonotrodes. However, no cavitation erosion was present on ultrasonic sonotrodes that were not driven by the electric generator. Vibratory examination showed that cavitation erosion depended on the vibration state of the ultrasonic sonotrodes. Moreover, a finite element (FE) model was developed to simulate the evolution and distribution of acoustic pressure in the 3-D solidification volume. FE simulation results confirmed that significant dynamic interaction between sonotrodes and melts only happened at the specific positions corresponding to severe cavitation erosion. This work will allow for developing more advanced ultrasonic sonotrodes with better cavitation erosion resistance, in particular for large-scale castings, from the perspectives of ultrasonic physics and mechanical design. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Interactive and Large Scale Supercomputing Simulations in Nonlinear Optics

    National Research Council Canada - National Science Library

    Moloney, J

    2001-01-01

    .... The upgrade consisted of purchasing 8 of the newest generation of 400 MHz CPUs, converting one of the ONYX2 racks into a fully loaded 16-processor Origin 2000/2400 system and moving both high performance...

  15. Modeling and simulation of large scale stirred tank

    Science.gov (United States)

    Neuville, John R.

    The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process through the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software, and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from the pilot plants, which had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington, West Valley in New York, and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat-bottom cylindrical mixing vessel with a centrally located helical coil and an agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top region and bottom region of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full-scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham plastic. Particles in the mixture have an abrasive character that causes excessive erosion of internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the
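
    A Bingham plastic, the rheology used above to characterize the slurry, resists flow until a yield stress τ_y is exceeded, after which stress grows linearly with shear rate: τ = τ_y + μ_p·γ̇. A tiny sketch with placeholder parameters (not DWPF measurements) shows the hallmark behaviour that the apparent viscosity diverges at low shear rates:

```python
# Bingham-plastic rheology: shear stress tau = tau_y + mu_p * gamma_dot
# above the yield stress tau_y. Parameter values are illustrative only.
def bingham_stress(gamma_dot, tau_y=2.5, mu_p=0.01):
    return tau_y + mu_p * gamma_dot                  # Pa

def apparent_viscosity(gamma_dot, tau_y=2.5, mu_p=0.01):
    return bingham_stress(gamma_dot, tau_y, mu_p) / gamma_dot  # Pa.s

for rate in (1.0, 10.0, 100.0, 1000.0):              # shear rates, 1/s
    print(rate, round(apparent_viscosity(rate), 4))  # viscosity falls with shear
```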

  16. Symplectic integrators for large scale molecular dynamics simulations: A comparison of several explicit methods

    International Nuclear Information System (INIS)

    Gray, S.K.; Noid, D.W.; Sumpter, B.G.

    1994-01-01

    We test the suitability of a variety of explicit symplectic integrators for molecular dynamics calculations on Hamiltonian systems. These integrators are extremely simple algorithms with low memory requirements, and appear to be well suited for large scale simulations. We first apply all the methods to a simple test case using the ideas of Berendsen and van Gunsteren. We then use the integrators to generate long time trajectories of a 1000 unit polyethylene chain. Calculations are also performed with two popular but nonsymplectic integrators. The most efficient integrators of the set investigated are deduced. We also discuss certain variations on the basic symplectic integration technique
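
    As a concrete illustration of why symplectic integrators suit long trajectories, the sketch below compares the simplest explicit symplectic method (velocity Verlet) against explicit Euler on a harmonic oscillator: the symplectic method's energy error stays bounded while Euler's grows. This is a generic demonstration, not the paper's polyethylene test case:

```python
import numpy as np

def velocity_verlet(x, v, force, dt, steps):
    """Velocity Verlet: the simplest explicit symplectic integrator."""
    energies, f = [], force(x)
    for _ in range(steps):
        v += 0.5 * dt * f
        x += dt * v
        f = force(x)
        v += 0.5 * dt * f
        energies.append(0.5 * v**2 + 0.5 * x**2)   # unit mass and stiffness
    return np.array(energies)

def explicit_euler(x, v, force, dt, steps):
    energies = []
    for _ in range(steps):
        x, v = x + dt * v, v + dt * force(x)       # non-symplectic update
        energies.append(0.5 * v**2 + 0.5 * x**2)
    return np.array(energies)

force = lambda x: -x                                # harmonic oscillator
e_vv = velocity_verlet(1.0, 0.0, force, 0.01, 100_000)
e_eu = explicit_euler(1.0, 0.0, force, 0.01, 100_000)
print("Verlet energy drift:", abs(e_vv[-1] - e_vv[0]))   # stays tiny
print("Euler  energy drift:", abs(e_eu[-1] - e_eu[0]))   # grows steadily
```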

  17. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    Energy Technology Data Exchange (ETDEWEB)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento, Trento (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento, Trento (Italy)

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
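
    The rejection idea behind RSSA can be sketched as follows: candidate reactions are drawn from precomputed propensity upper bounds and accepted with probability a(x)/ā, so exact propensities are recomputed only for candidates. This is a simplified illustration of the mechanism, not the authors' full algorithm, which derives the bounds from a fluctuation interval of the state:

```python
import random

def rejection_step(state, reactions, ubounds):
    """Pick a candidate reaction using propensity upper bounds, then accept
    with probability a(x)/a_ub, computing the exact propensity on demand."""
    a0_ub = sum(ubounds)
    while True:
        r = random.random() * a0_ub          # candidate ~ upper bounds
        j, acc = 0, ubounds[0]
        while acc < r:
            j += 1
            acc += ubounds[j]
        a_exact = reactions[j](state)         # exact propensity, only here
        if random.random() <= a_exact / ubounds[j]:
            return j                          # accepted firing

# toy system: A -> B (rate 1.0*A), B -> A (rate 0.5*B)
state = {"A": 100, "B": 0}
reactions = [lambda s: 1.0 * s["A"], lambda s: 0.5 * s["B"]]
ubounds = [1.0 * 120, 0.5 * 120]              # loose bounds while A, B <= 120
for _ in range(50):
    j = rejection_step(state, reactions, ubounds)
    if j == 0: state["A"] -= 1; state["B"] += 1
    else:      state["B"] -= 1; state["A"] += 1
print(state)
```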

  18. A Novel CPU/GPU Simulation Environment for Large-Scale Biologically-Realistic Neural Modeling

    Directory of Open Access Journals (Sweden)

    Roger V Hoang

    2013-10-01

    Computational neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across these heterogeneous clusters of CPUs and GPUs.
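
    The built-in IZH model refers to the Izhikevich neuron, a two-variable model cheap enough for million-cell simulations. A minimal sketch with the standard regular-spiking parameters; the input current and time step are illustrative:

```python
def izhikevich(T=1000.0, dt=0.5, I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich neuron (regular-spiking parameters):
    v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u);
    on spike (v >= 30 mV): v <- c, u <- u + d."""
    v, u, spikes = c, b * c, []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                     # spike threshold and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich()), "spikes in 1 s")
```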

  19. Anatomically detailed and large-scale simulations studying synapse loss and synchrony using NeuroBox

    Directory of Open Access Journals (Sweden)

    Markus eBreit

    2016-02-01

    The morphology of neurons and networks plays an important role in processing electrical and biochemical signals. Based on neuronal reconstructions, which are becoming abundantly available through databases such as NeuroMorpho.org, numerical simulations of Hodgkin-Huxley-type equations, coupled to biochemical models, can be performed in order to systematically investigate the influence of cellular morphology and the connectivity pattern in networks on the underlying function. Development in the area of synthetic neural network generation and morphology reconstruction from microscopy data has brought forth the software tool NeuGen. Coupling this morphology data (either from databases, synthetic generation, or reconstruction) to the simulation platform UG 4 (which harbors a neuroscientific portfolio) and VRL-Studio has brought forth the extendible toolbox NeuroBox. NeuroBox allows users to perform numerical simulations on hybrid-dimensional morphology representations. The code basis is designed in a modular way, such that e.g. new channel or synapse types can be added to the library. Workflows can be specified through scripts or through the VRL-Studio graphical workflow representation. Third-party tools, such as ImageJ, can be added to NeuroBox workflows. In this paper, NeuroBox is used to study the electrical and biochemical effects of synapse loss vs. synchrony in neurons, to investigate large morphology data sets within detailed biophysical simulations, and to demonstrate the capability of utilizing high-performance computing infrastructure for large-scale network simulations. Using new synapse distribution methods and Finite Volume based numerical solvers for compartment-type models, our results demonstrate how an increase in synaptic synchronization can compensate for synapse loss at the electrical and calcium level, and how detailed neuronal morphology can be integrated in large-scale network simulations.

  20. Use of a large-scale rainfall simulator reveals novel insights into stemflow generation

    Science.gov (United States)

    Levia, D. F., Jr.; Iida, S. I.; Nanko, K.; Sun, X.; Shinohara, Y.; Sakai, N.

    2017-12-01

    Detailed knowledge of stemflow generation and its effects on both hydrological and biogeochemical cycling is important to achieve a holistic understanding of forest ecosystems. Field studies and a smaller set of experiments performed under laboratory conditions have increased our process-based knowledge of stemflow production. Building upon these earlier works, a large-scale rainfall simulator was employed to deepen our understanding of stemflow generation processes. The large-scale rainfall simulator provides a unique opportunity to examine a range of rainfall intensities under constant conditions, which is difficult in the field due to the variable nature of natural rainfall intensities. Stemflow generation and production were examined for three species - Cryptomeria japonica D. Don (Japanese cedar), Chamaecyparis obtusa (Siebold & Zucc.) Endl. (Japanese cypress), and Zelkova serrata Thunb. (Japanese zelkova) - under both leafed and leafless conditions at several different rainfall intensities (15, 20, 30, 40, 50, and 100 mm h⁻¹) using a large-scale rainfall simulator at the National Research Institute for Earth Science and Disaster Resilience (Tsukuba, Japan). Stemflow production, stemflow rates, and funneling ratios were examined in relation to both rainfall intensity and canopy structure. Preliminary results indicate a dynamic and complex response of the funneling ratios of individual trees to different rainfall intensities among the species examined. This is partly the result of different canopy structures, hydrophobicity of vegetative surfaces, and differential wet-up processes across species and rainfall intensities. This presentation delves into these differences and attempts to distill them into generalizable patterns, which can advance our theories of stemflow generation processes and ultimately permit better stewardship of forest resources. ________________ Funding note: This research was supported by JSPS Invitation Fellowship for Research in
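
    The funneling ratio reported here is commonly defined (following Herwitz) as F = V/(P·B): stemflow volume divided by rainfall depth times trunk basal area, with F > 1 indicating that the crown concentrates water onto the stem. A tiny sketch with invented numbers, not the experiment's data:

```python
# Funneling ratio F = V / (P * B): V is stemflow volume (L), P rainfall
# depth (mm, equivalent to L per m^2), B trunk basal area (m^2).
def funneling_ratio(stemflow_l, rain_mm, basal_area_m2):
    return stemflow_l / (rain_mm * basal_area_m2)

print(funneling_ratio(stemflow_l=12.0, rain_mm=30.0, basal_area_m2=0.02))  # 20.0
```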

  1. Plasmonic resonances of nanoparticles from large-scale quantum mechanical simulations

    Science.gov (United States)

    Zhang, Xu; Xiang, Hongping; Zhang, Mingliang; Lu, Gang

    2017-09-01

    Plasmonic resonance of metallic nanoparticles results from the coherent motion of their conduction electrons, driven by incident light. For nanoparticles less than 10 nm in diameter, localized surface plasmonic resonances become sensitive to the quantum nature of the conduction electrons. Unfortunately, quantum mechanical simulations based on time-dependent Kohn-Sham density functional theory are computationally too expensive to tackle metal particles larger than 2 nm. Herein, we introduce the recently developed time-dependent orbital-free density functional theory (TD-OFDFT) approach, which enables large-scale quantum mechanical simulations of the plasmonic responses of metallic nanostructures. Using TD-OFDFT, we have performed quantum mechanical simulations to understand the size-dependent plasmonic response of Na nanoparticles and the plasmonic responses of Na nanoparticle dimers and trimers. An outlook on future development of the TD-OFDFT method is also presented.

  2. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and the computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from the intensive computational requirements of detailed modeling investigations of real-world reservoirs. This paper presents the application of a massively parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating the flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance

  3. Ownership and firm performance after large-scale privatization

    Czech Academy of Sciences Publication Activity Database

    Kočenda, Evžen; Švejnar, Jan

    -, č. 4143 (2003), s. 1-36 ISSN 0265-8003 Institutional research plan: CEZ:AV0Z7085904 Keywords : industrial organization * ownership * performance and privatization Subject RIV: AH - Economics www.cepr.org/pubs/dps/DP4143.asp

  4. Performance of mushroom fruiting for large scale commercial production

    International Nuclear Information System (INIS)

    Mat Rosol Awang; Rosnani Abdul Rashid; Hassan Hamdani Mutaat; Mohd Meswan Maskom

    2012-01-01

    This paper describes the determination of mushroom fruiting yield, which is vital to the economics of mushroom production. Consistency in mushroom yields enables revenues to be estimated and hence profitability to be predicted. Many growers have reported large variations in mushroom yields across different production runs. To assess such claims, we ran four batches of mushroom fruiting, and the fruiting body production performance is presented. (author)

  5. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever larger domains. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  6. Fault Transient Analysis and Protection Performance Evaluation within a Large-scale PV Power Plant

    Directory of Open Access Journals (Sweden)

    Wen Jinghua

    2016-01-01

    In this paper, a short-circuit test within a large-scale PV power plant with a total capacity of 850 MWp is discussed. The fault currents supplied by the PV generation units are presented and analysed. Based on the observed fault behaviour, the existing protection coordination principles within the plant are reviewed and their performance is evaluated. Moreover, these protections are examined on a simulation platform under different operating situations. A simple communication-based measure is proposed to deal with a foreseeable problem in the current protection scheme of the PV power plant.

  7. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    Science.gov (United States)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA to LIS, meaningful simulations containing a large multi-model ensemble will be enabled, providing advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists who are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple run-time environments across the LIS community has placed a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all its dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that once took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and

  8. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis

    International Nuclear Information System (INIS)

    Chen, H.-W.; Chang, N.-B.; Chen, J.-C.; Tsai, S.-J.

    2010-01-01

    Limited by insufficient land resources, incinerators are considered in many countries such as Japan and Germany as the major technology for a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper aims to demonstrate the application of data envelopment analysis (DEA) - a production economics tool - to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling using Monte Carlo simulation to outline the possibility distributions of operational efficiency of these incinerators. Uncertainty analysis using the Monte Carlo simulation provides a balance between the simplifications of our analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in the DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromise assessment procedure. Our research findings will eventually lead to the identification of the optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan, but also elsewhere in the world.

  9. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis.

    Science.gov (United States)

    Chen, Ho-Wen; Chang, Ni-Bin; Chen, Jeng-Chung; Tsai, Shu-Ju

    2010-07-01

    Limited by insufficient land resources, incinerators are considered in many countries such as Japan and Germany as the major technology for a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper aims to demonstrate the application of data envelopment analysis (DEA) - a production economics tool - to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling using Monte Carlo simulation to outline the possibility distributions of operational efficiency of these incinerators. Uncertainty analysis using the Monte Carlo simulation provides a balance between the simplifications of our analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in the DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromise assessment procedure. Our research findings will eventually lead to the identification of the optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan, but also elsewhere in the world. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
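
    The DEA efficiency scores discussed in these two records can be computed per unit from a small linear program. The sketch below implements the standard input-oriented CCR envelopment form (min θ s.t. Xλ ≤ θx₀, Yλ ≥ y₀, λ ≥ 0) with SciPy; the incinerator-like data are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: (m inputs x n units), Y: (s outputs x n units).
    min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # variables: [theta, lam_1..lam_n]
    A_ub = np.r_[np.c_[-X[:, [j0]], X],           # X lam - theta x0 <= 0
                 np.c_[np.zeros((s, 1)), -Y]]     # -Y lam <= -y0
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# toy data: 2 inputs, 1 output, 4 incinerator-like units (values invented)
X = np.array([[4.0, 7.0, 8.0, 4.0],
              [3.0, 3.0, 1.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
for j in range(4):
    print(f"unit {j}: efficiency {dea_ccr_input(X, Y, j):.3f}")
```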

  10. Performance of automatic generation control mechanisms with large-scale wind power

    Energy Technology Data Exchange (ETDEWEB)

    Ummels, B.C.; Gibescu, M.; Paap, G.C. [Delft Univ. of Technology (Netherlands); Kling, W.L. [Transmission Operations Department of TenneT bv (Netherlands)

    2007-11-15

    The unpredictability and variability of wind power increasingly challenge the real-time balancing of supply and demand in electric power systems. In liberalised markets, balancing is a responsibility jointly held by the TSO (real-time power balancing) and PRPs (energy programs). In this paper, a procedure is developed for the simulation of power system balancing and the assessment of AGC performance in the presence of large-scale wind power, using the Dutch control zone as a case study. The simulation results show that the performance of existing AGC mechanisms is adequate for keeping the ACE within acceptable bounds. At higher wind power penetrations, however, the capabilities of the generation mix are increasingly challenged and additional reserves are required at the same level. (au)
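
    AGC acts on the area control error, conventionally ACE = ΔP_tie + B·Δf (tie-line power deviation plus a frequency-bias term). A minimal sketch of integral secondary control stepping through a few measured samples; the bias factor, gain, and sample values are illustrative only:

```python
def ace(dP_tie_mw, df_hz, bias_mw_per_hz=120.0):
    """Area control error: tie-line deviation plus frequency bias."""
    return dP_tie_mw + bias_mw_per_hz * df_hz

# integral (secondary) control: ramp regulating units against accumulated ACE
setpoint, Ki, dt = 0.0, 0.1, 4.0          # MW, integral gain 1/s, AGC cycle s
samples = [(30.0, -0.05), (25.0, -0.04), (18.0, -0.02), (6.0, -0.01)]
for dP_tie, df in samples:                # e.g. measurements during a wind ramp
    setpoint -= Ki * ace(dP_tie, df) * dt
    print(f"ACE={ace(dP_tie, df):6.1f} MW -> regulation setpoint {setpoint:7.1f} MW")
```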

  11. Improving Large-scale Storage System Performance via Topology-aware and Balanced Data Placement

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Vazhkudai, Sudharshan S [ORNL

    2014-01-01

    With the advent of big data, the I/O subsystems of large-scale compute clusters are becoming a center of focus, with more applications putting greater demands on end-to-end I/O performance. These subsystems are often complex in design. They comprise multiple hardware and software layers to cope with the increasing capacity, capability and scalability requirements of data-intensive applications. The shared nature of storage resources and the intrinsic interactions across these layers make realizing user-level, end-to-end performance gains a great challenge. We propose a topology-aware resource load balancing strategy to improve per-application I/O performance. We demonstrate the effectiveness of our algorithm on an extreme-scale compute cluster, Titan, at the Oak Ridge Leadership Computing Facility (OLCF). Our experiments with both synthetic benchmarks and a real-world application show that, even under congestion, our proposed algorithm can improve large-scale application I/O performance significantly, resulting in both the reduction of application run times and higher-resolution simulation runs.

  12. The Roles of Sparse Direct Methods in Large-scale Simulations

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-01-01

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) project is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence for iterative solvers. We have deployed our direct-method techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research

  13. The Roles of Sparse Direct Methods in Large-scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-06-27

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) project is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence for iterative solvers. We have deployed our direct-method techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research.
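
    For readers who want to experiment with a sparse direct solver, SciPy exposes SuperLU (developed by the first author of these records); the sketch below factors a 1-D Poisson matrix, a stand-in for a PDE discretization, once and then reuses the factorization:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

# 1-D Poisson matrix as a stand-in for a PDE discretization
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

lu = splu(A)           # sparse LU factorization via SuperLU
x = lu.solve(b)        # factor once, solve for any number of right-hand sides
print(np.linalg.norm(A @ x - b))   # residual near machine precision
```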

  14. Large-scale agent-based social simulation : A study on epidemic prediction and control

    NARCIS (Netherlands)

    Zhang, M.

    2016-01-01

    Large-scale agent-based social simulation is gradually proving to be a versatile methodological approach for studying human societies, which could make contributions from policy making in social science, to distributed artificial intelligence and agent technology in computer science, and to theory

  15. Development of large scale fusion plasma simulation and storage grid on JAERI Origin3800 system

    International Nuclear Information System (INIS)

    Idomura, Yasuhiro; Wang, Xin

    2003-01-01

    Under the Numerical EXperiment of Tokamak (NEXT) research project, various fluid, particle, and hybrid codes have been developed. These codes require a computational environment that consists of high-performance processors, a high-speed storage system, and a high-speed parallelized visualization system. In this paper, the performance of the JAERI Origin3800 system is examined from the point of view of these requirements. In the performance tests, it is shown that the representative particle and fluid codes operate with 15 - 40% processing efficiency on up to 512 processors. A storage area network (SAN) provides high-speed parallel data transfer. A parallel visualization system enables orders-of-magnitude faster visualization of large-scale simulation data compared with the previous graphics workstations. Accordingly, an extremely advanced simulation environment is realized on the JAERI Origin3800 system. Recently, development of a storage grid has been underway in order to improve the computational environment for remote users. The storage grid is constructed by a combination of a SAN and a wavelength division multiplexer (WDM). Preliminary tests show that, compared with existing data transfer methods, it enables dramatically higher-speed data transfer of ∼100 Gbps over a wide area network. (author)

  16. Characteristics of Tornado-Like Vortices Simulated in a Large-Scale Ward-Type Simulator

    Science.gov (United States)

    Tang, Zhuo; Feng, Changda; Wu, Liang; Zuo, Delong; James, Darryl L.

    2018-02-01

    Tornado-like vortices are simulated in a large-scale Ward-type simulator to further advance the understanding of such flows, and to facilitate future studies of tornado wind loading on structures. Measurements of the velocity fields near the simulator floor and the resulting floor surface pressures are interpreted to reveal the mean and fluctuating characteristics of the flow as well as the characteristics of the static-pressure deficit. We focus on the manner in which the swirl ratio and the radial Reynolds number affect these characteristics. The transition of the tornado-like flow from a single-celled vortex to a dual-celled vortex with increasing swirl ratio and the impact of this transition on the flow field and the surface-pressure deficit are closely examined. The mean characteristics of the surface-pressure deficit caused by tornado-like vortices simulated at a number of swirl ratios compare well with the corresponding characteristics recorded during full-scale tornadoes.

  17. A method of orbital analysis for large-scale first-principles simulations

    International Nuclear Information System (INIS)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-01-01

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF₄)

  18. Establishment of DNS database in a turbulent channel flow by large-scale simulations

    OpenAIRE

    Abe, Hiroyuki; Kawamura, Hiroshi; 阿部 浩幸; 河村 洋

    2008-01-01

    In the present study, we establish statistical DNS (Direct Numerical Simulation) database in a turbulent channel flow with passive scalar transport at high Reynolds numbers and make the data available at our web site (http://murasun.me.noda.tus.ac.jp/turbulence/). The established database is reported together with the implementation of large-scale simulations, representative DNS results and results on turbulence model testing using the DNS data.

  19. Testing of Large-Scale ICV Glasses with Hanford LAW Simulant

    Energy Technology Data Exchange (ETDEWEB)

    Hrma, Pavel R.; Kim, Dong-Sang; Vienna, John D.; Matyas, Josef; Smith, Donald E.; Schweiger, Michael J.; Yeager, John D.

    2005-03-01

    Preliminary glass compositions for immobilizing Hanford low-activity waste (LAW) by the in-container vitrification (ICV) process were initially fabricated at crucible- and engineering-scale, including simulants and actual (radioactive) LAW. Glasses were characterized for vapor hydration test (VHT) and product consistency test (PCT) responses and crystallinity (both quenched and slow-cooled samples). Selected glasses were tested for toxicity characteristic leach procedure (TCLP) responses, viscosity, and electrical conductivity. This testing showed that glasses with a LAW loading of 20 mass% can be made readily and meet all product constraints by a wide margin. Glasses with over 22 mass% Na2O can be made to meet all other product quality and process constraints. Large-scale testing was performed at the AMEC, Geomelt Division facility in Richland. Three tests were conducted using simulated LAW with increasing loadings of 12, 17, and 20 mass% Na2O. Glass samples were taken from the test products in a manner to represent the full expected range of product performance. These samples were characterized for composition, density, crystalline and non-crystalline phase assemblage, and durability using the VHT, PCT, and TCLP tests. The results, presented in this report, show that the AMEC ICV product meets all waste form requirements by a large margin. These results provide strong evidence that the Hanford LAW can be successfully vitrified by the ICV technology and can meet all the constraints related to product quality. The economic feasibility of the ICV technology can be further enhanced by subsequent optimization.

  20. Large-scale atomistic simulations of nanostructured materials based on divide-and-conquer density functional theory

    Directory of Open Access Journals (Sweden)

    Vashishta P.

    2011-05-01

    A linear-scaling algorithm based on a divide-and-conquer (DC) scheme is designed to perform large-scale molecular-dynamics simulations, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). This scheme is applied to the thermite reaction at an Al/Fe2O3 interface. It is found that mass diffusion and the reaction rate at the interface are enhanced by a concerted metal-oxygen flip mechanism. Preliminary simulations are carried out for an aluminum particle in water based on conventional DFT, as a target system for large-scale DC-DFT simulations. A pair of Lewis acid and base sites on the aluminum surface preferentially catalyzes hydrogen production in a low activation-barrier mechanism found in the simulations

  1. Large-Scale Covariability Between Aerosol and Precipitation Over the 7-SEAS Region: Observations and Simulations

    Science.gov (United States)

    Huang, Jingfeng; Hsu, N. Christina; Tsay, Si-Chee; Zhang, Chidong; Jeong, Myeong Jae; Gautam, Ritesh; Bettenhausen, Corey; Sayer, Andrew M.; Hansell, Richard A.; Liu, Xiaohong; hide

    2012-01-01

    One of the seven scientific areas of interest of the 7-SEAS field campaign is to evaluate the impact of aerosol on cloud and precipitation (http://7-seas.gsfc.nasa.gov). However, large-scale covariability between aerosol, cloud and precipitation is complicated not only by the ambient environment and a variety of aerosol effects, but also by effects from rain washout and climate factors. This study characterizes large-scale aerosol-cloud-precipitation covariability through synergy of long-term multi-sensor satellite observations with model simulations over the 7-SEAS region [10S-30N, 95E-130E]. Results show that climate factors such as ENSO significantly modulate aerosol and precipitation over the region simultaneously. After removal of climate-factor effects, aerosol and precipitation are significantly anti-correlated over the southern part of the region, where high aerosol loading is associated with overall reduced total precipitation with intensified rain rates and decreased rain frequency, decreased tropospheric latent heating, suppressed cloud top height and increased outgoing longwave radiation, and enhanced clear-sky shortwave TOA flux but reduced all-sky shortwave TOA flux in deep convective regimes; such covariability becomes less notable over the northern counterpart of the region, where low-level stratus are found. Using CO as a proxy of biomass-burning aerosols to minimize the washout effect, large-scale covariability between CO and precipitation was also investigated, and similar large-scale covariability was observed. Model simulations with NCAR CAM5 were found to show effects similar to the observations in their spatio-temporal patterns. Results from both observations and simulations are valuable for improving our understanding of this region's meteorological system and the roles of aerosol within it. Key words: aerosol; precipitation; large-scale covariability; aerosol effects; washout; climate factors; 7-SEAS; CO; CAM5

  2. Choosing the best partition of the output from a large-scale simulation

    Energy Technology Data Exchange (ETDEWEB)

    Challacombe, Chelsea Jordan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Casleton, Emily Michele [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-26

    Data partitioning becomes necessary when a large-scale simulation produces more data than can be feasibly stored. The goal is to partition the data, typically so that every element belongs to one and only one partition, and store summary information about each partition: either a representative value plus an estimate of the error, or a distribution. Once the partitions are determined and the summary information stored, the raw data are discarded. This process can be performed in situ, meaning while the simulation is running. When creating the partitions, researchers must make many decisions: for instance, how to determine when an adequate number of partitions has been created, how the partitions divide the data, or how many variables should be considered simultaneously. In addition, decisions must be made about how to summarize the information within each partition. Because of the combinatorial number of possible ways to partition and summarize the data, a method of comparing the different possibilities will help guide researchers toward choosing a good partitioning and summarization scheme for their application.
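
    A minimal sketch of the partition-and-summarize workflow under simple assumptions (1-D data, equal-width partitions, mean plus standard deviation as the summary) might look like this:

```python
import numpy as np

def summarize_partitions(data, n_parts):
    """Equal-width 1-D partitioning with a representative value plus an
    error estimate per partition; the raw data can then be discarded."""
    parts = np.array_split(data, n_parts)
    return [(p.mean(), p.std(ddof=1), len(p)) for p in parts]

# in-situ style usage: summarize a (mock) simulation output, drop the raw data
raw = np.random.default_rng(1).normal(size=1_000_000)
summary = summarize_partitions(raw, n_parts=100)
del raw                      # only the summaries are kept
print(summary[0])            # (mean, error estimate, count) of first partition
```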

  3. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    Science.gov (United States)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these structures operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and their complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  4. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Lipeng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Feiyi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oral, H. Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cao, Qing [Univ. of Tennessee, Knoxville, TN (United States)

    2014-11-01

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient for assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and at investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, and failure patterns and propagation, and it performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results
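
    To illustrate why simulation becomes attractive once failure models get richer than closed-form MTTDL formulas allow, here is a crude Monte Carlo sketch for a two-disk mirror with exponential lifetimes and a fixed rebuild window. The parameters are deliberately toy-sized, and the framework described above additionally models propagation across interconnected subsystems:

```python
import numpy as np

def mttdl_mirror(mttf_h=1000.0, repair_h=24.0, trials=5000):
    """Monte Carlo estimate of mean time to data loss for a two-disk mirror:
    loss occurs when the surviving disk fails before the rebuild finishes."""
    rng = np.random.default_rng(0)
    times = []
    for _ in range(trials):
        clock = 0.0
        while True:
            clock += rng.exponential(mttf_h / 2)     # first of 2 disks fails
            if rng.exponential(mttf_h) < repair_h:   # partner dies mid-rebuild
                times.append(clock)
                break
    return float(np.mean(times))

print(f"estimated MTTDL: {mttdl_mirror():.0f} hours")
# analytic check for this simple model: MTTF^2 / (2 * repair) ~ 20833 hours
```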

  5. Planetary Structures And Simulations Of Large-scale Impacts On Mars

    Science.gov (United States)

    Swift, Damian; El-Dasher, B.

    2009-09-01

    The impact of large meteoroids is a possible cause of isolated orogeny on bodies devoid of tectonic activity. On Mars, there is a significant, but not perfect, correlation between large, isolated volcanoes and antipodal impact craters. On Mercury and the Moon, brecciated terrain and other unusual surface features can be found at the antipodes of large impact sites. On Earth, there is a moderate correlation between long-lived mantle hotspots on opposite sides of the planet, with meteoroid impact suggested as a possible cause. If induced by impacts, the mechanisms of orogeny and volcanism thus appear to vary between these bodies, presumably because of differences in internal structure. Continuum mechanics (hydrocode) simulations have been used to investigate the response of planetary bodies to impacts, requiring assumptions about the structure of the body: its composition and temperature profile, and the constitutive properties (equation of state, strength, viscosity) of the components. We are able to predict theoretically and test experimentally the constitutive properties of matter under planetary conditions with reasonable accuracy. To provide a reference series of simulations, we have constructed self-consistent planetary structures using simplified compositions (an Fe core and a basalt-like mantle), which turn out to agree surprisingly well with the moments of inertia. We have performed simulations of large-scale impacts, studying the transmission of energy to the antipodes. For Mars, significant antipodal heating to depths of a few tens of kilometers was predicted from compression waves transmitted through the mantle. Such heating is a mechanism for volcanism on Mars, possibly in conjunction with crustal cracking induced by surface waves. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  6. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    Science.gov (United States)

    He, Qing; Li, Hong

    Belt conveyors are among the most important devices for transporting bulk solid material over long distances. Dynamic analysis is key to deciding whether a design is technically sound, safe and reliable in operation, and economically feasible. Studying dynamic properties is essential for improving efficiency and productivity and for guaranteeing safe, reliable, and stable conveyor operation. Dynamic research on, and applications of, large-scale belt conveyors are discussed. The main research topics and the state of the art of belt conveyor dynamics are analyzed. Future work will focus on dynamic analysis, modeling and simulation of the main components and the whole system, as well as nonlinear modeling, simulation, and vibration analysis of large-scale conveyor systems.

  7. Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation

    International Nuclear Information System (INIS)

    Hoshi, T; Fujiwara, T

    2009-01-01

    An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.

  8. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    Full Text Available We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity on a large scale. We use a Shuttle Radar Topography Mission based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion for a body falling along an inclined plane. We assume an inelastic collision occurs at every junction of two river branches to describe the dynamics of the merged flow velocity.
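
    As an illustration of the two model ingredients named in the abstract, the sketch below updates the flow speed over one DEM grid cell using a falling-body relation on an inclined plane and merges two branches by momentum conservation. The grid spacing, friction coefficient, and discharges are illustrative assumptions, not values from the paper.

    ```python
    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def segment_velocity(v_in, dz, dx, friction=0.01):
        """Update flow speed over one DEM cell of run dx and drop dz,
        using the falling-body relation v^2 = v0^2 + 2*a*s with friction."""
        theta = math.atan2(dz, dx)                       # local slope angle
        a = G * (math.sin(theta) - friction * math.cos(theta))
        v_sq = max(v_in * v_in + 2.0 * a * math.hypot(dx, dz), 0.0)
        return math.sqrt(v_sq)

    def merge_velocity(q1, v1, q2, v2):
        """Inelastic 'collision' of two branches: momentum conservation,
        weighted by discharge q as a stand-in for mass flux."""
        return (q1 * v1 + q2 * v2) / (q1 + q2)

    v = segment_velocity(1.0, dz=2.0, dx=90.0)   # one ~90 m SRTM cell
    print(v, merge_velocity(10.0, v, 4.0, 0.8))  # speed after a junction
    ```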

  9. The Convergence of High Performance Computing and Large Scale Data Analytics

    Science.gov (United States)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and the complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets, such as Landsat, MODIS, MERRA, and NGA, are stored in this system in a write-once/read-many file system. High-performance virtual machines are deployed and scaled according to each scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is stored within a Hadoop Distributed File System (HDFS), enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations, dramatically speeding up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the exascale architectures required for future systems.
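
    The spatiotemporal indexing idea lends itself to a compact illustration. The sketch below uses a hypothetical table layout, not the NCCS schema, to show how a relational index can map a (variable, time, bounding-box) query to HDFS block locations so that only matching chunks are read.

    ```python
    import sqlite3

    # Hypothetical spatiotemporal index: each row maps one data chunk's
    # time/lat/lon extent to its location inside HDFS.
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE chunk_index (
                      variable TEXT, t0 REAL, t1 REAL,
                      lat0 REAL, lat1 REAL, lon0 REAL, lon1 REAL,
                      hdfs_block TEXT, byte_offset INTEGER)""")
    conn.execute("INSERT INTO chunk_index VALUES "
                 "('T2M', 0, 24, -90, 0, -180, 0, 'hdfs://merra/part-0001', 4096)")

    # A query touches only the chunks whose boxes intersect the request,
    # so the downstream MapReduce job reads a small subset of the archive.
    rows = conn.execute("""SELECT hdfs_block, byte_offset FROM chunk_index
                           WHERE variable = ? AND t1 >= ? AND t0 <= ?
                             AND lat1 >= ? AND lat0 <= ?
                             AND lon1 >= ? AND lon0 <= ?""",
                        ('T2M', 0, 12, -30, -10, -120, -100)).fetchall()
    print(rows)
    ```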

  10. Properties of liquid clusters in large-scale molecular dynamics nucleation simulations

    International Nuclear Information System (INIS)

    Angélil, Raymond; Diemand, Jürg; Tanaka, Kyoko K.; Tanaka, Hidekazu

    2014-01-01

    We have performed large-scale Lennard-Jones molecular dynamics simulations of homogeneous vapor-to-liquid nucleation, with 10⁹ atoms. This large number allows us to resolve extremely low nucleation rates, and also provides excellent statistics for cluster properties over a wide range of cluster sizes. The nucleation rates, cluster growth rates, and size distributions are presented in Diemand et al. [J. Chem. Phys. 139, 74309 (2013)], while this paper analyses the properties of the clusters. We explore the cluster temperatures, density profiles, potential energies, and shapes. A thorough understanding of the properties of the clusters is crucial to the formulation of nucleation models. Significant latent heat is retained by stable clusters, by as much as ΔkT = 0.1ε for clusters with size i = 100. We find that the clusters deviate remarkably from spherical—with ellipsoidal axis ratios for critical cluster sizes typically within b/c = 0.7 ± 0.05 and a/c = 0.5 ± 0.05. We examine cluster spin angular momentum, and find that it plays a negligible role in the cluster dynamics. The interfaces of large, stable clusters are thinner than planar equilibrium interfaces by 10%−30%. At the critical cluster size, the cluster central densities are between 5% and 30% lower than the bulk liquid expectations. These lower densities imply larger-than-expected surface areas, which increase the energy cost to form a surface, which lowers nucleation rates.
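
    One of the shape statistics quoted above, the ellipsoidal axis ratios, can be extracted from the eigenvalues of a cluster's gyration tensor. The sketch below assumes equal-mass atoms and uses a random anisotropic point cloud in place of simulation output; it illustrates the diagnostic, not the authors' analysis code.

    ```python
    import numpy as np

    def axis_ratios(pos):
        """Return (a/c, b/c) with a <= b <= c, from the square roots of the
        eigenvalues of the gyration tensor of the centered positions."""
        r = pos - pos.mean(axis=0)
        gyr = r.T @ r / len(r)                   # 3x3 gyration tensor
        axes = np.sqrt(np.linalg.eigvalsh(gyr))  # ascending: a, b, c
        return axes[0] / axes[2], axes[1] / axes[2]

    # Random test cloud stretched along distinct axes (not MD output).
    cluster = np.random.normal(size=(100, 3)) * [0.5, 0.7, 1.0]
    print(axis_ratios(cluster))  # roughly (0.5, 0.7) for this test cloud
    ```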

  11. ROSA-IV Large Scale Test Facility (LSTF) system description for second simulated fuel assembly

    International Nuclear Information System (INIS)

    1990-10-01

    The ROSA-IV Program's Large Scale Test Facility (LSTF) is a test facility for integral simulation of thermal-hydraulic response of a pressurized water reactor (PWR) during small break loss-of-coolant accidents (LOCAs) and transients. In this facility, the PWR core nuclear fuel rods are simulated using electric heater rods. The simulated fuel assembly which was installed during the facility construction was replaced with a new one in 1988. The first test with this second simulated fuel assembly was conducted in December 1988. This report describes the facility configuration and characteristics as of this date (December 1988) including the new simulated fuel assembly design and the facility changes which were made during the testing with the first assembly as well as during the renewal of the simulated fuel assembly. (author)

  12. Simulation research on the process of large scale ship plane segmentation intelligent workshop

    Science.gov (United States)

    Xu, Peng; Liao, Liangchuang; Zhou, Chao; Xue, Rui; Fu, Wei

    2017-04-01

    The large-scale ship plane segmentation intelligent workshop is a new concept, with no prior research in related fields at home or abroad. The mode of production must be transformed from the existing Industry 2.0 (or partial Industry 3.0) pattern of "human analysis and judgment + machine manufacturing" to "machine analysis and judgment + machine manufacturing". This transformation raises a great number of questions in both management and technology, such as the evolution of workshop structure, the development of intelligent equipment, and changes in the business model; together they amount to a reformation of the whole workshop. Process simulation in this project verifies the general layout and process flow of a large-scale ship plane section intelligent workshop and analyzes its working efficiency, which is significant for the next step of the transformation of the plane segmentation intelligent workshop.

  13. The effect of various parameters of large scale radio propagation models on improving performance mobile communications

    Science.gov (United States)

    Pinem, M.; Fauzi, R.

    2018-02-01

    One technique for ensuring continuity of wireless communication services and maintaining smooth transitions in mobile communication networks is soft handover. In the Soft Handover (SHO) technique, the addition and removal of Base Stations from the active set are determined by initiation triggers, one of which is based on received signal strength. In this paper we observe the influence of the parameters of large-scale radio propagation models on the performance of mobile communications. The observation parameters characterizing the performance of the specified mobile system are the drop call rate, the radio link degradation rate, and the average size of the Active Set (AS). Simulation results show that increasing the heights of the Base Station (BS) and Mobile Station (MS) antennas raises the received signal power level, which improves radio link quality, increases the average active set size, and reduces the average drop call rate. It was also found that Hata's propagation model contributed significantly to improvements in system performance parameters compared to Okumura's and Lee's propagation models.
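
    For reference, the Hata model named above has a standard closed-form path-loss expression, reproduced below in its urban small/medium-city variant. The example sweeps the BS antenna height to show the abstract's central effect: a taller antenna lowers the median path loss and therefore raises the received level. Frequency, heights, and distance are illustrative, not the paper's settings.

    ```python
    import math

    def hata_urban_path_loss(f_mhz, h_bs_m, h_ms_m, d_km):
        """Median Okumura-Hata urban path loss in dB; valid roughly for
        150-1500 MHz, BS height 30-200 m, MS height 1-10 m, d 1-20 km."""
        a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_ms_m \
               - (1.56 * math.log10(f_mhz) - 0.8)   # MS antenna correction
        return (69.55 + 26.16 * math.log10(f_mhz)
                - 13.82 * math.log10(h_bs_m) - a_hm
                + (44.9 - 6.55 * math.log10(h_bs_m)) * math.log10(d_km))

    # Raising the BS antenna reduces the loss, i.e. improves reception.
    for h_bs in (30.0, 50.0, 100.0):
        print(h_bs, round(hata_urban_path_loss(900.0, h_bs, 1.5, 5.0), 1))
    ```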

  14. Large-Scale Brain Simulation and Disorders of Consciousness. Mapping Technical and Conceptual Issues

    Directory of Open Access Journals (Sweden)

    Michele Farisco

    2018-04-01

    Full Text Available Modeling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain’s operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain with the aim of overcoming the actual fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders of consciousness (DOCs), e.g., Coma, Vegetative State/Unresponsive Wakefulness Syndrome, and Minimally Conscious State. Notwithstanding their technical limitations, we suggest that simulation technologies may offer new solutions to old practical problems, particularly in clinical contexts. We take DOCs as an illustrative case, arguing that the simulation of neural correlates of consciousness is potentially useful for improving treatments of patients with DOCs.

  15. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. "Large-scale" here means that the simulations involve such a variety of scales and physical complexity that corresponding experiments and/or theories do not exist. In the nuclear field, it is indispensable to estimate real phenomena from simulations in order to improve the safety and security of nuclear power plants. Here, analysis of the uncertainty included in simulations is needed to reveal the sensitivity of uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge, and to establish a degree of certainty through verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)' as a set of computing processes with deductive and inductive approaches, by reference to human reasoning processes. Our idea is to execute deductive and inductive simulations corresponding to the deductive and inductive approaches. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective in reducing the uncertainty and obtaining a degree of certainty. (author)

  16. Understanding water delivery performance in a large-scale irrigation system in Peru

    NARCIS (Netherlands)

    Vos, J.M.C.

    2005-01-01

    During a two-year field study, the performance of water delivery was evaluated in a large-scale irrigation system on the north coast of Peru. Flow measurements were carried out along the main canals, along two secondary canals, and in two tertiary blocks in the Chancay-Lambayeque irrigation system.

  17. A method of orbital analysis for large-scale first-principles simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ohwaki, Tsukuru [Advanced Materials Laboratory, Nissan Research Center, Nissan Motor Co., Ltd., 1 Natsushima-cho, Yokosuka, Kanagawa 237-8523 (Japan); Otani, Minoru [Nanosystem Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568 (Japan); Ozaki, Taisuke [Research Center for Simulation Science (RCSS), Japan Advanced Institute of Science and Technology (JAIST), 1-1 Asahidai, Nomi, Ishikawa 923-1292 (Japan)

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF₄).

  18. The large-scale environment from cosmological simulations - I. The baryonic cosmic web

    Science.gov (United States)

    Cui, Weiguang; Knebe, Alexander; Yepes, Gustavo; Yang, Xiaohu; Borgani, Stefano; Kang, Xi; Power, Chris; Staveley-Smith, Lister

    2018-01-01

    Using a series of cosmological simulations that includes one dark-matter-only (DM-only) run, one gas cooling-star formation-supernova feedback (CSF) run and one that additionally includes feedback from active galactic nuclei (AGNs), we classify the large-scale structures with both a velocity-shear-tensor code (VWEB) and a tidal-tensor code (PWEB). We find that the baryonic processes have almost no impact on large-scale structures - at least not when classified using the aforementioned techniques. More importantly, our results confirm that the gas component alone can be used to infer the filamentary structure of the universe practically unbiased, which could be applied to cosmological constraints. In addition, the gas filaments are classified with their velocity (VWEB) and density (PWEB) fields, which can theoretically connect to radio observations, such as H I surveys. This will help us to link radio observations with the large-scale dark matter distribution in an unbiased way.
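
    The V-web classification used above has a compact algorithmic core: count how many eigenvalues of the velocity shear tensor exceed a threshold, with 0/1/2/3 eigenvalues above it marking void/sheet/filament/knot. The sketch below applies this to a random test velocity grid; the threshold value, grid size, and normalization are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np

    def vweb_classify(vx, vy, vz, dx=1.0, h0=1.0, lam_th=0.44):
        """Classify each grid cell by the number of shear-tensor
        eigenvalues above lam_th: 0=void, 1=sheet, 2=filament, 3=knot."""
        v = [vx, vy, vz]
        # Shear tensor: -1/(2 H0) * (dv_i/dx_j + dv_j/dx_i) on the grid.
        shear = np.empty(vx.shape + (3, 3))
        for i in range(3):
            for j in range(3):
                shear[..., i, j] = -(np.gradient(v[i], dx, axis=j)
                                     + np.gradient(v[j], dx, axis=i)) / (2 * h0)
        lam = np.linalg.eigvalsh(shear)          # eigenvalues per cell
        return (lam > lam_th).sum(axis=-1)

    grid = [np.random.normal(size=(16, 16, 16)) for _ in range(3)]
    web = vweb_classify(*grid)
    print(np.bincount(web.ravel(), minlength=4))  # cell counts per class
    ```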

  19. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural, catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff therefore provides a foundation for characterizing observed large-scale patterns of European hydrology and for assessing the ability of models to capture them. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but are also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is a prerequisite for a reliable interpretation of simulation results. Model evaluations may also detect shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large-scale

  20. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling the virtual environment in which virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. Large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  1. Model abstraction addressing long-term simulations of chemical degradation of large-scale concrete structures

    International Nuclear Information System (INIS)

    Jacques, D.; Perko, J.; Seetharam, S.; Mallants, D.

    2012-01-01

    This paper presents a methodology to assess the spatial-temporal evolution of chemical degradation fronts in real-size concrete structures typical of a near-surface radioactive waste disposal facility. The methodology consists of the abstraction of a so-called full (complicated) model, accounting for the multicomponent, multi-scale nature of concrete, to an abstracted (simplified) model which simulates chemical concrete degradation based on a single component in the aqueous and solid phases. The abstracted model is verified against chemical degradation fronts simulated with the full model under both diffusive and advective transport conditions. Implementation in the multi-physics simulation tool COMSOL allows simulation of the spatial-temporal evolution of chemical degradation fronts in large-scale concrete structures. (authors)

  2. Large-scale simulations of plastic neural networks on neuromorphic hardware

    Directory of Open Access Journals (Sweden)

    James Courtney Knight

    2016-04-01

    Full Text Available SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer system uses considerably more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
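
    The BCPNN state variables mentioned above form a cascade of exponentially filtered traces, from which the Bayesian weight is computed as a log-odds ratio. Below is a minimal per-synapse, fixed-time-step sketch under assumed time constants and Poisson-like spiking; the event-driven SpiNNaker implementation instead evaluates an analytical solution only at spike times.

    ```python
    import numpy as np

    # Illustrative time constants (ms) and floor value, not the paper's.
    dt, t_z, t_e, t_p, eps = 1.0, 10.0, 100.0, 1000.0, 1e-4
    z_i = z_j = e_i = e_j = e_ij = 0.0
    p_i = p_j = p_ij = eps

    for step in range(10_000):
        s_i = float(np.random.rand() < 0.02)   # pre spike this step?
        s_j = float(np.random.rand() < 0.02)   # post spike this step?
        z_i += dt / t_z * (s_i - z_i)          # fast spike traces
        z_j += dt / t_z * (s_j - z_j)
        e_i += dt / t_e * (z_i - e_i)          # eligibility traces
        e_j += dt / t_e * (z_j - e_j)
        e_ij += dt / t_e * (z_i * z_j - e_ij)
        p_i += dt / t_p * (e_i - p_i)          # slow probability traces
        p_j += dt / t_p * (e_j - p_j)
        p_ij += dt / t_p * (e_ij - p_ij)

    w = np.log(p_ij / (p_i * p_j))             # Bayesian weight
    print(w, np.log(p_j))                      # weight and bias terms
    ```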

  3. An efficient and novel computation method for simulating diffraction patterns from large-scale coded apertures on large-scale focal plane arrays

    Science.gov (United States)

    Shrekenhamer, Abraham; Gottesman, Stephen R.

    2012-10-01

    A novel and memory-efficient method for computing diffraction patterns produced on large-scale focal planes by large-scale Coded Apertures, at wavelengths where diffraction effects are significant, has been developed and tested. The scheme, readily implementable on portable computers, overcomes the memory limitations of present state-of-the-art simulation codes such as Zemax. The method consists of first calculating a set of reference complex field (amplitude and phase) patterns on the focal plane produced by a single (reference) central hole, extending to twice the focal plane array size, with one such pattern for each Line-of-Sight (LOS) direction and wavelength in the scene, and with the pattern amplitude corresponding to the square root of the spectral irradiance from each such LOS direction at selected wavelengths. Next, the set of reference patterns is transformed to generate pattern sets for the other holes. The transformation consists of a translational pattern shift corresponding to each hole's position offset, and an electrical phase shift corresponding to each hole's position offset and the incoming radiance's direction and wavelength. The set of complex patterns for each direction and wavelength is then summed coherently and squared for each detector to yield a set of power patterns unique to each direction and wavelength. Finally, the set of power patterns is summed to produce the full-waveband diffraction pattern from the scene. With this tool researchers can now efficiently simulate diffraction patterns produced from scenes by large-scale Coded Apertures onto large-scale focal plane arrays, supporting the development and optimization of coded aperture masks and image reconstruction algorithms.
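
    The shift-and-phase superposition described above can be illustrated in one dimension. In the sketch below, a reference complex field from a central hole is translated and phase-shifted for each additional hole, summed coherently, and squared; the sinc envelope, phase convention, and geometry are simplifying assumptions rather than the authors' formulation.

    ```python
    import numpy as np

    n, wavelength, focal, pitch = 512, 10e-6, 0.1, 20e-6   # illustrative
    x = (np.arange(n) - n // 2) * pitch                    # focal-plane coords

    # Reference field of the central hole (toy Fraunhofer-like envelope).
    hole_w = 100e-6
    ref = np.sinc(hole_w * x / (wavelength * focal)).astype(complex)

    holes = [-400e-6, 0.0, 250e-6]                 # hole offsets in the mask
    field = np.zeros(n, dtype=complex)
    for h in holes:
        shift = int(round(h / pitch))              # translational pattern shift
        phase = np.exp(1j * 2 * np.pi * h * x / (wavelength * focal))
        field += np.roll(ref, shift) * phase       # shifted + phase-shifted copy

    power = np.abs(field) ** 2                     # detector power pattern
    print(power.max(), power.sum())
    ```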

  4. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    Science.gov (United States)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using

  5. Aggregated Representation of Distribution Networks for Large-Scale Transmission Network Simulations

    DEFF Research Database (Denmark)

    Göksu, Ömer; Altin, Müfit; Sørensen, Poul Ejnar

    2014-01-01

    As a common practice in large-scale transmission network analysis, distribution networks have been represented as aggregated loads. However, with the increasing share of distributed generation, especially wind and solar power, in distribution networks, it became necessary to include the distributed generation in those analyses. In this paper a practical methodology to obtain the aggregated behaviour of the distributed generation is proposed. The methodology, which is based on the use of the IEC standard wind turbine models, is applied to a benchmark distribution network via simulations.

  6. The development of a capability for aerodynamic testing of large-scale wing sections in a simulated natural rain environment

    Science.gov (United States)

    Bezos, Gaudy M.; Cambell, Bryan A.; Melson, W. Edward

    1989-01-01

    A research technique to obtain large-scale aerodynamic data in a simulated natural rain environment has been developed. A 10-ft chord NACA 64-210 wing section equipped with leading-edge and trailing-edge high-lift devices was tested as part of a program to determine the effect of highly concentrated, short-duration rainfall on airplane performance. Preliminary dry aerodynamic data are presented for the high-lift configuration at a velocity of 100 knots and an angle of attack of 18 deg. Also, data are presented on rainfield uniformity and rainfall concentration intensity levels obtained during the calibration of the rain simulation system.

  7. Large-Scale Testing and High-Fidelity Simulation Capabilities at Sandia National Laboratories to Support Space Power and Propulsion

    International Nuclear Information System (INIS)

    Dobranich, Dean; Blanchat, Thomas K.

    2008-01-01

    Sandia National Laboratories, as a Department of Energy / National Nuclear Security Administration laboratory, has major responsibility for ensuring the safety and security of nuclear weapons. As such, with an experienced research staff, Sandia maintains a spectrum of modeling and simulation capabilities integrated with experimental and large-scale test capabilities. This expertise and these capabilities offer considerable resources for addressing issues of interest to the space power and propulsion communities. This paper presents Sandia's capability to perform thermal qualification (analysis, test, modeling and simulation) using a representative weapon system as an example, demonstrating the potential to support NASA's Lunar Reactor System.

  8. Large-scale simulation of ductile fracture process of microstructured materials

    International Nuclear Information System (INIS)

    Tian Rong; Wang Chaowei

    2011-01-01

    The promise of computational science in the extreme-scale computing era is to reduce and decompose macroscopic complexities into microscopic simplicities, at the expense of high spatial and temporal resolution of computing. In materials science and engineering, the direct combination of 3D microstructure data sets and 3D large-scale simulations provides a unique opportunity to develop a comprehensive understanding of nano/microstructure-property relationships in order to systematically design materials with specific desired properties. In this paper, we present a framework simulating the ductile fracture process zone in microstructural detail. The experimentally reconstructed microstructural data set is directly embedded into a FE mesh model to improve the simulation fidelity of microstructure effects on fracture toughness. To the best of our knowledge, this is the first time that fracture toughness has been linked directly to multiscale microstructures in a realistic 3D numerical model. (author)

  9. Real-world-time simulation of memory consolidation in a large-scale cerebellar model

    Directory of Open Access Journals (Sweden)

    Masato eGosui

    2016-03-01

    Full Text Available We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve real-time simulation, in which computer simulation of cerebellar activity for 1 sec completes within 1 sec of real-world time, with a temporal resolution of 1 msec. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out a computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days, aimed at studying the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that real-time computing provides a useful means to study a very slow neural process such as memory consolidation in the brain.

  10. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    Science.gov (United States)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four of the RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs suitable for LPMC studies on FPGA. One of the newly proposed implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
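
    For reference, an additive lagged Fibonacci generator follows the recurrence x_n = (x_{n-j} + x_{n-k}) mod 2^m. The sketch below implements a scalar version with the classic lags (24, 55) and runs several independently seeded streams, standing in for the per-core parallel instances on the FPGA; the LCG-based seeding scheme is an illustrative assumption, not the paper's.

    ```python
    MASK = 2**32 - 1

    class ALFG:
        """Additive lagged Fibonacci generator x[n] = (x[n-j] + x[n-k]) mod 2^32."""
        def __init__(self, seed, j=24, k=55):
            self.j, self.k = j, k
            state = seed | 1                    # simple LCG fill of the lag table
            self.lags = []
            for _ in range(k):
                state = (1664525 * state + 1013904223) & MASK
                self.lags.append(state)

        def next(self):
            x = (self.lags[-self.j] + self.lags[-self.k]) & MASK
            self.lags.append(x)
            del self.lags[0]                    # keep only the last k values
            return x / 2**32                    # uniform float in [0, 1)

    streams = [ALFG(seed) for seed in (11, 22, 33, 44)]  # parallel streams
    print([s.next() for s in streams])
    ```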

  11. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    Directory of Open Access Journals (Sweden)

    José Gaite

    2013-05-01

    Full Text Available Halo models of the large-scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations, and by taking into account previous studies of the self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos, it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure, or to define their size in terms of small-scale baryonic physics.

  12. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence reduce the required computing time.
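
    The locality argument above reduces to a simple recipe: periodically sort the particle arrays by grid-cell index so that spatially adjacent particles also sit in adjacent memory. A minimal sketch of that step and the subsequent charge accumulation follows; the grid size and particle count are illustrative.

    ```python
    import numpy as np

    ncell, npart = 64, 1_000_000
    x = np.random.rand(npart)                  # particle positions in [0, 1)
    vx = np.random.normal(size=npart)

    cell = np.minimum((x * ncell).astype(np.int64), ncell - 1)
    order = np.argsort(cell, kind='stable')    # the 'nominal amount of sorting'
    x, vx, cell = x[order], vx[order], cell[order]

    # Charge accumulation now walks memory almost monotonically instead of
    # jumping randomly across pages of (virtual) memory.
    rho = np.zeros(ncell)
    np.add.at(rho, cell, 1.0)                  # deposit one charge per particle
    print(rho.sum() == npart)
    ```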

  13. GENASIS Mathematics : Object-oriented manifolds, operations, and solvers for large-scale physics simulations

    Science.gov (United States)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2018-01-01

    The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
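
    The finite-volume idea described above fits in a few lines: cell averages change only through fluxes at cell faces, so the discrete update mirrors the divergence theorem. Below is a tiny 1-D upwind advection example in Python (unrelated to the Fortran GENASIS code base itself), with illustrative parameters.

    ```python
    import numpy as np

    nx, a = 200, 1.0                           # cells, advection speed
    dx = 1.0 / nx
    dt = 0.5 * dx / a                          # CFL-limited time step
    xc = np.arange(nx) * dx
    u = np.where((xc > 0.3) & (xc < 0.5), 1.0, 0.0)   # square pulse

    for _ in range(100):
        flux = a * np.roll(u, 1)               # upwind face flux (periodic)
        # u_i^{n+1} = u_i^n - dt/dx * (F_{i+1/2} - F_{i-1/2})
        u += dt / dx * (flux - np.roll(flux, -1))

    print(u.sum() * dx)                        # the cell-integrated total is conserved
    ```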

  14. Large Scale Beam-beam Simulations for the CERN LHC using Distributed Computing

    CERN Document Server

    Herr, Werner; McIntosh, E; Schmidt, F

    2006-01-01

    We report on a large-scale simulation of beam-beam effects for the CERN Large Hadron Collider (LHC). The stability of particles which experience head-on and long-range beam-beam effects was investigated for different optical configurations and machine imperfections. Covering the interesting parameter space required computing resources not available at CERN. The necessary resources were available through the LHC@home project, based on the BOINC platform. At present, this project makes more than 60,000 hosts available for distributed computing. We discuss our experience using this system during a simulation campaign of more than six months and describe the tools and procedures necessary to ensure consistent results. The results from this extended study are presented and future plans are discussed.

  15. Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation

    Directory of Open Access Journals (Sweden)

    Shunli Wang

    2016-01-01

    Full Text Available We introduce an automatic optimization approach for the simulation of large-scale coastal water. To solve the singular problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is estimated by using an automatic coupling algorithm. It can handle arbitrary water depth and different underwater terrain. As a characteristic feature of coastal terrain, the coastline is detected with collision detection technology. Then, unnecessary water grid cells are simplified by the automatic simplification algorithm according to the depth. Finally, the model is calculated on the Central Processing Unit (CPU) and the simulation is implemented on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results which achieve real-time rendering on a consumer-level computer.

  16. Particle physics and polyedra proximity calculation for hazard simulations in large-scale industrial plants

    Science.gov (United States)

    Plebe, Alice; Grasso, Giorgio

    2016-12-01

    This paper describes a system developed for the simulation of flames inside the open-source 3D computer graphics package Blender, with the aim of analyzing, in virtual reality, hazard scenarios in large-scale industrial plants. The advantages of Blender are that it renders the very complex structure of large industrial plants at high resolution and that it embeds a physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distances, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results for a real oil and gas refinery are presented.

  17. System Dynamics Simulation of Large-Scale Generation System for Designing Wind Power Policy in China

    Directory of Open Access Journals (Sweden)

    Linna Hou

    2015-01-01

    Full Text Available This paper focuses on the impacts of renewable energy policy on a large-scale power generation system, including thermal power, hydropower, and wind power generation. As one of the most important clean energy sources, wind energy has developed rapidly around the world. In recent years, however, serious waste of wind power equipment and investment in China has led to many problems in the industry, from wind power planning to grid integration. One way to overcome the difficulty is to analyze the influence of wind power policy on the generation system. This paper builds a system dynamics (SD) model of energy generation to simulate the outcomes of wind energy generation policies in a complex system, and scenario analysis is used to compare the effectiveness and efficiency of these policies. The case study shows that combinations of a lower portfolio goal with a higher benchmark price, and of a higher portfolio goal with a lower benchmark price, differ greatly in both effectiveness and efficiency. On the other hand, combinations of uniformly lower or higher portfolio goals and benchmark prices have similar efficiency but different effectiveness. Finally, an optimal policy combination can be chosen on the basis of policy analysis of the large-scale power system.
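
    The core feedback loop of such an SD model - a portfolio goal driving an investment flow into a capacity stock - can be sketched compactly. The toy model below uses invented parameters and a single feedback; the paper's model adds thermal and hydro detail, benchmark prices, and scenario comparison.

    ```python
    # Toy system-dynamics sketch: the gap between a policy portfolio goal
    # and the current wind share drives investment into the wind stock.
    # All parameters are illustrative, not the paper's calibration.
    years, goal = 15, 0.15                 # horizon, target wind share
    wind, thermal = 30.0, 600.0            # installed capacity stocks, GW
    k_invest = 50.0                        # GW/yr of new wind per unit share gap

    for year in range(years):
        share = wind / (wind + thermal)
        gap = max(goal - share, 0.0)       # unmet portfolio goal (feedback)
        wind += k_invest * gap             # investment flow into the stock
        thermal *= 1.01                    # baseline thermal/demand growth
        print(year, round(share, 3))
    ```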

  18. Coupled climate model simulations of Mediterranean winter cyclones and large-scale flow patterns

    Directory of Open Access Journals (Sweden)

    B. Ziv

    2013-03-01

    Full Text Available The study aims to evaluate the ability of global, coupled climate models to reproduce the synoptic regime of the Mediterranean Basin. The output of simulations of the 9 models included in the IPCC CMIP3 effort is compared to the NCEP-NCAR reanalysis data for the period 1961–1990. The study examined the spatial distribution of cyclone occurrence, the mean Mediterranean upper- and lower-level troughs, the inter-annual variation and trend in the occurrence of Mediterranean cyclones, and the main large-scale circulation patterns, represented by rotated EOFs of 500 hPa and sea level pressure. The models successfully reproduce the two maxima in cyclone density in the Mediterranean and their locations, the location of the average upper- and lower-level troughs, the relative inter-annual variation in cyclone occurrences and the structure of the four leading large-scale EOFs. The main discrepancy is the models' underestimation of cyclone density in the Mediterranean, especially in its western part. The models' skill in reproducing the cyclone distribution is found to be correlated with their spatial resolution, especially in the vertical. The ongoing improvement in model spatial resolution suggests that their ability to reproduce Mediterranean cyclones will improve as well.

  19. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    Science.gov (United States)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  20. Large-scale introduction of wind power stations in the Swedish grid: a simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, L

    1978-08-01

    This report describes a simulation study on the factors to be considered if wind power were to be introduced to the south Swedish power grid on a large scale. The simulations are based upon a heuristic power generation planning model, developed for the purpose. The heuristic technique reflects the actual running strategies of a big power company with suitable accuracy. All simulations refer to certain typical days in 1976 to which all wind data and system characteristics are related. The installed amount of wind power will not be subject to optimization. All differences between planned and real wind power generation are equalized by regulation of the hydro power. The simulations made differ according to how the installed amount of wind power is handled in the power generation planning. The simulations indicate that the power system examined could well bear an introduction of wind power up to a level of 20% of the total power installed. This result is of course valid only for the days examined and does not necessarily apply to the present-day structure of the system.

  1. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    Directory of Open Access Journals (Sweden)

    Lorenzo L. Pesce

    2013-01-01

    Full Text Available Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  2. Large-scale modeling of epileptic seizures: scaling properties of two parallel neuronal network simulation algorithms.

    Science.gov (United States)

    Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  3. Zone modelling of the thermal performances of a large-scale bloom reheating furnace

    International Nuclear Information System (INIS)

    Tan, Chee-Keong; Jenkins, Joana; Ward, John; Broughton, Jonathan; Heeley, Andy

    2013-01-01

    This paper describes the development and comparison of two- (2D) and three-dimensional (3D) mathematical models, based on the zone method of radiation analysis, to simulate the thermal performance of a large bloom reheating furnace. The modelling approach adopted in the current paper differs from previous work since it takes into account the net radiation interchanges between the top and bottom firing sections of the furnace and also allows for enthalpy exchange due to the flows of combustion products between these sections. The models were initially validated at two different furnace throughput rates using experimental and plant model data supplied by Tata Steel. The results to date demonstrate that the model predictions are in good agreement with measured heating profiles of the blooms encountered in the actual furnace. No significant differences were found between the predictions of the 2D and 3D models. Following the validation, the 2D model was then used to assess the furnace response to a changing throughput rate. It was found that the furnace response to a changing throughput rate influences the settling time of the furnace to the next steady-state operation. Overall, the current work demonstrates the feasibility and practicality of zone modelling and its potential for incorporation into a model-based furnace control system. - Highlights: ► 2D and 3D zone models of a large-scale bloom reheating furnace. ► The models were validated with experimental and plant model data. ► The transient furnace response to changing throughput rates is examined. ► No significant differences were found between the predictions of the 2D and 3D models.

  4. Fast Bound Methods for Large Scale Simulation with Application for Engineering Optimization

    Science.gov (United States)

    Patera, Anthony T.; Peraire, Jaime; Zang, Thomas A. (Technical Monitor)

    2002-01-01

    In this work, we have focused on fast bound methods for large-scale simulation with application to engineering optimization. The emphasis is on the development of techniques that provide both very fast turnaround and a certificate of fidelity; these attributes ensure that the results are indeed relevant to - and trustworthy within - the engineering context. The bound methodology which underlies this work has many different instantiations: finite element approximation, iterative solution techniques, and reduced-basis (parameter) approximation. In this grant we have, in fact, treated all three, but most of our effort has been concentrated on the first and third. We describe these below briefly - but with a pointer to an Appendix which describes, in some detail, the current "state of the art."

  5. Large Scale Simulation of Hydrogen Dispersion by a Stabilized Balancing Domain Decomposition Method

    Directory of Open Access Journals (Sweden)

    Qing-He Yao

    2014-01-01

    Full Text Available The dispersion behaviour of leaking hydrogen in a partially open space is simulated by a balancing domain decomposition method in this work. An analogy of the Boussinesq approximation is employed to describe the connection between the flow field and the concentration field. The linear systems of the Navier-Stokes equations and the convection-diffusion equation are symmetrized by a pressure-stabilized Lagrange-Galerkin method, which enables a balancing domain decomposition method to solve the interface problem of the domain decomposition system. Numerical results are validated against experimental data and available numerical results. The dilution effect of ventilation is investigated, especially at the doors, where the flow pattern is complicated and where oscillations appeared in past research reported by other groups. The transient behaviour of hydrogen and the process of accumulation in the partially open space are discussed, and more details are revealed by large-scale computation.

  6. Commercial applications of large-scale Research and Development computer simulation technologies

    International Nuclear Information System (INIS)

    Kuok Mee Ling; Pascal Chen; Wen Ho Lee

    1998-01-01

    The potential commercial applications of two large-scale R and D computer simulation technologies are presented. One such technology is based on the numerical solution of the hydrodynamics equations, and is embodied in the two-dimensional Eulerian code EULE2D, which solves the hydrodynamic equations with various models for the equation of state (EOS), constitutive relations and fracture mechanics. EULE2D is an R and D code originally developed to design and analyze conventional munitions for anti-armor penetration such as shaped charges, explosively formed projectiles, and kinetic energy rods. Simulated results agree very well with actual experiments. A commercial application presented here is the design and simulation of shaped charges for oil and gas well bore perforation. The other R and D simulation technology is based on the numerical solution of Maxwell's partial differential equations of electromagnetics in space and time, and is implemented in the three-dimensional code FDTD-SPICE, which solves Maxwell's equations in the time domain with finite differences in the three spatial dimensions and calls SPICE for information when nonlinear active devices are involved. The FDTD method has been used in the radar cross-section modeling of military aircraft and many other electromagnetic phenomena. The coupling of the FDTD method with SPICE, a popular circuit and device simulation program, provides a powerful tool for the simulation and design of microwave and millimeter-wave circuits containing nonlinear active semiconductor devices. A commercial application of FDTD-SPICE presented here is the simulation of a two-element active antenna system. The simulation results and the experimental measurements are in excellent agreement. (Author)
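
    The time-domain scheme underlying FDTD is worth showing concretely. Below is a bare-bones 1-D Yee leapfrog update in normalized free-space units with a Gaussian hard source; in FDTD-SPICE, a device boundary handled by SPICE would take the place of such a source. This is a generic textbook sketch, not code from EULE2D or FDTD-SPICE.

    ```python
    import numpy as np

    # 1-D FDTD, normalized units (c = 1, dt = dx, so update coefficients are 1).
    nx, nsteps = 400, 600
    ez = np.zeros(nx)          # electric field at integer grid points
    hy = np.zeros(nx - 1)      # magnetic field at half grid points

    for n in range(nsteps):
        hy += ez[1:] - ez[:-1]              # H update from curl E
        ez[1:-1] += hy[1:] - hy[:-1]        # E update from curl H
        ez[50] += np.exp(-((n - 60) / 20.0) ** 2)   # Gaussian hard source

    print(float(np.abs(ez).max()))
    ```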

  7. Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations

    Science.gov (United States)

    Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara

    2018-05-01

    Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  8. Large-scale tropospheric transport in the Chemistry–Climate Model Initiative (CCMI) simulations

    Directory of Open Access Journals (Sweden)

    C. Orbe

    2018-05-01

    Full Text Available Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry–Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  9. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response plans, and the emergency evacuation of large commercial shopping areas, as typical service systems, is one of the hot research topics. A systematic methodology based on a Cellular Automaton with a Dynamic Floor Field and an event-driven model is proposed, and the methodology is examined in the context of a case study involving evacuation from a commercial shopping mall. Pedestrian movement is simulated with the Cellular Automaton, while the event-driven model schedules the simulation, which is divided into a normal situation and an emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. When simulating the movement routes of pedestrians, the model takes into account the purchase intentions of customers and the density of pedestrians. Based on this combination of a Cellular Automaton with a Dynamic Floor Field and an event-driven model, the behavioral characteristics of customers and clerks can be reproduced in both normal and emergency situations. The distribution of individual evacuation times as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that this evacuation model, using the combination of a Cellular Automaton with a Dynamic Floor Field and event-driven scheduling, can be used to simulate pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
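
    To make the movement rule concrete, here is a minimal floor-field cellular automaton step in Python (a sketch only: the paper's four-layer, event-driven model is far richer, and the neighbourhood, update order and field definition below are illustrative assumptions):

    ```python
    import numpy as np

    # One CA step: each pedestrian moves to the free von Neumann neighbour
    # with the lowest static floor-field value (distance to the exit).
    # Conflicts are resolved by update order, a common simplification.
    def ca_step(occupancy, floor_field):
        new_occ = occupancy.copy()
        rows, cols = occupancy.shape
        for r, c in zip(*np.nonzero(occupancy)):
            best, target = floor_field[r, c], None
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and new_occ[nr, nc] == 0:
                    if floor_field[nr, nc] < best:
                        best, target = floor_field[nr, nc], (nr, nc)
            if target is not None:
                new_occ[r, c], new_occ[target] = 0, 1
        return new_occ
    ```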

  10. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
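
    As a rough illustration of such a framework, the sketch below combines a memory-bandwidth contention term with a parameterized communication term; every name and number is an illustrative assumption, not the authors' calibrated model:

    ```python
    # Predicted run time = compute + contended-memory + communication terms.
    def predicted_time(t_compute, bytes_per_core, node_bw, active_cores,
                       n_msgs, latency, msg_bytes, link_bw):
        t_memory = bytes_per_core / (node_bw / active_cores)  # bandwidth shared
        t_comm = n_msgs * (latency + msg_bytes / link_bw)     # latency + volume
        return t_compute + t_memory + t_comm

    # Weak scaling: per-core work is fixed, so only the contention term grows
    # as more cores share a node's sustained (STREAM-like) bandwidth.
    for cores in (1, 4, 8):
        print(cores, predicted_time(10.0, 2e9, 3.2e10, cores, 200, 5e-6, 1e6, 1e9))
    ```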

  11. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  12. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    Directory of Open Access Journals (Sweden)

    C. M. R. Mateo

    2017-10-01

    Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash–Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.
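
    For reference, the Nash–Sutcliffe efficiency used above as the skill score is straightforward to compute (1 is a perfect fit, 0 is no better than the observed mean; the example values are synthetic):

    ```python
    import numpy as np

    def nse(simulated, observed):
        simulated, observed = np.asarray(simulated), np.asarray(observed)
        return 1.0 - np.sum((observed - simulated) ** 2) \
                   / np.sum((observed - observed.mean()) ** 2)

    print(nse([105.0, 210.0, 290.0], [100.0, 200.0, 300.0]))  # ~0.989
    ```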

  13. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    Science.gov (United States)

    Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan

    2017-10-01

    Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash-Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.

  14. Representative elements: A step to large-scale fracture system simulation

    International Nuclear Information System (INIS)

    Clemo, T.M.

    1987-01-01

    Large-scale simulation of flow and transport in fractured media requires the development of a technique to represent the effect of a large number of fractures. Representative elements are used as a tool to model a subset of a fracture system as a single distributed entity. Representative elements are part of a modeling concept called dual permeability. Dual permeability modeling combines discrete fracture simulation of the most important fractures with distributed modeling of the less important fractures of a fracture system. This study investigates the use of stochastic analysis to determine properties of representative elements. Given an assumption of fully developed laminar flow, the net fracture conductivities and hence flow velocities can be determined from descriptive statistics of fracture spacing, orientation, aperture, and extent. The distribution of physical characteristics about their mean leads to a distribution of the associated conductivities. The variance of hydraulic conductivity induces dispersion into the transport process. Simple fracture systems are treated to demonstrate the usefulness of stochastic analysis. Explicit equations for the conductivity of an element are developed and the dispersion characteristics are shown. Explicit formulation of the hydraulic conductivity and transport dispersion reveals the dependence of these important characteristics on the parameters used to describe the fracture system. Understanding these dependencies will help to focus efforts to identify the characteristics of fracture systems. Simulations of stochastically generated fracture sets do not provide this explicit functional dependence on the fracture system parameters. 12 refs., 6 figs
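
    The core stochastic idea can be sketched in a few lines: under the cubic law, a set of parallel fractures of aperture b and spacing s contributes hydraulic conductivity K = ρg b³/(12 μ s), so sampling apertures from their distribution yields the spread of element conductivities that drives dispersion (the distribution parameters here are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    rho, g, mu = 1000.0, 9.81, 1.0e-3     # water density, gravity, viscosity (SI)
    spacing = 0.5                         # mean fracture spacing [m] (assumed)
    apertures = rng.lognormal(np.log(1e-4), 0.5, size=10_000)  # aperture b [m]
    K = rho * g * apertures**3 / (12 * mu * spacing)           # cubic law
    print(f"mean K = {K.mean():.2e} m/s, std = {K.std():.2e} m/s")
    ```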

  15. Topology of Large-Scale Structure by Galaxy Type: Hydrodynamic Simulations

    Science.gov (United States)

    Gott, J. Richard, III; Cen, Renyue; Ostriker, Jeremiah P.

    1996-07-01

    The topology of large-scale structure is studied as a function of galaxy type using the genus statistic. In hydrodynamical cosmological cold dark matter simulations, galaxies form on caustic surfaces (Zeldovich pancakes) and then slowly drain onto filaments and clusters. The earliest forming galaxies in the simulations (defined as "ellipticals") are thus seen at the present epoch preferentially in clusters (tending toward a meatball topology), while the latest forming galaxies (defined as "spirals") are seen currently in a spongelike topology. The topology is measured by the genus (number of "doughnut" holes minus number of isolated regions) of the smoothed density-contour surfaces. The measured genus curve for all galaxies as a function of density obeys approximately the theoretical curve expected for random-phase initial conditions, but the early-forming elliptical galaxies show a shift toward a meatball topology relative to the late-forming spirals. Simulations using standard biasing schemes fail to show such an effect. Large observational samples separated by galaxy type could be used to test for this effect.
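
    A hedged sketch of how the genus statistic can be measured numerically: triangulate an isodensity surface and use G = -χ/2 with χ = V - E + F, so a single sphere gives G = -1 (one isolated region, no holes) and a torus gives G = 0 (one hole minus one region); the unsmoothed random field below merely stands in for a galaxy density field:

    ```python
    import numpy as np
    from skimage import measure

    def genus(density, level):
        verts, faces, _, _ = measure.marching_cubes(density, level=level)
        edges = np.unique(np.sort(np.vstack([faces[:, [0, 1]],
                                             faces[:, [1, 2]],
                                             faces[:, [0, 2]]]), axis=1), axis=0)
        chi = len(verts) - len(edges) + len(faces)   # Euler characteristic
        return -chi / 2                              # holes minus regions

    field = np.random.default_rng(1).normal(size=(32, 32, 32))
    print(genus(field, level=0.0))
    ```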

  16. Proceedings of joint meeting of the 6th simulation science symposium and the NIFS collaboration research 'large scale computer simulation'

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-03-01

    The joint meeting of the 6th Simulation Science Symposium and the NIFS Collaboration Research 'Large Scale Computer Simulation' was held on December 12-13, 2002 at the National Institute for Fusion Science, with the aim of promoting interdisciplinary collaborations in various fields of computer simulation. The meeting, attended by more than 40 people, consisted of 11 invited and 22 contributed papers, whose topics extended not only to fusion science but also to related fields such as astrophysics, earth science, fluid dynamics, molecular dynamics, computer science, etc. (author)

  17. Parallelization of a beam dynamics code and first large scale radio frequency quadrupole simulations

    Directory of Open Access Journals (Sweden)

    J. Xu

    2007-01-01

    The design and operation support of hadron (proton and heavy-ion) linear accelerators require substantial use of beam dynamics simulation tools. The beam dynamics code TRACK was originally developed at Argonne National Laboratory (ANL) to fulfill the special requirements of the rare isotope accelerator (RIA) accelerator systems. From the beginning, the code has been developed to make it useful in the three stages of a linear accelerator project, namely, the design, commissioning, and operation of the machine. To realize this concept, the code has unique features such as end-to-end simulations from the ion source to the final beam destination and automatic procedures for tuning of a multiple-charge-state heavy-ion beam. The TRACK code has become a general beam dynamics code for hadron linacs and has found wide applications worldwide. Until recently, the code remained serial except for a simple parallelization used for the simulation of multiple seeds to study machine errors. To speed up computation, the TRACK Poisson solver has been parallelized. This paper discusses different parallel models for solving the Poisson equation, with the primary goal of extending the scalability of the code onto 1024 and more processors of the new generation of supercomputers known as BlueGene (BG/L). Domain decomposition techniques have been adapted and incorporated into the parallel version of the TRACK code. To demonstrate the new capabilities of the parallelized TRACK code, the dynamics of a 45 mA proton beam represented by 10^8 particles has been simulated through the 325 MHz radio frequency quadrupole and initial accelerator section of the proposed FNAL proton driver. The results show the benefits and advantages of large-scale parallel computing in beam dynamics simulations.
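
    The domain-decomposition idea can be sketched with a 1D slab decomposition and halo exchange for a Jacobi-iterated Poisson solve (a toy stand-in for the TRACK solver; the grid sizes and fixed iteration count are assumptions):

    ```python
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_local, h = 64, 1.0 / 256             # local rows, grid spacing (assumed)
    u = np.zeros((n_local + 2, 66))        # +2 ghost rows for the halo
    rho = np.ones_like(u)                  # illustrative source term

    up = rank - 1 if rank > 0 else MPI.PROC_NULL
    down = rank + 1 if rank < size - 1 else MPI.PROC_NULL
    for _ in range(200):
        # Halo exchange with neighbours (PROC_NULL ends are no-ops).
        comm.Sendrecv(u[1], dest=up, recvbuf=u[-1], source=down)
        comm.Sendrecv(u[-2], dest=down, recvbuf=u[0], source=up)
        # Jacobi update of the interior points.
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2]
                                + u[1:-1, 2:] - h * h * rho[1:-1, 1:-1])
    ```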

  18. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), which simulates a generic quantum computer at the gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than corrected. Fault-tolerant methods can overcome this problem, provided that the single-qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10^-6. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology …
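
    The depolarizing error model named above is easy to state in code: after each gate, with probability p a uniformly random Pauli is applied (single-qubit state-vector form for brevity; a full simulator like JUMPIQCS tracks many qubits and the correction circuits):

    ```python
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.diag([1.0, -1.0]).astype(complex)
    Y = 1j * X @ Z

    def noisy_gate(state, gate, p, rng):
        state = gate @ state
        if rng.random() < p:                      # depolarizing event
            state = (X, Y, Z)[rng.integers(3)] @ state
        return state

    rng = np.random.default_rng(0)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    state = np.array([1, 0], dtype=complex)
    for _ in range(100):                          # a long gate sequence
        state = noisy_gate(state, H, p=1e-3, rng=rng)
    ```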

  19. Large-scale simulations of error-prone quantum computation devices

    Energy Technology Data Exchange (ETDEWEB)

    Trieu, Doan Binh

    2009-07-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), which simulates a generic quantum computer at the gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than corrected. Fault-tolerant methods can overcome this problem, provided that the single-qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10^-6. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced …

  20. Development of a self-consistent lightning NOx simulation in large-scale 3-D models

    Science.gov (United States)

    Luo, Chao; Wang, Yuhang; Koshak, William J.

    2017-03-01

    We seek to develop a self-consistent representation of lightning NOx (LNOx) simulation in a large-scale 3-D model. Lightning flash rates are parameterized functions of meteorological variables related to convection. We examine a suite of such variables and find that convective available potential energy and cloud top height give the best estimates compared to July 2010 observations from ground-based lightning observation networks. Previous models often use lightning NOx vertical profiles derived from cloud-resolving model simulations. An implicit assumption of such an approach is that the postconvection lightning NOx vertical distribution is the same for all deep convection, regardless of geographic location, time of year, or meteorological environment. Detailed observations of the lightning channel segment altitude distribution derived from the NASA Lightning Nitrogen Oxides Model can be used to obtain the LNOx emission profile. Coupling such a profile with model convective transport leads to a more self-consistent lightning distribution compared to using prescribed postconvection profiles. We find that convective redistribution appears to be a more important factor than preconvection LNOx profile selection, providing another reason for linking the strength of convective transport to LNOx distribution.
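
    As a concrete example of the kind of convection-based scaling evaluated here, the classic Price and Rind (1992) cloud-top-height parameterization is shown below; the study's own fits against the 2010 network observations may of course differ:

    ```python
    def flash_rate(cloud_top_km, land=True):
        """Lightning flashes per minute from convective cloud-top height."""
        if land:
            return 3.44e-5 * cloud_top_km ** 4.9   # continental fit
        return 6.4e-4 * cloud_top_km ** 1.73       # marine fit

    print(flash_rate(14.0))   # deep continental convection: roughly 14 per min
    ```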

  1. Large-scale and Long-duration Simulation of a Multi-stage Eruptive Solar Event

    Science.gov (United States)

    Jiang, chaowei; Hu, Qiang; Wu, S. T.

    2015-04-01

    We employ a data-driven 3D MHD active region evolution model using the Conservation Element and Solution Element (CESE) numerical method. This newly developed model retains the full MHD effects, allowing time-dependent boundary conditions and time evolution studies. The time-dependent simulation is driven by measured vector magnetograms and the method of MHD characteristics on the bottom boundary. We have applied the model to investigate the coronal magnetic field evolution of AR 11283, which was characterized by a pre-existing sigmoid structure in the core region and multiple eruptions on both relatively small and large scales. We have succeeded in producing the core magnetic field structure and the subsequent eruptions of flux-rope structures (see https://dl.dropboxusercontent.com/u/96898685/large.mp4 for an animation) as the measured vector magnetograms on the bottom boundary evolve in time with constant flux emergence. The whole process, lasting about an hour in real time, compares well with the corresponding SDO/AIA and coronagraph imaging observations. From these results, we show the capability of this largely data-driven model to simulate complex, topological, and highly dynamic active region evolutions. (We acknowledge partial support of NSF grants AGS 1153323 and AGS 1062050, and data support from the SDO/HMI and AIA teams.)

  2. Life as an emergent phenomenon: studies from a large-scale boid simulation and web data

    Science.gov (United States)

    Ikegami, Takashi; Mototake, Yoh-ichi; Kobori, Shintaro; Oka, Mizuki; Hashimoto, Yasuhiro

    2017-11-01

    A large group with a special structure can become the mother of emergence. We discuss this hypothesis in relation to large-scale boid simulations and web data. In the boid swarm simulations, the nucleation, organization and collapse dynamics were found to be more diverse in larger flocks than in smaller flocks. In the second analysis, large web data, consisting of shared photos with descriptive tags, tended to group together users with similar tendencies, allowing the network to develop a core-periphery structure. We show that the generation rate of novel tags and their usage frequencies are high in the higher-order cliques. In this case, novelty is not considered to arise randomly; rather, it is generated as a result of a large and structured network. We contextualize these results in terms of adjacent possible theory and as a new way to understand collective intelligence. We argue that excessive information and material flow can become a source of innovation. This article is part of the themed issue 'Reconceptualizing the origins of life'.
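
    A minimal boid update (alignment, cohesion, separation) sketches the model class behind these swarm simulations; the gains and interaction radius are illustrative, and production runs use spatial indexing rather than this O(N²) loop:

    ```python
    import numpy as np

    def boid_step(pos, vel, r=1.0, dt=0.1, w_ali=0.05, w_coh=0.01, w_sep=0.1):
        new_vel = vel.copy()
        for i in range(len(pos)):
            d = np.linalg.norm(pos - pos[i], axis=1)
            nbr = (d < r) & (d > 0)
            if nbr.any():
                new_vel[i] += w_ali * (vel[nbr].mean(axis=0) - vel[i])   # align
                new_vel[i] += w_coh * (pos[nbr].mean(axis=0) - pos[i])   # cohere
                new_vel[i] += w_sep * (pos[i] - pos[nbr]).mean(axis=0)   # separate
        return pos + dt * new_vel, new_vel

    rng = np.random.default_rng(0)
    p, v = rng.uniform(0, 10, (200, 2)), rng.normal(0, 1, (200, 2))
    p, v = boid_step(p, v)
    ```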

  3. Test-particle simulations of SEP propagation in IMF with large-scale fluctuations

    Science.gov (United States)

    Kelly, J.; Dalla, S.; Laitinen, T.

    2012-11-01

    The results of full-orbit test-particle simulations of SEPs propagating through an IMF which exhibits large-scale fluctuations are presented. A variety of propagation conditions are simulated - scatter-free, and scattering with mean free path, λ, of 0.3 and 2.0 AU - and the cross-field transport of SEPs is investigated. When calculating cross-field displacements the Parker spiral geometry is accounted for and the role of magnetic field expansion is taken into account. It is found that transport across the magnetic field is enhanced in the λ = 0.3 AU and λ = 2 AU cases, compared to the scatter-free case, with the λ = 2 AU case in particular containing outlying particles that had strayed a large distance across the IMF. Outliers are categorized by means of Chauvenet's criterion and it is found that typically between 1 and 2% of the population falls within this category. The ratio of the latitudinal to longitudinal diffusion coefficient perpendicular to the magnetic field is typically 0.2, suggesting that transport in latitude is less efficient.
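
    Chauvenet's criterion itself is compact: a sample is flagged when the expected number of equally extreme values in a dataset of size N drops below 0.5 (synthetic data below, with one injected outlier):

    ```python
    import numpy as np
    from math import erfc, sqrt

    def chauvenet_outliers(x):
        x = np.asarray(x, dtype=float)
        z = np.abs(x - x.mean()) / x.std()
        p_tail = np.array([erfc(zi / sqrt(2)) for zi in z])  # P(|Z| >= z)
        return len(x) * p_tail < 0.5

    data = np.append(np.random.default_rng(2).normal(0, 1, 99), 6.0)
    print(np.nonzero(chauvenet_outliers(data))[0])  # flags the injected point
    ```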

  4. Large-scale conformational changes of Trypanosoma cruzi proline racemase predicted by accelerated molecular dynamics simulation.

    Directory of Open Access Journals (Sweden)

    César Augusto F de Oliveira

    2011-10-01

    Chagas' disease, caused by the protozoan parasite Trypanosoma cruzi (T. cruzi), is a life-threatening illness affecting 11-18 million people. Currently available treatments are limited, with unacceptable efficacy and safety profiles. Recent studies have revealed an essential T. cruzi proline racemase enzyme (TcPR) as an attractive candidate for improved chemotherapeutic intervention. Conformational changes associated with substrate binding to TcPR are believed to expose critical residues that elicit a host mitogenic B-cell response, a process contributing to parasite persistence and immune system evasion. Characterization of the conformational states of TcPR requires access to long-time-scale motions that are currently inaccessible by standard molecular dynamics simulations. Here we describe advanced accelerated molecular dynamics that extend the effective simulation time and capture large-scale motions of functional relevance. Conservation and fragment mapping analyses identified potential conformational epitopes located in the vicinity of newly identified transient binding pockets. The newly identified open TcPR conformations revealed by this study, along with knowledge of the closed-to-open interconversion mechanism, advance our understanding of TcPR function. The results and the strategy adopted in this work constitute an important step toward rationalizing the molecular basis of the mitogenic B-cell response of TcPR and provide new insights for future structure-based drug discovery.
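
    For orientation, the standard accelerated-MD boost potential (Hamelberg et al., 2004), the general technique this study builds on, raises the potential wherever it falls below a threshold E; the threshold and smoothing parameter below are illustrative, not the study's settings:

    ```python
    import numpy as np

    def amd_boost(V, E, alpha):
        """Return the biased potential V + dV with the standard aMD boost."""
        V = np.asarray(V, dtype=float)
        dV = np.where(V < E, (E - V) ** 2 / (alpha + E - V), 0.0)
        return V + dV

    V = np.linspace(-100.0, 0.0, 5)   # illustrative energies (kcal/mol)
    print(amd_boost(V, E=-20.0, alpha=20.0))
    ```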

  5. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set then are quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more than the chosen reference data. In aggregate, the simulations of land-surface latent and …
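
    The aggregated metric described above is, at its simplest, an (optionally area-weighted) spatio-temporal root-mean-square difference between a simulated field and a reference dataset; the grid and error magnitudes here are synthetic stand-ins:

    ```python
    import numpy as np

    def rmse(sim, ref, weights=None):
        diff2 = (np.asarray(sim) - np.asarray(ref)) ** 2
        return float(np.sqrt(np.average(diff2, weights=weights)))

    rng = np.random.default_rng(3)
    ref = 288.0 + rng.normal(0, 5, (12, 2, 2))   # monthly 2 m temperature [K]
    sim = ref + rng.normal(0, 1.5, ref.shape)    # a model with ~1.5 K errors
    print(rmse(sim, ref))
    ```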

  6. A long-term, continuous simulation approach for large-scale flood risk assessments

    Science.gov (United States)

    Falter, Daniela; Schröter, Kai; Viet Dung, Nguyen; Vorogushyn, Sergiy; Hundecha, Yeshewatesfa; Kreibich, Heidi; Apel, Heiko; Merz, Bruno

    2014-05-01

    The Regional Flood Model (RFM) is a process-based model cascade developed for flood risk assessments of large-scale basins. RFM consists of four model parts: the rainfall-runoff model SWIM, a 1D channel routing model, a 2D hinterland inundation model and the flood loss estimation model for residential buildings FLEMOps+r. The model cascade recently underwent a proof-of-concept study at the Elbe catchment (Germany) to demonstrate that flood risk assessments, based on a continuous simulation approach including rainfall-runoff, hydrodynamic and damage estimation models, are feasible for large catchments. The results of this study indicated that uncertainties are significant, especially for hydrodynamic simulations. This was basically a consequence of low data quality and disregarding dike breaches. Therefore, RFM was applied with a refined hydraulic model setup for the Elbe tributary Mulde. The study area, the Mulde catchment, comprises about 6,000 km² and 380 river-km. The inclusion of more reliable information on overbank cross-sections and dikes considerably improved the results. For the application of RFM to flood risk assessments, long-term climate input data are needed to drive the model chain. This model input was provided by a multi-site, multi-variate weather generator that produces sets of synthetic meteorological data reproducing the current climate statistics. The data set comprises 100 realizations of 100 years of meteorological data. With the proposed continuous simulation approach of RFM, we simulated a virtual period of 10,000 years covering the entire flood risk chain including hydrological, 1D/2D hydrodynamic and flood damage estimation models. This provided a record of around 2,000 inundation events affecting the study area, with spatially detailed information on inundation depths and damage to residential buildings at a resolution of 100 m. This serves as the basis for a spatially consistent flood risk assessment for the Mulde catchment presented in …

  7. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX that isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software …

  8. Lattice models for large-scale simulations of coherent wave scattering

    Science.gov (United States)

    Wang, Shumin; Teixeira, Fernando L.

    2004-01-01

    Lattice approximations for partial differential equations describing physical phenomena are commonly used for the numerical simulation of many problems otherwise intractable by pure analytical approaches. The discretization inevitably leads to many of the original symmetries being broken or modified. In the case of Maxwell’s equations, for example, the invariance and isotropy of the speed of light in vacuum are invariably lost because of the so-called grid dispersion. Since it is a cumulative effect, grid dispersion is particularly harmful for the accuracy of results of large-scale simulations of scattering problems. Grid dispersion is usually combated by either increasing the lattice resolution or by employing higher-order schemes with larger stencils for the space and time derivatives. Both alternatives lead to increased computational cost to simulate a problem of a given physical size. Here, we introduce a general approach to develop lattice approximations with reduced grid dispersion error for a given stencil (and hence at no additional computational cost). The present approach is based on first obtaining stencil coefficients in the Fourier domain that minimize the maximum grid dispersion error for wave propagation in all directions (minimax sense). The resulting coefficients are then expanded into a Taylor series in terms of the frequency variable and incorporated into time-domain (update) equations after an inverse Fourier transformation. Maximally flat (Butterworth) or Chebyshev filters are subsequently used to minimize the wave speed variations for a given frequency range of interest. The use of such filters also allows for the adjustment of the grid dispersion characteristics so as to minimize not only the local dispersion error but also the accumulated phase error in a frequency range of interest.

  9. Overview of large scale experiments performed within the LBB project in the Czech Republic

    Energy Technology Data Exchange (ETDEWEB)

    Kadecka, P.; Lauerova, D. [Nuclear Research Institute, Rez (Czechoslovakia)]

    1997-04-01

    In recent years, NRI Rez has been performing LBB analyses of safety-significant primary circuit pipings of NPPs in the Czech and Slovak Republics. The analyses covered NPPs with WWER 440 Type 230 and 213 and WWER 1000 Type 320 reactors. Within the relevant LBB projects, undertaken with the aim of proving that the requirements of LBB are fulfilled, a series of large scale experiments was performed. The goal of these experiments was to verify the properties of the selected components and to prove the quality and/or conservatism of the assessments used in the LBB analyses. In this poster, a brief overview of the experiments performed in the Czech Republic under the guidance of NRI Rez is presented.

  10. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Tianwu; Yu, Linglin; Zhang, Chong [Applied Physics Program and Department of Bioengineering, Rice University, Houston, Texas 77005 (United States)]; Ma, Jianpeng, E-mail: jpma@bcm.tmc.edu [Applied Physics Program and Department of Bioengineering, Rice University, Houston, Texas 77005 (United States); Verna and Marrs McLean Department of Biochemistry and Molecular Biology, Baylor College of Medicine, One Baylor Plaza, BCM-125, Houston, Texas 77030 (United States)]

    2014-07-28

    In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studying large complex systems. It mainly inherits the continuous simulated tempering (CST) method of our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Differing from conventional PT methods, despite the large stride of the total temperature range, the PCST method requires very few copies of simulations, typically 2–3 copies, yet it is still capable of maintaining a high rate of exchange between neighboring copies. Furthermore, in the PCST method, the size of the system does not dramatically affect the number of copies needed, because the exchange rate is independent of total potential energy, thus providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested on the two-dimensional Ising model, a Lennard-Jones liquid and an all-atom folding simulation of a small globular protein, trp-cage, in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent.
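
    For context, the conventional parallel-tempering swap test that PCST generalizes is shown below; PCST's key departure is that its exchange probability does not depend on the total potential energy, whereas this Metropolis rule does:

    ```python
    import numpy as np

    def swap_accepted(E_i, E_j, T_i, T_j, rng, kB=1.0):
        """Metropolis acceptance for exchanging replicas i and j."""
        delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_j - E_i)
        return rng.random() < np.exp(-max(delta, 0.0))

    rng = np.random.default_rng(4)
    print(swap_accepted(E_i=-100.0, E_j=-95.0, T_i=300.0, T_j=330.0, rng=rng))
    ```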

  11. Algebraic mesh generation for large scale viscous-compressible aerodynamic simulation

    International Nuclear Information System (INIS)

    Smith, R.E.

    1984-01-01

    Viscous-compressible aerodynamic simulation is the numerical solution of the compressible Navier-Stokes equations and associated boundary conditions. Boundary-fitted coordinate systems are well suited for the application of finite difference techniques to the Navier-Stokes equations. An algebraic approach to boundary-fitted coordinate systems is one where an explicit functional relation describes a mesh on which a solution is obtained. This approach has the advantage of rapid-precise mesh control. The basic mathematical structure of three algebraic mesh generation techniques is described. They are transfinite interpolation, the multi-surface method, and the two-boundary technique. The Navier-Stokes equations are transformed to a computational coordinate system where boundary-fitted coordinates can be applied. Large-scale computation implies that there is a large number of mesh points in the coordinate system. Computation of viscous compressible flow using boundary-fitted coordinate systems and the application of this computational philosophy on a vector computer are presented
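
    The "explicit functional relation" character of algebraic mesh generation is easiest to see in 2D transfinite interpolation (a Coons patch), where the interior mesh is written directly in terms of the four boundary curves; the curves below are illustrative:

    ```python
    import numpy as np

    def tfi(bottom, top, left, right):
        """Transfinite interpolation of four boundary curves of shape (n, 2)."""
        xi = np.linspace(0, 1, len(bottom))[:, None, None]
        eta = np.linspace(0, 1, len(left))[None, :, None]
        b, t = bottom[:, None, :], top[:, None, :]
        l, r = left[None, :, :], right[None, :, :]
        # Boundary blending minus the bilinear corner correction.
        return ((1 - eta) * b + eta * t + (1 - xi) * l + xi * r
                - (1 - xi) * (1 - eta) * bottom[0] - xi * eta * top[-1]
                - xi * (1 - eta) * bottom[-1] - (1 - xi) * eta * top[0])

    s = np.linspace(0, 1, 21)
    bottom = np.stack([s, 0.1 * np.sin(np.pi * s)], axis=1)  # curved lower wall
    top = np.stack([s, np.ones_like(s)], axis=1)
    left = np.stack([np.zeros_like(s), s], axis=1)
    right = np.stack([np.ones_like(s), s], axis=1)
    mesh = tfi(bottom, top, left, right)       # shape (21, 21, 2)
    ```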

  12. Contextual Compression of Large-Scale Wind Turbine Array Simulations: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Clyne, John [National Center for Atmospheric Research]

    2017-11-03

    Data sizes are becoming a critical issue, particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed to be the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while giving the user control over where data loss, and thus reduced accuracy, occurs in the analysis. We argue that this reduced but contextualized representation is a valid approach that encourages contextual data management.
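
    A hedged sketch of the per-block idea using PyWavelets: blocks flagged salient keep all coefficients, others are hard-thresholded before reconstruction; the saliency flag and the 5 % retention level are illustrative stand-ins for the paper's feature-based rules:

    ```python
    import numpy as np
    import pywt

    def compress_block(block, salient, keep_fraction=0.05):
        coeffs = pywt.wavedec2(block, "db2", level=2)
        if not salient:
            # Zero out all but the largest detail coefficients.
            flat = np.concatenate([np.abs(c).ravel()
                                   for lvl in coeffs[1:] for c in lvl])
            thr = np.quantile(flat, 1.0 - keep_fraction)
            coeffs = [coeffs[0]] + [tuple(pywt.threshold(c, thr, "hard")
                                          for c in lvl) for lvl in coeffs[1:]]
        return pywt.waverec2(coeffs, "db2")

    field = np.random.default_rng(5).normal(size=(64, 64))
    wake = compress_block(field, salient=True)         # kept at full fidelity
    background = compress_block(field, salient=False)  # heavily thresholded
    ```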

  13. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4 % of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so the installation of large scale hydrogen production plants will be needed. In this context, the development of low cost large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared. Then, a state of the art of currently available electrolysis modules was made. A review of the large scale electrolysis plants that have been installed in the world was also carried out, and the main projects related to large scale electrolysis were listed. The economics of large scale electrolysers are discussed, and the influence of energy prices on the hydrogen production cost by large scale electrolysis was evaluated. (authors)

  14. Efficient graph-based dynamic load-balancing for parallel large-scale agent-based traffic simulation

    NARCIS (Netherlands)

    Xu, Y.; Cai, W.; Aydt, H.; Lees, M.; Tolk, A.; Diallo, S.Y.; Ryzhov, I.O.; Yilmaz, L.; Buckley, S.; Miller, J.A.

    2014-01-01

    One of the issues of parallelizing large-scale agent-based traffic simulations is partitioning and load-balancing. Traffic simulations are dynamic applications where the distribution of workload in the spatial domain constantly changes. Dynamic load-balancing at run-time has shown better efficiency …

  15. Performance evaluation of the DCMD desalination process under bench scale and large scale module operating conditions

    KAUST Repository

    Francis, Lijo; Ghaffour, NorEddine; Alsaadi, Ahmad Salem; Nunes, Suzana Pereira; Amy, Gary L.

    2014-01-01

    The flux performance of different hydrophobic microporous flat-sheet commercial membranes made of polytetrafluoroethylene (PTFE) and polypropylene (PP) was tested for Red Sea water desalination using the direct contact membrane distillation (DCMD) process, under bench scale (high δT) and large scale module (low δT) operating conditions. Membranes were characterized for their surface morphology, water contact angle, thickness, porosity, pore size and pore size distribution. The DCMD process performance was optimized using a locally designed and fabricated module, aiming to maximize the flux at different levels of the operating parameters, mainly feed water and coolant inlet temperatures at different temperature differences across the membrane (δT). A water vapor flux of 88.8 kg/m²h was obtained using a PTFE membrane at high δT (60°C). In addition, the flux performance was compared to the first generation of a new locally synthesized and fabricated membrane made of a different class of polymer under the same conditions. A total salt rejection of 99.99% and a boron rejection of 99.41% were achieved under extreme operating conditions. On the other hand, a detailed water characterization revealed that low molecular weight non-ionic molecules (at the ppb level) were transported with the water vapor molecules through the membrane structure. The membrane which provided the highest flux was then tested under large scale module operating conditions. The average flux of the latter study (low δT) was found to be eight times lower than that of the bench scale (high δT) operating conditions.
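
    The driving force behind these flux numbers is the transmembrane water vapour pressure difference, commonly estimated from the Antoine equation; the sketch below uses the standard 1-100 °C water fit (in mmHg), while the membrane coefficient B is purely illustrative and would have to be measured for a real membrane:

    ```python
    def p_sat_mmHg(T_celsius):
        """Antoine equation for water, valid roughly 1-100 degC."""
        return 10 ** (8.07131 - 1730.63 / (233.426 + T_celsius))

    def dcmd_flux(T_feed, T_permeate, B=0.003):   # B in kg/(m2 h mmHg), assumed
        return B * (p_sat_mmHg(T_feed) - p_sat_mmHg(T_permeate))

    print(dcmd_flux(80.0, 20.0))   # bench-scale-like high dT: large flux
    print(dcmd_flux(45.0, 38.0))   # module-like low dT: much lower flux
    ```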

  16. Performance evaluation of the DCMD desalination process under bench scale and large scale module operating conditions

    KAUST Repository

    Francis, Lijo

    2014-04-01

    The flux performance of different hydrophobic microporous flat-sheet commercial membranes made of polytetrafluoroethylene (PTFE) and polypropylene (PP) was tested for Red Sea water desalination using the direct contact membrane distillation (DCMD) process, under bench scale (high δT) and large scale module (low δT) operating conditions. Membranes were characterized for their surface morphology, water contact angle, thickness, porosity, pore size and pore size distribution. The DCMD process performance was optimized using a locally designed and fabricated module, aiming to maximize the flux at different levels of the operating parameters, mainly feed water and coolant inlet temperatures at different temperature differences across the membrane (δT). A water vapor flux of 88.8 kg/m²h was obtained using a PTFE membrane at high δT (60°C). In addition, the flux performance was compared to the first generation of a new locally synthesized and fabricated membrane made of a different class of polymer under the same conditions. A total salt rejection of 99.99% and a boron rejection of 99.41% were achieved under extreme operating conditions. On the other hand, a detailed water characterization revealed that low molecular weight non-ionic molecules (at the ppb level) were transported with the water vapor molecules through the membrane structure. The membrane which provided the highest flux was then tested under large scale module operating conditions. The average flux of the latter study (low δT) was found to be eight times lower than that of the bench scale (high δT) operating conditions.

  17. Understanding Large-scale Structure in the SSA22 Protocluster Region Using Cosmological Simulations

    Science.gov (United States)

    Topping, Michael W.; Shapley, Alice E.; Steidel, Charles C.; Naoz, Smadar; Primack, Joel R.

    2018-01-01

    We investigate the nature and evolution of large-scale structure within the SSA22 protocluster region at z = 3.09 using cosmological simulations. A redshift histogram constructed from current spectroscopic observations of the SSA22 protocluster reveals two separate peaks at z = 3.065 (blue) and z = 3.095 (red). Based on these data, we report updated overdensity and mass calculations for the SSA22 protocluster. We find δ_b,gal = 4.8 ± 1.8 and δ_r,gal = 9.5 ± 2.0 for the blue and red peaks, respectively, and δ_t,gal = 7.6 ± 1.4 for the entire region. These overdensities correspond to masses of M_b = (0.76 ± 0.17) × 10^15 h^-1 M_⊙, M_r = (2.15 ± 0.32) × 10^15 h^-1 M_⊙, and M_t = (3.19 ± 0.40) × 10^15 h^-1 M_⊙ for the blue, red, and total peaks, respectively. We use the Small MultiDark Planck (SMDPL) simulation to identify comparably massive z ~ 3 protoclusters, and uncover the underlying structure and ultimate fate of the SSA22 protocluster. For this analysis, we construct mock redshift histograms for each simulated z ~ 3 protocluster, quantitatively comparing them with the observed SSA22 data. We find that the observed double-peaked structure in the SSA22 redshift histogram corresponds not to a single coalescing cluster, but rather to the proximity of a ~10^15 h^-1 M_⊙ protocluster and at least one >10^14 h^-1 M_⊙ cluster progenitor. Such associations in the SMDPL simulation are easily understood within the framework of hierarchical clustering of dark matter halos. We finally find that the opportunity to observe such a phenomenon is incredibly rare, with an occurrence rate of 7.4 h^3 Gpc^-3. Based on data obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration, and was made possible by the generous financial support of the W.M. Keck Foundation.
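
    A heavily hedged sketch of the standard overdensity-to-mass estimate behind such numbers: convert galaxy overdensity to matter overdensity with an assumed linear bias, then multiply the mean comoving matter density by the volume; the bias, volume, and cosmology here are illustrative, and the paper's calculation also corrects for redshift-space distortions:

    ```python
    RHO_M = 8.3e10   # mean comoving matter density [h^2 M_sun / Mpc^3], approx.

    def protocluster_mass(delta_gal, volume, bias=2.5):
        """Mass in h^-1 M_sun for a volume in (h^-1 Mpc)^3 (assumed bias)."""
        delta_m = delta_gal / bias
        return RHO_M * volume * (1.0 + delta_m)

    print(f"{protocluster_mass(7.6, 1.2e4):.2e}")   # of order 10^15 M_sun
    ```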

  18. High performance nanostructured Silicon heterojunction for water splitting on large scales

    KAUST Repository

    Bonifazi, Marcella

    2017-11-02

    In past years the global demand for energy has been increasing steeply, as has the awareness that new sources of clean energy are essential. Photo-electrochemical devices (PEC) for water splitting applications have stirred great interest, and different approaches have been explored to improve the efficiency of these devices and to avoid optical losses at the interfaces with water. These include engineering materials and nanostructuring the device's surfaces [1]-[2]. Despite the promising initial results, there are still many drawbacks that need to be overcome to reach large scale production with optimized performance [3]. We present a new device that relies on the optimization of a nanostructuring process that exploits suitably disordered surfaces. Additionally, this device could harvest light on both sides to efficiently gain and store the energy to keep the photocatalytic reaction active.

  19. Performance of the first Japanese large-scale facility for radon inhalation experiments with small animals

    International Nuclear Information System (INIS)

    Ishimori, Y.; Mitsunobu, F.; Yamaoka, K.; Tanaka, H.; Kataoka, T.; Sakoda, A.

    2011-01-01

    A radon test facility for small animals was developed in order to increase the statistical validity of differences in biological response in various radon environments. This paper illustrates the performance of this facility, the first large-scale facility of its kind in Japan. The facility is capable of conducting approximately 150 mouse-scale tests at the same time. The apparatus for exposing small animals to radon has six animal chamber groups with five independent cages each. Different radon concentrations are available in each animal chamber group. Because the first target of this study is to examine the in vivo behaviour of radon and its effects, the major functions for controlling radon and eliminating thoron were examined experimentally. Additionally, radon progeny concentrations and their particle size distributions in the cages were examined experimentally so that they can be considered in future projects. (authors)

  20. High performance nanostructured Silicon heterojunction for water splitting on large scales

    KAUST Repository

    Bonifazi, Marcella; Fu, Hui-chun; He, Jr-Hau; Fratalocchi, Andrea

    2017-01-01

    In past years the global demand for energy has been increasing steeply, as has the awareness that new sources of clean energy are essential. Photo-electrochemical devices (PEC) for water splitting applications have stirred great interest, and different approaches have been explored to improve the efficiency of these devices and to avoid optical losses at the interfaces with water. These include engineering materials and nanostructuring the device's surfaces [1]-[2]. Despite the promising initial results, there are still many drawbacks that need to be overcome to reach large scale production with optimized performance [3]. We present a new device that relies on the optimization of a nanostructuring process that exploits suitably disordered surfaces. Additionally, this device could harvest light on both sides to efficiently gain and store the energy to keep the photocatalytic reaction active.

  1. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out … model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for the application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors, …). Simulation programs are proposed as a control-supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSHP technology is put into perspective with respect to alternatives, and a short discussion of the barriers to and breakthroughs of the technology is given. …

  2. Large-scale micromagnetic simulation of Nd-Fe-B sintered magnets with Dy-rich shell structures

    Directory of Open Access Journals (Sweden)

    T. Oikawa

    2016-05-01

    Large-scale micromagnetic simulations have been performed using the energy minimization method on a model with structural features similar to those of Dy grain boundary diffusion (GBD)-processed sintered magnets. Coercivity increases as a linear function of the anisotropy field of the Dy-rich shell, independent of the Dy composition in the core as long as the shell thickness is greater than about 15 nm. This result shows that the Dy contained in the initial sintered magnets prior to the GBD process is not essential for enhancing coercivity. Magnetization reversal patterns indicate that coercivity is strongly influenced by domain wall pinning at the grain boundary. This observation is found to be consistent with the one-dimensional pinning theory.

  3. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    Science.gov (United States)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g., users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing the status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated …

  4. Performance Evaluation of Hadoop-based Large-scale Network Traffic Analysis Cluster

    Directory of Open Access Journals (Sweden)

    Tao Ran

    2016-01-01

    Full Text Available As Hadoop has gained popularity in the big data era, it is widely used in various fields. The self-designed and self-developed large-scale network traffic analysis cluster, built on Hadoop, works well with off-line applications running on it to analyze massive network traffic data. To evaluate the performance of the analysis cluster scientifically and reasonably, we propose a performance evaluation system. First, we take the execution times of three benchmark applications as the performance benchmark and select 40 metrics of customized statistical resource data. Then we identify the relationship between the resource data and the execution times by a statistical modeling approach composed of principal component analysis and multiple linear regression. After training the models on historical data, we can predict the execution times from current resource data. Finally, we evaluate the performance of the analysis cluster through the validated prediction of execution times. Experimental results show that the execution times predicted by the trained models are within an acceptable error range, and the performance evaluation results are accurate and reliable.
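
    As a rough illustration of the statistical modeling step described above, the following sketch chains principal component analysis with multiple linear regression; the metric matrix, timings and component count are synthetic stand-ins, not the paper's cluster data.

```python
# Sketch of the statistical modeling step: PCA on resource metrics followed
# by multiple linear regression to predict benchmark execution times. The
# data below are synthetic stand-ins for the 40 cluster resource metrics.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_hist = rng.normal(size=(200, 40))            # historical resource metrics
t_hist = X_hist[:, :3] @ [5.0, 3.0, 1.5] + 60 + rng.normal(0, 1, 200)

# Reduce the 40 correlated metrics to a few principal components, then regress.
model = make_pipeline(PCA(n_components=5), LinearRegression())
model.fit(X_hist, t_hist)

X_now = rng.normal(size=(1, 40))               # current resource snapshot
print("predicted execution time:", model.predict(X_now)[0])
```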

  5. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    Science.gov (United States)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ~2300 wall units long and ~750 wall units wide; size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flows, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) that are within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and can be observed also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations of standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large-scales. These large thermal structures represent some kind of an echo of the large scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and

  6. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    Science.gov (United States)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

    Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models due to computational efficiency and resolving fine-scale spatial structure of hydrologic fluxes and states. However, fidelity of semi-distributed model simulations is impacted by (1) formulation of hydrologic response units (HRUs), and (2) aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECS) representative of a hillslope in first order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set-up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations using spatially based model evaluation metrics.

  7. Numerical Simulation of Unsteady Large Scale Separated Flow around Oscillating Airfoil

    OpenAIRE

    Isogai, Koji; 磯貝, 紘二

    1991-01-01

    Numerical simulations of the dynamic stall phenomenon of a NACA0012 airfoil oscillating in pitch near the static stalling angle are performed by using the compressible Navier-Stokes equations. In the present computations, a TVD scheme and an algebraic turbulence model are employed for the simulations of the unsteady separated flows at a Reynolds number of 1.1×10^5. The hysteresis loops of the unsteady pitching moment during dynamic stall are compared with the existing experimental data. The flow pattern a...

  8. A comparison of large-scale electron beam and bench-scale 60Co irradiations of simulated aqueous waste streams

    Science.gov (United States)

    Kurucz, Charles N.; Waite, Thomas D.; Otaño, Suzana E.; Cooper, William J.; Nickelsen, Michael G.

    2002-11-01

    The effectiveness of using high energy electron beam irradiation for the removal of toxic organic chemicals from water and wastewater has been demonstrated by commercial-scale experiments conducted at the Electron Beam Research Facility (EBRF) located in Miami, Florida and elsewhere. The EBRF treats various waste and water streams up to 450 l min^-1 (120 gal min^-1) with doses up to 8 kilogray (kGy). Many experiments have been conducted by injecting toxic organic compounds into various plant feed streams and measuring the concentrations of compound(s) before and after exposure to the electron beam at various doses. Extensive experimentation has also been performed by dissolving selected chemicals in 22,700 l (6000 gal) tank trucks of potable water to simulate contaminated groundwater, and pumping the resulting solutions through the electron beam. These large-scale experiments, although necessary to demonstrate the commercial viability of the process, require a great deal of time and effort. This paper compares the results of large-scale electron beam irradiations to those obtained from bench-scale irradiations using gamma rays generated by a 60Co source. Dose constants from exponential contaminant removal models are found to depend on the source of radiation and initial contaminant concentration. Possible reasons for observed differences such as a dose rate effect are discussed. Models for estimating electron beam dose constants from bench-scale gamma experiments are presented. Data used to compare the removal of organic compounds using gamma irradiation and electron beam irradiation are taken from the literature and a series of experiments designed to examine the effects of pH, the presence of turbidity, and initial concentration on the removal of various organic compounds (benzene, toluene, phenol, PCE, TCE and chloroform) from simulated groundwater.
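
    The exponential removal model referred to above has the form C(D) = C0 exp(-kD), where D is the absorbed dose and k is the dose constant. The short sketch below fits k from dose-response data by log-linear least squares; the numbers are invented for illustration and do not come from the EBRF experiments.

```python
# Sketch of fitting a dose constant k in the exponential removal model
# C(D) = C0 * exp(-k * D), as used to compare e-beam and 60Co irradiations.
# The dose/concentration values below are made up for illustration.
import numpy as np

dose = np.array([0.0, 1.0, 2.0, 4.0, 8.0])          # kGy
conc = np.array([100.0, 61.0, 37.0, 13.5, 1.9])     # ug/l, synthetic

# Linearize: ln C = ln C0 - k * D, then solve by least squares.
slope, intercept = np.polyfit(dose, np.log(conc), 1)
k = -slope
print(f"dose constant k = {k:.2f} 1/kGy, C0 = {np.exp(intercept):.1f} ug/l")
print("dose for 99% removal:", np.log(100.0) / k, "kGy")
```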

  9. A comparison of large-scale electron beam and bench-scale 60Co irradiations of simulated aqueous waste streams

    International Nuclear Information System (INIS)

    Kurucz, Charles N.; Waite, Thomas D.; Otano, Suzana E.; Cooper, William J.; Nickelsen, Michael G.

    2002-01-01

    The effectiveness of using high energy electron beam irradiation for the removal of toxic organic chemicals from water and wastewater has been demonstrated by commercial-scale experiments conducted at the Electron Beam Research Facility (EBRF) located in Miami, Florida and elsewhere. The EBRF treats various waste and water streams up to 450 l min^-1 (120 gal min^-1) with doses up to 8 kilogray (kGy). Many experiments have been conducted by injecting toxic organic compounds into various plant feed streams and measuring the concentrations of compound(s) before and after exposure to the electron beam at various doses. Extensive experimentation has also been performed by dissolving selected chemicals in 22,700 l (6000 gal) tank trucks of potable water to simulate contaminated groundwater, and pumping the resulting solutions through the electron beam. These large-scale experiments, although necessary to demonstrate the commercial viability of the process, require a great deal of time and effort. This paper compares the results of large-scale electron beam irradiations to those obtained from bench-scale irradiations using gamma rays generated by a 60Co source. Dose constants from exponential contaminant removal models are found to depend on the source of radiation and initial contaminant concentration. Possible reasons for observed differences such as a dose rate effect are discussed. Models for estimating electron beam dose constants from bench-scale gamma experiments are presented. Data used to compare the removal of organic compounds using gamma irradiation and electron beam irradiation are taken from the literature and a series of experiments designed to examine the effects of pH, the presence of turbidity, and initial concentration on the removal of various organic compounds (benzene, toluene, phenol, PCE, TCE and chloroform) from simulated groundwater.

  10. Simulating the impact of the large-scale circulation on the 2-m temperature and precipitation climatology

    Science.gov (United States)

    The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging for dynamical downscal...

  11. Co-Cure-Ply Resins for High Performance, Large-Scale Structures

    Data.gov (United States)

    National Aeronautics and Space Administration — Large-scale composite structures are commonly joined by secondary bonding of molded-and-cured thermoset components. This approach may result in unpredictable joint...

  12. Large-scale molecular dynamics simulations of self-assembling systems.

    Science.gov (United States)

    Klein, Michael L; Shinoda, Wataru

    2008-08-08

    Relentless increases in the size and performance of multiprocessor computers, coupled with new algorithms and methods, have led to novel applications of simulations across chemistry. This Perspective focuses on the use of classical molecular dynamics and so-called coarse-grain models to explore phenomena involving self-assembly in complex fluids and biological systems.

  13. Crystallisation of a Lennard-Jones fluid by large scale molecular dynamics simulation

    International Nuclear Information System (INIS)

    Snook, I.

    1998-01-01

    Full text: The evolution of the structure of a large system of atoms interacting via a Lennard-Jones pair potential was simulated using the Molecular Dynamics computer simulation technique. The system was initially equilibrated in the one-phase region of the phase diagram at a temperature above critical; a temperature quench was then performed which placed the system in a region where the single fluid phase was unstable. Quenches to below the triple point temperature gave rise to crystallisation. The mechanism and final morphology are shown to depend strongly on the starting conditions, e.g. the starting density

  14. Large scale gas injection test (Lasgit) performed at the Aespoe Hard Rock Laboratory. Summary report 2008

    International Nuclear Information System (INIS)

    Cuss, R.J.; Harrington, J.F.; Noy, D.J.

    2010-02-01

    This report describes the set-up, operation and observations from the first 1,385 days (3.8 years) of the large scale gas injection test (Lasgit) experiment conducted at the Aespoe Hard Rock Laboratory. During this time the bentonite buffer has been artificially hydrated and has given new insight into the evolution of the buffer. After 2 years (849 days) of artificial hydration a canister filter was selected for a series of hydraulic and gas tests, a stage that lasted 268 days. The results from the gas test showed that the full-scale bentonite buffer behaved in a similar way to previous laboratory experiments. This confirms the up-scaling of laboratory observations, with the addition of considerable information on the stress responses throughout the deposition hole. During the gas testing stage, artificial hydration of the buffer continued. Hydraulic results, from controlled and uncontrolled events, show that the buffer continues to mature and has yet to reach full maturation. Lasgit has yielded high quality data relating to the hydration of the bentonite and the evolution in hydrogeological properties adjacent to the deposition hole. The initial hydraulic and gas injection tests confirm the correct working of all control and data acquisition systems. Lasgit has been in successful operation for in excess of 1,385 days

  15. Large scale gas injection test (Lasgit) performed at the Aespoe Hard Rock Laboratory. Summary report 2008

    Energy Technology Data Exchange (ETDEWEB)

    Cuss, R.J.; Harrington, J.F.; Noy, D.J. (British Geological Survey (United Kingdom))

    2010-02-15

    This report describes the set-up, operation and observations from the first 1,385 days (3.8 years) of the large scale gas injection test (Lasgit) experiment conducted at the Aespoe Hard Rock Laboratory. During this time the bentonite buffer has been artificially hydrated and has given new insight into the evolution of the buffer. After 2 years (849 days) of artificial hydration a canister filter was selected for a series of hydraulic and gas tests, a stage that lasted 268 days. The results from the gas test showed that the full-scale bentonite buffer behaved in a similar way to previous laboratory experiments. This confirms the up-scaling of laboratory observations, with the addition of considerable information on the stress responses throughout the deposition hole. During the gas testing stage, artificial hydration of the buffer continued. Hydraulic results, from controlled and uncontrolled events, show that the buffer continues to mature and has yet to reach full maturation. Lasgit has yielded high quality data relating to the hydration of the bentonite and the evolution in hydrogeological properties adjacent to the deposition hole. The initial hydraulic and gas injection tests confirm the correct working of all control and data acquisition systems. Lasgit has been in successful operation for in excess of 1,385 days

  16. High-Performance Carbon Dioxide Electrocatalytic Reduction by Easily Fabricated Large-Scale Silver Nanowire Arrays.

    Science.gov (United States)

    Luan, Chuhao; Shao, Yang; Lu, Qi; Gao, Shenghan; Huang, Kai; Wu, Hui; Yao, Kefu

    2018-05-17

    An efficient and selective catalyst is urgently needed for carbon dioxide electroreduction, and silver is one of the promising candidates with affordable cost. Here we fabricated large-scale, vertically standing Ag nanowire arrays with high crystallinity and electrical conductivity as carbon dioxide electroreduction catalysts by a simple nanomolding method that was usually considered not feasible for metallic crystalline materials. A great enhancement of current density and selectivity for CO at moderate potentials was achieved. The current density for CO (j_CO) of the Ag nanowire array 200 nm in diameter was more than 2500 times larger than that of Ag foil at an overpotential of 0.49 V, with an efficiency over 90%. The enhanced performance is attributed to a greatly increased electrochemically active surface area (ECSA) and higher intrinsic activity compared to those of polycrystalline Ag foil. More low-coordinated sites on the nanowires, which stabilize the CO2 intermediate better, are responsible for the high intrinsic activity. In addition, the impact of surface morphology, which limits mass transport, on the reaction selectivity and efficiency of nanowire arrays with different diameters was also discussed.

  17. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data.

    Science.gov (United States)

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H

    2012-11-06

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven not only by geospatial problems in numerous fields, but also by emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging provides high potential to support image-based computer-aided diagnosis. One major requirement for this is effective querying of such an enormous amount of data with fast response, which faces two major challenges: the "big data" challenge and the high computational complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on-demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for micro-anatomic objects. To reduce query response time, we propose cost-based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce.

  18. Towards Agent-Based Simulation of Emerging and Large-Scale Social Networks. Examples of the Migrant Crisis and MMORPGs

    Directory of Open Access Journals (Sweden)

    Schatten, Markus

    2016-10-01

    Full Text Available Large-scale agent based simulation of social networks is described in the context of the migrant crisis in Syria and the EU as well as massively multi-player on-line role playing games (MMORPG. The recipeWorld system by Terna and Fontana is proposed as a possible solution to simulating large-scale social networks. The initial system has been re-implemented using the Smart Python multi-Agent Development Environment (SPADE and Pyinteractive was used for visualization. We present initial models of simulation that we plan to develop further in future studies. Thus this paper is research in progress that will hopefully establish a novel agent-based modelling system in the context of the ModelMMORPG project.

  19. Multi-parameter decoupling and slope tracking control strategy of a large-scale high altitude environment simulation test cabin

    Directory of Open Access Journals (Sweden)

    Li Ke

    2014-12-01

    Full Text Available A large-scale high altitude environment simulation test cabin was developed to accurately control the temperatures and pressures encountered at high altitudes. The system was designed to provide slope-tracking dynamic control of the two parameters, temperature and pressure, and to overcome the control difficulties inherent in a large-inertia lag link within a complex control system composed of a turbine refrigeration device, a vacuum device and a liquid nitrogen cooling device. The system includes multi-parameter decoupling of the cabin itself to avoid damage to the air refrigeration turbine caused by improper operation. Based on analysis of the dynamic characteristics and modeling of variations in temperature, pressure and rotation speed, an intelligent controller was implemented that combines decoupling and fuzzy arithmetic with an expert PID controller to control the test parameters through a decoupling and slope-tracking control strategy. The control system employs centralized management in an open industrial Ethernet architecture with an industrial computer at its core. Simulation, field debugging and operational results show that this method overcomes the poor anti-interference performance typical of a conventional PID and the overshooting that can readily damage equipment. The steady-state characteristics meet the system requirements.
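
    As a minimal, hypothetical illustration of the PID core such a controller builds on, consider the sketch below; the gains, actuator limits and first-order plant are invented, and a constant setpoint keeps the sketch short where the real system would ramp the setpoint for slope tracking. The conditional integration shown is a standard anti-windup device, not necessarily the authors' expert logic.

```python
# Minimal discrete PID controller with conditional anti-windup. All numbers
# (gains, limits, plant model) are invented for illustration.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement, u_min=-10.0, u_max=10.0):
        err = setpoint - measurement
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        if u_min < u < u_max:
            self.integral += err * self.dt   # integrate only when unsaturated
        return max(min(u, u_max), u_min)


pid = PID(kp=2.0, ki=0.02, kd=0.5, dt=1.0)
temp = 20.0                                  # toy cabin temperature (C)
for _ in range(300):
    u = pid.step(setpoint=-40.0, measurement=temp)
    temp += 0.05 * u                         # first-order stand-in for the plant
print(f"temperature after 300 steps: {temp:.1f} C (target -40.0)")
```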

  20. Three-dimensional simulation of large-scale structure in the universe

    Energy Technology Data Exchange (ETDEWEB)

    Centrella, J.; Melott, A.L.

    1983-09-15

    High and low density cloud-in-cell models were used to simulate the nonlinear growth of adiabatic perturbations in collisionless matter to demonstrate the development of a cellular structure in the universe. Account was taken of a short wavelength cutoff in collisionless matter, with a focus on resolving filaments and low density pancakes. The calculations were performed with a Friedmann-Robertson-Walker model, and the gravitational potential of dark matter was obtained through solution of the Poisson equation. The simulation began at redshifts z between 100 and 1000, and initial particle velocities were set to zero. Spherically symmetric voids were observed to form, then collide and interact. Sufficient particles were employed to avoid depletion during nonlinear collapse. No galaxies formed during the epoch studied, which has implications for the significance of dark, baryonic matter in the present universe.

  1. Parallel simulation of tsunami inundation on a large-scale supercomputer

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation, and the computational power of recent massively parallel supercomputers is helpful for enabling faster-than-real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), so it is expected that very fast parallel computers will become more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which employs a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using the CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
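
    The proportional CPU allocation across nested layers described above can be illustrated with a small sketch; the layer sizes and CPU count are invented, and the function names are ours, not those of the parallel TUNAMI-N2 code.

```python
# Sketch of the load-balancing rule: CPUs are allocated to nested grid layers
# in proportion to their grid point counts, then each layer is split by 1-D
# domain decomposition. Layer sizes and CPU count are illustrative only.
def allocate_cpus(grid_points, total_cpus):
    """Proportional allocation with at least one CPU per layer."""
    total = sum(grid_points)
    alloc = [max(1, round(total_cpus * n / total)) for n in grid_points]
    while sum(alloc) > total_cpus:        # fix rounding drift
        alloc[alloc.index(max(alloc))] -= 1
    while sum(alloc) < total_cpus:
        alloc[alloc.index(max(alloc))] += 1
    return alloc

def row_blocks(nrows, ncpus):
    """1-D decomposition of one layer into contiguous row blocks."""
    base, extra = divmod(nrows, ncpus)
    start = 0
    for r in range(ncpus):
        size = base + (1 if r < extra else 0)
        yield (start, start + size)
        start += size

layers = [1200 * 800, 2400 * 1600, 4800 * 3200]   # grid points per nested layer
print(allocate_cpus(layers, total_cpus=1024))     # -> [49, 195, 780]
print(list(row_blocks(10, 3)))                    # -> [(0, 4), (4, 7), (7, 10)]
```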

  2. Very Large-Scale Neighborhoods with Performance Guarantees for Minimizing Makespan on Parallel Machines

    NARCIS (Netherlands)

    Brueggemann, T.; Hurink, Johann L.; Vredeveld, T.; Woeginger, Gerhard

    2006-01-01

    We study the problem of minimizing the makespan on m parallel machines. We introduce a very large-scale neighborhood of exponential size (in the number of machines) that is based on a matching in a complete graph. The idea is to partition the jobs assigned to the same machine into two sets. This

  3. Impacts of different characterizations of large-scale background on simulated regional-scale ozone over the continental United States

    Science.gov (United States)

    Hogrefe, Christian; Liu, Peng; Pouliot, George; Mathur, Rohit; Roselle, Shawn; Flemming, Johannes; Lin, Meiyun; Park, Rokjin J.

    2018-03-01

    from the global models along the CMAQ boundaries. Using boundary conditions from AM3 yielded higher springtime ozone columns burdens in the middle and lower troposphere compared to boundary conditions from the other models. For surface ozone, the differences between the AM3-driven CMAQ simulations and the CMAQ simulations driven by other large-scale models are especially pronounced during spring and winter where they can reach more than 10 ppb for seasonal mean ozone mixing ratios and as much as 15 ppb for domain-averaged daily maximum 8 h average ozone on individual days. In contrast, the differences between the C-IFS-, GEOS-Chem-, and H-CMAQ-driven regional-scale CMAQ simulations are typically smaller. Comparing simulated surface ozone mixing ratios to observations and computing seasonal and regional model performance statistics revealed that boundary conditions can have a substantial impact on model performance. Further analysis showed that boundary conditions can affect model performance across the entire range of the observed distribution, although the impacts tend to be lower during summer and for the very highest observed percentiles. The results are discussed in the context of future model development and analysis opportunities.

  4. Large scale Direct Numerical Simulation of premixed turbulent jet flames at high Reynolds number

    Science.gov (United States)

    Attili, Antonio; Luca, Stefano; Lo Schiavo, Ermanno; Bisetti, Fabrizio; Creta, Francesco

    2016-11-01

    A set of direct numerical simulations of turbulent premixed jet flames at different Reynolds and Karlovitz numbers is presented. The simulations feature finite rate chemistry with 16 species and 73 reactions and up to 22 billion grid points. The jet consists of a methane/air mixture with equivalence ratio ϕ = 0.7 and temperature varying between 500 and 800 K. The temperature and species concentrations in the coflow correspond to the equilibrium state of the burnt mixture. All the simulations are performed at 4 atm. The flame length, normalized by the jet width, decreases significantly as the Reynolds number increases. This is consistent with an increase of the turbulent flame speed due to the increased integral scale of turbulence. This behavior is typical of flames in the thin-reaction zone regime, which are affected by turbulent transport in the preheat layer. Fractal dimension and topology of the flame surface, statistics of temperature gradients, and flame structure are investigated and the dependence of these quantities on the Reynolds number is assessed.

  5. Large scale statistics for computational verification of grain growth simulations with experiments

    International Nuclear Information System (INIS)

    Demirel, Melik C.; Kuprat, Andrew P.; George, Denise C.; Straub, G.K.; Misra, Amit; Alexander, Kathleen B.; Rollett, Anthony D.

    2002-01-01

    by curvature driven motion. This method utilizes gradient-weighted moving finite elements (GWMFE) combined with algorithms for performing topological reconnections on the evolving mesh. We previously showed a strong similarity between small-scale grain growth experiments and anisotropic three-dimensional simulations obtained from the EBSD measurements. Using the same technique, we obtained 5170-grain data from a thin aluminum film with a columnar grain structure and compared the computational results with experiments.

  6. Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow

    Science.gov (United States)

    Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca

    2017-11-01

    The performance characterization of complex engineering systems often relies on accurate, but computationally intensive numerical simulations. It is also well recognized that in order to obtain a reliable numerical prediction the propagation of uncertainties needs to be included. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite the great improvement in recent years, even the more advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low- and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods are assessed on an irradiated particle laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predicitive Science Academic Alliance Program (PSAAP) II at Stanford University.
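
    A minimal two-fidelity control-variate Monte Carlo estimator of the kind this family of methods builds on can be sketched as follows; the models, sample sizes and correlation structure are synthetic stand-ins, not the PSAAP II solver.

```python
# Sketch of a two-fidelity control-variate Monte Carlo estimator: a cheap
# low-fidelity (LF) model corrects the statistics of a few expensive
# high-fidelity (HF) samples. The "models" below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)

def model_hf(x):            # expensive high-fidelity QoI (stand-in)
    return np.sin(x) + 0.1 * x**2

def model_lf(x):            # cheap, correlated low-fidelity QoI (stand-in)
    return np.sin(x)

n_hf, n_lf = 50, 5000
x_hf = rng.normal(size=n_hf)
x_lf = rng.normal(size=n_lf)

y_hf = model_hf(x_hf)
y_lf_paired = model_lf(x_hf)          # LF evaluated on the HF samples
y_lf = model_lf(x_lf)                 # many extra cheap LF samples

# Optimal control-variate weight alpha = cov(HF, LF) / var(LF).
alpha = np.cov(y_hf, y_lf_paired)[0, 1] / np.var(y_lf_paired, ddof=1)
estimate = y_hf.mean() + alpha * (y_lf.mean() - y_lf_paired.mean())
print("plain MC (50 HF):", y_hf.mean(), " multifidelity:", estimate)
```

    The extra LF samples shift the estimate toward the LF mean only insofar as the two fidelities are correlated, which is what reduces the variance without additional HF realizations.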

  7. Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation

    Energy Technology Data Exchange (ETDEWEB)

    Hoshi, T [Department of Applied Mathematics and Physics, Tottori University, Tottori 680-8550 (Japan); Fujiwara, T [Core Research for Evolutional Science and Technology, Japan Science and Technology Agency (CREST-JST) (Japan)

    2009-02-11

    An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.

  8. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    Science.gov (United States)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-01

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  9. H1 Grid production tool for large scale Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lobodzinski, B; Wissing, Ch [DESY, Hamburg (Germany); Bystritskaya, E; Vorobiew, M [ITEP, Moscow (Russian Federation); Karbach, T M [University of Dortmund (Germany); Mitsyn, S [JINR, Moscow (Russian Federation); Mudrinic, M, E-mail: bogdan.lobodzinski@desy.d [VINS, Belgrad (Serbia)

    2010-04-01

    The H1 Collaboration at HERA has entered the period of high precision analyses based on the final data sample. These analyses require a massive production of simulated Monte Carlo (MC) events. The H1 MC framework (H1MC), created by the H1 Collaboration, is software for mass MC production on the LCG Grid infrastructure and on a local batch system. The aim of the tool is full automatisation of the MC production workflow, from management of the MC jobs on the Grid down to copying the resulting files from the Grid to the H1 mass storage tape device. The H1 MC framework has a modular structure, delegating a specific task to each module, including tasks specific to the H1 experiment: automatic building of steer and input files, simulation of the H1 detector, reconstruction of particle tracks and post-processing calculations. Each module provides data or functionality needed by other modules via a local database. The Grid jobs created for detector simulation and reconstruction from generated MC input files are fully independent and fault-tolerant on 32- and 64-bit LCG Grid architectures, and while running on the Grid they can be continuously monitored using the Relational Grid Monitoring Architecture (R-GMA) service. To monitor the full production chain and detect potential problems, regular checks of the job state are performed using the local database and the Service Availability Monitoring (SAM) framework. The improved stability of the system has resulted in a dramatic increase in the production rate, which exceeded two billion MC events in 2008.

  10. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    Science.gov (United States)

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses are composed of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating the complex nonlinear dynamics represented in the original mechanistic model, and they provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
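
    To make the Volterra functional power series concrete, here is a discrete sketch truncated at second order (the paper goes to third order); the kernels and the spike input are arbitrary illustrative choices, not the identified synapse kernels.

```python
# Discrete second-order Volterra series: the output is a constant term plus a
# linear convolution plus a bilinear term over pairs of past inputs. Kernel
# shapes below are invented for illustration.
import numpy as np

M = 16                                   # kernel memory (time steps)
k0 = 0.0
k1 = np.exp(-np.arange(M) / 4.0)         # first-order kernel h1(tau)
k2 = 0.05 * np.outer(k1, k1)             # second-order kernel h2(tau1, tau2)

def volterra(u):
    """y[n] = k0 + sum_t1 h1 u[n-t1] + sum_t1,t2 h2 u[n-t1] u[n-t2]."""
    y = np.zeros_like(u)
    for n in range(len(u)):
        past = u[max(0, n - M + 1):n + 1][::-1]       # u[n], u[n-1], ...
        past = np.pad(past, (0, M - len(past)))
        y[n] = k0 + k1 @ past + past @ k2 @ past
    return y

spikes = np.zeros(100)
spikes[[10, 12, 14, 60]] = 1.0           # paired pulses reveal nonlinearity
y = volterra(spikes)
print("response to the pulse train exceeds a doubled single response:",
      y[14] > 2 * y[60])
```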

  11. Simple concentration-dependent pair interaction model for large-scale simulations of Fe-Cr alloys

    International Nuclear Information System (INIS)

    Levesque, Maximilien; Martinez, Enrique; Fu, Chu-Chun; Nastar, Maylise; Soisson, Frederic

    2011-01-01

    This work is motivated by the need for large-scale simulations to extract physical information on the iron-chromium system that is a binary model alloy for ferritic steels used or proposed in many nuclear applications. From first-principles calculations and the experimental critical temperature we build a new energetic rigid lattice model based on pair interactions with concentration and temperature dependence. Density functional theory calculations in both norm-conserving and projector augmented-wave approaches have been performed. A thorough comparison of these two different ab initio techniques leads to a robust parametrization of the Fe-Cr Hamiltonian. Mean-field approximations and Monte Carlo calculations are then used to account for temperature effects. The predictions of the model are in agreement with the most recent phase diagram at all temperatures and compositions. The solubility of Cr in Fe below 700 K remains in the range of about 6 to 12%. It reproduces the transition between the ordering and demixing tendency and the spinodal decomposition limits are also in agreement with the values given in the literature.
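
    The kind of rigid-lattice Monte Carlo sampling such pair-interaction Hamiltonians are used with can be sketched as below; the lattice size, temperature and concentration-dependent interaction value are invented and are not the paper's Fe-Cr parametrization.

```python
# Metropolis Monte Carlo sketch for a rigid-lattice binary alloy with pair
# interactions. All numerical values are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
L, kT = 32, 0.08                       # lattice size, temperature scale (eV)
x_cr = 0.10                            # nominal Cr concentration
spins = (rng.random((L, L)) < x_cr).astype(int)   # 1 = Cr, 0 = Fe

c = spins.mean()                       # conserved under exchange moves
V = -0.02 + 0.08 * c                   # made-up concentration-dependent pair energy (eV)

def local_energy(s, i, j):
    """Energy of site (i, j): number of unlike nearest neighbours times V."""
    nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
    unlike = nn if s[i, j] == 0 else 4 - nn
    return V * unlike

for _ in range(100000):                # Kawasaki-style exchange moves
    i1, j1, i2, j2 = rng.integers(0, L, 4)
    if spins[i1, j1] == spins[i2, j2]:
        continue
    e_old = local_energy(spins, i1, j1) + local_energy(spins, i2, j2)
    spins[i1, j1], spins[i2, j2] = spins[i2, j2], spins[i1, j1]
    e_new = local_energy(spins, i1, j1) + local_energy(spins, i2, j2)
    # Metropolis acceptance (adjacent-pair double counting ignored here).
    if e_new > e_old and rng.random() > np.exp(-(e_new - e_old) / kT):
        spins[i1, j1], spins[i2, j2] = spins[i2, j2], spins[i1, j1]

# > 1 signals clustering (demixing), < 1 signals ordering.
print("Cr-Cr neighbour enhancement:",
      (spins & np.roll(spins, 1, axis=0)).mean() / c**2)
```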

  12. Effect of grain boundary phase on the magnetization reversal process of nanocrystalline magnet using large-scale micromagnetic simulation

    Directory of Open Access Journals (Sweden)

    Hiroshi Tsukahara

    2018-05-01

    Full Text Available We investigated the effects of grain boundary phases on magnetization reversal in permanent magnets by performing large-scale micromagnetic simulations based on Landau–Lifshitz–Gilbert equation under a periodic boundary. We considered planar grain boundary phases parallel and perpendicular to an easy axis of the permanent magnet and assumed the saturation magnetization and exchange stiffness constant of the grain boundary phase to be 10% and 1%, respectively, for Nd2Fe14B grains. The grain boundary phase parallel to the easy axis effectively inhibits propagation of magnetization reversal. In contrast, the domain wall moves across the grain boundary perpendicular to the easy axis. These properties of the domain wall motion are explained by dipole interaction, which stabilizes the antiparallel magnetic configuration in the direction perpendicular to the magnetization orientation. On the other hand, the magnetization is aligned in the same direction by the dipole interaction parallel to the magnetization orientation. This anisotropy of the effect of the grain boundary phase shows that improvement of the grain boundary phase perpendicular to the easy axis effectively enhances the coercivity of permanent magnets.
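
    The Landau-Lifshitz-Gilbert dynamics that such micromagnetic codes integrate per discretization cell can be illustrated with a single macrospin; the parameter values below are generic, not those of the Nd2Fe14B model in the paper.

```python
# Single-macrospin sketch of the Landau-Lifshitz-Gilbert (LLG) equation,
# dm/dt = -gamma/(1+alpha^2) [m x H + alpha m x (m x H)].
# Parameter values are generic illustrations.
import numpy as np

gamma = 1.76e11                     # gyromagnetic ratio (rad s^-1 T^-1)
alpha = 0.1                         # Gilbert damping constant
H = np.array([0.0, 0.0, -1.0])      # effective field (T), reversed along -z
m = np.array([0.0, 0.1, 1.0])       # magnetization direction, slightly tilted
m /= np.linalg.norm(m)

dt = 1e-13                          # time step (s)
pref = -gamma / (1.0 + alpha**2)
for _ in range(20000):              # integrate 2 ns of dynamics
    mxH = np.cross(m, H)
    dm = pref * (mxH + alpha * np.cross(m, mxH))
    m = m + dt * dm
    m /= np.linalg.norm(m)          # keep |m| = 1 after the explicit step

print("m_z after relaxation:", m[2])   # relaxes towards the field along -z
```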

  13. Efficient simulations of large-scale structure in modified gravity cosmologies with comoving Lagrangian acceleration

    Science.gov (United States)

    Valogiannis, Georgios; Bean, Rachel

    2017-05-01

    We implement an adaptation of the cola approach, a hybrid scheme that combines Lagrangian perturbation theory with an N-body approach, to model nonlinear collapse in chameleon and symmetron modified gravity models. Gravitational screening is modeled effectively through the attachment of a suppression factor to the linearized Klein-Gordon equations. The adapted cola approach is benchmarked with respect to an N-body code, both for the Λ cold dark matter (ΛCDM) scenario and for the modified gravity theories. It is found to perform well in the estimation of the dark matter power spectra, with consistency of 1% to k ~ 2.5 h/Mpc. Redshift space distortions are shown to be effectively modeled through a Lorentzian parametrization with a velocity dispersion fit to the data. We find that cola performs less well in predicting the halo mass functions but has consistency, within the 1σ uncertainties of our simulations, in the relative changes to the mass function induced by the modified gravity models relative to ΛCDM. The results demonstrate that cola, proposed to enable accurate and efficient nonlinear predictions for ΛCDM, can be effectively applied to a wider set of cosmological scenarios, with intriguing properties, for which clustering behavior needs to be understood for upcoming surveys such as LSST, DESI, Euclid, and WFIRST.

  14. Investigation of the Contamination Control in a Cleaning Room with a Moving AGV by 3D Large-Scale Simulation

    Directory of Open Access Journals (Sweden)

    Qing-He Yao

    2013-01-01

    Full Text Available The motions of the airflow induced by the movement of an automatic guided vehicle (AGV) in a cleanroom are numerically studied by large-scale simulation. For this purpose, a numerical experiment scheme based on the domain decomposition method is designed. Compared with related past research, the high Reynolds number regime is treated here by large-scale computation. A domain decomposition Lagrange-Galerkin method is employed to approximate the Navier-Stokes equations and the convection-diffusion equation; the stiffness matrix is symmetric, and an incomplete balancing preconditioned conjugate gradient (PCG) method is employed to solve the linear algebraic system iteratively. The end-wall effects are readily observed, and the necessity of the extension to 3 dimensions is confirmed. The effect of the high efficiency particulate air (HEPA) filter on contamination control is studied, and the proper setting of the clean air flow speed is also investigated. More details of the recirculation zones are revealed by the 3D large-scale simulation.
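
    A generic preconditioned conjugate gradient loop of the kind referred to above is sketched below; a simple Jacobi preconditioner stands in for the incomplete balancing preconditioner, and the test matrix is a 1-D Laplacian, not the paper's stiffness matrix.

```python
# Preconditioned conjugate gradient (PCG) for a symmetric positive-definite
# system. The Jacobi (diagonal) preconditioner and the 1-D Laplacian test
# problem are illustrative stand-ins.
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian (SPD)
b = np.ones(n)
M_inv = np.diag(1.0 / np.diag(A))                      # Jacobi preconditioner
x, iters = pcg(A, b, M_inv)
print("converged in", iters, "iterations; residual",
      np.linalg.norm(b - A @ x))
```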

  15. Contributions to large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.

    2003-01-01

    Luke warm start: once all processes are started and the controllers are in the Initial state, go to the Running state; luke warm stop: reverse of the luke warm start phase; warm start: once all processes are alive and all controllers are in the Configured state, go to the Running state; warm stop: reverse of the warm start phase. It was shown that the online system is capable of running on 111 PCs controlling a 3- or 4-level hierarchy of up to 111 run controllers. Furthermore, parallel partitions with a 2-level hierarchy of 11 run controllers were run successfully, demonstrating the principle of partition independence. The set of incremental configurations was run sequentially to study the system behaviour with increasing numbers of controllers and PCs. Aspects of inter-operability and correct system behaviour at large scale were verified with the partition containing 111 controllers, which represents more than a factor of 10 in size compared to its current use in the test beam. In order to start studies of the online system for the next order of magnitude, the 4-level super-partitions with 300 and 1000 crate controllers were exercised. Limits were found on the level of communication and state transition coordination which will be investigated further. (authors)

  16. Large-Scale Reactive Atomistic Simulation of Shock-induced Initiation Processes in Energetic Materials

    Science.gov (United States)

    Thompson, Aidan

    2013-06-01

    Initiation in energetic materials is fundamentally dependent on the interaction between a host of complex chemical and mechanical processes, occurring on scales ranging from intramolecular vibrations through molecular crystal plasticity up to hydrodynamic phenomena at the mesoscale. A variety of methods (e.g. quantum electronic structure methods (QM), non-reactive classical molecular dynamics (MD), mesoscopic continuum mechanics) exist to study processes occurring on each of these scales in isolation, but cannot describe how these processes interact with each other. In contrast, the ReaxFF reactive force field, implemented in the LAMMPS parallel MD code, allows us to routinely perform multimillion-atom reactive MD simulations of shock-induced initiation in a variety of energetic materials. This is done either by explicitly driving a shock-wave through the structure (NEMD) or by imposing thermodynamic constraints on the collective dynamics of the simulation cell e.g. using the Multiscale Shock Technique (MSST). These MD simulations allow us to directly observe how energy is transferred from the shockwave into other processes, including intramolecular vibrational modes, plastic deformation of the crystal, and hydrodynamic jetting at interfaces. These processes in turn cause thermal excitation of chemical bonds leading to initial chemical reactions, and ultimately to exothermic formation of product species. Results will be presented on the application of this approach to several important energetic materials, including pentaerythritol tetranitrate (PETN) and ammonium nitrate/fuel oil (ANFO). In both cases, we validate the ReaxFF parameterizations against QM and experimental data. For PETN, we observe initiation occurring via different chemical pathways, depending on the shock direction. For PETN containing spherical voids, we observe enhanced sensitivity due to jetting, void collapse, and hotspot formation, with sensitivity increasing with void size. For ANFO, we

  17. Large-scale numerical simulations of star formation put to the test

    DEFF Research Database (Denmark)

    Frimann, Søren; Jørgensen, Jes Kristian; Haugbølle, Troels

    2016-01-01

    (SEDs), calculated from large-scale numerical simulations, to observational studies, thereby aiding in both the interpretation of the observations and in testing the fidelity of the simulations. Methods: The adaptive mesh refinement code, RAMSES, is used to simulate the evolution of a 5 pc × 5 pc × 5 pc... to calculate evolutionary tracers Tbol and Lsmm/Lbol. It is shown that, while the observed distributions of the tracers are well matched by the simulation, they generally do a poor job of tracking the protostellar ages. Disks form early in the simulation, with 40% of the Class 0 protostars being encircled by one...

  18. Simulation of large scale air detritiation operations by computer modeling and bench-scale experimentation

    International Nuclear Information System (INIS)

    Clemmer, R.G.; Land, R.H.; Maroni, V.A.; Mintz, J.M.

    1978-01-01

    Although some experience has been gained in the design and construction of 0.5 to 5 m^3/s air-detritiation systems, little information is available on the performance of these systems under realistic conditions. Recently completed studies at ANL have attempted to provide some perspective on this subject. A time-dependent computer model was developed to study the effects of various reaction and soaking mechanisms that could occur in a typically-sized fusion reactor building (approximately 10^5 m^3) following a range of tritium releases (2 to 200 g). In parallel with the computer study, a small (approximately 50-liter) test chamber was set up to investigate cleanup characteristics under conditions which could also be simulated with the computer code. Whereas results of computer analyses indicated that only approximately 10^-3 percent of the tritium released to an ambient enclosure should be converted to tritiated water, the bench-scale experiments gave evidence of conversions to water greater than 1%. Furthermore, although the amounts (both calculated and observed) of soaked-in tritium are usually only a very small fraction of the total tritium release, the soaked tritium is significant, in that its continuous return to the enclosure extends the cleanup time beyond the predicted value in the absence of any soaking mechanisms
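
    The competition between enclosure processing and wall soaking described above can be caricatured with a two-compartment model; all rate constants below are invented for illustration, and the real ANL model also included conversion chemistry.

```python
# Toy cleanup model: a well-mixed enclosure of volume V processed at flow
# rate Q, with a small soaked-in inventory S that slowly desorbs back into
# the air. The rate constants are assumptions, not the ANL model's values.
V = 1.0e5          # enclosure volume (m^3)
Q = 5.0            # detritiation flow (m^3/s)
k_soak = 1e-6      # wall uptake rate (1/s), assumed
k_des = 1e-7       # desorption rate (1/s), assumed

dt, t_end = 10.0, 5 * 24 * 3600.0
C, S = 1.0, 0.0                      # airborne conc. (normalized), soaked amount
for _ in range(int(t_end / dt)):
    dC = (-(Q / V) * C - k_soak * C + k_des * S / V) * dt
    dS = (k_soak * C * V - k_des * S) * dt
    C += dC
    S += dS
print(f"airborne fraction after 5 days: {C:.2e}")
# Without soaking, C would fall as exp(-Q t / V); the slow desorption term
# is what stretches the cleanup tail beyond that prediction.
```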

  19. Challenges in analysing and visualizing large-scale molecular dynamics simulations: domain and defect formation in lung surfactant monolayers

    International Nuclear Information System (INIS)

    Mendez-Villuendas, E; Baoukina, S; Tieleman, D P

    2012-01-01

    Molecular dynamics simulations have rapidly grown in size and complexity, as computers have become more powerful and molecular dynamics software more efficient. Using coarse-grained models like MARTINI, system sizes of the order of 50 nm × 50 nm × 50 nm can be simulated on commodity clusters on microsecond time scales. For simulations of biological membranes and monolayers mimicking lung surfactant, this enables large-scale transformations and complex mixtures of lipids and proteins. Here we use a simulation of a monolayer with three phospholipid components, cholesterol, lung surfactant proteins, water, and ions on a ten-microsecond time scale to illustrate some current challenges in analysis. In the simulation, phase separation occurs, followed by formation of a bilayer fold in which lipids and lung surfactant protein form a highly curved structure in the aqueous phase. We use Voronoi analysis to obtain detailed physical properties of the different components and phases, and calculate local mean and Gaussian curvatures of the bilayer fold.

  20. Age-related differences in the relations between individualised HRM and organisational performance: a large-scale employer survey

    NARCIS (Netherlands)

    Bal, P.M.; Dorenbosch, L.

    2015-01-01

    The current study aimed to investigate the relationship between individualised HRM practices and several measures of organisational performance, including the moderating role of employee age in these relationships. A large-scale representative study among 4,591 organisations in the Netherlands

  1. Large-scale simulations with distributed computing: Asymptotic scaling of ballistic deposition

    International Nuclear Information System (INIS)

    Farnudi, Bahman; Vvedensky, Dimitri D

    2011-01-01

    Extensive kinetic Monte Carlo simulations are reported for ballistic deposition (BD) in (1 + 1) dimensions. The large system sizes L observed for the onset of asymptotic scaling (L ≅ 2^12) explain the widespread discrepancies in previous reports for exponents of BD in one and likely in higher dimensions. The exponents obtained directly from our simulations, α = 0.499 ± 0.004 and β = 0.336 ± 0.004, capture the exact values α = 1/2 and β = 1/3 for the one-dimensional Kardar-Parisi-Zhang equation. An analysis of our simulations suggests a criterion for identifying the onset of true asymptotic scaling, which enables a more informed evaluation of exponents for BD in higher dimensions. These simulations were made possible by the Simulation through Social Networking project at the Institute for Advanced Studies in Basic Sciences in 2007, which was re-launched in November 2010.
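
    For reference, the (1+1)-dimensional ballistic deposition rule is simple to state and simulate: a particle dropped on column i sticks at height max(h[i-1], h[i]+1, h[i+1]). The sketch below measures an effective growth exponent at a deliberately small L, far below the L ≅ 2^12 the paper finds necessary for asymptotic scaling, so the fitted value is only approximate.

```python
# Minimal (1+1)-dimensional ballistic deposition with periodic boundaries;
# the interface width grows as W ~ t^beta in the growth regime. L here is
# deliberately small, so the measured exponent is only an effective value.
import numpy as np

rng = np.random.default_rng(3)
L = 512
h = np.zeros(L, dtype=int)

widths = []
for t in range(200 * L):                    # deposit 200 monolayers
    i = rng.integers(L)
    h[i] = max(h[(i - 1) % L], h[i] + 1, h[(i + 1) % L])
    if (t + 1) % L == 0:                    # record once per monolayer
        widths.append(h.std())

t = np.arange(1, len(widths) + 1)
beta = np.polyfit(np.log(t[4:50]), np.log(np.array(widths[4:50])), 1)[0]
print(f"effective beta ~ {beta:.2f} (KPZ value 1/3)")
```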

  2. Transforming GIS data into functional road models for large-scale traffic simulation.

    Science.gov (United States)

    Wilkie, David; Sewall, Jason; Lin, Ming C

    2012-06-01

    There exists a vast amount of geographic information system (GIS) data that model road networks around the world as polylines with attributes. In this form, the data are insufficient for applications such as simulation and 3D visualization, tools which will grow in power and demand as sensor data become more pervasive and as governments try to optimize their existing physical infrastructure. In this paper, we propose an efficient method for enhancing a road map from a GIS database to create a geometrically and topologically consistent 3D model to be used in real-time traffic simulation, interactive visualization of virtual worlds, and autonomous vehicle navigation. The resulting representation provides important road features for traffic simulations, including ramps, highways, overpasses, legal merge zones, and intersections with arbitrary states, and it is independent of the simulation methodologies. We test the 3D models of road networks generated by our algorithm on real-time traffic simulation using both macroscopic and microscopic techniques.
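
    One ingredient of such a pipeline, snapping polyline endpoints into shared intersection nodes to obtain a topologically consistent graph, can be sketched as follows; the geometry, tolerance and data layout are illustrative, not the paper's algorithm.

```python
# Sketch of endpoint snapping: GIS polylines become graph edges whose
# endpoints within a tolerance share an intersection node. Coordinates and
# tolerance are made up for illustration.
from collections import defaultdict

polylines = [
    [(0.0, 0.0), (1.0, 0.0), (2.0, 0.04)],   # road A
    [(2.0, 0.0), (2.0, 1.0)],                # road B, start within TOL of A's end
    [(0.0, 0.0), (0.0, 1.0)],                # road C, shares A's start
]

TOL = 0.1   # snap tolerance in map units

def node_key(p):
    """Quantize coordinates so endpoints within TOL share a node id."""
    return (round(p[0] / TOL), round(p[1] / TOL))

nodes = {}                       # node key -> node id
edges = []                       # (node id, node id, geometry)
adjacency = defaultdict(list)

for line in polylines:
    ids = []
    for endpoint in (line[0], line[-1]):
        key = node_key(endpoint)
        ids.append(nodes.setdefault(key, len(nodes)))
    edges.append((ids[0], ids[1], line))
    adjacency[ids[0]].append(ids[1])
    adjacency[ids[1]].append(ids[0])

print("nodes:", len(nodes))      # 4 distinct endpoints after snapping
print("adjacency:", dict(adjacency))
```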

  3. Performance of large-scale helium refrigerators subjected to pulsed heat load from fusion devices

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, R.; Ghosh, P.; Chowdhury, K. [Cryogenic Engineering Centre, Indian Institute of Technology, Kharagpur (India)

    2012-07-01

    The immediate effect of pulsed heat load from fusion devices in helium refrigerators is wide variation in mass flow rate of low pressure stream returning to the cold-box. In this paper, a four expander based modified Claude cycle has been analyzed in quasi steady and dynamic simulations using Aspen HYSYS to identify critical equipment that may be affected due to such flow rate fluctuations at the return stream and their transient performance. Additional constraints on process parameters over steady state design have been identified. Suitable techniques for mitigation of fluctuation of return stream have also been explored. (author)

  4. Performance of large-scale helium refrigerators subjected to pulsed heat load from fusion devices

    International Nuclear Information System (INIS)

    Dutta, R.; Ghosh, P.; Chowdhury, K.

    2012-01-01

    The immediate effect of pulsed heat load from fusion devices in helium refrigerators is wide variation in mass flow rate of low pressure stream returning to the cold-box. In this paper, a four expander based modified Claude cycle has been analyzed in quasi steady and dynamic simulations using Aspen HYSYS to identify critical equipment that may be affected due to such flow rate fluctuations at the return stream and their transient performance. Additional constraints on process parameters over steady state design have been identified. Suitable techniques for mitigation of fluctuation of return stream have also been explored. (author)

  5. Large Scale Earth's Bow Shock with Northern IMF as Simulated by ...

    Indian Academy of Sciences (India)

    results with the available MHD simulations under the same scaled solar wind (SW) and interplanetary magnetic field (IMF) conditions ... their effects in dissipating flow energy, in heating matter, in accelerating particles to high, presumably ... such as hybrid models (Omidi et al. 2013 ...

  6. Performance Prediction for Large-Scale Nuclear Waste Repositories: Final Report

    International Nuclear Information System (INIS)

    Glassley, W E; Nitao, J J; Grant, W; Boulos, T N; Gokoffski, M O; Johnson, J W; Kercher, J R; Levatin, J A; Steefel, C I

    2001-01-01

    The goal of this project was development of a software package capable of utilizing terascale computational platforms for solving subsurface flow and transport problems important for disposal of high level nuclear waste materials, as well as for DOE-complex clean-up and stewardship efforts. We sought to develop a tool that would diminish reliance on abstracted models, and realistically represent the coupling between subsurface fluid flow, thermal effects and chemical reactions that both modify the physical framework of the rock materials and which change the rock mineralogy and chemistry of the migrating fluid. Providing such a capability would enhance realism in models and increase confidence in long-term predictions of performance. Achieving this goal also allows more cost-effective design and execution of monitoring programs needed to evaluate model results. This goal was successfully accomplished through the development of a new simulation tool (NUFT-C). This capability allows high resolution modeling of complex coupled thermal-hydrological-geochemical processes in the saturated and unsaturated zones of the Earth's crust. The code allows consideration of virtually an unlimited number of chemical species and minerals in a multi-phase, non-isothermal environment. Because the code is constructed to utilize the computational power of the tera-scale IBM ASCI computers, simulations that encompass large rock volumes and complex chemical systems can now be done without sacrificing spatial or temporal resolution. The code is capable of doing one-, two-, and three-dimensional simulations, allowing unprecedented evaluation of the evolution of rock properties and mineralogical and chemical change as a function of time. The code has been validated by comparing results of simulations to laboratory-scale experiments, other benchmark codes, field scale experiments, and observations in natural systems. The results of these exercises demonstrate that the physics and chemistry

  7. Generation of large scale urban environments to support advanced sensor and seeker simulation

    Science.gov (United States)

    Giuliani, Joseph; Hershey, Daniel; McKeown, David, Jr.; Willis, Carla; Van, Tan

    2009-05-01

    One of the key aspects of the design of a next-generation weapon system is the need to operate in cluttered and complex urban environments. Simulation systems rely on accurate representations of these environments and require automated software tools to construct the underlying 3D geometry and associated spectral and material properties, which are then formatted for various objective seeker simulation systems. Under an Air Force Small Business Innovation Research (SBIR) contract, we have developed an automated process to generate 3D urban environments with user-defined properties. These environments can be composed from a wide variety of source materials, including vector source data, pre-existing 3D models, and digital elevation models, and rapidly organized into a geo-specific visual simulation database. This intermediate representation can be easily inspected in the visible spectrum for content and organization and interactively queried for accuracy. Once the database contains the required contents, it can be exported into specific synthetic scene generation runtime formats, preserving the relationship between geometry and material properties. To date, an exporter for the Irma simulation system, developed and maintained by AFRL/Eglin, has been created, and a second exporter, to the Real-Time Composite Hardbody and Missile Plume (CHAMP) simulation system for real-time use, is currently being developed. This process supports significantly more complex target environments than previous approaches to database generation. In this paper we describe the capabilities for content creation for advanced seeker processing algorithm simulation and sensor stimulation, including the overall database compilation process and sample databases produced and exported for the Irma runtime system. We also discuss the addition of object dynamics and viewer dynamics within the visual simulation in the Irma runtime environment.

  8. Icing Simulation Research Supporting the Ice-Accretion Testing of Large-Scale Swept-Wing Models

    Science.gov (United States)

    Yadlin, Yoram; Monnig, Jaime T.; Malone, Adam M.; Paul, Bernard P.

    2018-01-01

    The work summarized in this report is a continuation of NASA's Large-Scale, Swept-Wing Test Articles Fabrication; Research and Test Support for NASA IRT contract (NNC10BA05 -NNC14TA36T) performed by Boeing under the NASA Research and Technology for Aerospace Propulsion Systems (RTAPS) contract. In the study conducted under RTAPS, a series of icing tests in the Icing Research Tunnel (IRT) have been conducted to characterize ice formations on large-scale swept wings representative of modern commercial transport airplanes. The outcome of that campaign was a large database of ice-accretion geometries that can be used for subsequent aerodynamic evaluation in other experimental facilities and for validation of ice-accretion prediction codes.

  9. Performance analysis on a large scale borehole ground source heat pump in Tianjin cultural centre

    Science.gov (United States)

    Yin, Baoquan; Wu, Xiaoting

    2018-02-01

    In this paper, the temperature distribution of the geothermal field for a vertical borehole ground-coupled heat pump was tested and analysed. Besides the borehole ground-coupled heat pump, the system is composed of ice storage, a heat supply network and a cooling tower. According to the operation data for nearly three years, the constant-temperature zone lies at ground depths of 40 m - 120 m, with a temperature gradient of about 3.0°C/100 m. The temperature of the soil dropped significantly in the heating season, increased significantly in the cooling season, and recovered in the transitional seasons. With the energy-balanced design of heating and cooling and the thermal inertia of the soil, the soil temperature stayed in a relatively stable range and the ground source heat pump system operated with a relatively high efficiency. The geothermal source heat pump was shown to be applicable for large-scale utilization.

  10. Simulation of hydrogen release and combustion in large scale geometries: models and methods

    International Nuclear Information System (INIS)

    Beccantini, A.; Dabbene, F.; Kudriakov, S.; Magnaud, J.P.; Paillere, H.; Studer, E.

    2003-01-01

    The simulation of H2 distribution and combustion in confined geometries such as nuclear reactor containments is a challenging task from the point of view of numerical simulation, as it involves quite disparate length and time scales, which need to be resolved appropriately and efficiently. CEA is involved in the development and validation of codes to model such problems, for external clients such as IRSN (TONUS code) and Technicatome (NAUTILUS code), or for its own safety studies. This paper provides an overview of the physical and numerical models developed for such applications, as well as some insight into the current research topics which are being pursued. Examples of H2 mixing and combustion simulations are given. (authors)

  11. Performance of the improved version of the Monte Carlo code A³MCNP for large-scale shielding problems

    International Nuclear Information System (INIS)

    Omura, M.; Miyake, Y.; Hasegawa, T.; Ueki, K.; Sato, O.; Haghighat, A.; Sjoden, G. E.

    2005-01-01

    A³MCNP (Automatic Adjoint Accelerated MCNP) is a revised version of the MCNP Monte Carlo code, which automatically prepares variance reduction parameters for the CADIS (Consistent Adjoint Driven Importance Sampling) methodology. Using a deterministic 'importance' (or adjoint) function, CADIS performs source and transport biasing within the weight-window technique. The current version of A³MCNP uses the three-dimensional (3-D) Sn transport TORT code to determine a 3-D importance function distribution. Based on simulation of several real-life problems, it is demonstrated that A³MCNP provides precise calculation results with a remarkably short computation time by using the proper and objective variance reduction parameters. However, since the first version of A³MCNP provided only a point source configuration option for large-scale shielding problems, such as spent-fuel transport casks, a large amount of memory may be necessary to store enough points to properly represent the source. Hence, we have developed an improved version of A³MCNP (referred to as A³MCNPV) which has a volumetric source configuration option. This paper describes the successful use of A³MCNPV for a concrete cask neutron and gamma-ray shielding problem, and a PWR dosimetry problem. (authors)
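
    For reference, the CADIS methodology named above couples source biasing and weight windows through the deterministic adjoint function. In the standard formulation (our rendering of the widely published scheme, with q the true source, φ† the adjoint function and R the response of interest):

```latex
R = \iint \phi^{\dagger}(\vec{r},E)\, q(\vec{r},E)\, \mathrm{d}E\, \mathrm{d}\vec{r},
\qquad
\hat{q}(\vec{r},E) = \frac{\phi^{\dagger}(\vec{r},E)\, q(\vec{r},E)}{R},
\qquad
w(\vec{r},E) = \frac{R}{\phi^{\dagger}(\vec{r},E)} .
```

    Particles born from the biased source carry starting weights that coincide with the weight-window centers w, which makes the source and transport biasing mutually consistent; it also suggests why an inadequate (e.g. point-wise) representation of a distributed source degrades the method, the problem that A³MCNPV addresses.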

  12. Modeling ramp compression experiments using large-scale molecular dynamics simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Mattsson, Thomas Kjell Rene; Desjarlais, Michael Paul; Grest, Gary Stephen; Templeton, Jeremy Alan; Thompson, Aidan Patrick; Jones, Reese E.; Zimmerman, Jonathan A.; Baskes, Michael I. (University of California, San Diego); Winey, J. Michael (Washington State University); Gupta, Yogendra Mohan (Washington State University); Lane, J. Matthew D.; Ditmire, Todd (University of Texas at Austin); Quevedo, Hernan J. (University of Texas at Austin)

    2011-10-01

    Molecular dynamics simulation (MD) is an invaluable tool for studying problems sensitive to atom-scale physics such as structural transitions, discontinuous interfaces, non-equilibrium dynamics, and elastic-plastic deformation. In order to apply this method to modeling of ramp-compression experiments, several challenges must be overcome: accuracy of interatomic potentials, length- and time-scales, and extraction of continuum quantities. We have completed a 3-year LDRD project with the goal of developing molecular dynamics simulation capabilities for modeling the response of materials to ramp compression. The techniques we have developed fall into three categories: (i) molecular dynamics methods, (ii) interatomic potentials, and (iii) calculation of continuum variables. Highlights include the development of an accurate interatomic potential describing shock melting of beryllium, a scaling technique for modeling slow ramp-compression experiments using fast-ramp MD simulations, and a technique for extracting plastic strain from MD simulations. All of these methods have been implemented in Sandia's LAMMPS MD code, ensuring their widespread availability to dynamic materials research at Sandia and elsewhere.

  13. The UP modelling system for large scale hydrology: simulation of the Arkansas-Red River basin

    Directory of Open Access Journals (Sweden)

    C. G. Kilsby

    1999-01-01

    The UP (Upscaled Physically-based) hydrological modelling system, applied here to the Arkansas-Red River basin (USA), is designed for macro-scale simulations of land surface processes; it aims for a physical basis and avoids the use of discharge records in the direct calibration of parameters. This is achieved in a two-stage process: in the first stage, parametrizations are derived from detailed modelling of selected representative small areas, and then used in a second stage in which a simple distributed model simulates the dynamic behaviour of the whole basin. The first stage of the process is described in a companion paper (Ewen et al., this issue), and the second stage is described here. The model operated at an hourly time-step on 17-km grid squares for a two-year simulation period, and represents all the important hydrological processes, including regional aquifer recharge, groundwater discharge, infiltration- and saturation-excess runoff, evapotranspiration, snowmelt, and overland and channel flow. Outputs from the model are discussed, and include river discharge at gauging stations and space-time fields of evaporation and soil moisture. Whilst the model efficiency assessed by comparison of simulated and observed discharge records is not as good as could be achieved with a model calibrated against discharge, there are considerable advantages in retaining a physical basis in applications to ungauged river basins and assessments of the impacts of land use or climate change.

  14. A hybrid Genetic and Simulated Annealing Algorithm for Chordal Ring implementation in large-scale networks

    DEFF Research Database (Denmark)

    Riaz, M. Tahir; Gutierrez Lopez, Jose Manuel; Pedersen, Jens Myrup

    2011-01-01

    The paper presents a hybrid Genetic and Simulated Annealing algorithm for implementing the Chordal Ring structure in optical backbone networks. In recent years, topologies based on regular graph structures have gained a lot of interest due to their good communication properties for the physical topology of the...
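
    A hybrid of the two metaheuristics can take many forms; one common pattern (a sketch under our own assumptions, not necessarily the authors' algorithm) is a genetic algorithm whose offspring are refined, and accepted or rejected, by a simulated-annealing step:

```python
import math
import random

def hybrid_ga_sa(cost, n, pop_size=20, generations=200, t0=1.0, alpha=0.98):
    """Minimize `cost` over permutations of range(n), e.g. node placements
    on a ring. GA supplies selection/crossover; an SA-style Metropolis rule
    drives local mutation. Illustrative only."""
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    temp = t0
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)          # order-crossover-like splice
            child = a[:cut] + [g for g in b if g not in a[:cut]]
            i, j = random.sample(range(n), 2)     # SA step: trial swap
            trial = child[:]
            trial[i], trial[j] = trial[j], trial[i]
            d = cost(trial) - cost(child)
            if d < 0 or random.random() < math.exp(-d / temp):
                child = trial                     # Metropolis acceptance
            children.append(child)
        pop = parents + children
        temp *= alpha                             # geometric cooling
    return min(pop, key=cost)

# Hypothetical cost: total circular distance between successive labels.
dist = lambda p: sum(min(abs(p[i] - p[(i + 1) % len(p)]),
                         len(p) - abs(p[i] - p[(i + 1) % len(p)]))
                     for i in range(len(p)))
print(hybrid_ga_sa(dist, 12))
```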

  15. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

    The Columbia Parallel Supercomputer project aims at the construction of a parallel-processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages: a 16-node, 1/4 GF machine completed in April 1985, a 64-node, 1 GF machine completed in August 1987, and a 256-node, 16 GF machine now under construction. The machines all share a common architecture: a two-dimensional torus formed from a rectangular array of N1 x N2 independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)
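
    On such an N1 x N2 torus, each processor's four nearest neighbors follow from simple modular arithmetic. A small illustrative sketch, assuming row-major rank numbering (the names and the mapping are ours, not the machine's actual software):

```python
def torus_neighbors(rank, n1, n2):
    """Return the (north, south, west, east) ranks of `rank` on an
    n1 x n2 two-dimensional torus with row-major rank numbering."""
    i, j = divmod(rank, n2)               # processor coordinates
    return (((i - 1) % n1) * n2 + j,      # north (wraps around)
            ((i + 1) % n1) * n2 + j,      # south
            i * n2 + (j - 1) % n2,        # west
            i * n2 + (j + 1) % n2)        # east

# Example: the 64-node machine viewed as an 8 x 8 torus.
print(torus_neighbors(0, 8, 8))   # -> (56, 8, 7, 1)
```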

  16. Multi-Scale Fusion of Information for Uncertainty Quantification and Management in Large-Scale Simulations

    Science.gov (United States)

    2015-12-02

    of completely new nonlinear Malliavin calculus. This type of calculus is important for the analysis and simulation of stationary and/or “causal ... been limited by the fact that it requires the solution of an optimization problem with noisy gradients. When using deterministic optimization schemes ... under uncertainty. We tested new developments on nonlinear Malliavin calculus, combining reduced basis methods with ANOVA, model validation, on

  17. A large scale software system for simulation and design optimization of mechanical systems

    Science.gov (United States)

    Dopker, Bernhard; Haug, Edward J.

    1989-01-01

    The concept of an advanced integrated, networked simulation and design system is outlined. Such an advanced system can be developed utilizing existing codes without compromising the integrity and functionality of the system. An example has been used to demonstrate the applicability of the concept of the integrated system outlined here. The development of an integrated system can be done incrementally. Initial capabilities can be developed and implemented without having a detailed design of the global system. Only a conceptual global system must exist. For a fully integrated, user friendly design system, further research is needed in the areas of engineering data bases, distributed data bases, and advanced user interface design.

  18. ActivitySim: large-scale agent based activity generation for infrastructure simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gali, Emmanuel [Los Alamos National Laboratory; Eidenbenz, Stephan [Los Alamos National Laboratory; Mniszewski, Sue [Los Alamos National Laboratory; Cuellar, Leticia [Los Alamos National Laboratory; Teuscher, Christof [PORTLAND STATE UNIV

    2008-01-01

    The United States Department of Homeland Security aims to model, simulate, and analyze critical infrastructures and their interdependencies across multiple sectors such as electric power, telecommunications, water distribution and transportation. We introduce ActivitySim, an activity simulator for a population of millions of individual agents, each characterized by a set of demographic attributes based on US census data. ActivitySim generates a daily schedule for each agent, consisting of a sequence of activities, such as sleeping, shopping and working, each scheduled at a geographic location, such as a business or private residence, that is appropriate for the activity type and for the personal situation of the agent. ActivitySim has been developed as part of a larger effort to understand the interdependencies among national infrastructure networks and the demand profiles that emerge from the different activities of individuals in baseline scenarios as well as emergency scenarios, such as hurricane evacuations. We present the scalable software engineering principles underlying ActivitySim, the socio-technical modeling paradigms that drive the activity generation, and proof-of-principle results for a scenario with 2.6 M agents in the Twin Cities, MN area.
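
    The core idea, generating a plausible daily activity sequence per agent, can be sketched as a simple stochastic scheduler. Everything below (the activity set, durations and weights) is an illustrative assumption, not ActivitySim's actual model, which conditions on census demographics:

```python
import random

# Illustrative activity types: (min, max) duration in hours and a weight.
ACTIVITIES = {
    "work": ((6, 9), 3.0),
    "shop": ((1, 2), 1.0),
    "home": ((1, 4), 2.0),
}

def daily_schedule(agent_id, rng):
    """Build one agent's day: sleep, then activities until the evening."""
    hour = rng.uniform(6, 8)             # wake-up time
    day = [("sleep", 0.0, hour)]
    while hour < 22:
        names, specs = zip(*ACTIVITIES.items())
        name = rng.choices(names, weights=[w for _, w in specs])[0]
        (lo, hi), _ = ACTIVITIES[name]
        dur = min(rng.uniform(lo, hi), 22 - hour)
        day.append((name, hour, hour + dur))
        hour += dur
    day.append(("sleep", hour, 24.0))
    return agent_id, day

rng = random.Random(42)
print(daily_schedule("agent-0001", rng))
```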

  19. Simulation of large-scale soil water systems using groundwater data and satellite based soil moisture

    Science.gov (United States)

    Kreye, Phillip; Meon, Günter

    2016-04-01

    Complex concepts for the physically correct depiction of dominant processes in the hydrosphere are increasingly at the forefront of hydrological modelling. Many scientific issues in hydrological modelling demand additional system variables besides a simulation of runoff only, such as groundwater recharge or soil moisture conditions. Models that include soil water simulations are either very simplified or require a high number of parameters. Against this backdrop there is a heightened demand for observations to be used to calibrate the model. A reasonable integration of groundwater data or remote sensing data in calibration procedures, as well as the identifiability of physically plausible sets of parameters, is subject to research in the field of hydrology. Since such data are often combined with conceptual models, the given interfaces are not suitable for these demands. Furthermore, the application of automated optimisation procedures is generally associated with conceptual models, whose (fast) computing times allow many iterations of the optimisation in an acceptable time frame. One of the main aims of this study is to reduce the discrepancy between scientific and practical applications in the field of hydrological modelling. Therefore, the soil model DYVESOM (DYnamic VEgetation SOil Model) was developed as one of the primary components of the hydrological modelling system PANTA RHEI. DYVESOM's structure provides the required interfaces for calibration against runoff, satellite-based soil moisture and groundwater levels. The model considers spatially and temporally differentiated feedback of the development of the vegetation on the soil system. In addition, small-scale heterogeneities of soil properties (subgrid variability) are parameterized by variation of van Genuchten parameters depending on distribution functions. Different sets of parameters are operated simultaneously while interacting with each other. The developed soil model is innovative regarding concept
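
    The van Genuchten parameterization mentioned above gives the soil-water retention curve in closed form. The sketch below shows the standard formula and a crude way to mimic subgrid variability by perturbing its parameters; the perturbation distributions and the nominal loam values are our illustrative assumptions, not DYVESOM's actual ones:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water content theta(h) for pressure head h [m, negative when
    unsaturated], after van Genuchten (1980); m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

rng = np.random.default_rng(0)
h = -1.0                                            # 1 m suction
# Subgrid variability: log-normally perturbed alpha and n around
# nominal loam-like values (illustrative numbers only).
alphas = 3.6 * rng.lognormal(0.0, 0.2, size=5)      # [1/m]
ns = 1.56 * rng.lognormal(0.0, 0.05, size=5)
for a, n in zip(alphas, ns):
    print(f"alpha={a:5.2f} n={n:4.2f} ->"
          f" theta={van_genuchten(h, 0.078, 0.43, a, n):.3f}")
```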

  20. Modeling Large Scale Circuits Using Massively Parallel Discrete-Event Simulation

    Science.gov (United States)

    2013-06-01

    As supercomputers grow to exascale levels of performance, the smallest elements of a single processor can greatly affect the entire computer system (e.g. its power consumption) ... Warp Speed 10.0. 2.0 INTRODUCTION As supercomputer systems approach exascale, the core count will exceed 1024 and the number of transistors used in

  1. Two-dimensional simulation of the gravitational system dynamics and formation of the large-scale structure of the universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.; Kotok, E.V.; Novikov, I.D.; Polyudov, A.N.; Shandarin, S.F.; Sigov, Y.S.

    1980-01-01

    The results of a numerical experiment are given that describe the non-linear stages of the development of perturbations in the density of gravitating matter in the expanding Universe. This process simulates the formation of the large-scale structure of the Universe from an initially almost homogeneous medium. In the one- and two-dimensional cases of this numerical experiment, the evolution of a system of 4096 point masses interacting only gravitationally was studied with periodic boundary conditions (simulating infinite space). The initial conditions were chosen according to the theory of the evolution of small perturbations in the expanding Universe. The results of the numerical experiments are systematically compared with the approximate analytic theory. The calculations show that for collisionless particles, as in the gas-dynamic case, a cellular structure appears at the non-linear stage in the case of adiabatic perturbations. The greater part of the matter is in thin layers that separate vast regions of low density. In a Robertson-Walker universe the cellular structure exists for a finite time and then fragments into a few compact objects. In the open Universe the cellular structure also exists if the amplitude of the initial perturbations is large enough, but its subsequent disruption is more difficult because of the too-rapid expansion of the Universe: the large-scale structure is frozen. (author)
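
    A toy version of such an experiment fits in a few lines: point masses with periodic boundary conditions, softened pairwise gravity under the minimum-image convention, and leapfrog integration. This is a sketch of the generic method only (no cosmological expansion terms), with all constants chosen for illustration:

```python
import numpy as np

# Toy 2D self-gravitating system with periodic boundaries (minimum image).
# N, G, softening eps and time step are illustrative choices.
rng = np.random.default_rng(1)
N, L, G, eps, dt = 64, 1.0, 1e-4, 0.02, 0.05
pos = rng.random((N, 2)) * L          # near-uniform initial positions
vel = np.zeros((N, 2))                # cold start

def accel(pos):
    d = pos[None, :, :] - pos[:, None, :]     # d[i, j] = pos[j] - pos[i]
    d -= L * np.round(d / L)                  # minimum-image convention
    r2 = (d ** 2).sum(-1) + eps ** 2
    np.fill_diagonal(r2, np.inf)              # no self-force
    return G * (d / r2[..., None] ** 1.5).sum(axis=1)

a = accel(pos)
for _ in range(200):                          # kick-drift-kick leapfrog
    vel += 0.5 * dt * a
    pos = (pos + dt * vel) % L
    a = accel(pos)
    vel += 0.5 * dt * a

print("mean final speed:", float(np.sqrt((vel ** 2).sum(1)).mean()))
```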

  2. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. The objectives were to improve the performance and reduce the costs of a large-scale solar heating system. As a result of the project, the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the design stage. (orig.)

  3. Neurite, a finite difference large scale parallel program for the simulation of electrical signal propagation in neurites under mechanical loading.

    Directory of Open Access Journals (Sweden)

    Julián A García-Grajales

    With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is characterized only by purely mechanistic criteria: functions of quantities such as stress, strain or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has rarely been explored. In particular, a quantitative mechanically based model of electrophysiological impairment in neuronal cells, Neurite, has only very recently been proposed. In this paper, we present the implementation details of this model: a finite difference parallel program for simulating electrical signal propagation along neurites under mechanical loading. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of simulated cells grows. The solvers implemented in Neurite (explicit and implicit) were therefore parallelized using graphics processing units in order to reduce the simulation costs of large-scale scenarios. Cable theory and Hodgkin-Huxley models were implemented to account for the electrophysiological passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the mechanical behavior of the neurite within its surrounding medium was adopted as the link between electrophysiology and mechanics. This paper provides the details of the parallel implementation of Neurite, along with three different application examples: a long myelinated axon
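
    The passive part of such a model reduces to the cable equation, lam**2 * Vxx = tau * Vt + V. A minimal explicit finite-difference sketch of it in Python (parameters illustrative, and without the Hodgkin-Huxley active currents or the mechanical coupling of the actual program):

```python
import numpy as np

# Explicit FD for the passive cable equation  lam**2 * Vxx = tau * Vt + V
# with a clamped left end and a sealed (no-flux) right end.
lam, tau = 1.0, 1.0            # space/time constants (illustrative units)
nx, dx, dt = 200, 0.05, 0.001  # dt < tau*dx**2/(2*lam**2) for stability
V = np.zeros(nx)
for _ in range(5000):
    Vxx = np.zeros_like(V)
    Vxx[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx ** 2
    V += dt / tau * (lam ** 2 * Vxx - V)
    V[0] = 1.0                 # clamped (injected) boundary value
    V[-1] = V[-2]              # sealed right end
# Steady state approaches the classic exponential/cosh decay profile.
print("V at one space constant from the end:", float(V[int(lam / dx)]))
```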

  4. Timetable-based simulation method for choice set generation in large-scale public transport networks

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær; Anderson, Marie Karen; Nielsen, Otto Anker

    2016-01-01

    The composition and size of choice sets are key to the correct estimation of, and prediction by, route choice models. While the existing literature has paid a great deal of attention to the generation of path choice sets for private transport problems, the same does not apply to public transport problems. This study proposes a timetable-based simulation method for generating path choice sets in a multimodal public transport network. Moreover, this study illustrates the feasibility of its implementation by applying the method to reproduce 5131 real-life trips in the Greater Copenhagen Area and to assess the choice set quality in a complex multimodal transport network. Results illustrate the applicability of the algorithm and the relevance of the utility specification chosen for the reproduction of real-life path choices. Moreover, results show that the level of stochasticity used in choice set...

  5. Contact area of rough spheres: Large scale simulations and simple scaling laws

    Science.gov (United States)

    Pastewka, Lars; Robbins, Mark O.

    2016-05-01

    We use molecular simulations to study the nonadhesive and adhesive atomic-scale contact of rough spheres with radii ranging from nanometers to micrometers over more than ten orders of magnitude in applied normal load. At the lowest loads, the interfacial mechanics is governed by the contact mechanics of the first asperity that touches. The dependence of contact area on normal force becomes linear at intermediate loads and crosses over to Hertzian at the largest loads. By combining theories for the limiting cases of nominally flat rough surfaces and smooth spheres, we provide parameter-free analytical expressions for contact area over the whole range of loads. Our results establish a range of validity for common approximations that neglect curvature or roughness in modeling objects on scales from atomic force microscope tips to ball bearings.
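
    For context, the two limiting laws bridged here are classical results: the Hertz area for a smooth sphere of radius R and contact modulus E*, and the linear area-load relation for a nominally flat rough surface with rms surface slope h'_rms (with kappa roughly 2 for non-adhesive contact):

```latex
A_{\mathrm{Hertz}} = \pi \left( \frac{3 F R}{4 E^{*}} \right)^{2/3},
\qquad
A_{\mathrm{rough}} \simeq \kappa \, \frac{F}{E^{*} h'_{\mathrm{rms}}},
\qquad \kappa \approx 2 .
```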

  6. Contact area of rough spheres: Large scale simulations and simple scaling laws

    Energy Technology Data Exchange (ETDEWEB)

    Pastewka, Lars, E-mail: lars.pastewka@kit.edu [Institute for Applied Materials & MicroTribology Center muTC, Karlsruhe Institute of Technology, Engelbert-Arnold-Straße 4, 76131 Karlsruhe (Germany); Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, Maryland 21218 (United States); Robbins, Mark O., E-mail: mr@pha.jhu.edu [Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, Maryland 21218 (United States)

    2016-05-30

    We use molecular simulations to study the nonadhesive and adhesive atomic-scale contact of rough spheres with radii ranging from nanometers to micrometers over more than ten orders of magnitude in applied normal load. At the lowest loads, the interfacial mechanics is governed by the contact mechanics of the first asperity that touches. The dependence of contact area on normal force becomes linear at intermediate loads and crosses over to Hertzian at the largest loads. By combining theories for the limiting cases of nominally flat rough surfaces and smooth spheres, we provide parameter-free analytical expressions for contact area over the whole range of loads. Our results establish a range of validity for common approximations that neglect curvature or roughness in modeling objects on scales from atomic force microscope tips to ball bearings.

  7. Microfluidic very large scale integration (VLSI) modeling, simulation, testing, compilation and physical synthesis

    CERN Document Server

    Pop, Paul; Madsen, Jan

    2016-01-01

    This book presents state-of-the-art techniques for the modeling, simulation, testing, compilation and physical synthesis of mVLSI biochips. The authors describe a top-down modeling and synthesis methodology for mVLSI biochips, inspired by microelectronics VLSI methodologies. They introduce a modeling framework for the components and the biochip architecture, and a high-level microfluidic protocol language. Coverage includes a topology graph-based model for the biochip architecture and a sequencing graph model for the biochemical application, showing how the application model can be obtained from the protocol language. The techniques described facilitate programmability and automation, enabling developers in the emerging, large biochip market. · Presents the current models used in research on compilation and synthesis techniques for mVLSI biochips in a tutorial fashion; · Includes a set of "benchmarks" that are presented in great detail and includes the source code of several of the techniques p...

  8. Parallel real-time visualization system for large-scale simulation. Application to WSPEEDI

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Kitabata, Hideyuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    2000-01-01

    The real-time visualization system, PATRAS (PArallel TRAcking Steering system) has been developed on parallel computing servers. The system performs almost all of the visualization tasks on a parallel computing server, and uses image data compression technique for efficient communication between the server and the client terminal. Therefore, the system realizes high performance concurrent visualization in an internet computing environment. The experience in applying PATRAS to WSPEEDI (Worldwide version of System for Prediction Environmental Emergency Dose Information) is reported. The application of PATRAS to WSPEEDI enables users to understand behaviours of radioactive tracers from different release points easily and quickly. (author)

  9. Large Scale DD Simulation Results for Crystal Plasticity Parameters in Fe-Cr And Fe-Ni Systems

    Energy Technology Data Exchange (ETDEWEB)

    Zbib, Hussein M.; Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.

    2012-04-30

    critical resolved shear stress (CRSS) from the evolution of local dislocations and defects. In this report the focus is on the results obtained from large-scale dislocation dynamics simulations. The effects of defect density and materials structure were investigated, and evolution laws were obtained. These results will form the basis for the development of evolution and hardening laws for a dislocation-based crystal plasticity framework. The hierarchical upscaling method being developed in this project can provide a guidance tool to evaluate the performance of structural materials for next-generation nuclear reactors. Combined with other tools developed in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the models developed will have more impact in improving the reliability of current reactors and the affordability of new reactors.

  10. Nuclear EMP simulation for large-scale urban environments. FDTD for electrically large problems.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, William S. [Los Alamos National Laboratory; Bull, Jeffrey S. [Los Alamos National Laboratory; Wilcox, Trevor [Los Alamos National Laboratory; Bos, Randall J. [Los Alamos National Laboratory; Shao, Xuan-Min [Los Alamos National Laboratory; Goorley, John T. [Los Alamos National Laboratory; Costigan, Keeley R. [Los Alamos National Laboratory

    2012-08-13

    In case of a terrorist nuclear attack in a metropolitan area, EMP measurement could provide: (1) a prompt confirmation of the nature of the explosion (chemical or nuclear) for emergency response; and (2) characterization parameters of the device (reaction history, yield) for technical forensics. However, the urban environment could affect the fidelity of the prompt EMP measurement (as well as all other types of prompt measurement): (1) the nuclear EMP wavefront would no longer be coherent, due to incoherent production, attenuation, and propagation of gammas and electrons; and (2) EMP propagation outward from the source region would undergo complicated transmission, reflection, and diffraction processes. EMP simulation for an electrically-large urban environment: (1) a coupled MCNP/FDTD (finite-difference time-domain Maxwell solver) approach; and (2) FDTD tends to be limited to problems that are not 'too' large compared to the wavelengths of interest because of numerical dispersion and anisotropy. We use a higher-order, low-dispersion, isotropic FDTD algorithm for EMP propagation.
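
    The FDTD scheme at the core of such a solver marches E and H on a staggered (Yee) grid. A minimal 1D vacuum sketch in Python; the paper's solver is a higher-order, low-dispersion 3D variant, so this only illustrates the basic update pattern:

```python
import numpy as np

# Minimal 1D FDTD (Yee scheme) in vacuum, normalized units c = 1,
# with the "magic" time step dt = dx (exact 1D dispersion).
nx, dx = 400, 1.0
dt = dx
Ez = np.zeros(nx)
Hy = np.zeros(nx - 1)
for n in range(300):
    Hy += dt / dx * (Ez[1:] - Ez[:-1])              # update H from curl E
    Ez[1:-1] += dt / dx * (Hy[1:] - Hy[:-1])        # update E from curl H
    Ez[nx // 4] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
print("peak |Ez|:", float(np.abs(Ez).max()))
```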

  11. Large-scale molecular dynamics simulations of shock waves in Laves crystals and icosahedral quasicrystals

    International Nuclear Information System (INIS)

    Roth, Johannes

    2002-01-01

    Quasicrystals and ordinary crystals both possess long-range translational order, but quasicrystals are aperiodic since their symmetry is non-crystallographic. The aim of this project is to study the behavior of shock waves in periodic and aperiodic structures and to compare the results, the expectation being that new types of defects are generated in the aperiodic materials. The materials studied are two models of (AlCu)Li quasicrystals and the C15 Laves phase, a low-order approximant of the quasicrystals. An elastic wave is found in the simulations for piston velocities up < 0.25 cl. For 0.25 < up/cl < 0.5 the slope of the elastic wave velocity decreases, and a new plastic wave is observed. Extended defects are generated, but no simple two-dimensional walls. The defect bands have finite width and a disordered structure. If the crystal is quenched, a polycrystalline phase is obtained. For the quasicrystal the transformation is more complex, since ring processes occur already in the elastic regime. Starting at about up = 0.5 cl, a single plastic shock wave is observed; in this range all structures are destroyed completely

  12. Assessing the Performance of Large Scale Green Roofs and Their Impact on the Urban Microclimate

    Science.gov (United States)

    Smalls-Mantey, L.; Foti, R.; Montalto, F. A.

    2015-12-01

    In ultra-urban environments green roofs offer a feasible solution for adding green infrastructure (GI) in neighborhoods where space is limited. Green roofs offer the typical advantages of urban GI, such as stormwater reduction and management, while providing direct benefits to the buildings on which they are installed through thermal protection and mitigation of temperature fluctuations. At 6.8 acres, the Jacob K. Javits Convention Center (JJCC) in New York City hosts the second largest green roof in the United States. Since its installation in August 2013, the Sustainable Water Resource (SWRE) Laboratory at Drexel University has monitored the climate on and around the green roof by means of four weather stations situated at various roof and ground locations. Using two years of fine-scale climatic data collected at the JJCC, this study explores the energy balance of a large-scale green roof system. Temperature, radiation, evapotranspiration and wind profiles pre- and post-installation of the JJCC green roof were analyzed and compared across monitored locations, with the goal of identifying the impact of the green roof on the building and the urban micro-climate. Our findings indicate that the presence of the green roof not only altered the climatic conditions above the JJCC, but also had a measurable impact on the climatic profile of the areas immediately surrounding it. Furthermore, as a result of the mitigation of roof temperature fluctuations and of the cooling provided during warmer months, an improvement of the building's thermal efficiency was also observed. Such findings support the installation of GI as an effective practice in urban settings, important to the discussion of key issues including energy conservation measures, carbon emission reductions and the mitigation of urban heat islands.

  13. Simulations of Large-scale WiFi-based Wireless Networks: Interdisciplinary Challenges and Applications

    OpenAIRE

    Nekovee, Maziar

    2008-01-01

    Wireless Fidelity (WiFi) is the fastest growing wireless technology to date. In addition to providing wire-free connectivity to the Internet, WiFi technology also enables mobile devices to connect directly to each other and form highly dynamic wireless ad hoc networks. Such distributed networks can be used to perform cooperative communication tasks such as data routing and information dissemination in the absence of a fixed infrastructure. Furthermore, ad hoc grids composed of wirelessly network...

  14. Not a load of rubbish: simulated field trials in large-scale containers.

    Science.gov (United States)

    Hohmann, M; Stahl, A; Rudloff, J; Wittkop, B; Snowdon, R J

    2016-09-01

    Assessment of yield performance under fluctuating environmental conditions is a major aim of crop breeders. Unfortunately, results from controlled-environment evaluations of complex agronomic traits rarely translate to field performance. A major cause is that crops grown over their complete lifecycle in a greenhouse or growth chamber are generally constricted in their root growth, which influences their response to important abiotic constraints like water or nutrient availability. To overcome this poor transferability, we established a plant growth system comprising large refuse containers (120 L 'wheelie bins') that allow detailed phenotyping of small field-crop populations under semi-controlled growth conditions. Diverse winter oilseed rape cultivars were grown at field densities throughout the crop lifecycle, in different experiments over 2 years, to compare seed yields from individual containers to plot yields from multi-environment field trials. We found that we were able to predict yields in the field with high accuracy from container-grown plants. The container system proved suitable for detailed studies of stress response physiology and performance in pre-breeding populations. Investment in automated large-container systems may help breeders improve field transferability of greenhouse experiments, enabling screening of pre-breeding materials for abiotic stress response traits with a positive influence on yield. © 2016 John Wiley & Sons Ltd.

  15. Detached eddy simulation of cyclic large scale fluctuations in a simplified engine setup

    International Nuclear Information System (INIS)

    Hasse, Christian; Sohm, Volker; Durst, Bodo

    2009-01-01

    Computational Fluid Dynamics using RANS-based modelling approaches has become an important tool in the internal combustion engine development and optimization process. However, these models cannot resolve cycle-to-cycle variations, which are an important aspect in the design of new combustion systems. In this study the feasibility of using a Detached Eddy Simulation (DES) SST model, a hybrid RANS/LES model, to predict cycle-to-cycle variations is investigated. In the near-wall region, or in regions where the grid resolution is not sufficiently fine to resolve smaller structures, the two-equation RANS SST model is used; in the other regions, with higher grid resolution, an LES model is applied. The case considered is a geometrically simplified engine, for which detailed experimental data for the ensemble-averaged and single-cycle velocity fields are available from Boree et al. [Boree, J., Maurel, S., Bazile, R., 2002. Disruption of a compressed vortex, Physics of Fluids 14 (7), 2543-2556]. The fluid flow shows a strong tumbling motion, which is a major characteristic of modern turbo-charged, direct-injection gasoline engines. The general flow structure is analyzed first, and the extent of the LES region and the amount of resolved fluctuations are discussed. Multiple consecutive cycles are computed, and turbulent statistics of the DES SST, URANS and measured velocity fields are compared for different piston positions. Cycle-to-cycle variations of the velocity field are analyzed for both computation and experiment, with a special emphasis on the usability of the DES SST model for predicting cyclic variations
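
    For context, the standard SST-based DES formulation (in the form due to Strelets, which this class of models follows; exact constants may differ) replaces the turbulent length scale in the dissipation term of the k-equation with a grid-limited one, so LES behaviour is obtained automatically wherever the grid can support it:

```latex
D^{k}_{\mathrm{DES}} = \beta^{*} k \omega \, F_{\mathrm{DES}},
\qquad
F_{\mathrm{DES}} = \max\!\left( \frac{L_t}{C_{\mathrm{DES}} \Delta},\, 1 \right),
\qquad
L_t = \frac{\sqrt{k}}{\beta^{*} \omega},
```

    where Delta is the local cell size; on coarse (RANS-like) grids F_DES = 1 and the model reduces to standard SST.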

  16. Backward-in-time methods to simulate large-scale transport and mixing in the ocean

    Science.gov (United States)

    Prants, S. V.

    2015-06-01

    In oceanography and meteorology, it is important to know not only where water or air masses are headed, but also where they came from. For example, it is important to find the unknown sources of oil spills in the ocean and of dangerous substance plumes in the atmosphere. It is impossible, with the help of conventional ocean and atmospheric numerical circulation models, to extrapolate backward from an observed plume to find its source, because those models cannot be reversed in time. We review here recently elaborated backward-in-time numerical methods to identify and study mesoscale eddies in the ocean and to compute where the waters in a given area came from. The area under study is populated with a large number of artificial tracers that are advected backward in time in a given velocity field that is supposed to be known analytically or numerically, or from satellite and radar measurements. After integrating the advection equations, one obtains the positions of each tracer on a fixed day in the past and can identify particle positions at earlier times from their known destinations. The results provided show that the method is efficient, for example, in estimating the probability of finding increased concentrations of radionuclides and other pollutants in oceanic mesoscale eddies. The backward-in-time methods are illustrated in this paper with a few examples. Backward-in-time Lagrangian maps are applied to identify eddies in satellite-derived and numerically generated velocity fields and to document the pathways by which they exchange water with their surroundings. Backward-in-time trapping maps are used to identify mesoscale eddies in the altimetric velocity field at risk of being contaminated by Fukushima-derived radionuclides. The results of the simulations are compared with in situ measurements of caesium concentration in seawater samples collected in a recent research vessel cruise in the area to the east of Japan. Backward-in-time latitudinal maps and the corresponding
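
    Numerically, "backward in time" just means integrating the tracer equations dx/dt = v(x, t) with a negative time step from the observation time. A small sketch with a hypothetical analytic velocity field (the field, names and parameters are ours, for illustration only):

```python
import numpy as np

def velocity(x, t):
    """Hypothetical steady 2D eddy field (illustration only)."""
    u = -np.sin(np.pi * x[..., 0]) * np.cos(np.pi * x[..., 1])
    v = np.cos(np.pi * x[..., 0]) * np.sin(np.pi * x[..., 1])
    return np.stack([u, v], axis=-1)

def advect(x0, t0, t1, nsteps=1000):
    """RK4 integration of dx/dt = v(x, t); t1 < t0 runs backward in time."""
    x, t = np.array(x0, dtype=float), t0
    h = (t1 - t0) / nsteps                 # negative step when t1 < t0
    for _ in range(nsteps):
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = velocity(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = velocity(x + h * k3, t + h)
        x += h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# Seed tracers over the area of interest and find where they came from.
seeds = np.random.default_rng(0).random((100, 2))
origins = advect(seeds, t0=10.0, t1=0.0)   # ten "days" backward
print(origins[:3])
```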

  17. Light Condensation and Localization in Disordered Photonic Media: Theory and Large Scale ab initio Simulations

    KAUST Repository

    Toth, Laszlo Daniel

    2013-05-07

    Disordered photonics is the study of light in random media. In a disordered photonic medium, multiple scattering of light and coherence, together with the fundamental principle of reciprocity, produce a wide range of interesting phenomena, such as enhanced backscattering and Anderson localization of light. They are also responsible for the existence of modes in these random systems. It is known that analogous processes to Bose-Einstein condensation can occur in classical wave systems, too. Classical condensation has been studied in several contexts in photonics: pulse formation in lasers, mode-locking theory and coherent emission of disordered lasers. All these systems have the common theme of possessing a large ensemble of waves or modes, together with nonlinearity, dispersion or gain. In this work, we study light condensation and its connection with light localization in a disordered, passive dielectric medium. We develop a theory for the modes inside the disordered resonator, which combines the Feshbach projection technique with spin-glass theory and statistical physics. In particular, starting from the Maxwell’s equations, we map the system to a spherical p-spin model with p = 2. The spins are replaced by modes and the temperature is related to the fluctuations in the environment. We study the equilibrium thermodynamics of the system in a general framework and show that two distinct phases exist: a paramagnetic phase, where all the modes are randomly oscillating and a condensed phase, where the energy condensates on a single mode. The thermodynamic quantities can be explicitly interpreted and can also be computed from the disorder-averaged time domain correlation function. We launch an ab initio simulation campaign using our own code and the Shaheen supercomputer to test the theoretical predictions. We construct photonic samples of varying disorder and find computationally relevant ways to obtain the thermodynamic quantities. We observe the phase transition
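
    The mapping sketched above leads, for p = 2, to an exactly solvable model. In generic notation (ours, schematic), with mode amplitudes a_i in place of spins and couplings J_ij encoding the disorder:

```latex
H = -\sum_{i<j} J_{ij}\, a_i a_j ,
\qquad
\sum_{i=1}^{N} a_i^{2} = N \quad \text{(spherical constraint)} .
```

    In the low-temperature phase of this model the energy condenses onto the eigenvector of the coupling matrix J with the largest eigenvalue, which is the single-mode condensate described in the abstract.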

  18. Interoperable mesh components for large-scale, distributed-memory simulations

    International Nuclear Information System (INIS)

    Devine, K; Leung, V; Diachin, L; Miller, M

    2009-01-01

    SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. In this paper, we describe a software component - an abstract data model and programming interface - designed to provide support for parallel unstructured mesh operations. We describe key issues that must be addressed to successfully provide high-performance, distributed-memory unstructured mesh services and highlight some recent research accomplishments in developing new load balancing and MPI-based communication libraries appropriate for leadership class computing. Finally, we give examples of the use of parallel adaptive mesh modification in two SciDAC applications.

  19. Assessment of Vehicle Sizing, Energy Consumption and Cost Through Large Scale Simulation of Advanced Vehicle Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Moawad, Ayman [Argonne National Lab. (ANL), Argonne, IL (United States); Kim, Namdoo [Argonne National Lab. (ANL), Argonne, IL (United States); Shidore, Neeraj [Argonne National Lab. (ANL), Argonne, IL (United States); Rousseau, Aymeric [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    The U.S. Department of Energy (DOE) Vehicle Technologies Office (VTO) has been developing more energy-efficient and environmentally friendly highway transportation technologies that will enable America to use less petroleum. The long-term aim is to develop "leapfrog" technologies that will provide Americans with greater freedom of mobility and energy security, while lowering costs and reducing impacts on the environment. This report reviews the results of the DOE VTO. It gives an assessment of the fuel and light-duty vehicle technologies that are most likely to be established, developed, and eventually commercialized during the next 30 years (up to 2045). Because of the rapid evolution of component technologies, this study is performed every two years to continuously update the results based on the latest state-of-the-art technologies.

  20. Contribution of large scale coherence to wind turbine power: A large eddy simulation study in periodic wind farms

    Science.gov (United States)

    Chatterjee, Tanmoy; Peet, Yulia T.

    2018-03-01

    Length scales of the eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to make a substantial contribution to wind power. Varying dynamics in the intermediate scales (D-10 D) are also observed in a parametric study involving inter-turbine distances and the hub height of the turbines. Further insight into the eddies responsible for power generation is provided by a scaling analysis of the two-dimensional premultiplied spectra of the MKE flux. The LES code is developed in a high-Reynolds-number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines are modelled using a state-of-the-art actuator line model. The LES of infinite wind farms has been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in the domain of large wind farms and to identify the length scales that contribute to the power. This information can be useful for the design of wind farm layouts and turbine placements that take advantage of the large-scale structures contributing to wind turbine power.
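
    A premultiplied spectrum of the kind analyzed above weights the spectral density by wavenumber, so that the area under the curve plotted against log k reflects energy content. A generic 1D sketch on a synthetic signal (not the LES data; the normalization is schematic):

```python
import numpy as np

# Premultiplied spectrum k*E(k) of a synthetic streamwise signal.
L, n = 10.0, 4096                       # domain length, samples
x = np.linspace(0.0, L, n, endpoint=False)
u = (np.sin(2 * np.pi * 3 * x / L)
     + 0.3 * np.random.default_rng(0).standard_normal(n))

uhat = np.fft.rfft(u - u.mean())
k = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)   # angular wavenumbers
E = np.abs(uhat) ** 2 / n ** 2                # spectral density (up to a factor)
kE = k * E                                    # premultiplied spectrum
print("peak of k*E(k) at k =", float(k[np.argmax(kE)]))
```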

  1. Large-eddy simulation study of the transition to turbulence in jets; Etude numerique par simulation des grandes echelles de la transition a la turbulence dans les jets

    Energy Technology Data Exchange (ETDEWEB)

    Urbin, Gerald [Institut National Polytechnique, 38 - Grenoble (France)

    1998-02-02

    This study highlights the potential of the large-eddy simulation technique for describing and understanding turbulent flows in complex geometries. In particular, it focuses on free jets, confined jets and the multiple jets issuing from a high-solidity grid. Spatial simulations of the near region of a circular free jet at high Reynolds number were performed. In spite of an evident sensitivity to upstream conditions, good agreement between our statistical predictions and different experimental measurements was obtained. The multiple coherent vortical structures involved in the transition to turbulence of the jet were found; both helical and annular axisymmetric vortices were observed. An original vortical arrangement was also evidenced, resulting from the alternating inclination and local pairing of these rings. It could be forced through an ad-hoc excitation, which then modifies the jet development drastically. When an axisymmetric excitation is imposed after the formation of the annular structures, pairs of counter-rotating longitudinal vortices occur and generate lateral jets. Their nature and their presence in the case of a helical excitation are discussed, and an efficient method for controlling their number is developed. Finally, the very-low-frequency periodic phenomenon of backward-facing transition to turbulence, which develops in the confined jet and in grid multiple jets (a phenomenon generic to numerous flows), is studied. It was found to depend not only on the characteristics of the recirculation (pre-transition) zones but also on the upstream flow (post-transition stagnation zone, pressure effect). Large-scale transversal motions of the fluid were found, beginning from the grid. An interpretation of this phenomenon is suggested. 193 refs., 109 figs.

  2. Large Scale Earth's Bow Shock with Northern IMF as Simulated by PIC Code in Parallel with MHD Model

    Science.gov (United States)

    Baraka, Suleiman

    2016-06-01

    In this paper, we propose a 3D kinetic model (particle-in-cell, PIC) for the description of the large-scale Earth's bow shock. The proposed version is stable and does not require huge or extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with the available MHD simulations under the same scaled solar wind (SW) and (IMF) conditions. We report new results from the two models. In both codes the Earth's bow shock position is found to be ≈14.8 R_E along the Sun-Earth line, and ≈29 R_E on the dusk side; those findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results; kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted within the transition of the shock (measured ≈2 c/ω_pi for Θ_Bn = 90° and M_MS = 4.7) and in the downstream. The size of the foot jump in the magnetic field at the shock is measured to be 1.7 c/ω_pi. In the foreshock region, the thermal velocity is found to equal 213 km s^-1 at 15 R_E, and 63 km s^-1 at 12 R_E (magnetosheath region). Despite the large cell size of the current version of the PIC code, it is powerful enough to retain the macrostructure of planetary magnetospheres in a very short time, and thus can be used for pedagogical test purposes. It is also likely complementary to MHD in deepening our understanding of the large-scale magnetosphere.

  3. ROSA-V large scale test facility (LSTF) system description for the third and fourth simulated fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Mitsuhiro; Nakamura, Hideo; Ohtsu, Iwao [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment] [and others

    2003-03-01

    The Large Scale Test Facility (LSTF) is a full-height and 1/48 volumetrically scaled test facility of the Japan Atomic Energy Research Institute (JAERI) for system integral experiments simulating the thermal-hydraulic responses at full-pressure conditions of a 1100 MWe-class pressurized water reactor (PWR) during small break loss-of-coolant accidents (SBLOCAs) and other transients. The LSTF can also simulate well a next-generation type PWR such as the AP600 reactor. In the fifth phase of the Rig-of-Safety Assessment (ROSA-V) Program, eighty nine experiments have been conducted at the LSTF with the third simulated fuel assembly until June 2001, and five experiments have been conducted with the newly-installed fourth simulated fuel assembly until December 2002. In the ROSA-V program, various system integral experiments have been conducted to certify effectiveness of both accident management (AM) measures in beyond design basis accidents (BDBAs) and improved safety systems in the next-generation reactors. In addition, various separate-effect tests have been conducted to verify and develop computer codes and analytical models to predict non-homogeneous and multi-dimensional phenomena such as heat transfer across the steam generator U-tubes under the presence of non-condensable gases in both current and next-generation reactors. This report presents detailed information of the LSTF system with the third and fourth simulated fuel assemblies for the aid of experiment planning and analyses of experiment results. (author)

  4. Data for Figures and Tables in "Impacts of Different Characterizations of Large-Scale Background on Simulated Regional-Scale Ozone Over the Continental U.S."

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset contains the data used in the Figures and Tables of the manuscript "Impacts of Different Characterizations of Large-Scale Background on Simulated...

  5. Simulated pre-industrial climate in Bergen Climate Model (version 2): model description and large-scale circulation features

    Directory of Open Access Journals (Sweden)

    O. H. Otterå

    2009-11-01

    The Bergen Climate Model (BCM) is a fully-coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustments and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high-latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model, a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM; improved conservation properties in the ocean model have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data, except in summer, when the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, which cause the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  6. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu

    2011-08-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application, the Gyrokinetic Toroidal Code (GTC) in magnetic fusion, to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.
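
    Schematically, a model of this kind decomposes per-process runtime into computation, a memory-bandwidth contention term, and communication. A simplified rendering of that decomposition (our notation, not the authors' exact formulation):

```latex
T_{\mathrm{total}} \approx T_{\mathrm{comp}} + T_{\mathrm{MBC}} + T_{\mathrm{comm}},
\qquad
T_{\mathrm{MBC}} \approx \frac{V_{\mathrm{mem}}}{B_{\mathrm{sust}}(c)} - \frac{V_{\mathrm{mem}}}{B_{\mathrm{sust}}(1)},
```

    where V_mem is the per-core memory traffic and B_sust(c) the sustained bandwidth measured (e.g. with STREAM) when c cores per node are active; the contention term vanishes when a single core runs alone.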

  7. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application, the Gyrokinetic Toroidal Code (GTC) in magnetic fusion, to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.

  8. Performance analysis of large-scale applications based on wavefront algorithms

    International Nuclear Information System (INIS)

    Hoisie, A.; Lubeck, O.; Wasserman, H.

    1998-01-01

    The authors introduced a performance model for parallel, multidimensional, wavefront calculations with machine performance characterized using the LogGP framework. The model accounts for overlap in the communication and computation components. The agreement with experimental data is very good under a variety of model sizes, data partitioning, blocking strategies, and on three different parallel architectures. Using the model, the authors analyzed performance of a deterministic transport code on a hypothetical 100 Tflops future parallel system of interest to ASCI
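
As a rough illustration of the modeled quantity (not the authors' exact formulation), a pipelined wavefront sweep on a P x Q processor grid costs a pipeline-fill term proportional to P + Q - 2 plus a steady-state term, with each stage paying compute time plus a LogGP-style message cost; all parameter values below are placeholders.

    # Hedged sketch of a LogGP-based wavefront runtime estimate.
    # L = latency, o = per-message overhead, G = per-byte gap (LogGP).
    def wavefront_time(P, Q, n_stages, t_cell, cells_per_stage,
                       L, o, G, msg_bytes):
        t_stage = t_cell * cells_per_stage             # compute per pipeline stage
        t_msg = L + 2 * o + G * msg_bytes              # boundary exchange cost
        fill = (P + Q - 2) * (t_stage + t_msg)         # wavefront crosses the grid
        steady = n_stages * (t_stage + t_msg)          # remaining planes
        return fill + steady

    print(wavefront_time(P=16, Q=16, n_stages=400, t_cell=50e-9,
                         cells_per_stage=1000, L=5e-6, o=1e-6,
                         G=1e-9, msg_bytes=8000))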

  9. Performance of large-scale scientific applications on the IBM ASCI Blue-Pacific system

    International Nuclear Information System (INIS)

    Mirin, A.

    1998-01-01

    The IBM ASCI Blue-Pacific System is a scalable, distributed/shared memory architecture designed to reach multi-teraflop performance. The IBM SP pieces together a large number of nodes, each having a modest number of processors. The system is designed to accommodate a mixed programming model as well as a pure message-passing paradigm. We examine a number of applications on this architecture and evaluate their performance and scalability

  10. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, studying this temporal network behavior requires a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool, a visual analytics system, that is used to investigate the temporal behavior and optimize the communication performance of a supercomputer built on a Dragonfly network. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively supports visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can help improve not only the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  11. Non-destructive screening method for radiation hardened performance of large scale integration

    International Nuclear Information System (INIS)

    Zhou Dong; Xi Shanbin; Guo Qi; Ren Diyuan; Li Yudong; Sun Jing; Wen Lin

    2013-01-01

The space radiation environment can induce radiation damage in electronic devices. As the performance of commercial devices is generally superior to that of radiation-hardened devices, it is desirable to screen out the devices with good radiation-hardened performance from among commercial devices; applying these devices in space systems could improve the reliability of the systems. Combining mathematical regression analysis with different physical stressing experiments, we investigated a non-destructive screening method for the radiation-hardened performance of integrated circuits. The relationship between the change of typical parameters and the radiation performance of the circuit was discussed, and the irradiation-sensitive parameters were identified. A multiple linear regression equation for predicting the radiation performance was established. Finally, the regression equations under stress conditions were verified by practical irradiation. The results show that the reliability and accuracy of the non-destructive screening method can be improved by combining mathematical regression analysis with practical stressing experiments. (authors)
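
A minimal sketch of the regression step, assuming hypothetical pre-irradiation stress-test parameters as predictors and a post-irradiation degradation figure as the response; neither the feature set nor the numbers come from the paper.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical predictors: supply-current shift, threshold drift,
    # leakage under burn-in (one row per reference device).
    X_train = np.array([[0.12, 0.030, 1.1],
                        [0.45, 0.090, 2.7],
                        [0.08, 0.020, 0.9],
                        [0.30, 0.070, 2.0]])
    y_train = np.array([0.8, 3.1, 0.6, 2.2])   # measured post-irradiation degradation

    model = LinearRegression().fit(X_train, y_train)
    candidate = np.array([[0.10, 0.025, 1.0]])
    print("predicted degradation:", model.predict(candidate)[0])
    # A part whose predicted degradation falls below a qualification
    # threshold would pass the non-destructive screen.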

  12. Modeling electrochemical performance in large scale proton exchange membrane fuel cell stacks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J H [Los Alamos National Lab., NM (United States); Lalk, T R [Texas A and M Univ., College Station, TX (United States). Dept. of Mechanical Engineering; Appleby, A J [Center for Electrochemical Studies and Hydrogen Research, Texas Engineering Experimentation Station, Texas A and M Univ., College Station, TX (United States)

    1998-02-01

    The processes, losses, and electrical characteristics of a Membrane-Electrode Assembly (MEA) of a Proton Exchange Membrane Fuel Cell (PEMFC) are described. In addition, a technique for numerically modeling the electrochemical performance of a MEA, developed specifically to be implemented as part of a numerical model of a complete fuel cell stack, is presented. The technique of calculating electrochemical performance was demonstrated by modeling the MEA of a 350 cm{sup 2}, 125 cell PEMFC and combining it with a dynamic fuel cell stack model developed by the authors. Results from the demonstration that pertain to the MEA sub-model are given and described. These include plots of the temperature, pressure, humidity, and oxygen partial pressure distributions for the middle MEA of the modeled stack as well as the corresponding current produced by that MEA. The demonstration showed that models developed using this technique produce results that are reasonable when compared to established performance expectations and experimental results. (orig.)
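
For orientation, MEA sub-models of this kind are commonly built on an empirical polarization curve of the form V = E_oc - b·log10(i/i0) - R·i - m·exp(n·i), combining activation, ohmic, and mass-transport losses; the sketch below uses that standard form with illustrative coefficients, not the authors' fitted values.

    import math

    def cell_voltage(i, E_oc=1.0, b=0.06, i0=1e-3, R=2.0e-4, m=3e-5, n=8e-3):
        """i: current density in mA/cm^2; returns volts for one cell."""
        activation = b * math.log10(i / i0)       # Tafel kinetics
        ohmic = R * i                             # membrane + contact resistance
        concentration = m * math.exp(n * i)       # mass-transport limitation
        return E_oc - activation - ohmic - concentration

    stack_cells = 125                             # as in the modeled stack
    print("stack voltage at 600 mA/cm^2:",
          stack_cells * cell_voltage(600.0))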

  13. Large-scale performance and design for construction activity erosion control best management practices.

    Science.gov (United States)

    Faucette, L B; Scholl, B; Beighley, R E; Governo, J

    2009-01-01

The National Pollutant Discharge Elimination System (NPDES) Phase II requires construction activities to have erosion and sediment control best management practices (BMPs) designed and installed for site storm water management. Although BMPs are specified on storm water pollution prevention plans (SWPPPs) as part of the construction general permit (GP), there is little evidence in the research literature as to how BMPs perform or should be designed. The objectives of this study were to: (i) comparatively evaluate the performance of common construction activity erosion control BMPs under a standardized test method, (ii) evaluate the effect of compost erosion control blanket thickness on performance, (iii) evaluate the performance of compost erosion control blankets (CECBs) on a variety of slope angles, and (iv) determine Universal Soil Loss Equation (USLE) cover management factors (C factors) for these BMPs to assist site designers and engineers. Twenty-three erosion control BMPs were evaluated using American Society of Testing and Materials (ASTM) D-6459, the standard test method for determination of ECB performance in protecting hill slopes from rainfall-induced erosion, on 4:1 (H:V), 3:1, and 2:1 slopes. Soil loss reduction for treatments exposed to 5 cm of rainfall on a 2:1 slope ranged from -7 to 99%. For rainfall exposure of 10 cm, treatment soil loss reduction ranged from 8 to 99%. The 2.5 and 5 cm CECBs significantly reduced erosion on slopes up to 2:1, while thinner CECBs should be limited to slopes of 4:1 or flatter when rainfall totals reach 5 cm. Based on the soil loss results, USLE C factors ranged from 0.01 to 0.9. These performance and design criteria should aid site planners and designers in decision-making processes.
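
The reported C factors plug directly into the USLE, A = R · K · LS · C · P; a small worked example with illustrative site parameters shows how a well-performing blanket changes predicted soil loss.

    def usle_soil_loss(R, K, LS, C, P=1.0):
        """A in tons/acre/yr: rainfall erosivity R, soil erodibility K,
        slope length-steepness LS, cover-management C, support practice P."""
        return R * K * LS * C * P

    # Illustrative values only; R, K and LS are site-specific.
    bare_soil = usle_soil_loss(R=300, K=0.28, LS=3.0, C=0.9)
    cecb_5cm = usle_soil_loss(R=300, K=0.28, LS=3.0, C=0.01)
    print(f"bare: {bare_soil:.1f} t/ac/yr, 5 cm blanket: {cecb_5cm:.2f} t/ac/yr")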

  14. Evaluation of the regional climate response in Australia to large-scale climate modes in the historical NARCliM simulations

    Science.gov (United States)

    Fita, L.; Evans, J. P.; Argüeso, D.; King, A.; Liu, Y.

    2017-10-01

NARCliM (the New South Wales (NSW)/Australian Capital Territory (ACT) Regional Climate Modelling project) is a regional climate modelling project for the Australian region. It is providing a comprehensive dynamically downscaled climate dataset for the CORDEX-AustralAsia region at 50-km resolution, and for south-east Australia at a resolution of 10 km. The first phase of NARCliM produced 60-year long reanalysis-driven regional simulations to allow evaluation of the regional model performance. This long control period (1950-2009) was used so that the model's ability to capture the impact of large-scale climate modes on Australian climate could be examined. Simulations are evaluated using a gridded observational dataset. Results show that using model independence as a criterion for choosing atmospheric model configurations from different possible sets of parameterizations may contribute to the regional climate models having different overall biases. The regional models generally capture the regional climate response to large-scale modes better than the driving reanalysis, though no regional model improves on all aspects of the simulated climate.

  15. Multiple Skills Underlie Arithmetic Performance: A Large-Scale Structural Equation Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Sarit Ashkenazi

    2017-12-01

Current theoretical approaches point to the importance of several cognitive skills not specific to mathematics for the etiology of mathematics disorders (MD). In the current study, we examined the role of many of these skills, specifically rapid automatized naming, attention, reading, and visual perception, on mathematics performance among a large group of college students (N = 1,322) with a wide range of arithmetic proficiency. Using factor analysis, we discovered that our data clustered to four latent variables: (1) mathematics, (2) perception speed, (3) attention and (4) reading. In subsequent structural equation modeling, we found that the latent variable perception speed had a strong and meaningful effect on mathematics performance. Moreover, sustained attention, independent from the effect of the latent variable perception speed, had a meaningful, direct effect on arithmetic fact retrieval and procedural knowledge. The latent variable reading had a modest effect on mathematics performance. Specifically, reading comprehension, independent from the effect of the latent variable reading, had a meaningful direct effect on mathematics, and particularly on number line knowledge. Attention, tested by the attention network test, had no effect on mathematics, reading or perception speed. These results indicate that multiple factors can affect mathematics performance, supporting a heterogeneous approach to mathematics. These results have meaningful implications for the diagnosis and intervention of pure and comorbid learning disorders.
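
A minimal sketch of the first analytic step (exploratory factor analysis over the measured task scores); the data here are synthetic stand-ins, and the paper's full analysis additionally fits a structural equation model over the resulting latent variables.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    scores = rng.normal(size=(1322, 8))   # 8 hypothetical task measures

    fa = FactorAnalysis(n_components=4, random_state=0).fit(scores)
    loadings = fa.components_.T           # rows: tasks, columns: latent factors
    print(np.round(loadings, 2))          # inspect which tasks load together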

  16. How brain asymmetry relates to performance – a large-scale dichotic listening study

    Directory of Open Access Journals (Sweden)

    Marco eHirnstein

    2014-01-01

All major mental functions including language, spatial and emotional processing are lateralized, but how strongly and to which hemisphere is subject to inter- and intraindividual variation. Relatively little, however, is known about how the degree and direction of lateralization affect how well the functions are carried out, i.e., how lateralization and task performance are related. The present study therefore examined the relationship between lateralization and performance in a dichotic listening (DL) task for which we had data available from 1839 participants. In this task, consonant-vowel syllables are presented simultaneously to the left and right ear, such that each ear receives a different syllable. When asked which of the two they heard best, participants typically report more syllables from the right ear, which is a marker of left-hemispheric speech dominance. We calculated the degree of lateralization (based on the difference between correct left and right ear reports) and correlated it with overall response accuracy (left plus right ear reports). In addition, we used reference models to control for statistical interdependency between left and right ear reports. The results revealed a u-shaped relationship between degree of lateralization and overall accuracy: the stronger the left or right ear advantage, the better the overall accuracy. This u-shaped asymmetry-performance relationship consistently emerged in males, females, right-/non-right-handers, and different age groups. Taken together, the present study demonstrates that performance on lateralized language functions depends on how strongly these functions are lateralized. The present study further stresses the importance of controlling for statistical interdependency when examining asymmetry-performance relationships in general.
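
A minimal sketch of the computation described, on synthetic stand-in data: a per-participant lateralization score from correct left/right ear reports, where the reported u-shape implies that the magnitude of the asymmetry (ignoring direction) tracks overall accuracy.

    import numpy as np

    rng = np.random.default_rng(1)
    right = rng.integers(5, 30, size=1839)    # correct right-ear reports
    left = rng.integers(5, 30, size=1839)     # correct left-ear reports

    lateralization = right - left             # degree and direction
    accuracy = right + left                   # overall correct reports

    # With real data, a u-shape shows up as a positive correlation between
    # the *magnitude* of lateralization and accuracy; the synthetic data
    # here carry no real effect.
    print(np.corrcoef(np.abs(lateralization), accuracy)[0, 1])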

  17. Performance on large-scale science tests: Item attributes that may impact achievement scores

    Science.gov (United States)

    Gordon, Janet Victoria

    Significant differences in achievement among ethnic groups persist on the eighth-grade science Washington Assessment of Student Learning (WASL). The WASL measures academic performance in science using both scenario and stand-alone question types. Previous research suggests that presenting target items connected to an authentic context, like scenario question types, can increase science achievement scores especially in underrepresented groups and thus help to close the achievement gap. The purpose of this study was to identify significant differences in performance between gender and ethnic subgroups by question type on the 2005 eighth-grade science WASL. MANOVA and ANOVA were used to examine relationships between gender and ethnic subgroups as independent variables with achievement scores on scenario and stand-alone question types as dependent variables. MANOVA revealed no significant effects for gender, suggesting that the 2005 eighth-grade science WASL was gender neutral. However, there were significant effects for ethnicity. ANOVA revealed significant effects for ethnicity and ethnicity by gender interaction in both question types. Effect sizes were negligible for the ethnicity by gender interaction. Large effect sizes between ethnicities on scenario question types became moderate to small effect sizes on stand-alone question types. This indicates the score advantage the higher performing subgroups had over the lower performing subgroups was not as large on stand-alone question types compared to scenario question types. A further comparison examined performance on multiple-choice items only within both question types. Similar achievement patterns between ethnicities emerged; however, achievement patterns between genders changed in boys' favor. Scenario question types appeared to register differences between ethnic groups to a greater degree than stand-alone question types. These differences may be attributable to individual differences in cognition

  18. A Hybrid Testbed for Performance Evaluation of Large-Scale Datacenter Networks

    DEFF Research Database (Denmark)

    Pilimon, Artur; Ruepp, Sarah Renée

    2018-01-01

Datacenters (DC) as well as their network interconnects are growing in scale and complexity. They are constantly being challenged in terms of energy and resource utilization efficiency, scalability, availability, reliability and performance requirements. Therefore, these resource-intensive environments must be properly tested and analyzed in order to make timely upgrades and transformations. However, a limited number of academic institutions and Research and Development companies have access to production-scale DC Network (DCN) testing facilities, and resource-limited studies can produce misleading or inaccurate results. To address this problem, we introduce an alternative solution, which forms a solid base for a more realistic and comprehensive performance evaluation of different aspects of DCNs. It is based on the System-in-the-loop (SITL) concept, where real commercial DCN equipment...

  19. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
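
To make the parallelization idea concrete: every slice (indeed every pixel) of the reconstruction volume can be backprojected independently, which is what maps onto thousands of GPU threads. Below is a toy parallel-beam backprojection in NumPy, with sizes and geometry chosen purely for illustration; on a GPU, the per-angle gather would become a texture fetch per thread.

    import numpy as np

    def backproject(sinogram, angles, n):
        """sinogram: (n_angles, n_det) array; returns an n x n slice."""
        xs = np.arange(n) - n / 2
        X, Y = np.meshgrid(xs, xs)
        slice_ = np.zeros((n, n))
        for a, proj in zip(angles, sinogram):          # one kernel launch each
            det = X * np.cos(a) + Y * np.sin(a) + sinogram.shape[1] / 2
            det = np.clip(det.astype(int), 0, sinogram.shape[1] - 1)
            slice_ += proj[det]                        # gather: texture-friendly
        return slice_ / len(angles)

    angles = np.linspace(0, np.pi, 180, endpoint=False)
    print(backproject(np.ones((180, 128)), angles, 128).shape)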

  20. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    Science.gov (United States)

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software defined networking (SDN) has become the focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. Optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) instead of general multi-protocol label switching (GMPLS). However, the practical performance of SDN based DCN for large scale optical networks, which is very important for the technology selection in the future optical network deployment, has not been evaluated up to now. In this paper we have built a large scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time have been demonstrated under various network environments, such as with different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof for the future network deployment.
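
As a rough illustration of how such figures are derived (the log format here is hypothetical, not the testbed's), blocking probability, average provisioning time and bandwidth utilization can each be computed from a record of lightpath requests:

    # Each record: (granted, setup_seconds, bandwidth_used, link_capacity)
    requests = [
        (True, 0.42, 80, 100), (False, None, 0, 100),
        (True, 0.55, 95, 100), (True, 0.38, 60, 100),
    ]

    blocked = sum(1 for granted, *_ in requests if not granted)
    blocking_probability = blocked / len(requests)

    setup_times = [t for granted, t, *_ in requests if granted]
    avg_provisioning = sum(setup_times) / len(setup_times)

    utilization = (sum(bw for *_, bw, cap in requests)
                   / sum(cap for *_, cap in requests))
    print(blocking_probability, avg_provisioning, utilization)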

  1. Improved technique that allows the performance of large-scale SNP genotyping on DNA immobilized by FTA technology.

    Science.gov (United States)

    He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe

    2007-01-01

FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card is often, however, insufficient for the testing of the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique to perform large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card bound DNA produces sufficient material for the determination of thousands of SNP genotypes.

  2. On the network protocol performance evaluation for large scale communication system of nuclear plant

    International Nuclear Information System (INIS)

    Song, K. S.; Lee, T. H.; Kim, H. R.; Kim, D. H.; Ku, I. S.

    1998-01-01

Computer technology has advanced dramatically, and it is now natural to apply digital network technology to nuclear plants. The communication architecture for a nuclear plant defines the coordination of safety reactor control, balance of plant, subsystem utilities, and plant monitoring functions, how they are connected, and their user interface, so as to guarantee plant performance and safety requirements. Implementing a digital network for the control and monitoring systems of an advanced nuclear plant therefore needs systematic design and evaluation procedures because of the responsive and hard real-time process characteristics of nuclear plants. In this paper, we evaluate several digital network protocols in terms of network delay and link-failure effects on hard real-time requirements under full-scale traffic

  3. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    Science.gov (United States)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3

  4. Performance Analysis of a Wind Turbine Driven Swash Plate Pump for Large Scale Offshore Applications

    International Nuclear Information System (INIS)

    Buhagiar, D; Sant, T

    2014-01-01

This paper deals with the performance modelling and analysis of offshore wind turbine-driven hydraulic pumps. The concept consists of an open loop hydraulic system with the rotor main shaft directly coupled to a swash plate pump to supply pressurised sea water. A mathematical model is derived to capture the steady state behaviour of the entire system. A simplified model for the pump is implemented together with different control scheme options for regulating the rotor shaft power. A new control scheme is investigated, based on the combined use of hydraulic pressure and pitch control. Using a steady-state analysis, the study shows how the adoption of alternative control schemes in a wind turbine-hydraulic pump system may result in higher energy yields than those from a conventional system with an electrical generator and standard pitch control for power regulation. This is in particular the case with the new control scheme investigated in this study, which is based on the combined use of pressure and rotor blade pitch control
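
A minimal sketch of the steady-state coupling such a model captures, with purely illustrative numbers: for a fixed-displacement swash plate pump on the rotor shaft, delivered flow scales with shaft speed, demanded torque scales with delivery pressure, and hydraulic power is their product.

    import math

    def pump_operating_point(omega, displacement, delta_p,
                             eta_vol=0.95, eta_mech=0.92):
        """omega [rad/s], displacement [m^3/rad], delta_p [Pa]."""
        flow = eta_vol * displacement * omega          # m^3/s delivered
        torque = displacement * delta_p / eta_mech     # N*m demanded from rotor
        p_hydraulic = flow * delta_p                   # W of useful output
        return flow, torque, p_hydraulic

    rpm = 12.0                                         # large offshore rotor speed
    q, tau, p = pump_operating_point(rpm * 2 * math.pi / 60,
                                     displacement=0.5, delta_p=2.0e7)
    print(f"flow {q:.2f} m^3/s, torque {tau/1e6:.2f} MN*m, power {p/1e6:.1f} MW")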

  5. Performance Analysis of an Updraft Tower System for Dry Cooling in Large-Scale Power Plants

    Directory of Open Access Journals (Sweden)

    Haotian Liu

    2017-11-01

An updraft tower cooling system is assessed for elimination of water use associated with power plant heat rejection. Heat rejected from the power plant condenser is used to warm the air at the base of an updraft tower; buoyancy-driven air flows through a recuperative turbine inside the tower. The secondary loop, which couples the power plant condenser to a heat exchanger at the tower base, can be configured either as a constant-pressure pump cycle or a vapor compression cycle. The novel use of a compressor can elevate the air temperature in the tower base to increase the turbine power recovery and decrease the power plant condensing temperature. The system feasibility is evaluated by comparing the net power needed to operate the system versus alternative dry cooling schemes. A thermodynamic model coupling all system components is developed for parametric studies and system performance evaluation. The model predicts that the constant-pressure pump cycle consumes less power than using a compressor; the extra compression power required for the temperature lift is much larger than the gain in turbine power output. The updraft tower system with a pumped secondary loop can allow dry cooling with less power plant efficiency penalty compared to air-cooled condensers.
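
The feasibility comparison reduces to net-power bookkeeping of the kind sketched below; the recovery fraction and pump power are illustrative placeholders, not values from the paper's thermodynamic model.

    def net_power(q_rejected, turbine_recovery_fraction, pump_power):
        """q_rejected: condenser heat duty [W] warming the tower air."""
        p_turbine = turbine_recovery_fraction * q_rejected
        return p_turbine - pump_power

    q = 1.0e9    # 1 GW_th rejected by the condenser (illustrative)
    print(net_power(q, turbine_recovery_fraction=0.005, pump_power=2.0e6))
    # A positive net power, or a smaller penalty than the fan bank of an
    # air-cooled condenser, indicates the dry-cooling scheme is attractive.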

  6. Analytical Assessment of the Relationship between 100MWp Large-scale Grid-connected Photovoltaic Plant Performance and Meteorological Parameters

    Science.gov (United States)

    Sheng, Jie; Zhu, Qiaoming; Cao, Shijie; You, Yang

    2017-05-01

This paper studies the relationship between the photovoltaic power generation of a large-scale “fishing and PV complementary” grid-tied photovoltaic system and meteorological parameters, using multi-time-scale power data from the photovoltaic power station and meteorological data over the same period for a whole year. The results indicate that PV power generation correlates most significantly with global solar irradiation, followed by diurnal temperature range, sunshine hours, daily maximum temperature and daily average temperature. Across the months, the maximum monthly average power generation appears in August, which is related to the greater global solar irradiation and longer sunshine hours in this month. However, the maximum daily average power generation appears in October; this is because the drop in temperature improves the efficiency of the PV panels. A comparison of monthly average performance ratio (PR) and monthly average temperature shows that the larger values of monthly average PR appear in April and October, while PR is smaller in summer with higher temperature. The results show that temperature has a great influence on the performance ratio of a large-scale grid-tied PV power system, and it is important to adopt effective measures to properly decrease the temperature of the PV plant.
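
The performance ratio used here is the standard PV metric: final yield (energy per installed kWp) divided by reference yield (in-plane insolation over STC irradiance). A worked example with illustrative monthly numbers:

    def performance_ratio(e_ac_kwh, p_rated_kwp, h_poa_kwh_per_m2,
                          g_stc_kw_per_m2=1.0):
        y_final = e_ac_kwh / p_rated_kwp              # kWh produced per kWp
        y_reference = h_poa_kwh_per_m2 / g_stc_kw_per_m2
        return y_final / y_reference

    # e.g. one month of a 100 MWp plant: 13.2 GWh with 165 kWh/m^2 insolation
    print(performance_ratio(13.2e6, 100e3, 165))      # -> PR = 0.8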

  7. A large-scale mass casualty simulation to develop the non-technical skills medical students require for collaborative teamwork.

    Science.gov (United States)

    Jorm, Christine; Roberts, Chris; Lim, Renee; Roper, Josephine; Skinner, Clare; Robertson, Jeremy; Gentilcore, Stacey; Osomanski, Adam

    2016-03-08

There is little research on large-scale complex health care simulations designed to facilitate student learning of non-technical skills in a team-working environment. We evaluated the acceptability and effectiveness of a novel natural disaster simulation that enabled medical students to demonstrate their achievement of the non-technical skills of collaboration, negotiation and communication. In a mixed methods approach, survey data were available from 117 students and a thematic analysis was undertaken of both student qualitative comments and tutor observer participation data. Ninety-three per cent of students found the activity engaging for their learning. Three themes emerged from the qualitative data: the impact of fidelity on student learning, reflexivity on the importance of non-technical skills in clinical care, and opportunities for collaborative teamwork. Physical fidelity was sufficient for good levels of student engagement, as was sociological fidelity. We demonstrated the effectiveness of the simulation in allowing students to reflect upon and evidence their acquisition of skills in collaboration, negotiation and communication, as well as situational awareness and attending to their emotions. Students readily identified emerging learning opportunities through critical reflection. The scenarios challenged students to work together collaboratively to solve clinical problems, using a range of resources including interacting with clinical experts. A large class teaching activity, framed as a simulation of a natural disaster, is an acceptable and effective activity for medical students to develop the non-technical skills of collaboration, negotiation and communication, which are essential to team working. The design could be of value in medical schools in disaster prone areas, including within low resource countries, and as a feasible intervention for learning the non-technical skills that are needed for patient safety.

  8. Proceedings of joint meeting of the 6th simulation science symposium and the NIFS collaboration research 'large scale computer simulation'

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-03-01

The joint meeting of the 6th Simulation Science Symposium and the NIFS Collaboration Research 'Large Scale Computer Simulation' was held on December 12-13, 2002 at the National Institute for Fusion Science, with the aim of promoting interdisciplinary collaborations in various fields of computer simulations. The meeting, attended by more than 40 people, consisted of 11 invited and 22 contributed papers, whose topics extended not only to fusion science but also to related fields such as astrophysics, earth science, fluid dynamics, molecular dynamics, and computer science. (author)

  9. Evaluating the potential of large-scale simulations to predict carbon fluxes of terrestrial ecosystems over a European Eddy Covariance network

    International Nuclear Information System (INIS)

    Balzarolo, M.; Boussetta, S.; Balsamo, G.; Beljaars, A.; Maignan, F.; Chevallier, F.; Poulter, B.

    2014-01-01

This paper reports a comparison of large scale simulations from three different land surface models (LSMs), ORCHIDEE, ISBA-A-gs and CTESSEL, forced with the same meteorological data and compared with the carbon fluxes measured at 32 eddy covariance (EC) flux tower sites in Europe. The results show that the three simulations perform best for forest sites and poorest for cropland and grassland sites. In addition, the three simulations have difficulties capturing the seasonality of Mediterranean and sub-tropical biomes, characterized by dry summers. This reduced simulation performance is also reflected in deficiencies in the diagnosed light-use efficiency (LUE) and vapour pressure deficit (VPD) dependencies compared to observations. Shortcomings in the forcing data may also play a role. These results indicate that more research is needed on the LUE and VPD functions for Mediterranean and sub-tropical biomes. Finally, this study highlights the importance of correctly representing phenology (i.e. leaf area evolution) and management (i.e. rotation-irrigation for cropland, and grazing-harvesting for grassland) to simulate the carbon dynamics of European ecosystems, and the importance of ecosystem-level observations in model development and validation. (authors)
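
Site-level evaluations of this kind typically reduce to a few summary statistics per site; a minimal sketch with synthetic flux values standing in for the model output and EC observations:

    import numpy as np

    obs = np.array([1.2, 0.8, -0.5, 2.3, 1.9])    # observed flux, gC m-2 d-1
    sim = np.array([1.0, 1.1, -0.2, 2.0, 2.4])    # model output, same site/days

    bias = np.mean(sim - obs)                     # systematic offset
    rmse = np.sqrt(np.mean((sim - obs) ** 2))     # overall error magnitude
    r = np.corrcoef(sim, obs)[0, 1]               # captures seasonality/timing
    print(f"bias={bias:.2f} rmse={rmse:.2f} r={r:.2f}")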

  10. Simulation of the large-scale offshore-wind farms including HVDC-grid connections using the simulation tool VIAvento

    Energy Technology Data Exchange (ETDEWEB)

    Bartelt, R.; Heising, C.; Ni, B. [Avasition GmbH, Dortmund (Germany); Zadeh, M. Koochack; Lebioda, T.J.; Jung, J. [TenneT Offshore GmbH, Bayreuth (Germany)

    2012-07-01

Within the framework of a research project, the stability of the offshore grid, especially in terms of sub-harmonic stability, will be investigated for the likely future extension stage of offshore grids, i.e. the parallel connection of two or more HVDC links, and for certain operating scenarios, e.g. an overload scenario. For this purpose, a comprehensive scenario-based assessment in the time domain is unavoidable. Within this paper, the simulation tool VIAvento is briefly presented, which allows for these comprehensive time-domain simulations taking the special characteristics of power-electronic assets into account. The core maxims of VIAvento are presented. Afterwards, the capability of VIAvento is demonstrated with simulation results of two wind farms linked via a HVDC grid connection system (160 converters and two HVDC stations in modular multilevel converter topology). (orig.)

  11. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Science.gov (United States)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation output that poses significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large scale global experiments for the Coupled Model Intercomparison Project (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate models intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the

  12. Mediterranean Thermohaline Response to Large-Scale Winter Atmospheric Forcing in a High-Resolution Ocean Model Simulation

    Science.gov (United States)

    Cusinato, Eleonora; Zanchettin, Davide; Sannino, Gianmaria; Rubino, Angelo

    2018-04-01

    Large-scale circulation anomalies over the North Atlantic and Euro-Mediterranean regions described by dominant climate modes, such as the North Atlantic Oscillation (NAO), the East Atlantic pattern (EA), the East Atlantic/Western Russian (EAWR) and the Mediterranean Oscillation Index (MOI), significantly affect interannual-to-decadal climatic and hydroclimatic variability in the Euro-Mediterranean region. However, whereas previous studies assessed the impact of such climate modes on air-sea heat and freshwater fluxes in the Mediterranean Sea, the propagation of these atmospheric forcing signals from the surface toward the interior and the abyss of the Mediterranean Sea remains unexplored. Here, we use a high-resolution ocean model simulation covering the 1979-2013 period to investigate spatial patterns and time scales of the Mediterranean thermohaline response to winter forcing from NAO, EA, EAWR and MOI. We find that these modes significantly imprint on the thermohaline properties in key areas of the Mediterranean Sea through a variety of mechanisms. Typically, density anomalies induced by all modes remain confined in the upper 600 m depth and remain significant for up to 18-24 months. One of the clearest propagation signals refers to the EA in the Adriatic and northern Ionian seas: There, negative EA anomalies are associated to an extensive positive density response, with anomalies that sink to the bottom of the South Adriatic Pit within a 2-year time. Other strong responses are the thermally driven responses to the EA in the Gulf of Lions and to the EAWR in the Aegean Sea. MOI and EAWR forcing of thermohaline properties in the Eastern Mediterranean sub-basins seems to be determined by reinforcement processes linked to the persistency of these modes in multiannual anomalous states. Our study also suggests that NAO, EA, EAWR and MOI could critically interfere with internal, deep and abyssal ocean dynamics and variability in the Mediterranean Sea.

  13. Temporal sequencing of throughfall drop generation as revealed by use of a large-scale rainfall simulator

    Science.gov (United States)

    Nanko, K.; Levia, D. F., Jr.; Iida, S.; SUN, X.; Shinohara, Y.; Sakai, N.

    2017-12-01

Scientists have been interested in throughfall drop size and its distribution because of its importance to soil erosion and the forest water balance. An indoor experiment was employed to deepen our understanding of throughfall drop generation processes and promote better management of forested ecosystems. The indoor experiment provides a unique opportunity to examine an array of constant rainfall intensities, ideal conditions for isolating the effect of changing intensities that are not found in the field. Throughfall drop generation was examined for three species, Cryptomeria japonica D. Don (Japanese cedar), Chamaecyparis obtusa (Siebold & Zucc.) Endl. (Japanese cypress), and Zelkova serrata Thunb. (Japanese zelkova), under both leafed and leafless conditions in the large-scale rainfall simulator of the National Research Institute for Earth Science and Disaster Resilience (Tsukuba, Japan) at rainfall intensities ranging from 15 to 100 mm h-1. Drop size distributions of the applied rainfall and throughfall were measured simultaneously by 20 laser disdrometers. Utilizing the drop size dataset, throughfall was separated into three components: free throughfall, canopy drip, and splash throughfall. The temporal sequencing of the throughfall components was analyzed at a 1-min interval during each experimental run. The throughfall component percentage and drop size of canopy drip differed among tree species and rainfall intensities and by elapsed time from the beginning of the rainfall event. Preliminary analysis revealed that the time differences to produce branch drip as compared to leaf (or needle) drip were partly due to differential canopy wet-up processes and the disappearance of branch drips due to canopy saturation, leading to dissimilar throughfall drop size distributions beneath the various tree species examined. This research was supported by JSPS Invitation Fellowship for Research in Japan (Grant No.: S16088) and JSPS KAKENHI (Grant No.: JP15H05626).

  14. Infrastructure for large-scale quality-improvement projects: early lessons from North Carolina Improving Performance in Practice.

    Science.gov (United States)

    Newton, Warren P; Lefebvre, Ann; Donahue, Katrina E; Bacon, Thomas; Dobson, Allen

    2010-01-01

    Little is known regarding how to accomplish large-scale health care improvement. Our goal is to improve the quality of chronic disease care in all primary care practices throughout North Carolina. Methods for improvement include (1) common quality measures and shared data system; (2) rapid cycle improvement principles; (3) quality-improvement consultants (QICs), or practice facilitators; (4) learning networks; and (5) alignment of incentives. We emphasized a community-based strategy and developing a statewide infrastructure. Results are reported from the first 2 years of the North Carolina Improving Performance in Practice (IPIP) project. A coalition was formed to include professional societies, North Carolina AHEC, Community Care of North Carolina, insurers, and other organizations. Wave One started with 18 practices in 2 of 9 regions of the state. Quality-improvement consultants recruited practices. Over 80 percent of practices attended all quarterly regional meetings. In 9 months, almost all diabetes measures improved, and a bundled asthma measure improved from 33 to 58 percent. Overall, the magnitude of improvement was clinically and statistically significant (P = .001). Quality improvements were maintained on review 1 year later. Wave Two has spread to 103 practices in all 9 regions of the state, with 42 additional practices beginning the enrollment process. Large-scale health care quality improvement is feasible, when broadly supported by statewide leadership and community infrastructure. Practice-collected data and lack of a control group are limitations of the study design. Future priorities include maintaining improved sustainability for practices and communities. Our long-term goal is to transform all 2000 primary-care practices in our state.

  15. Comparative study of large scale simulation of underground explosions in alluvium and in fractured granite using stochastic characterization

    Science.gov (United States)

    Vorobiev, O.; Ezzedine, S. M.; Antoun, T.; Glenn, L.

    2014-12-01

This work describes a methodology used for large scale modeling of wave propagation from underground explosions conducted at the Nevada Test Site (NTS) in two different geological settings: fractured granitic rock mass and alluvium deposition. We show that the discrete nature of rock masses as well as the spatial variability of the fabric of alluvium is very important to understand ground motions induced by underground explosions. In order to build a credible conceptual model of the subsurface we integrated the geological, geomechanical and geophysical characterizations conducted during recent tests at the NTS as well as historical data from the characterization during the underground nuclear tests conducted at the NTS. Because detailed site characterization is limited, expensive and, in some instances, impossible, we have numerically investigated the effects of the characterization gaps on the overall response of the system. We performed several computational studies to identify the key important geologic features specific to fractured media, mainly the joints, and those specific to alluvium porous media, mainly the spatial variability of geological alluvium facies characterized by their variances and their integral scales. We have also explored common key features of both geological environments, such as saturation and topography, and assessed which characteristics affect the ground motion the most in the near-field and in the far-field. Stochastic representations of these features based on the field characterizations have been implemented in the Geodyn and GeodynL hydrocodes. Both codes were used to guide site characterization efforts in order to provide the essential data to the modeling community. We validate our computational results by comparing the measured and computed ground motion at various ranges. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  16. Performing a Large-Scale Modal Test on the B2 Stand Crane at NASA's Stennis Space Center

    Science.gov (United States)

    Stasiunas, Eric C.; Parks, Russel A.; Sontag, Brendan D.

    2018-01-01

    A modal test of NASA's Space Launch System (SLS) Core Stage is scheduled to occur at the Stennis Space Center B2 test stand. A derrick crane with a 150-ft long boom, located at the top of the stand, will be used to suspend the Core Stage in order to achieve defined boundary conditions. During this suspended modal test, it is expected that dynamic coupling will occur between the crane and the Core Stage. Therefore, a separate modal test was performed on the B2 crane itself, in order to evaluate the varying dynamic characteristics and correlate math models of the crane. Performing a modal test on such a massive structure was challenging and required creative test setup and procedures, including implementing both AC and DC accelerometers, and performing both classical hammer and operational modal analysis. This paper describes the logistics required to perform this large-scale test, as well as details of the test setup, the modal test methods used, and an overview and application of the results.
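
For the classical hammer portion, the core processing step is an H1 frequency response function estimate, S_xy / S_xx, whose peaks locate natural frequencies; the sketch below applies that standard estimator to a synthetic single-mode signal rather than the actual crane data.

    import numpy as np
    from scipy.signal import csd, welch

    fs = 256.0
    t = np.arange(0, 60, 1 / fs)
    x = np.random.default_rng(2).normal(size=t.size)           # input force
    # Fake structure: decaying 3 Hz mode convolved with the input.
    y = np.convolve(x, np.exp(-t[:512]) * np.sin(2 * np.pi * 3 * t[:512]),
                    mode="same")

    f, s_xy = csd(x, y, fs=fs, nperseg=1024)    # cross-spectral density
    _, s_xx = welch(x, fs=fs, nperseg=1024)     # input auto-spectral density
    h1 = np.abs(s_xy) / s_xx                    # H1 FRF magnitude estimate
    print("peak near", f[np.argmax(h1)], "Hz")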

  17. Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays

    National Research Council Canada - National Science Library

    Yang, Kyoung

    2005-01-01

    This final report summarizes the progress during the Phase I SBIR project entitled "Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays...

  18. Performance of lap splices in large-scale column specimens affected by ASR and/or DEF-extension phase.

    Science.gov (United States)

    2015-03-01

A large experimental program, consisting of the design, construction, curing, exposure, and structural load testing of 16 large-scale column specimens with a critical lap splice region that were influenced by varying stages of alkali-silica react...

  19. Large-scale coherent structures of suspended dust concentration in the neutral atmospheric surface layer: A large-eddy simulation study

    Science.gov (United States)

    Zhang, Yangyue; Hu, Ruifeng; Zheng, Xiaojing

    2018-04-01

    Dust particles can remain suspended in the atmospheric boundary layer, motions of which are primarily determined by turbulent diffusion and gravitational settling. Little is known about the spatial organizations of suspended dust concentration and how turbulent coherent motions contribute to the vertical transport of dust particles. Numerous studies in recent years have revealed that large- and very-large-scale motions in the logarithmic region of laboratory-scale turbulent boundary layers also exist in the high Reynolds number atmospheric boundary layer, but their influence on dust transport is still unclear. In this study, numerical simulations of dust transport in a neutral atmospheric boundary layer based on an Eulerian modeling approach and large-eddy simulation technique are performed to investigate the coherent structures of dust concentration. The instantaneous fields confirm the existence of very long meandering streaks of dust concentration, with alternating high- and low-concentration regions. A strong negative correlation between the streamwise velocity and concentration and a mild positive correlation between the vertical velocity and concentration are observed. The spatial length scales and inclination angles of concentration structures are determined, compared with their flow counterparts. The conditionally averaged fields vividly depict that high- and low-concentration events are accompanied by a pair of counter-rotating quasi-streamwise vortices, with a downwash inside the low-concentration region and an upwash inside the high-concentration region. Through the quadrant analysis, it is indicated that the vertical dust transport is closely related to the large-scale roll modes, and ejections in high-concentration regions are the major mechanisms for the upward motions of dust particles.
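
A minimal sketch of the quadrant analysis mentioned above, on synthetic stand-in fields: samples are binned by the signs of the streamwise and vertical velocity fluctuations, and the vertical turbulent dust flux w'c' is accumulated per quadrant to separate ejection and sweep contributions.

    import numpy as np

    rng = np.random.default_rng(3)
    u = rng.normal(size=10000)
    w = rng.normal(size=10000)
    c = -0.5 * u + 0.3 * w + rng.normal(size=10000)   # mimic reported correlations

    up, wp, cp = u - u.mean(), w - w.mean(), c - c.mean()
    quadrants = {"Q1 (u'>0, w'>0)": (up > 0) & (wp > 0),
                 "Q2 ejection (u'<0, w'>0)": (up < 0) & (wp > 0),
                 "Q3 (u'<0, w'<0)": (up < 0) & (wp < 0),
                 "Q4 sweep (u'>0, w'<0)": (up > 0) & (wp < 0)}
    for name, mask in quadrants.items():
        print(name, np.mean(wp[mask] * cp[mask]))     # per-quadrant w'c' flux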

  20. Hierarchical ZnO microspheres built by sheet-like network: Large-scale synthesis and structurally enhanced catalytic performances

    International Nuclear Information System (INIS)

    Zhu Guoxing; Liu Yuanjun; Ji Zhenyuan; Bai Song; Shen Xiaoping; Xu Zheng

    2012-01-01

    Highlights: ► Hierarchical ZnO microspheres were prepared through a facile precursor procedure in the absence of self-assembled templates, organic additives, or matrices. ► The building blocks of microspheres, sheet-like ZnO networks, are porous mesocrystal terminated with (0 1 −1 0) crystal planes. ► The hierarchical ZnO microsphere catalyst exhibits structure-induced enhancement of catalytic performance and a strong durability. - Abstract: Large-scale novel hierarchical ZnO microspheres were fabricated by a facile precursor procedure in the absence of self-assembled templates, organic additives, or matrices. A field emission scanning electron microscopy (FESEM) image reveals that the ZnO microspheres with diameter of 5–18 μm are built by sheet-like ZnO networks with average thickness of 40 nm and length of several microns. High resolution transmission electron microscopy (HRTEM) image indicates that the building blocks, sheet-like ZnO networks, are porous mesocrystal terminated with {0 1 −1 0} crystal planes. A potential application of the ZnO microspheres as a catalyst in the synthesis of 5-substituted 1H-tetrazoles was investigated. It was found that the hierarchical ZnO microsphere catalyst exhibits structure-induced enhancement of catalytic performance and a strong durability.

  1. Large-scale numerical simulations on two-phase flow behavior in a fuel bundle of RMWR with the earth simulator

    International Nuclear Information System (INIS)

    Kazuyuki, Takase; Hiroyuki, Yoshida; Hidesada, Tamai; Hajime, Akimoto; Yasuo, Ose

    2003-01-01

Fluid flow characteristics in a fuel bundle of a reduced-moderation light water reactor (RMWR) with a tight-lattice core were analyzed numerically using a newly developed two-phase flow analysis code under the full bundle size condition. Conventional analysis methods such as sub-channel codes need constitutive equations based on experimental data. Since there are no experimental data on the thermal-hydraulics of the tight-lattice core, however, it is difficult to obtain high prediction accuracy in the thermal design of the RMWR. Direct numerical simulations with the Earth Simulator were therefore chosen. The axial velocity distribution in a fuel bundle changed sharply around a grid spacer, and its quantitative evaluation was obtained from the present preliminary numerical study. These results give good prospects for establishing the thermal design procedure of the RMWR through large-scale direct simulations. (authors)

  2. Water uptake by and movement through a Backfilled KBS-3V deposition tunnel: results of large-scale simulations

    International Nuclear Information System (INIS)

    Dixon, D.A.; Ramqvist, G.; Jonsson, E.; Gunnarsson, D.; Hansen, J.

    2010-01-01

    Document available in extended abstract form only. Posiva and SKB initiated a joint programme BACLO (Backfilling and Closure of the Deep repository) in 2003 with the aim to develop methods and clay-based materials for backfilling the deposition tunnels of a repository utilizing the KBS-3V deposition concept. This paper summarises the results obtained in intermediate and large-scale simulations to evaluate water movement into and through backfill consisting of bentonite pellets and pre-compacted clay blocks. The main objectives of Baclo Phase III were related to examining backfill materials, deposition concepts and their importance to the clay-block and pellet backfilling concept. Bench-scale studies produced a large body of information on how various processes (e.g. water inflow, piping, erosion, self-healing, homogenisation and interaction between backfill and buffer), might affect the hydro-mechanical evolution of backfill components. The tests described in this paper examined the movement of water into and through assemblies of clay blocks and bentonite pellets/granules and represent a substantial up-scaling and inclusion of parameters that more closely simulate a field situation. In total, 27 intermediate-scale tests have been completed and 18 large-scale tests (∼ 1/2-tunnel cross-section) will be completed at SKB's Aespoe HRL by mid 2010. At intermediate-scale, point inflow rates ranging from 0.01 to 1.0 l/min were applied to block - dry pellet assemblies and water movement into and through the system was monitored. Tests determined that it is critical to provide clay blocks with lateral support and confinement as quickly as possible following block installation. Exposure of the blocks to even low rates of water ingress can result in rapid loss of block cohesion and subsequent slumping of the block materials into the spaces between the blocks and the tunnel walls. Installation of granular or pelletized bentonite clay between the blocks and the walls

  3. Fluid-structure interaction simulation of floating structures interacting with complex, large-scale ocean waves and atmospheric turbulence with application to floating offshore wind turbines

    Science.gov (United States)

    Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis

    2018-02-01

    We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled with the near-field FSI solver with a one-way coupling approach by feeding into the latter waves via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.

  4. Low-Temperature Soft-Cover Deposition of Uniform Large-Scale Perovskite Films for High-Performance Solar Cells.

    Science.gov (United States)

    Ye, Fei; Tang, Wentao; Xie, Fengxian; Yin, Maoshu; He, Jinjin; Wang, Yanbo; Chen, Han; Qiang, Yinghuai; Yang, Xudong; Han, Liyuan

    2017-09-01

    Large-scale high-quality perovskite thin films are crucial to produce high-performance perovskite solar cells. However, for perovskite films fabricated by solvent-rich processes, film uniformity can be degraded by convection during thermal evaporation of the solvent. Here, a scalable low-temperature soft-cover deposition (LT-SCD) method is presented, in which the thermal-convection-induced defects in perovskite films are eliminated through a strategy of surface tension relaxation. Compact, homogeneous perovskite films free of convection-induced defects are obtained on an area of 12 cm², which enables a power conversion efficiency (PCE) of 15.5% in a solar cell with an area of 5 cm². This is the highest efficiency reported at this large cell area. A PCE of 15.3% is also obtained for a flexible perovskite solar cell deposited on a polyethylene terephthalate substrate, owing to the advantage of the presented low-temperature processing. Hence, the present LT-SCD technology provides a new non-spin-coating route to the deposition of large-area uniform perovskite films for both rigid and flexible perovskite devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Advances in compact manufacturing for shape and performance controllability of large-scale components-a review

    Science.gov (United States)

    Qin, Fangcheng; Li, Yongtang; Qi, Huiping; Ju, Li

    2017-01-01

    Research on compact manufacturing technology for the shape and performance controllability of metallic components can simplify the manufacturing process and improve its reliability while satisfying macro/micro-structure requirements. It is not only a key path to improving performance, saving material and energy, and achieving green manufacturing of components used in major equipment, but also a challenging subject at the frontier of advanced plastic forming, and providing a novel horizon for the manufacturing of these critical components is significant. Focusing on high-performance large-scale components such as bearing rings, flanges, railway wheels and thick-walled pipes, the conventional processes and their state of development are summarized. The existing problems, including multi-pass heating, material and energy waste, high cost and high emissions, are discussed, and the inability of present studies to meet the demands of manufacturing high-quality components is also pointed out. Thus, new techniques related to casting-rolling compound precise forming of rings, compact manufacturing of duplex-metal composite rings, compact manufacturing of railway wheels, and casting-extruding continuous forming of thick-walled pipes are introduced in detail. The corresponding research contents, such as casting of ring blanks, hot ring rolling, near-solid-state pressure forming and hot extrusion, are elaborated. Some findings on through-thickness microstructure evolution and mechanical properties are also presented. The components produced by the new techniques are mainly characterized by fine and homogeneous grains. Moreover, possible directions for further development of these techniques are suggested. Finally, the key scientific problems are identified. All of these results and conclusions have reference value and guiding significance for the integrated control of shape and performance in advanced compact manufacturing.

  6. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are described from tests of the material resistance to non-ductile fracture. The testing covered both base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During the cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  7. Network Dynamics with BrainX3: A Large-Scale Simulation of the Human Brain Network with Real-Time Interaction

    OpenAIRE

    Xerxes D. Arsiwalla; Riccardo eZucca; Alberto eBetella; Enrique eMartinez; David eDalmazzo; Pedro eOmedas; Gustavo eDeco; Gustavo eDeco; Paul F.M.J. Verschure; Paul F.M.J. Verschure

    2015-01-01

    BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimula...

  8. Network dynamics with BrainX3: a large-scale simulation of the human brain network with real-time interaction

    OpenAIRE

    Arsiwalla, Xerxes D.; Zucca, Riccardo; Betella, Alberto; Martínez, Enrique, 1961-; Dalmazzo, David; Omedas, Pedro; Deco, Gustavo; Verschure, Paul F. M. J.

    2015-01-01

    BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimula...

  9. Large-scale atmospheric circulation biases and changes in global climate model simulations and their importance for climate change in Central Europe

    Directory of Open Access Journals (Sweden)

    A. P. van Ulden

    2006-01-01

    The quality of global sea level pressure patterns has been assessed for simulations by 23 coupled climate models. Most models showed high pattern correlations. With respect to the explained spatial variance, many models showed serious large-scale deficiencies, especially at mid-latitudes. Five models performed well at all latitudes and for each month of the year. Three models had a reasonable skill. We selected the five models with the best pressure patterns for a more detailed assessment of their simulations of the climate in Central Europe. We analysed observations and simulations of monthly mean geostrophic flow indices and of monthly mean temperature and precipitation. We used three geostrophic flow indices: the west component and south component of the geostrophic wind at the surface and the geostrophic vorticity. We found that circulation biases were important and affected precipitation in particular. Apart from these circulation biases, the models showed other biases in temperature and precipitation, which for some models were larger than the circulation-induced biases. For the 21st century the five models simulated quite different changes in circulation, precipitation and temperature. Precipitation changes appear to be primarily caused by circulation changes. Since the models show widely different circulation changes, especially in late summer, precipitation changes vary widely between the models as well. Some models simulate severe drying in late summer, while one model simulates significant precipitation increases in late summer. With respect to the mean temperature the circulation changes were important, but not dominant. However, changes in the distribution of monthly mean temperatures do show large indirect influences of circulation changes. Especially in late summer, two models simulate very strong warming of warm months, which can be attributed to severe summer drying in the simulations by these models. The models differ also

  10. Practical recipes for the model order reduction, dynamical simulation and compressive sampling of large-scale open quantum systems

    Energy Technology Data Exchange (ETDEWEB)

    Sidles, John A; Jacky, Jonathan P [Department of Orthopaedics and Sports Medicine, Box 356500, School of Medicine, University of Washington, Seattle, WA, 98195 (United States); Garbini, Joseph L; Malcomb, Joseph R; Williamson, Austin M [Department of Mechanical Engineering, University of Washington, Seattle, WA 98195 (United States); Harrell, Lee E [Department of Physics, US Military Academy, West Point, NY 10996 (United States); Hero, Alfred O [Department of Electrical Engineering, University of Michigan, MI 49931 (United States); Norman, Anthony G [Department of Bioengineering, University of Washington, Seattle, WA 98195 (United States)], E-mail: sidles@u.washington.edu

    2009-06-15

    Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kaehler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kaehlerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kaehler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candes-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.

  11. Practical recipes for the model order reduction, dynamical simulation and compressive sampling of large-scale open quantum systems

    International Nuclear Information System (INIS)

    Sidles, John A; Jacky, Jonathan P; Garbini, Joseph L; Malcomb, Joseph R; Williamson, Austin M; Harrell, Lee E; Hero, Alfred O; Norman, Anthony G

    2009-01-01

    Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kaehler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kaehlerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kaehler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candes-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.

  12. Practical recipes for the model order reduction, dynamical simulation and compressive sampling of large-scale open quantum systems

    Science.gov (United States)

    Sidles, John A.; Garbini, Joseph L.; Harrell, Lee E.; Hero, Alfred O.; Jacky, Jonathan P.; Malcomb, Joseph R.; Norman, Anthony G.; Williamson, Austin M.

    2009-06-01

    Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kähler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kählerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kähler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candès-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.
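
    As a loose classical analogue of the trajectory-projection stage described above (the Kählerian state-space projection of the paper is considerably more sophisticated), the Python sketch below builds a reduced basis from trajectory snapshots by truncated SVD and projects a state onto it; all sizes and data are synthetic:

        import numpy as np

        # Generic linear model-order-reduction sketch: collect snapshots,
        # extract a low-dimensional basis, project, and measure the error.
        rng = np.random.default_rng(0)
        n, m, r = 200, 50, 5              # full dim, snapshots, reduced dim

        modes = rng.standard_normal((n, r))
        snapshots = modes @ rng.standard_normal((r, m))   # low-rank data
        snapshots += 1e-3 * rng.standard_normal((n, m))   # small noise

        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        basis = U[:, :r]                  # reduced state-space basis

        state = snapshots[:, -1]
        reduced = basis.T @ state         # coordinates in the reduced space
        reconstructed = basis @ reduced
        err = np.linalg.norm(state - reconstructed) / np.linalg.norm(state)
        print(f"relative projection error: {err:.2e}")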

  13. Uncertainties of Large-Scale Forcing Caused by Surface Turbulence Flux Measurements and the Impacts on Cloud Simulations at the ARM SGP Site

    Science.gov (United States)

    Tang, S.; Xie, S.; Tang, Q.; Zhang, Y.

    2017-12-01

    Two types of instruments, the eddy correlation flux measurement system (ECOR) and the energy balance Bowen ratio system (EBBR), are used at the Atmospheric Radiation Measurement (ARM) program Southern Great Plains (SGP) site to measure surface latent and sensible heat fluxes. ECOR and EBBR typically sample different land surface types, and the domain-mean surface fluxes derived from ECOR and EBBR are not always consistent. The uncertainties in the surface fluxes affect the derived large-scale forcing data and in turn the simulations of single-column models (SCMs), cloud-resolving models (CRMs) and large-eddy simulation (LES) models, especially for shallow cumulus clouds, which are mainly driven by surface forcing. This study aims to quantify the uncertainties in the large-scale forcing caused by surface turbulence flux measurements and to investigate the impacts on cloud simulations using long-term observations from the ARM SGP site.

  14. Infrastructure for Large-Scale Quality-Improvement Projects: Early Lessons from North Carolina Improving Performance in Practice

    Science.gov (United States)

    Newton, Warren P.; Lefebvre, Ann; Donahue, Katrina E.; Bacon, Thomas; Dobson, Allen

    2010-01-01

    Introduction: Little is known regarding how to accomplish large-scale health care improvement. Our goal is to improve the quality of chronic disease care in all primary care practices throughout North Carolina. Methods: Methods for improvement include (1) common quality measures and shared data system; (2) rapid cycle improvement principles; (3)…

  15. A parallel electrostatic Particle-in-Cell method on unstructured tetrahedral grids for large-scale bounded collisionless plasma simulations

    Science.gov (United States)

    Averkin, Sergey N.; Gatsonis, Nikolaos A.

    2018-06-01

    An unstructured electrostatic Particle-In-Cell (EUPIC) method is developed on arbitrary tetrahedral grids for simulation of plasmas bounded by arbitrary geometries. The electric potential in EUPIC is obtained on cell vertices from a finite volume Multi-Point Flux Approximation of Gauss' law using the indirect dual cell with Dirichlet, Neumann and external circuit boundary conditions. The resulting matrix equation for the nodal potential is solved with a restarted generalized minimal residual method (GMRES) and an ILU(0) preconditioner, parallelized using a combination of node coloring and level scheduling approaches. The electric field on vertices is obtained using the gradient theorem applied to the indirect dual cell. The algorithms for injection, particle loading, particle motion, and particle tracking are parallelized for unstructured tetrahedral grids. The algorithms for the potential solver, electric field evaluation, loading, and scatter-gather operations are verified using analytic solutions for test cases subject to Laplace and Poisson equations. Grid sensitivity analysis examines the L2 and L∞ norms of the relative error in potential, field, and charge density as a function of edge-averaged and volume-averaged cell size. The analysis shows second-order convergence for the potential and first-order convergence for the electric field and charge density. Temporal sensitivity analysis is performed and the momentum and energy conservation properties of the particle integrators in EUPIC are examined. The effects of cell size and timestep on the heating, slowing-down and deflection times are quantified. The heating, slowing-down and deflection times are found to be almost linearly dependent on the number of particles per cell. EUPIC simulations of current collection by cylindrical Langmuir probes in collisionless plasmas show good agreement with previous experimentally validated numerical results. These simulations were also used in a parallelization
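
    The linear-solve stage named above, restarted GMRES with an incomplete-LU preconditioner, can be illustrated with a short self-contained SciPy sketch. The 2-D finite-difference Poisson matrix stands in for the unstructured MPFA system, and SciPy's spilu plays the role of ILU(0); this is an analogue for illustration, not the EUPIC code:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Model problem: 2-D Laplacian on an n x n grid.
        n = 50
        I = sp.identity(n)
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
        A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
        b = np.ones(A.shape[0])

        # Zero-fill incomplete LU factorization as the preconditioner.
        ilu = spla.spilu(A, drop_tol=0.0, fill_factor=1)
        M = spla.LinearOperator(A.shape, matvec=ilu.solve)

        # Restarted GMRES with the ILU preconditioner.
        phi, info = spla.gmres(A, b, M=M, restart=30)
        print("converged" if info == 0 else f"gmres info = {info}")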

  16. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications with increasing numbers of OpenMP threads per node, and find that increasing the number of threads beyond some point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node, and as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (Instructions Per Cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar with increasing numbers of threads per node no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  17. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications with increasing numbers of OpenMP threads per node, and find that increasing the number of threads beyond some point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node, and as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (Instructions Per Cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar with increasing numbers of threads per node no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  18. Performance analysis of a large-scale helium Brayton cryo-refrigerator with static gas bearing turboexpander

    International Nuclear Information System (INIS)

    Zhang, Yu; Li, Qiang; Wu, Jihao; Li, Qing; Lu, Wenhai; Xiong, Lianyou; Liu, Liqiang; Xu, Xiangdong; Sun, Lijia; Sun, Yu; Xie, Xiujuan; Wang, Bingming; Qiu, Yinan; Zhang, Peng

    2015-01-01

    Highlights: • A 2 kW at 20.0 K helium Brayton cryo-refrigerator is built in China. • A series of tests has been systematically conducted to investigate the performance of the cryo-refrigerator. • The maximum heat conductance proportion (90.7%) appears in the heat exchangers of the cold box rather than those of the heat reservoirs. • A model of the helium Brayton cryo-refrigerator/cycle is presented according to finite-time thermodynamics. - Abstract: Large-scale helium cryo-refrigerators are widely used in superconducting systems, nuclear fusion engineering, scientific research, etc.; however, their energy efficiency is quite low. First, a 2 kW at 20.0 K helium Brayton cryo-refrigerator is built, and a series of tests has been systematically conducted to investigate its performance. It is found that the maximum heat conductance proportion (90.7%) appears in the heat exchangers of the cold box rather than those of the heat reservoirs, which is the main characteristic distinguishing the helium Brayton cryo-refrigerator/cycle from the air Brayton refrigerator/cycle. Three other characteristics lie in the configuration of the refrigerant helium bypass, the internal purifier, and the non-linearity of the specific heat of helium. Second, a model of the helium Brayton cryo-refrigerator/cycle is presented according to finite-time thermodynamics. An assumption named the internal purification temperature depth (PTD) is introduced, and the heat capacity rate of the whole cycle is divided into three different regions in accordance with the PTD: the room temperature region, the upper internal purification temperature region and the lower one. Analytical expressions for the cooling capacity and COP are obtained, and we find that the expressions are piecewise functions. Further, comparison between the model and the experimental results for the cooling capacity of the helium cryo-refrigerator shows that the error is less than 7.6%. The PTD not only helps to achieve the analytical formulae but also indicates the working

  19. Multilevel parallel strategy on Monte Carlo particle transport for the large-scale full-core pin-by-pin simulations

    International Nuclear Information System (INIS)

    Zhang, B.; Li, G.; Wang, W.; Shangguan, D.; Deng, L.

    2015-01-01

    This paper introduces the strategy of multilevel hybrid parallelism of the JCOGIN infrastructure for Monte Carlo particle transport in large-scale full-core pin-by-pin simulations. Particle parallelism, domain-decomposition parallelism and MPI/OpenMP parallelism are designed and implemented. In testing, JMCT demonstrates the parallel scalability of JCOGIN, reaching a parallel efficiency of 80% on 120,000 cores for the pin-by-pin computation of the BEAVRS benchmark. (author)

  20. Water surface assisted synthesis of large-scale carbon nanotube film for high-performance and stretchable supercapacitors.

    Science.gov (United States)

    Yu, Minghao; Zhang, Yangfan; Zeng, Yinxiang; Balogun, Muhammad-Sadeeq; Mai, Kancheng; Zhang, Zishou; Lu, Xihong; Tong, Yexiang

    2014-07-16

    A multiwalled carbon-nanotube (MWCNT)/polydimethylsiloxane (PDMS) film with excellent conductivity and mechanical properties is developed using a facile and large-scale water-surface-assisted synthesis method. The film can act as a conductive support for electrochemically active PANI nanofibers. A device based on these PANI/MWCNT/PDMS electrodes shows good and stable capacitive behavior, even under static and dynamic stretching conditions. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Sensitivity of local air quality to the interplay between small- and large-scale circulations: a large-eddy simulation study

    Directory of Open Access Journals (Sweden)

    T. Wolf-Grosse

    2017-06-01

    Street-level urban air pollution is a challenging concern for modern urban societies. Pollution dispersion models assume that concentrations decrease monotonically with rising wind speed. This convenient assumption breaks down when applied to flows with local recirculations such as those found in topographically complex coastal areas. This study looks at a practically important and sufficiently common case of air pollution in a coastal valley city. Here, the observed concentrations are determined by the interaction between large-scale topographically forced and local-scale breeze-like recirculations. Analysis of a long observational dataset in Bergen, Norway, revealed that the most extreme cases of recurring wintertime air pollution episodes were accompanied by increased large-scale wind speeds above the valley. Contrary to the theoretical assumption and intuitive expectations, the maximum NO2 concentrations were not found for the lowest 10 m ERA-Interim wind speeds but in situations with wind speeds of 3 m s-1. To explain this phenomenon, we investigated empirical relationships between the large-scale forcing and the local wind and air quality parameters. We conducted 16 large-eddy simulation (LES) experiments with the Parallelised Large-Eddy Simulation Model (PALM) for atmospheric and oceanic flows. The LES accounted for the realistic relief and coastal configuration as well as for the large-scale forcing and local surface condition heterogeneity in Bergen. They revealed that emerging local breeze-like circulations strongly enhance the urban ventilation and dispersion of the air pollutants in situations with weak large-scale winds. Slightly stronger large-scale winds, however, can counteract these local recirculations, leading to enhanced surface air stagnation. Furthermore, this study looks at the concrete impact of the relative configuration of warmer water bodies in the city and the major transport corridor. We found that a

  2. Sensitivity of local air quality to the interplay between small- and large-scale circulations: a large-eddy simulation study

    Science.gov (United States)

    Wolf-Grosse, Tobias; Esau, Igor; Reuder, Joachim

    2017-06-01

    Street-level urban air pollution is a challenging concern for modern urban societies. Pollution dispersion models assume that concentrations decrease monotonically with rising wind speed. This convenient assumption breaks down when applied to flows with local recirculations such as those found in topographically complex coastal areas. This study looks at a practically important and sufficiently common case of air pollution in a coastal valley city. Here, the observed concentrations are determined by the interaction between large-scale topographically forced and local-scale breeze-like recirculations. Analysis of a long observational dataset in Bergen, Norway, revealed that the most extreme cases of recurring wintertime air pollution episodes were accompanied by increased large-scale wind speeds above the valley. Contrary to the theoretical assumption and intuitive expectations, the maximum NO2 concentrations were not found for the lowest 10 m ERA-Interim wind speeds but in situations with wind speeds of 3 m s-1. To explain this phenomenon, we investigated empirical relationships between the large-scale forcing and the local wind and air quality parameters. We conducted 16 large-eddy simulation (LES) experiments with the Parallelised Large-Eddy Simulation Model (PALM) for atmospheric and oceanic flows. The LES accounted for the realistic relief and coastal configuration as well as for the large-scale forcing and local surface condition heterogeneity in Bergen. They revealed that emerging local breeze-like circulations strongly enhance the urban ventilation and dispersion of the air pollutants in situations with weak large-scale winds. Slightly stronger large-scale winds, however, can counteract these local recirculations, leading to enhanced surface air stagnation. Furthermore, this study looks at the concrete impact of the relative configuration of warmer water bodies in the city and the major transport corridor. We found that a relatively small local water

  3. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations.

    Science.gov (United States)

    Tučník, Petr; Bureš, Vladimír

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10,000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the -server parameter deactivated/activated, altogether 12,800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used that allows mutual comparison of all of the selected decision-making methods. Our test results suggest that although all of the methods are usable in practice, the VIKOR method completed the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
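
    For a flavor of what one of these methods looks like computationally, here is a minimal Python sketch of TOPSIS; the decision matrix, weights and criterion directions are made-up illustration data, not the study's inputs:

        import numpy as np

        # Rows are alternatives, columns are criteria
        # (e.g. price, capacity, speed, quality -- all assumed).
        X = np.array([[250., 16., 12., 5.],
                      [200., 16.,  8., 3.],
                      [300., 32., 16., 4.],
                      [275., 32.,  8., 4.]])
        w = np.array([0.25, 0.25, 0.25, 0.25])          # criteria weights
        benefit = np.array([False, True, True, True])   # False = cost

        V = w * X / np.linalg.norm(X, axis=0)   # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

        d_best = np.linalg.norm(V - ideal, axis=1)
        d_worst = np.linalg.norm(V - anti, axis=1)
        closeness = d_worst / (d_best + d_worst)
        print("ranking (best first):", np.argsort(-closeness))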

  4. Large scale simulation of liquid water transport in a gas diffusion layer of polymer electrolyte membrane fuel cells using the lattice Boltzmann method

    Science.gov (United States)

    Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi

    2017-09-01

    A method for large-scale simulation with the lattice Boltzmann method (LBM) is proposed for liquid water movement in a gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells. The LBM is able to analyze two-phase flows in complex structures; however, the simulation domain is limited due to heavy computational loads. This study investigates a variety of means to reduce computational loads and increase the simulation area. One is applying an LBM that treats the two phases as having the same density, together with keeping numerical stability at large time steps. The applicability of this approach is confirmed by comparing the results with rigorous simulations using the actual density. The second is establishing the maximum limit of the capillary number that maintains flow patterns similar to the precise simulation; this is attempted because the computational load is inversely proportional to the capillary number. The results show that the capillary number can be increased to 3.0 × 10⁻³, whereas actual operation corresponds to Ca = 10⁻⁵–10⁻⁸. The limit is also investigated experimentally using an enlarged-scale model satisfying similarity conditions for the flow. Finally, a demonstration is made of the effects of pore uniformity in the GDL as an example of a large-scale simulation covering a channel.
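
    For orientation, the capillary number in question is Ca = mu*u/sigma. The short Python check below uses generic property values for liquid water at about 80 °C; these values are assumptions for illustration, not the paper's inputs:

        # Gap between the operating range (Ca = 1e-5 to 1e-8) and the
        # relaxed simulation limit (Ca = 3e-3) reported above.
        mu = 3.5e-4      # dynamic viscosity of water [Pa s], assumed
        sigma = 0.063    # surface tension of water [N/m], assumed

        def capillary_number(u):
            """Ca = mu * u / sigma for a characteristic velocity u [m/s]."""
            return mu * u / sigma

        for u in (1e-4, 1e-2, 0.54):
            print(f"u = {u:8.2e} m/s  ->  Ca = {capillary_number(u):.1e}")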

  5. Large-scale hydrological simulations using the soil water assessment tool, protocol development, and application in the danube basin.

    Science.gov (United States)

    Pagliero, Liliana; Bouraoui, Fayçal; Willems, Patrick; Diels, Jan

    2014-01-01

    The Water Framework Directive of the European Union requires member states to achieve good ecological status of all water bodies. A harmonized pan-European assessment of water resources availability and quality, as affected by various management options, is necessary for a successful implementation of European environmental legislation. In this context, we developed a methodology to predict surface water flow at the pan-European scale using available datasets. Among the hydrological models available, the Soil Water Assessment Tool was selected because its characteristics make it suitable for large-scale applications with limited data requirements. This paper presents the results for the Danube pilot basin. The Danube Basin is one of the largest European watersheds, covering approximately 803,000 km² and portions of 14 countries. The modeling data used included land use and management information, a detailed soil parameter map, and high-resolution climate data. The Danube Basin was divided into 4663 subwatersheds of an average size of 179 km². A modeling protocol is proposed to cope with the problems of hydrological regionalization from gauged to ungauged watersheds and of overparameterization and identifiability, which are usually present during calibration. The protocol involves a cluster analysis for the determination of hydrological regions and multiobjective calibration using a combination of manual and automated calibration. The proposed protocol was successfully implemented, with the modeled discharges capturing well the overall hydrological behavior of the basin. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
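
    The regionalization step of such a protocol can be sketched in a few lines of Python; the catchment attributes, their ranges and the number of regions below are illustrative assumptions, not the paper's data:

        import numpy as np
        from scipy.cluster.vq import kmeans2

        # Cluster subwatersheds by catchment attributes so that calibrated
        # parameters can be transferred within each hydrological region.
        rng = np.random.default_rng(5)
        n_basins = 500
        attrs = np.column_stack([
            rng.uniform(200, 2500, n_basins),   # mean elevation [m]
            rng.uniform(400, 1600, n_basins),   # annual precipitation [mm]
            rng.uniform(0, 60, n_basins),       # forest cover [%]
        ])

        # Standardize so each attribute contributes comparably.
        z = (attrs - attrs.mean(0)) / attrs.std(0)
        centroids, label = kmeans2(z, 6, minit='++')
        print("basins per region:", np.bincount(label))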

  6. Metric matters : the performance and organisation of volumetric water control in large-scale irrigation in the North Coast of Peru

    NARCIS (Netherlands)

    Vos, J.M.C.

    2002-01-01

    This thesis describes the organisation and performance of two large-scale irrigation systems in the North Coast of Peru. Good water management is important in this area because water is scarce and irrigated agriculture provides a livelihood to many small and middle-sized farmers. Water in

  7. Physics-Based Preconditioning of a Compressible Flow Solver for Large-Scale Simulations of Additive Manufacturing Processes

    Science.gov (United States)

    Weston, Brian; Nourgaliev, Robert; Delplanque, Jean-Pierre

    2017-11-01

    We present a new block-based Schur complement preconditioner for simulating all-speed compressible flow with phase change. The conservation equations are discretized with a reconstructed Discontinuous Galerkin method and integrated in time with fully implicit time discretization schemes. The resulting set of non-linear equations is converged using a robust Newton-Krylov framework. Due to the stiffness of the underlying physics associated with stiff acoustic waves and viscous material strength effects, we solve for the primitive-variables (pressure, velocity, and temperature). To enable convergence of the highly ill-conditioned linearized systems, we develop a physics-based preconditioner, utilizing approximate block factorization techniques to reduce the fully-coupled 3×3 system to a pair of reduced 2×2 systems. We demonstrate that our preconditioned Newton-Krylov framework converges on very stiff multi-physics problems, corresponding to large CFL and Fourier numbers, with excellent algorithmic and parallel scalability. Results are shown for the classic lid-driven cavity flow problem as well as for 3D laser-induced phase change. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
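
    The Schur-complement idea behind such a preconditioner can be illustrated on a dense 2x2 block system in Python; the random SPD blocks are stand-ins, and the paper applies an approximate version of this factorization to its 3x3 primitive-variable system:

        import numpy as np

        # Solve [[A, B], [C, D]] [x, y] = [f, g] by eliminating the first
        # block, so that only smaller systems with A and S are solved.
        rng = np.random.default_rng(1)
        n1, n2 = 6, 4
        A = rng.standard_normal((n1, n1)); A = A @ A.T + n1 * np.eye(n1)
        B = rng.standard_normal((n1, n2))
        C = rng.standard_normal((n2, n1))
        D = rng.standard_normal((n2, n2)); D = D @ D.T + n2 * np.eye(n2)
        f, g = rng.standard_normal(n1), rng.standard_normal(n2)

        S = D - C @ np.linalg.solve(A, B)     # Schur complement of A
        y = np.linalg.solve(S, g - C @ np.linalg.solve(A, f))
        x = np.linalg.solve(A, f - B @ y)

        # Verify against the full block system.
        K = np.block([[A, B], [C, D]])
        print(np.allclose(K @ np.concatenate([x, y]),
                          np.concatenate([f, g])))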

  8. Practical recipes for the model order reduction, dynamical simulation, and compressive sampling of large-scale open quantum systems

    OpenAIRE

    Sidles, John A.; Garbini, Joseph L.; Harrell, Lee E.; Hero, Alfred O.; Jacky, Jonathan P.; Malcomb, Joseph R.; Norman, Anthony G.; Williamson, Austin M.

    2008-01-01

    This article presents numerical recipes for simulating high-temperature and non-equilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process...

  9. Results of PMIP2 coupled simulations of the Mid-Holocene and Last Glacial Maximum – Part 1: experiments and large-scale features

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2007-06-01

    A set of coupled ocean-atmosphere simulations using state-of-the-art climate models is now available for the Last Glacial Maximum and the Mid-Holocene through the second phase of the Paleoclimate Modeling Intercomparison Project (PMIP2). This study presents the large-scale features of the simulated climates and compares the new model results to those of the atmospheric models from the first phase of the PMIP, for which sea surface temperature was prescribed or computed using simple slab ocean formulations. We consider the large-scale features of the climate change, pointing out some of the major differences between the different sets of experiments. We show in particular that systematic differences between PMIP1 and PMIP2 simulations are due to the interactive ocean, such as the amplification of the African monsoon at the Mid-Holocene or the change in precipitation in mid-latitudes at the LGM. The PMIP2 simulations are also in general in better agreement with data than the PMIP1 simulations.

  10. Large scale experiments simulating hydrogen distribution in a spent fuel pool building during a hypothetical fuel uncovery accident scenario

    Energy Technology Data Exchange (ETDEWEB)

    Mignot, Guillaume; Paranjape, Sidharth; Paladino, Domenico; Jaeckel, Bernd; Rydl, Adolf [Paul Scherrer Institute, Villigen (Switzerland)

    2016-08-15

    Following the Fukushima accident and its extended station blackout, attention was brought to the importance of spent fuel pool (SFP) behavior in case of a prolonged loss of the cooling system. Since then, many analytical studies have been performed to estimate the timing of hypothetical fuel uncovery for various SFP types. Experimentally, however, little was done to investigate issues related to the formation of a flammable gas mixture, its distribution and stratification in the SFP building itself, and to assess the capability of codes to correctly predict it. This paper presents the main outcomes of the Experiments on Spent Fuel Pool (ESFP) project carried out under the auspices of Swissnuclear (Framework 2012–2013) in the PANDA facility at the Paul Scherrer Institut in Switzerland. It consists of an experimental investigation focused on hydrogen concentration build-up in an SFP building during a predefined scaled scenario for different venting positions. The tests follow a two-phase scenario: initially, steam is released to mimic the boiling of the pool, followed by a helium/steam mixture release to simulate the deterioration of the oxidizing spent fuel. Results show that while the SFP building would mainly be inerted by the presence of a high concentration of steam, the volume located below the level of the pool in adjacent rooms would maintain a high air content. The interface between the two gas mixtures presents the highest risk of flammability. Additionally, it was observed that the gas mixture could become stagnant, leading locally to high hydrogen concentrations while steam condenses. Overall, the experiments provide relevant information on the potentially hazardous gas distribution formed in the SFP building and hints on accident management and on possible retrofitting measures to be implemented in the SFP building.

  11. A NeISS collaboration to develop and use e-infrastructure for large-scale social simulation

    OpenAIRE

    Doherty, Thomas; Skipsey, Samuel; Turner, Andy; Watt, John

    2011-01-01

    The National e-Infrastructure for Social Simulation (NeISS) project is focused on developing e-infrastructure to support social simulation research. Part of NeISS aims to provide an interface for running contemporary dynamic demographic social simulation models as developed in the GENESIS project. These GENESIS models operate at the individual person level and are stochastic. This paper focuses on support for a simplistic demographic change model that has daily time steps, an...

  12. Advanced Simulation Tool for Improved Damage Assessment 2) Water-Mist Suppression of Large Scale Compartment Fires

    National Research Council Canada - National Science Library

    Prasad, Kuldeep

    2000-01-01

    .... In the first report, we adopted a domain decomposition approach, based on the multiblock Chimera technique, to simulate fires in single uncluttered compartments and predicted spread of smoke in multi...

  13. Numerical simulation for excavation and long-term behavior of large-scale cavern in soft rock

    International Nuclear Information System (INIS)

    Sawada, Masataka; Okada, Tetsuji

    2010-01-01

    Low-level radioactive waste is planned to be disposed of at depths of more than 50 m in Neogene tuff or tuffaceous sandstone. Generally there are few cracks in sedimentary soft rocks; thus it is considered easier to determine the permeability of soft rocks than that of discontinuous rocks. On the other hand, sedimentary soft rocks show strong time-dependent behavior, and they are more sensitive to heat, groundwater, and their chemical effects. A numerical method for the long-term behavior of underground facilities is necessary for their design and safety assessment. Numerical simulations of the excavation of a test cavern at the disposal site are described in this report. Our creep model was applied to these simulations. Although the model is able to reproduce the behavior of soft rock observed in laboratory creep tests, a simulation using parameters obtained from laboratory tests predicts much larger displacements than measured. A simulation using parameters modified based on in-situ elastic wave measurements and back analysis reproduces the measured displacements very well. The behavior of the surrounding rock mass during resaturation after emplacement of the waste and the engineered barrier system is also simulated. We plan to investigate the chemical and mechanical interactions among soft rock, tunnel supports and engineered barriers, and to develop numerical models of them. (author)

  14. Coupled large-eddy simulation and morphodynamics of a large-scale river under extreme flood conditions

    Science.gov (United States)

    Khosronejad, Ali; Sotiropoulos, Fotis; Stony Brook University Team

    2016-11-01

    We present coupled flow and morphodynamic simulations of extreme flooding in a 3 km long and 300 m wide reach of the Mississippi River in Minnesota, which includes three islands and hydraulic structures. We employ the large-eddy simulation (LES) and bed-morphodynamics modules of the VFS-Geophysics model to investigate the flow and bed evolution of the river during a 500-year flood. The coupling of the two modules is carried out via a fluid-structure interaction approach, using a nested domain to enhance the resolution of bridge scour predictions. The geometrical data of the river, islands and structures are obtained from LiDAR, sub-aqueous sonar and in-situ surveying to construct a digital map of the river bathymetry. Our simulation results for the bed evolution of the river reveal complex sediment dynamics near the hydraulic structures. The numerically captured scour depth near some of the structures reaches a maximum of about 10 m. The data-driven simulation strategy we present in this work exemplifies a practical simulation-based engineering approach to investigating the resilience of infrastructure to extreme flood events in intricate field-scale riverine systems. This work was funded by a grant from the Minnesota Dept. of Transportation.

  15. Metric matters : the performance and organisation of volumetric water control in large-scale irrigation in the North Coast of Peru

    OpenAIRE

    Vos, J.M.C.

    2002-01-01

    This thesis describes the organisation and performance of two large-scale irrigation systems in the North Coast of Peru. Good water management is important in this area because water is scarce and irrigated agriculture provides a livelihood to many small and middle-sized farmers. Water in the coast of Peru is considered to be badly managed, however this study shows that performance is more optimal than critics assume. Apart from the relevance in the local water management discussion,...

  16. Large-scale grid-enabled lattice-Boltzmann simulations of complex fluid flow in porous media and under shear

    NARCIS (Netherlands)

    Harting, J.D.R.; Venturoli, M.; Coveney, P.V.

    2004-01-01

    Well-designed lattice Boltzmann codes exploit the essentially embarrassingly parallel features of the algorithm and so can be run with considerable efficiency on modern supercomputers. Such scalable codes permit us to simulate the behaviour of increasingly large quantities of complex condensed

  17. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    Science.gov (United States)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of canonical Monte Carlo simulation for Coulomb many-body systems on graphics processing units (GPUs). Our method takes advantage of the SIMD (Single Instruction, Multiple Data) architecture of GPUs and adopts the sequential updating scheme of the Metropolis algorithm. It makes no approximation in the computation of energy and reaches a remarkable 440-fold speedup compared with the serial implementation on a CPU. We further use this method to simulate primitive-model electrolytes and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of the constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
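
    A serial CPU toy version of the Metropolis scheme conveys the structure that the paper maps onto the SIMD architecture of GPUs; the system below (like charges with a bare 1/r pair energy in a periodic 2-D box) and all parameters are illustrative assumptions, not the paper's model:

        import numpy as np

        rng = np.random.default_rng(2)
        N, L, beta, step = 64, 10.0, 1.0, 0.3
        pos = rng.uniform(0, L, size=(N, 2))

        def total_energy(r):
            """Sum of 1/r over all pairs with the minimum-image convention."""
            d = r[:, None, :] - r[None, :, :]
            d -= L * np.round(d / L)
            dist = np.sqrt((d ** 2).sum(-1))
            iu = np.triu_indices(N, 1)
            return (1.0 / dist[iu]).sum()

        E, accepted, moves = total_energy(pos), 0, 200
        for move in range(moves):
            i = rng.integers(N)
            trial = pos.copy()
            trial[i] = (trial[i] + rng.normal(0, step, 2)) % L
            dE = total_energy(trial) - E
            # Metropolis acceptance rule.
            if dE < 0 or rng.random() < np.exp(-beta * dE):
                pos, E, accepted = trial, E + dE, accepted + 1
        print(f"acceptance ratio: {accepted / moves:.2f}")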

  18. Energetic and Economic Assessment of Pipe Network Effects on Unused Energy Source System Performance in Large-Scale Horticulture Facilities

    Directory of Open Access Journals (Sweden)

    Jae Ho Lee

    2015-04-01

    As the use of fossil fuels has increased, not only in construction but also in agriculture, owing to the drastic industrial development of recent times, the problems of heating costs and global warming are getting worse. Therefore, the introduction of more reliable and environmentally friendly alternative energy sources has become urgent, and the same trend is found in large-scale horticulture facilities. In this study, among many alternative energy sources, we investigated the reserves and potential of various unused energy sources, which have vast potential but are nowadays wasted due to limitations in their utilization. This study investigated the effects of the distance between the greenhouse and the actual heat source by taking into account the heat transfer taking place inside the pipe network, and considered CO2 emissions and economic aspects to determine the optimal heat source. Payback period analysis against initial investment cost shows that a heat pump based on a power plant's waste heat has the shortest payback period, 7.69 years, at a distance of 0 km. On the other hand, a heat pump based on geothermal heat showed the shortest payback period, 10.17 years, at a distance of 5 km, indicating that heat pumps utilizing geothermal heat are the most effective option if the heat transfer inside the pipe network between the greenhouse and the actual heat source is taken into account.

  19. Large-scale atomistic and quantum-mechanical simulations of a Nafion membrane: Morphology, proton solvation and charge transport

    Directory of Open Access Journals (Sweden)

    Pavel V. Komarov

    2013-09-01

    Atomistic and first-principles molecular dynamics simulations are employed to investigate the structure formation in a hydrated Nafion membrane and the solvation and transport of protons in the water channels of the membrane. For the water/Nafion systems containing more than 4 million atoms, it is found that the observed microphase-segregated morphology can be classified as bicontinuous: both the majority (hydrophobic) and minority (hydrophilic) subphases are 3D continuous and organized in an irregular ordered pattern, which is largely similar to that known for a bicontinuous double-diamond structure. The characteristic size of the connected hydrophilic channels is about 25–50 Å, depending on the water content. A thermodynamic decomposition of the potential of mean force and the calculated spectral densities of the hindered translational motions of cations reveal that the ion association observed with decreasing temperature is largely an entropic effect related to the loss of low-frequency modes. Based on the results from the atomistic simulation of the morphology of Nafion, we developed a realistic model of an ion-conducting hydrophilic channel within the Nafion membrane and studied it with quantum molecular dynamics. The extensive 120 ps long density functional theory (DFT) based simulations of charge migration in the 1200-atom model of the nanochannel consisting of Nafion chains and water molecules allowed us to observe the bimodality of the van Hove autocorrelation function, which provides direct evidence of the Grotthuss bond-exchange (hopping) mechanism as a significant contributor to the proton conductivity.

  20. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test, with a view to ensuring the safety of light water reactors, was started in fiscal 1976 under the special account act for power source development promotion measures, by entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents through joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April 1980, and the large-scale reflood test is now included in this program. It consists of two tests: one using a cylindrical core testing apparatus for examining the overall system effect, and one using a plate core testing apparatus for testing individual effects. Each apparatus is composed of mock-ups of the pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  1. Performance assessment of mass flow rate measurement capability in a large scale transient two-phase flow test system

    International Nuclear Information System (INIS)

    Nalezny, C.L.; Chapman, R.L.; Martinell, J.S.; Riordon, R.P.; Solbrig, C.W.

    1979-01-01

    Mass flow is an important measured variable in the Loss-of-Fluid Test (LOFT) Program. Large uncertainties in mass flow measurements in the LOFT piping during LOFT coolant experiments require instrument testing in a transient two-phase flow loop that simulates the geometry of the LOFT piping. To satisfy this need, a transient two-phase flow loop has been designed and built. The load cell weighing system, which provides reference mass flow measurements, has been analyzed to assess its measurement capability. The analysis consisted of first performing a thermal-hydraulic analysis using RELAP4 to compute the mass inventory and pressure fluctuations in the system and the mass flow rate at the instrument location. The RELAP4 output was used as input to the structural analysis code SAPIV, which was used to determine the load cell response. The computed load cell response was then smoothed and differentiated to compute the mass flow rate from the system. Comparison between the computed mass flow rate at the instrument location and the mass flow rate from the system computed from the load cell output was used to evaluate the mass flow measurement capability of the load cell weighing system. Results of the analysis indicate that the load cell weighing system will provide reference mass flows more accurately than the instruments now in LOFT.
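
    The smooth-then-differentiate step can be sketched in a few lines of Python; the synthetic blowdown signal, noise level and filter settings are illustrative assumptions, not LOFT data or the original processing chain:

        import numpy as np
        from scipy.signal import savgol_filter

        dt = 0.01                                 # sample interval [s]
        t = np.arange(0.0, 10.0, dt)
        true_flow = 5.0 * np.exp(-t / 3.0)        # decaying outflow [kg/s]
        mass = 500.0 - np.cumsum(true_flow) * dt  # mass inventory [kg]
        noisy = mass + np.random.default_rng(3).normal(0.0, 0.05, t.size)

        # A Savitzky-Golay filter returns the smoothed first derivative.
        dmdt = savgol_filter(noisy, window_length=101, polyorder=3,
                             deriv=1, delta=dt)
        flow_estimate = -dmdt                     # outflow = -d(mass)/dt
        print(f"peak flow estimate: {flow_estimate.max():.2f} kg/s "
              f"(true 5.00)")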

  2. The Neurona at Home project: Simulating a large-scale cellular automata brain in a distributed computing environment

    Science.gov (United States)

    Acedo, L.; Villanueva-Oller, J.; Moraño, J. A.; Villanueva, R.-J.

    2013-01-01

    The Berkeley Open Infrastructure for Network Computing (BOINC) has become the standard open-source solution for grid computing on the Internet. Volunteers use their computers to complete a small part of the task assigned by a dedicated server. We have developed a BOINC project called Neurona@Home whose objective is to simulate a cellular automata random network with at least one million neurons. We consider a cellular automaton version of the integrate-and-fire model in which excitatory and inhibitory nodes can activate or deactivate neighbor nodes according to a set of probabilistic rules. Our aim is to determine the phase diagram of the model and its behaviour and to compare it with the electroencephalographic signals measured in real brains.
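
    A minimal Python sketch of such a probabilistic integrate-and-fire cellular automaton on a random network is given below; the network size, connectivity and firing rules are placeholder assumptions (the actual project targets a million or more neurons on BOINC):

        import numpy as np

        rng = np.random.default_rng(4)
        N, k, p_fire, threshold = 2000, 20, 0.9, 3

        # Random directed network; ~20% of the nodes are inhibitory.
        links = rng.integers(0, N, size=(N, k))
        weight = np.where(rng.random(N) < 0.2, -1, +1)

        state = np.zeros(N, dtype=int)        # accumulated input per node
        active = rng.random(N) < 0.01         # seed activity

        for t in range(100):
            inputs = np.zeros(N, dtype=int)
            for i in np.flatnonzero(active):
                np.add.at(inputs, links[i], weight[i])
            state += inputs
            # Nodes at or above threshold fire with probability p_fire,
            # then reset; inhibitory inputs push nodes away from firing.
            candidates = state >= threshold
            active = candidates & (rng.random(N) < p_fire)
            state[active] = 0
        print("active fraction at end:", active.mean())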

  3. Quantifying Hyporheic Exchanges in a Large Scale River Reach Using Coupled 3-D Surface and Subsurface Computational Fluid Dynamics Simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn Edward; Bao, J; Huang, M; Hou, Z; Perkins, W; Harding, S; Titzler, S; Ren, H; Thorne, P; Suffield, S; Murray, C; Zachara, J

    2017-03-01

    Hyporheic exchange is a critical mechanism shaping hydrological and biogeochemical processes along a river corridor. Recent studies on quantifying hyporheic exchange have mostly been limited to local scales due to field inaccessibility, computational demand, and the complexity of geomorphology and subsurface geology. Surface flow conditions and subsurface physical properties are well-known factors modulating hyporheic exchange, but a quantitative understanding of their impacts on the strength and direction of hyporheic exchange at reach scales is absent. In this study, a high resolution computational fluid dynamics (CFD) model that couples surface and subsurface flow and transport is employed to simulate hyporheic exchange in a 7-km long reach along the main stem of the Columbia River. Assuming that hyporheic exchange does not affect surface water flow conditions, given its negligible magnitude compared to the volume and velocity of river water, we developed a one-way coupled surface and subsurface water flow model using the commercial CFD software STAR-CCM+. The model integrates a Reynolds-averaged Navier-Stokes (RANS) equation solver with a realizable κ-ε two-layer turbulence model, a two-layer all-y+ wall treatment, and the volume of fluid (VOF) method, and is used to simulate hyporheic exchange by tracking the free water-air interface as well as flow in the river and the subsurface porous media. The model is validated against measurements from an acoustic Doppler current profiler (ADCP) in the stream water and against hyporheic fluxes derived from a set of temperature profilers installed across the riverbed. The validated model is then employed to systematically investigate how hyporheic exchange is influenced by surface water fluid dynamics, strongly regulated by upstream dam operations, as well as by subsurface structures (e.g. thickness of riverbed and subsurface formation layers) and hydrogeological properties (e.g. permeability). The results

  4. Canonical symplectic particle-in-cell method for long-term large-scale simulations of the Vlasov–Maxwell equations

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Hong; Liu, Jian; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei; Sun, Yajuan; Burby, Joshua W.; Ellison, Leland; Zhou, Yao

    2015-12-14

    Particle-in-cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretising its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many, e.g. 10^9, degrees of freedom. The long-term accuracy and fidelity of the algorithm enable us to numerically confirm Mouhot and Villani's theory and conjecture on nonlinear Landau damping over several orders of magnitude using the PIC method, and to calculate the nonlinear evolution of the reflectivity during the mode conversion process from extraordinary waves to Bernstein waves.
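
    The value of a symplectic discretization for long-term accuracy can be seen in a drastically reduced setting. The sketch below is not the canonical symplectic PIC method; it only contrasts a symplectic and a non-symplectic first-order update on a harmonic oscillator, where the symplectic variant keeps the energy error bounded over long runs.

        import numpy as np

        def explicit_euler(q, p, dt):
            return q + dt * p, p - dt * q           # H = (p^2 + q^2) / 2

        def symplectic_euler(q, p, dt):
            p = p - dt * q                          # kick using the old position
            return q + dt * p, p                    # drift using the new momentum

        def energy_drift(stepper, steps=100_000, dt=0.01):
            q, p = 1.0, 0.0
            e0 = 0.5 * (p * p + q * q)
            for _ in range(steps):
                q, p = stepper(q, p, dt)
            return abs(0.5 * (p * p + q * q) - e0)

        print("explicit Euler drift:  ", energy_drift(explicit_euler))    # grows without bound
        print("symplectic Euler drift:", energy_drift(symplectic_euler))  # stays bounded, O(dt)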

  5. Simulation on a proposed large-scale liquid hydrogen plant using a multi-component refrigerant refrigeration system

    Energy Technology Data Exchange (ETDEWEB)

    Krasae-in, Songwut [Norwegian University of Science and Technology, Kolbjorn Hejes vei 1d, NO-7491 Trondheim (Norway); Stang, Jacob H.; Neksa, Petter [SINTEF Energy Research AS, Kolbjorn Hejes vei 1d, NO-7465 Trondheim (Norway)

    2010-11-15

    A proposed liquid hydrogen plant using a multi-component refrigerant (MR) refrigeration system is explained in this paper. A cycle capable of producing 100 tons of liquid hydrogen per day is simulated. The MR system can cool the feed of normal hydrogen gas from 25 °C to the equilibrium temperature of -193 °C with high efficiency. For the transition of the hydrogen gas from the equilibrium temperature of -193 °C down to -253 °C, the newly proposed four-stage H2 Joule-Brayton cascade refrigeration system is recommended. The overall power consumption of the proposed plant is 5.35 kWh/kg LH2, against an ideal minimum of 2.89 kWh/kg LH2. The current plant in Ingolstadt, used as a reference, has an energy consumption of 13.58 kWh/kg LH2 and an efficiency of 21.28%; the efficiency of the proposed system is 54.02% or more, depending on the efficiency values assumed for the compressors and expanders. Moreover, the proposed system has smaller heat exchangers, much smaller compressor motors, and smaller crankcase compressors. It could therefore represent the plant with the lowest construction cost per unit of liquid hydrogen produced, in comparison with today's plants, e.g. those in Ingolstadt and Leuna. The proposed system thus offers many improvements that can serve as an example for future hydrogen liquefaction plants. (author)
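
    The efficiency figures quoted above are simply the ratio of the ideal minimum specific work to the actual specific power consumption; a quick check using only numbers from the abstract:

        IDEAL_MIN = 2.89          # kWh per kg LH2, ideal minimum liquefaction work

        def exergy_efficiency(actual_kwh_per_kg):
            """Second-law efficiency: ideal minimum work over actual work."""
            return IDEAL_MIN / actual_kwh_per_kg

        print(f"proposed MR plant: {exergy_efficiency(5.35):.2%}")    # ~54.02%
        print(f"Ingolstadt plant:  {exergy_efficiency(13.58):.2%}")   # ~21.28%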

  6. Study on large scale knowledge base with real time operation for autonomous nuclear power plant. 1. Basic concept and expecting performance

    International Nuclear Information System (INIS)

    Ozaki, Yoshihiko; Suda, Kazunori; Yoshikawa, Shinji; Ozawa, Kenji

    1996-04-01

    Since it is desirable to enhance the availability and safety of nuclear power plant operation and maintenance by removing human factors, there has been much research and development on intelligent operation and diagnosis using artificial intelligence (AI) techniques. We have been developing an autonomous operation and maintenance system for nuclear power plants in which AI systems and intelligent robots substitute for human operators. Such an autonomous nuclear power plant requires varied and large-scale knowledge covering plant design, operation, and maintenance, that is, the whole life-cycle data of the plant. This knowledge must be given to the AI systems and intelligent robots adequately and opportunely. Moreover, it is necessary to ensure real-time operation when the large-scale knowledge base is used for plant control and diagnosis. We have therefore been studying a large-scale, real-time knowledge base system for the autonomous plant. In this report, we present the basic concept and expected performance of the knowledge base for the autonomous plant, especially its autonomous control and diagnosis system. (author)

  7. Optimizing the data-collection time of a large-scale data-acquisition system through a simulation framework

    CERN Document Server

    Colombo, Tommaso; Garcìa, Pedro Javier; Vandelli, Wainer

    2016-01-01

    The ATLAS detector at CERN records particle collision “events” delivered by the Large Hadron Collider. Its data-acquisition system identifies, selects, and stores interesting events in near real-time, with an aggregate throughput of several tens of GB/s. It is a distributed software system executed on a farm of roughly 2000 commodity worker nodes communicating via TCP/IP on an Ethernet network. Event data fragments are received from the many detector readout channels and are buffered, collected together, analyzed and either stored permanently or discarded. This system, and data-acquisition systems in general, are sensitive to the latency of the data transfer from the readout buffers to the worker nodes. Challenges affecting this transfer include the many-to-one communication pattern and the inherently bursty nature of the traffic. The main performance issues brought about by this workload are addressed in this paper, focusing in particular on the so-called TCP incast pathology. Since performing systematic stud...

  8. Large scale simulations of the mechanical properties of layered transition metal ternary compounds for fossil energy power system applications

    Energy Technology Data Exchange (ETDEWEB)

    Ching, Wai-Yim [Univ. of Missouri, Kansas City, MO (United States)

    2014-12-31

    Advanced materials with applications in extreme conditions such as high temperature, high pressure, and corrosive environments play a critical role in the development of new technologies to significantly improve the performance of different types of power plants. Materials currently employed in fossil energy conversion systems are typically Ni-based alloys and stainless steels that have already reached their ultimate performance limits. Incremental improvements are unlikely to meet the more stringent requirements aimed at increased efficiency and reduced risks while addressing environmental concerns and keeping costs low. Computational studies can lead the way in the search for novel materials, or for significant improvements in existing materials, that can meet such requirements. Detailed computational studies with sufficient predictive power can provide an atomistic-level understanding of the key characteristics that lead to desirable properties. This project focuses on the comprehensive study of a new class of materials called MAX phases, or M(n+1)AX(n) (M = a transition metal, A = Al or another group III, IV, or V element, X = C or N). The MAX phases are layered transition metal carbides or nitrides with a rare combination of metallic and ceramic properties. Due to their unique structural arrangements and special types of bonding, these thermodynamically stable alloys possess some of the most outstanding properties. We used a genomic approach to screen a large number of potential MAX phases and established a database of 665 viable MAX compounds covering their structural, mechanical and electronic properties, and investigated the correlations between them. This database is then used as a tool for materials informatics for further exploration of this class of intermetallic compounds.

  9. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific

  10. Large-Scale Evaluation of Quality of Care in 6 Countries of Eastern Europe and Central Asia Using Clinical Performance and Value Vignettes.

    Science.gov (United States)

    Peabody, John W; DeMaria, Lisa; Smith, Owen; Hoth, Angela; Dragoti, Edmond; Luck, Jeff

    2017-09-27

    ...challenged by poor performance as measured by clinical care vignettes, but there is potential for provision of high-quality care by a sizable proportion of providers. Large-scale assessments of quality of care have been hampered by the lack of effective measurement tools that provide generalizable and reliable results across diverse economic, cultural, and social settings. The feasibility of quality measurement using CPV vignettes in these 6 countries and the ability to combine results with individual feedback could significantly enhance strategies to improve quality of care, and ultimately population health.

  11. Large scale integration of photovoltaics in cities

    International Nuclear Information System (INIS)

    Strzalka, Aneta; Alam, Nazmul; Duminil, Eric; Coors, Volker; Eicker, Ursula

    2012-01-01

    Highlights: ► We implement photovoltaics on a large scale. ► We use three-dimensional modelling for accurate photovoltaic simulations. ► We consider the shadowing effect in the photovoltaic simulation. ► We validate the simulated results using detailed hourly measured data. - Abstract: For a large scale implementation of photovoltaics (PV) in the urban environment, building integration is a major issue. This includes installations on roof or facade surfaces with orientations that are not ideal for maximum energy production. To evaluate the performance of PV systems in urban settings and compare it with the building users' electricity consumption, three-dimensional geometry modelling was combined with photovoltaic system simulations. As an example, the modern residential district of Scharnhauser Park (SHP) near Stuttgart, Germany, was used to calculate the potential of photovoltaic energy and to evaluate the local self-consumption of the energy produced. For most buildings of the district only annual electricity consumption data was available, and only selected buildings have electronic metering equipment. The available roof area of one of these multi-family case study buildings was used for a detailed hourly simulation of the PV power production, which was then compared to the hourly measured electricity consumption. The results were extrapolated to all buildings of the analyzed area by normalizing them to the annual consumption data. The PV systems can produce 35% of the quarter's total electricity consumption, and half of this generated electricity is directly used within the buildings.
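
    The core of the hourly comparison above is an overlap computation between matched production and consumption series. A minimal sketch (the day profiles below are synthetic placeholders, not SHP data):

        import numpy as np

        def self_consumption(pv_kwh, load_kwh):
            """Fraction of PV energy used directly in the building,
            given matched hourly production and consumption series."""
            direct = np.minimum(pv_kwh, load_kwh).sum()
            return direct / pv_kwh.sum()

        # Toy day: PV peaks at noon, load has morning and evening peaks.
        hours = np.arange(24)
        pv = np.clip(np.sin((hours - 6) * np.pi / 12), 0, None) * 3.0    # kWh
        load = (1.0 + 0.8 * np.exp(-((hours - 8) ** 2) / 8)
                    + 1.2 * np.exp(-((hours - 19) ** 2) / 8))            # kWh
        print(f"direct use fraction: {self_consumption(pv, load):.0%}")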

  12. Coding task performance in early adolescence: A large-scale controlled study into boy-girl differences

    Directory of Open Access Journals (Sweden)

    Sanne eDekker

    2013-08-01

    This study examined differences between boys and girls in the efficiency of information processing in early adolescence. 306 healthy adolescents (50.3% boys) in grades 7 and 9 (aged 13 and 15, respectively) performed a coding task based on over-learned symbols. An age effect was revealed, as subjects in grade 9 performed better than subjects in grade 7. Main effects of sex were found, to the advantage of girls: the 25% best-performing students comprised twice as many girls as boys, and the opposite pattern was found for the worst-performing 25%. In addition, a main effect was found for educational track, in favor of the highest track. No interaction effects were found. School grades did not explain additional variance in LDST performance. This indicates that cognitive performance is relatively independent of school performance. Student characteristics such as age, sex and education level were more important for the efficiency of information processing than school performance. The findings imply that after age 13, the efficiency of information processing is still developing, and that girls outperform boys in this respect. The findings provide new information on the mechanisms underlying boy-girl differences in scholastic performance.

  13. First-principles studies on vacancy-modified interstitial diffusion mechanism of oxygen in nickel, associated with large-scale atomic simulation techniques

    International Nuclear Information System (INIS)

    Fang, H. Z.; Shang, S. L.; Wang, Y.; Liu, Z. K.; Alfonso, D.; Alman, D. E.; Shin, Y. K.; Zou, C. Y.; Duin, A. C. T. van; Lei, Y. K.; Wang, G. F.

    2014-01-01

    This paper is concerned with the prediction of oxygen diffusivities in fcc nickel from first-principles calculations and large-scale atomic simulations. Considering only the interstitial octahedral to tetrahedral to octahedral minimum energy pathway for oxygen diffusion in the fcc lattice greatly underestimates the migration barrier and overestimates the diffusivities by several orders of magnitude. The results indicate that vacancies in the Ni lattice significantly impact the migration barrier of oxygen in nickel. Incorporating the effect of vacancies results in predicted diffusivities consistent with available experimental data. First-principles calculations show that at high temperatures the vacancy concentration is comparable to the oxygen solubility, and that there is a strong binding energy and a redistribution of charge density between the oxygen atom and the vacancy. Consequently, there is a strong attraction between the oxygen and the vacancy in the Ni lattice, which impacts diffusion.
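
    The sensitivity of the predicted diffusivity to the migration barrier follows from the Arrhenius form D = D0 exp(-Em / kB T): a barrier off by a few tenths of an eV shifts D by orders of magnitude. A sketch with illustrative numbers (D0 and the barrier values are placeholders, not the paper's fitted values):

        import numpy as np

        KB_EV = 8.617e-5                      # Boltzmann constant, eV/K

        def diffusivity(d0_cm2_s, barrier_ev, temp_k):
            """Arrhenius diffusivity in cm^2/s."""
            return d0_cm2_s * np.exp(-barrier_ev / (KB_EV * temp_k))

        T = 1200.0                            # K
        d_interstitial = diffusivity(1e-2, 0.5, T)   # interstitial-only barrier (illustrative)
        d_vacancy      = diffusivity(1e-2, 1.2, T)   # vacancy-modified barrier (illustrative)
        print(f"ratio: {d_interstitial / d_vacancy:.1e}")   # orders of magnitude apart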

  14. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of a large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of a large-scale magnetic field. For this purpose, we perform a nonhelical magnetohydrodynamic (MHD) simulation and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at a scale of 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed from the numerical data show that the large-scale magnetic energy grows due to energy transfers from the velocity field at the forcing scales.

  15. A large-scale examination of the effectiveness of anonymous marking in reducing group performance differences in higher education assessment.

    Directory of Open Access Journals (Sweden)

    Daniel P Hinton

    The present research aims to more fully explore the issues of performance differences in higher education assessment, particularly in the context of a common measure taken to address them. The rationale for the study is that, while performance differences in written examinations are relatively well researched, few studies have examined the efficacy of anonymous marking in reducing these performance differences, particularly in modern student populations. By examining a large archive (N = 30674) of assessment data spanning a twelve-year period, the relationship between assessment marks and factors such as ethnic group, gender and socio-environmental background was investigated. In particular, analysis focused on the impact that the implementation of anonymous marking for assessment of written examinations and coursework has had on the magnitude of mean score differences between demographic groups of students. While group differences were found to be pervasive in higher education assessment, these differences were observed to be relatively small in practical terms. Further, it appears that the introduction of anonymous marking has had a negligible effect in reducing them. The implications of these results are discussed, focusing on two issues, firstly a defence of examinations as a fair and legitimate form of assessment in Higher Education, and, secondly, a call for the re-examination of the efficacy of anonymous marking in reducing group performance differences.

  16. A large-scale examination of the effectiveness of anonymous marking in reducing group performance differences in higher education assessment.

    Science.gov (United States)

    Hinton, Daniel P; Higson, Helen

    2017-01-01

    The present research aims to more fully explore the issues of performance differences in higher education assessment, particularly in the context of a common measure taken to address them. The rationale for the study is that, while performance differences in written examinations are relatively well researched, few studies have examined the efficacy of anonymous marking in reducing these performance differences, particularly in modern student populations. By examining a large archive (N = 30674) of assessment data spanning a twelve-year period, the relationship between assessment marks and factors such as ethnic group, gender and socio-environmental background was investigated. In particular, analysis focused on the impact that the implementation of anonymous marking for assessment of written examinations and coursework has had on the magnitude of mean score differences between demographic groups of students. While group differences were found to be pervasive in higher education assessment, these differences were observed to be relatively small in practical terms. Further, it appears that the introduction of anonymous marking has had a negligible effect in reducing them. The implications of these results are discussed, focusing on two issues, firstly a defence of examinations as a fair and legitimate form of assessment in Higher Education, and, secondly, a call for the re-examination of the efficacy of anonymous marking in reducing group performance differences.

  17. Self-beliefs mediate mathematical performance between primary and lower secondary school: A large scale longitudinal cohort study

    NARCIS (Netherlands)

    Reed, Helen; Kirschner, Paul A.; Jolles, Jelle

    2016-01-01

    It is often argued that enhancement of self-beliefs should be one of the key goals of education. However, very little is known about the relation between self-beliefs and performance when students move from primary to secondary school in highly differentiated educational systems with early tracking.

  18. The LHC Cryomagnet Supports in Glass-Fiber Reinforced Epoxy A Large Scale Industrial Production with High Reproducibility in Performance

    CERN Document Server

    Poncet, A; Trigo, J; Parma, V

    2008-01-01

    The approximately 1700 LHC main ring superconducting magnets are supported within their cryostats on 4700 low heat-inleak column-type supports. The supports were designed to ensure precise and stable positioning of the heavy dipole and quadrupole magnets while keeping thermal conduction heat loads within budget. A trade-off between mechanical and thermal properties, as well as cost considerations, led to the choice of glass fibre reinforced epoxy (GFRE). Resin Transfer Moulding (RTM), featuring a high level of automation and control, was the manufacturing process retained to ensure the reproducibility of the performance of the supports throughout the large production run. The Spanish aerospace company EADS-CASA Espacio developed the specific RTM process and produced the total quantity of supports between 2001 and 2004. This paper describes the development and production of the supports, and presents the production experience and the achieved performance.

  19. THE LHC CRYOMAGNET SUPPORTS IN GLASS-FIBER REINFORCED EPOXY: A LARGE SCALE INDUSTRIAL PRODUCTION WITH HIGH REPRODUCIBILITY IN PERFORMANCE

    International Nuclear Information System (INIS)

    Poncet, A.; Struik, M.; Parma, V.; Trigo, J.

    2008-01-01

    The approximately 1700 LHC main ring superconducting magnets are supported within their cryostats on 4700 low heat-inleak column-type supports. The supports were designed to ensure precise and stable positioning of the heavy dipole and quadrupole magnets while keeping thermal conduction heat loads within budget. A trade-off between mechanical and thermal properties, as well as cost considerations, led to the choice of glass fibre reinforced epoxy (GFRE). Resin Transfer Moulding (RTM), featuring a high level of automation and control, was the manufacturing process retained to ensure the reproducibility of the performance of the supports throughout the large production run. The Spanish aerospace company EADS-CASA Espacio developed the specific RTM process and produced the total quantity of supports between 2001 and 2004. This paper describes the development and production of the supports, and presents the production experience and the achieved performance.

  20. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso, a distribution of software for atomistic simulations in condensed-matter physics, chemical physics and materials science, and to its usage in large-scale parallel computing.

  1. Probing the Mechanism of pH-Induced Large-Scale Conformational Changes in Dengue Virus Envelope Protein Using Atomistic Simulations

    Science.gov (United States)

    Prakash, Meher K.; Barducci, Alessandro; Parrinello, Michele

    2010-01-01

    One of the key steps in the infection of the cell by dengue virus is a pH-induced conformational change of the viral envelope proteins. These envelope proteins undergo a rearrangement from a dimer to a trimer, with large conformational changes in the monomeric unit. In this article, metadynamics simulations were used to understand the mechanism of these large-scale changes in the monomer. Using all-atom, explicit-solvent simulations of the monomers, the stability of the protein structure is studied under low and high pH conditions. Free energy profiles obtained along appropriate collective coordinates demonstrate that pH affects the domain interface in both conformations of the E monomer, stabilizing one and destabilizing the other. These simulations suggest a mechanism with an intermediate detached state between the two monomeric structures. Using further analysis, we comment on the key residue interactions responsible for the instability and on the pH-sensing role of a histidine that could not otherwise be studied experimentally. The insights gained from this study and its methodology can be extended to studying similar mechanisms in the E proteins of the other members of the class II flavivirus family. PMID:20643078

  2. Nitrogen-Related Constraints of Carbon Uptake by Large-Scale Forest Expansion: Simulation Study for Climate Change and Management Scenarios

    Science.gov (United States)

    Kracher, Daniela

    2017-11-01

    An increase in forest area has the potential to increase the terrestrial carbon (C) sink. However, the efficiency of C sequestration depends on the availability of nutrients such as nitrogen (N), which is affected by climatic conditions and management practices. In this study, I analyze how N limitation affects the C sequestration of afforestation and how it is influenced by individual climate variables, increased harvest, and fertilizer application. To this end, JSBACH, the land component of the Earth system model of the Max Planck Institute for Meteorology, is applied in idealized simulation experiments. In those simulations, large-scale afforestation increases the terrestrial C sink in the 21st century by around 100 Pg C compared to a business-as-usual land-use scenario. N limitation reduces C sequestration by roughly the same amount. The relevance of compensating effects between the uptake and release of carbon dioxide by plant productivity and soil decomposition, respectively, becomes evident from the simulations. N limitation of both fluxes compensates particularly in the tropics. Increased mineralization under global warming triggers forest expansion, which is otherwise restricted by N availability. Due to the compensation between higher plant productivity and higher soil respiration, the global net effect of warming on C sequestration is, however, rather small. Fertilizer application and increased harvest enhance C sequestration as well as boreal expansion. The additional C sequestration achieved by fertilizer application is offset to a large part by additional emissions of nitrous oxide.

  3. High performance architecture design for large scale fibre-optic sensor arrays using distributed EDFAs and hybrid TDM/DWDM

    Science.gov (United States)

    Liao, Yi; Austin, Ed; Nash, Philip J.; Kingsley, Stuart A.; Richardson, David J.

    2013-09-01

    A distributed amplified dense wavelength division multiplexing (DWDM) array architecture is presented for interferometric fibre-optic sensor array systems. This architecture employs a distributed erbium-doped fibre amplifier (EDFA) scheme to decrease the array insertion loss, and time division multiplexing (TDM) at each wavelength to increase the number of sensors that can be supported. The first experimental demonstration of this system is reported, including results which show the potential for multiplexing and interrogating up to 4096 sensors over a single telemetry fibre pair with good system performance; this number can be increased to 8192 by using dual pump sources.

  4. Dynamic performance investigation of once-through-type steam generator for NPP using a large-scale model

    International Nuclear Information System (INIS)

    Kats, F.M.; Ostrovskij, L.A.; Ehskin, N.B.

    1985-01-01

    An experimental test bench is described, together with the results of an investigation of the dynamic mass- and heat-transfer performance of a once-through-type steam generator for an NPP with slight superheat. The coolant in both the primary and secondary circuits is water. The test conditions allowed the primary and secondary circuit temperatures to be varied, as well as the primary circuit flow rate and the secondary circuit pressure. Transients for different operating conditions are considered. The possibility of constructing an automatic control system for the steam generator is assessed on this basis.

  5. History of large scale maintenance operations performed by EDF on the steam generators of its nuclear power plants

    International Nuclear Information System (INIS)

    2010-01-01

    After a first part which describes the role of steam generators in nuclear reactors, highlights their importance for reactor safety, and briefly presents their maintenance, this report describes the new types of degradation which have been observed and how they are dealt with. It describes the clogging of cracked tubes, comments on its impact on safety, describes the available inspection means, and discusses the use of chemical cleaning and the ongoing work on this topic. It discusses the risk of fatigue cracking of tubes in abnormal support positions. It comments on the holding in position of plugs used during maintenance. It describes and discusses the corrosion phenomena, and the corrective actions performed and requested.

  6. Diagnostic evaluation of the Community Earth System Model in simulating mineral dust emission with insight into large-scale dust storm mobilization in the Middle East and North Africa (MENA)

    Science.gov (United States)

    Parajuli, Sagar Prasad; Yang, Zong-Liang; Lawrence, David M.

    2016-06-01

    Large amounts of mineral dust are injected into the atmosphere during dust storms, which are common in the Middle East and North Africa (MENA), where most of the global dust hotspots are located. In this work, we present simulations of dust emission using the Community Earth System Model version 1.2.2 (CESM 1.2.2) and evaluate how well it captures the spatio-temporal characteristics of dust emission in the MENA region, with a focus on large-scale dust storm mobilization. We explicitly focus our analysis on the model's two major input parameters that affect the vertical mass flux of dust: surface winds and the soil erodibility factor. We analyze dust emissions in simulations with both prognostic CESM winds and with CESM winds that are nudged towards ERA-Interim reanalysis values. Simulations with three existing erodibility maps and a new observation-based erodibility map are also conducted. We compare the simulated results with MODIS satellite data, MACC reanalysis data, AERONET station data, and CALIPSO 3-D aerosol profile data. The dust emission simulated by CESM, when driven by nudged reanalysis winds, compares reasonably well with observations on daily to monthly time scales, despite CESM being a global general circulation model. However, considerable bias exists around known high dust source locations in northwest/northeast Africa and over the Arabian Peninsula, where recurring large-scale dust storms are common. The new observation-based erodibility map, which can represent anthropogenic dust sources that are not directly represented by existing erodibility maps, shows improved performance in terms of the simulated dust optical depth (DOD) and aerosol optical depth (AOD) compared to existing erodibility maps, although the performance of different erodibility maps varies by region.
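
    The two input parameters singled out above enter typical dust mobilization schemes multiplicatively. As a hedged illustration, here is a Ginoux/GOCART-style vertical flux parameterization; this is not necessarily the exact CESM scheme, and the constants are assumptions:

        import numpy as np

        def dust_flux(u10, erodibility, u_threshold=6.5, c=1e-9):
            """Ginoux-style vertical dust mass flux (kg m-2 s-1).

            u10: 10 m wind speed (m/s); erodibility: dimensionless source
            function. c and u_threshold are tunable constants (illustrative).
            """
            excess = np.where(u10 > u_threshold,
                              u10 ** 2 * (u10 - u_threshold), 0.0)
            return c * erodibility * excess

        winds = np.array([5.0, 8.0, 12.0, 16.0])
        print(dust_flux(winds, erodibility=0.5))   # zero below threshold, steep above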

  7. Performance of the IAEA transport regulations in controlling doses and risks from a large-scale radioactive waste transport system

    International Nuclear Information System (INIS)

    Hutchinson, D.; Miles, R.; White, I.

    2004-01-01

    The role of United Kingdom Nirex Limited is to provide the UK with safe, environmentally sound and publicly acceptable options for the long-term management of radioactive materials generated by the UK's commercial, medical, research and defence activities. An important part of this role is to set standards and specifications for waste packaging. Waste producers in the UK are currently developing processes for packaging many different types of intermediate-level waste (ILW), and also those forms of low-level waste that will require similar management to ILW. When packaging processes are at the proposal stage, the waste producers consult Nirex about the suitability of the resulting packages for all future aspects of waste management. The response that Nirex provides is based on detailed assessments of the proposed packages, their compliance with Nirex standards and specifications, and their predicted performance through the successive phases of waste management. One of those phases is transport through the public domain. This paper draws on experience gained from more than 200 separate transport safety assessments, which have cumulatively covered a wide range of waste types, waste packages and transport packages.

  8. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will unfortunately also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied in detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  9. WImpiBLAST: web interface for mpiBLAST to help biologists perform large-scale annotation using high performance computing.

    Directory of Open Access Journals (Sweden)

    Parichit Sharma

    The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to a lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture.

  10. WImpiBLAST: web interface for mpiBLAST to help biologists perform large-scale annotation using high performance computing.

    Science.gov (United States)

    Sharma, Parichit; Mantri, Shrikant S

    2014-01-01

    The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design
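
    The script-creation and Torque-submission workflow that WImpiBLAST wraps in a web interface can be sketched directly. The resource values, paths and queue settings below are placeholders, and the mpiblast options follow the classic blastall style; exact flags depend on the installed version:

        import subprocess
        import textwrap

        def submit_mpiblast(query, db, out, nodes=4, ppn=8):
            """Write a Torque/PBS job script for mpiBLAST and submit it with qsub.

            Illustrative sketch only: resource values and option spellings
            are assumptions, not WImpiBLAST's generated output.
            """
            script = textwrap.dedent(f"""\
                #PBS -N mpiblast_job
                #PBS -l nodes={nodes}:ppn={ppn}
                #PBS -j oe
                cd $PBS_O_WORKDIR
                mpirun -np {nodes * ppn} mpiblast -p blastp -d {db} -i {query} -o {out}
                """)
            with open("mpiblast.pbs", "w") as fh:
                fh.write(script)
            subprocess.run(["qsub", "mpiblast.pbs"], check=True)

        submit_mpiblast("proteins.fas", "nr", "hits.txt")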

  11. The ion population of the magnetotail during the 17 April 2002 magnetic storm: Large-scale kinetic simulations and IMAGE/HENA observations

    Science.gov (United States)

    Peroomian, Vahé; El-Alaoui, Mostafa; Brandt, Pontus C:son

    2011-05-01

    The contribution of solar wind and ionospheric ions to the ion population of the magnetotail during the 17 April 2002 geomagnetic storm was investigated by using large-scale kinetic (LSK) particle tracing calculations. We began our investigation by carrying out a global magnetohydrodynamic simulation of the storm event by using upstream solar wind and interplanetary magnetic field data from the ACE spacecraft. We launched solar wind H+ ions and ionospheric O+ ions beginning at 0900 UT, ˜2 h prior to the sudden storm commencement (SSC), until 2000 UT. We found that during this Dst ˜ -98 nT storm, solar wind ions carried the bulk of the density and energy density in the nightside ring current and plasma sheet, with the notable exception of the 90 min immediately after the SSC when O+ densities in the ring current exceeded those of H+ ions. The LSK simulation did a very good job of reproducing ion densities observed by the Los Alamos National Laboratory spacecraft at geosynchronous orbit and reproduced the changes in the inner magnetosphere and the injection of ions observed by the IMAGE spacecraft during a substorm that occurred at 1900 UT. These comparisons with observations serve to validate our results throughout the magnetotail and allow us to obtain time-dependent maps of H+ and O+ density and energy density where IMAGE cannot make measurements. In essence, this work extends the viewing window of the IMAGE spacecraft far downtail.

  12. Variations in large-scale tropical cyclone genesis factors over the western North Pacific in the PMIP3 last millennium simulations

    Science.gov (United States)

    Yan, Qing; Wei, Ting; Zhang, Zhongshi

    2017-02-01

    Investigation of past tropical cyclone (TC) activity in the Western North Pacific (WNP) is potentially helpful for better understanding future TC behavior. In this study, we examine variations in large-scale environmental factors important to TC genesis in the last millennium simulations from the Paleoclimate Modelling Intercomparison Project Phase 3 (PMIP3). The results show that potential intensity, a theoretical prediction of the maximum TC intensity, is increased relative to the last millennium mean in the northern part of the WNP in the Medieval Climate Anomaly (MCA; 950-1200 AD), while it is decreased in the Little Ice Age (LIA; 1600-1850 AD). Vertical wind shear, which generally inhibits TC genesis, is enhanced (reduced) to the south of 20°N and reduced (enhanced) to the north in the MCA (LIA). Relative humidity (at 600 hPa), which measures the mid-tropospheric moisture content, broadly shows an increase (decrease) in the MCA (LIA). A genesis potential index indicates that conditions are generally favorable (unfavorable) for TC formation in the WNP in the MCA (LIA), especially in the northern part. Taking changes in steering flows into account, there may be an increasing (decreasing) favorability for storm strikes in East Asia in the MCA (LIA). The estimated TC activity is consistent with the geological proxies in Japan, but contradicts the typhoon records in southern China and Taiwan. This model-data discrepancy is attributed to limitations in both the simulations and the reconstructions.
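
    A common formulation of a genesis potential index combines exactly the factors examined here; the sketch below uses the Emanuel-Nolan form (whether the paper uses this exact variant is an assumption):

        def genesis_potential(eta_850, rh_600, v_pot, v_shear):
            """Emanuel-Nolan genesis potential index.

            eta_850: 850 hPa absolute vorticity (1/s); rh_600: 600 hPa
            relative humidity (%); v_pot: potential intensity (m/s);
            v_shear: 850-200 hPa vertical wind shear (m/s).
            """
            return (abs(1e5 * eta_850) ** 1.5
                    * (rh_600 / 50.0) ** 3
                    * (v_pot / 70.0) ** 3
                    * (1.0 + 0.1 * v_shear) ** -2)

        # Favorable (moist, weak shear) vs unfavorable (dry, strong shear):
        print(genesis_potential(5e-5, 70.0, 75.0, 5.0))
        print(genesis_potential(5e-5, 50.0, 60.0, 20.0))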

  13. Surface flux transport simulations: Effect of inflows toward active regions and random velocities on the evolution of the Sun's large-scale magnetic field

    Science.gov (United States)

    Martin-Belda, D.; Cameron, R. H.

    2016-02-01

    Aims: We aim to determine the effect of converging flows on the evolution of a bipolar magnetic region (BMR), and to investigate the role of these inflows in the generation of poloidal flux. We also discuss whether the flux dispersal due to turbulent flows can be described as a diffusion process. Methods: We developed a simple surface flux transport model based on point-like magnetic concentrations. We tracked the tilt angle, the magnetic flux and the axial dipole moment of a BMR in simulations with and without inflows and compared the results. To test the diffusion approximation, simulations of random walk dispersal of magnetic features were compared against the predictions of the diffusion treatment. Results: We confirm the validity of the diffusion approximation to describe flux dispersal on large scales. We find that the inflows enhance flux cancellation, but at the same time affect the latitudinal separation of the polarities of the bipolar region. In most cases the latitudinal separation is limited by the inflows, resulting in a reduction of the axial dipole moment of the BMR. However, when the initial tilt angle of the BMR is small, the inflows produce an increase in latitudinal separation that leads to an increase in the axial dipole moment in spite of the enhanced flux destruction. This can give rise to a tilt of the BMR even when the BMR was originally aligned parallel to the equator.
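
    The diffusion-approximation check described in the Methods can be reproduced in miniature: for an unbiased 2-D random walk, the mean squared displacement should grow as 4Dt. A sketch with illustrative step statistics (not the paper's flux transport model):

        import numpy as np

        rng = np.random.default_rng(1)
        n_walkers, n_steps, sigma, dt = 100_000, 1000, 0.1, 1.0

        pos = np.zeros((n_walkers, 2))
        for _ in range(n_steps):
            pos += rng.normal(0.0, sigma, size=(n_walkers, 2))

        msd = (pos ** 2).sum(axis=1).mean()   # mean squared displacement <r^2>
        D = sigma ** 2 / (2.0 * dt)           # diffusion coefficient implied by the steps
        print(msd, 4.0 * D * n_steps * dt)    # diffusion predicts <r^2> = 4 D t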

  14. Network dynamics with BrainX3: a large-scale simulation of the human brain network with real-time interaction.

    Science.gov (United States)

    Arsiwalla, Xerxes D; Zucca, Riccardo; Betella, Alberto; Martinez, Enrique; Dalmazzo, David; Omedas, Pedro; Deco, Gustavo; Verschure, Paul F M J

    2015-01-01

    BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimulations to observe reverberating network activity, simulate lesion dynamics or implement network analysis functions from a library of graph theoretic measures. BrainX3 can thus be used as a novel immersive platform for exploration and analysis of dynamical activity patterns in brain networks, both at rest or in a task-related state, for discovery of signaling pathways associated to brain function and/or dysfunction and as a tool for virtual neurosurgery. Our results demonstrate these functionalities and shed insight on the dynamics of the resting-state attractor. Specifically, we found that a noisy network seems to favor a low firing attractor state. We also found that the dynamics of a noisy network is less resilient to lesions. Our simulations on TMS perturbations show that even though TMS inhibits most of the network, it also sparsely excites a few regions. This is presumably due to anti-correlations in the dynamics and suggests that even a lesioned network can show sparsely distributed increased activity compared to healthy resting-state, over specific brain areas.

  15. Network dynamics with BrainX3: a large-scale simulation of the human brain network with real-time interaction

    Science.gov (United States)

    Arsiwalla, Xerxes D.; Zucca, Riccardo; Betella, Alberto; Martinez, Enrique; Dalmazzo, David; Omedas, Pedro; Deco, Gustavo; Verschure, Paul F. M. J.

    2015-01-01

    BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimulations to observe reverberating network activity, simulate lesion dynamics or implement network analysis functions from a library of graph theoretic measures. BrainX3 can thus be used as a novel immersive platform for exploration and analysis of dynamical activity patterns in brain networks, both at rest or in a task-related state, for discovery of signaling pathways associated to brain function and/or dysfunction and as a tool for virtual neurosurgery. Our results demonstrate these functionalities and shed insight on the dynamics of the resting-state attractor. Specifically, we found that a noisy network seems to favor a low firing attractor state. We also found that the dynamics of a noisy network is less resilient to lesions. Our simulations on TMS perturbations show that even though TMS inhibits most of the network, it also sparsely excites a few regions. This is presumably due to anti-correlations in the dynamics and suggests that even a lesioned network can show sparsely distributed increased activity compared to healthy resting-state, over specific brain areas. PMID:25759649

  16. Network Dynamics with BrainX3: A Large-Scale Simulation of the Human Brain Network with Real-Time Interaction

    Directory of Open Access Journals (Sweden)

    Xerxes D. Arsiwalla

    2015-02-01

    BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimulations to observe reverberating network activity, simulate lesion dynamics or implement network analysis functions from a library of graph theoretic measures. BrainX3 can thus be used as a novel immersive platform for real-time exploration and analysis of dynamical activity patterns in brain networks, both at rest or in a task-related state, for discovery of signaling pathways associated to brain function and/or dysfunction and as a tool for virtual neurosurgery. Our results demonstrate these functionalities and shed insight on the dynamics of the resting-state attractor. Specifically, we found that a noisy network seems to favor a low firing attractor state. We also found that the dynamics of a noisy network is less resilient to lesions. Our simulations on TMS perturbations show that even though TMS inhibits most of the network, it also sparsely excites a few regions. This is presumably due to anti-correlations in the dynamics and suggests that even a lesioned network can show sparsely distributed increased activity compared to healthy resting-state, over specific brain areas.

  17. Data and performance profiles applying an adaptive truncation criterion, within linesearch-based truncated Newton methods, in large scale nonconvex optimization

    Directory of Open Access Journals (Sweden)

    Andrea Caliciotti

    2018-04-01

    In this paper, we report data and experiments related to the research article entitled “An adaptive truncation criterion, for linesearch-based truncated Newton methods in large scale nonconvex optimization” by Caliciotti et al. [1]. In particular, in Caliciotti et al. [1], large scale unconstrained optimization problems are considered by applying linesearch-based truncated Newton methods. In this framework, a key point is the reduction of the number of inner iterations needed, at each outer iteration, to approximately solve the Newton equation. A novel adaptive truncation criterion is introduced in Caliciotti et al. [1] to this aim. Here, we report the details concerning numerical experiments over a commonly used test set, namely CUTEst (Gould et al., 2015) [2]. Moreover, comparisons are reported in terms of performance profiles (Dolan and Moré, 2002) [3], adopting different parameter settings. Finally, our linesearch-based scheme is compared with a renowned trust region method, namely TRON (Lin and Moré, 1999) [4].
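
    Performance profiles in the sense of Dolan and Moré [3] can be computed in a few lines; a sketch (the toy timing matrix is invented for illustration):

        import numpy as np

        def performance_profile(times, taus):
            """Dolan-More performance profiles.

            times: (n_problems, n_solvers) array of costs (np.inf = failure);
            assumes each problem is solved by at least one solver. Returns,
            for each solver, the fraction of problems solved within a factor
            tau of the best solver, for each tau in taus.
            """
            best = times.min(axis=1, keepdims=True)
            ratios = times / best
            return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                             for s in range(times.shape[1])])

        times = np.array([[1.0, 2.0],
                          [3.0, 1.5],
                          [np.inf, 4.0]])        # solver 0 fails on problem 3
        print(performance_profile(times, taus=[1.0, 2.0, 4.0]))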

  18. Energy performance strategies for the large scale introduction of geothermal energy in residential and industrial buildings: The GEO.POWER project

    International Nuclear Information System (INIS)

    Giambastiani, B.M.S.; Tinti, F.; Mendrinos, D.; Mastrocicco, M.

    2014-01-01

    Use of shallow geothermal energy, in the form of ground coupled heat pumps (GCHP) for heating and cooling purposes, is an environmentally friendly and cost-effective alternative with the potential to replace fossil fuels and help mitigate global warming. Focusing on the recent results of the GEO.POWER project, this paper examines the energy performance strategies and the future regional and national financial instruments for large scale introduction of geothermal energy and GCHP systems in both residential and industrial buildings. After a transferability assessment to evaluate the reproducibility of some outstanding examples of systems currently existing in Europe for the utilisation of shallow geothermal energy, a set of regulatory, economic and technical actions is proposed to encourage GCHP market development and support geothermal energy investments within the existing European normative platforms. This analysis shows that many European markets are changing from a new GCHP market to a growth market. However, some interventions are still required, such as incentives, a regulatory framework, certification schemes and training activities, in order to accelerate market uptake and achieve the main European energy and climate targets. - Highlights: • Potentiality of geothermal applications for heating and cooling in buildings. • Description of the GEO.POWER project and its results. • Local strategies for the large scale introduction of GCHPs

  19. Impact of air-sea drag coefficient for latent heat flux on large scale climate in coupled and atmosphere stand-alone simulations

    Science.gov (United States)

    Torres, Olivier; Braconnot, Pascale; Marti, Olivier; Gential, Luc

    2018-05-01

    The turbulent fluxes across the ocean/atmosphere interface represent one of the principal driving forces of the global atmospheric and oceanic circulation. Despite decades of effort and improvement, the representation of these fluxes still presents a challenge because the turbulent processes act at small scales compared to the resolved scales of the models. Beyond this subgrid parameterization issue, a comprehensive understanding of the impact of air-sea interactions on the climate system is still lacking. In this paper we investigate the large-scale impacts of the transfer coefficient used to compute turbulent heat fluxes with the IPSL-CM4 climate model, in which the surface bulk formula is modified. Analyzing both atmosphere-only and coupled ocean-atmosphere general circulation model (AGCM, OAGCM) simulations allows us to study the direct effect of, and the mechanisms of adjustment to, this modification. We focus on the representation of latent heat flux in the tropics. We show that the heat transfer coefficients are highly similar for a given parameterization between AGCM and OAGCM simulations. Although the same areas are impacted in both kinds of simulations, the differences in surface heat fluxes are substantial. A regional modification of the heat transfer coefficient has more impact than a uniform modification in AGCM simulations, while in OAGCM simulations the opposite is observed. By studying the global energetics and the atmospheric circulation response to the modification, we highlight the role of the ocean in damping a large part of the disturbance. Modifying the heat exchange coefficient changes the way the coupled system works, owing to the link between atmospheric circulation and SST and the different feedbacks between ocean and atmosphere. The adjustment that takes place implies a balance of net incoming solar radiation that is the same in all simulations. As there is no change in model physics other than the drag coefficient, we obtain similar latent heat flux
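
    The exchange coefficient under study enters the bulk formula for latent heat flux, LH = rho * L_v * C_E * U * (q_s - q_a), so the flux scales linearly with it. A sketch with standard textbook constants (the exact IPSL-CM4 bulk formulation differs in detail):

        RHO_AIR = 1.2        # kg/m^3, near-surface air density (textbook value)
        L_V = 2.5e6          # J/kg, latent heat of vaporization

        def latent_heat_flux(c_e, wind, q_surf, q_air):
            """Bulk-aerodynamic latent heat flux (W/m^2).

            c_e: exchange coefficient; wind: near-surface wind speed (m/s);
            q_surf, q_air: surface saturation and air specific humidities (kg/kg).
            """
            return RHO_AIR * L_V * c_e * wind * (q_surf - q_air)

        base = latent_heat_flux(1.2e-3, 7.0, 0.020, 0.015)   # ~126 W/m^2
        pert = latent_heat_flux(1.5e-3, 7.0, 0.020, 0.015)   # +25% coefficient
        print(base, pert)    # the flux scales linearly with the coefficient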

  20. Local Fitting of the Kohn-Sham Density in a Gaussian and Plane Waves Scheme for Large-Scale Density Functional Theory Simulations.

    Science.gov (United States)

    Golze, Dorothea; Iannuzzi, Marcella; Hutter, Jürg

    2017-05-09

    A local resolution-of-the-identity (LRI) approach is introduced in combination with the Gaussian and plane waves (GPW) scheme to enable large-scale Kohn-Sham density functional theory calculations. In GPW, the computational bottleneck is typically the description of the total charge density on real-space grids. Introducing the LRI approximation, the linear scaling of the GPW approach with respect to system size is retained, while the prefactor for the grid operations is reduced. The density fitting is an O(N) scaling process implemented by approximating the atomic pair densities by an expansion in one-center fit functions. The computational cost for the grid-based operations becomes negligible in LRIGPW. The self-consistent field iteration is up to 30 times faster for periodic systems, depending on the symmetry of the simulation cell and on the density of grid points. However, due to the overhead introduced by the local density fitting, single point calculations and complete molecular dynamics steps, including the calculation of the forces, are effectively accelerated by up to a factor of ∼10. The accuracy of LRIGPW is assessed for different systems and properties, showing that total energies, reaction energies, and intramolecular and intermolecular structure parameters are well reproduced. LRIGPW also yields high quality results for extended condensed phase systems such as liquid water, ice XV, and molecular crystals.
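
    At its core, the density-fitting step described above is a least-squares expansion of a pair density in a small set of fit functions, i.e. solving the normal equations S c = t in some metric. The 1D toy sketch below (overlap metric, two Gaussian fit functions on one center, integrals evaluated on a grid) only illustrates that idea; it is not the LRIGPW implementation, and all exponents, centers and grid settings are made up.

        #include <math.h>
        #include <stdio.h>

        #define NFIT 2        /* number of one-center fit functions   */
        #define NGRID 2001    /* quadrature grid for the 1D integrals */

        static double gauss(double x, double x0, double a)
        {
            return exp(-a * (x - x0) * (x - x0));
        }

        int main(void)
        {
            double xA = -0.5, xB = 0.5;          /* the two atomic centers  */
            double x0 = -5.0, h = 0.005;         /* grid origin and spacing */
            double alpha[NFIT] = { 1.0, 0.25 };  /* fit-function exponents  */
            double S[NFIT][NFIT] = {{0}}, t[NFIT] = {0};

            for (int g = 0; g < NGRID; ++g) {
                double x = x0 + g * h;
                /* Toy pair density: product of Gaussians on A and B. */
                double rho = gauss(x, xA, 1.0) * gauss(x, xB, 1.0);
                double phi[NFIT];
                for (int i = 0; i < NFIT; ++i)
                    phi[i] = gauss(x, xA, alpha[i]); /* fit fns on center A */
                for (int i = 0; i < NFIT; ++i) {
                    t[i] += phi[i] * rho * h;           /* t_i = <phi_i|rho>    */
                    for (int j = 0; j < NFIT; ++j)
                        S[i][j] += phi[i] * phi[j] * h; /* S_ij = <phi_i|phi_j> */
                }
            }
            /* Solve the 2x2 normal equations S c = t by Cramer's rule. */
            double det = S[0][0] * S[1][1] - S[0][1] * S[1][0];
            double c0 = (t[0] * S[1][1] - S[0][1] * t[1]) / det;
            double c1 = (S[0][0] * t[1] - S[1][0] * t[0]) / det;
            printf("fit coefficients: c0 = %g, c1 = %g\n", c0, c1);
            return 0;
        }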

  1. Numerical and experimental simulation of accident processes using KMS large-scale test facility under the program of training university students for nuclear power industry

    International Nuclear Information System (INIS)

    Aniskevich, Yu.N.

    2005-01-01

    The KMS large-scale test facility is being constructed at the NITI site and is designed to model accident processes in VVER reactor plants and provide experimental data for safety analysis of both existing and future NPPs. KMS phase I is at the completion stage: a containment model of 2000 m3 volume intended for experimental simulation of the heat and mass transfer of steam-gas mixtures and aerosols inside containment. KMS phase II will incorporate a reactor model (1:27 scale) and be used for analysing a number of events, including primary and secondary LOCA. The KMS program for background training of university students in the nuclear field will include: participation in the development and application of experiment procedures; preparation and carrying out of experiments and analysis of experimental data; pretest and post-test calculations with different computer codes; on-the-job training as operators of experiment scenarios; and training of specialists in measurement and information acquisition technologies. (author)

  2. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and in many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issues

  3. Large Scale System Defense

    Science.gov (United States)

    2008-10-01

    [Report documentation page only; performing organization: Columbia University, 1700 Broadway, New York NY 10019-5905; sponsoring agency: AFRL/RIGA, 525 Brooks Rd., Rome NY 13441-4505. Only a fragment of the abstract survives: "...pealing because of the need to modify source code. Since source-level annotations serve as a vestigial policy, we articulated a way to augment self..."]

  4. Large-Scale Astrophysical Visualization on Smartphones

    Science.gov (United States)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets on the order of several petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover, educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., the formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  5. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ('the Task') and to assess whether involvement in the Task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  6. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP handles data sharing among the cores that comprise a node and MPI handles communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.
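
    The hybrid pattern benchmarked here combines the two levels of parallelism in one program: OpenMP threads share work within a node while MPI moves data between nodes. The sketch below shows that structure on a toy reduction; it is a generic illustration under the MPI and OpenMP standards, not code from NPB SP or BT.

        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        #define N 1000000

        int main(int argc, char **argv)
        {
            int provided, rank, size;
            /* Request thread support: OpenMP runs inside each MPI process,
               but only the master thread makes MPI calls (FUNNELED).      */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* Each rank owns a slice of the global iteration space. */
            long lo = (long)rank * N / size, hi = (long)(rank + 1) * N / size;
            double local = 0.0, global = 0.0;

            /* Node-level parallelism: threads share the rank's slice. */
            #pragma omp parallel for reduction(+:local)
            for (long i = lo; i < hi; ++i)
                local += 1.0 / ((double)i + 1.0);

            /* Inter-node parallelism: combine partial results across ranks. */
            MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                          MPI_COMM_WORLD);

            if (rank == 0)
                printf("sum = %.6f (ranks=%d, threads=%d)\n",
                       global, size, omp_get_max_threads());
            MPI_Finalize();
            return 0;
        }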

  7. Performance Characteristics of Hybrid MPI/OpenMP Implementations of NAS Parallel Benchmarks SP and BT on Large-Scale Multicore Clusters

    KAUST Repository

    Wu, X.; Taylor, V.

    2011-01-01

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore clusters provide a natural programming paradigm for hybrid programs, whereby OpenMP handles data sharing among the cores that comprise a node, and MPI handles communication between nodes. In this paper, we use the Scalar Pentadiagonal (SP) and Block Tridiagonal (BT) benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore clusters, Intrepid (BlueGene/P) at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on Intrepid and Jaguar. We also use performance tools and MPI trace libraries available on these clusters to further investigate the performance characteristics of the hybrid SP and BT. © 2011 The Author. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved.

  9. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    Science.gov (United States)

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best block and thread configuration under the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that with the TLPOM the performance of the algorithms increased significantly, by 34.1%, 33.96% and 24.07% on average for Fermi, Kepler and Maxwell, and that the RGCA ensures our IoT computing system provides low-cost and high-reliability services.

  11. Large scale GW calculations

    International Nuclear Information System (INIS)

    Govoni, Marco; Argonne National Lab., Argonne, IL; Galli, Giulia; Argonne National Lab., Argonne, IL

    2015-01-01

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states or the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green's function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons

  12. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, the Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abroad.

  13. Large-scale performance studies of the Resistive Plate Chamber fast tracker for the ATLAS 1st-level muon trigger

    CERN Document Server

    Cattani, G; The ATLAS collaboration

    2009-01-01

    In the ATLAS experiment, Resistive Plate Chambers provide the first-level muon trigger and bunch crossing identification over a large area of the barrel region, as well as being used as a very fast 2D tracker. To achieve these goals a system of about 4000 gas gaps operating in avalanche mode was built (resulting in a total readout surface of about 16000 m2 segmented into 350000 strips) and is now fully operational in the ATLAS pit, where its functionality has so far been tested extensively using cosmic rays. Such a large scale system allows the performance of RPCs (from the point of view of both gas gaps and readout electronics) to be studied with unprecedented sensitivity to rare effects, as well as providing the means to correlate, in a statistically significant way, characteristics measured at production sites with performance during operation. Calibrating such a system means fine-tuning thousands of parameters (involving both front-end electronics and gap voltage), as well as constantly monitoring performance and environm...

  14. Metoder for Modellering, Simulering og Regulering af Større Termiske Processer anvendt i Sukkerproduktion. Methods for Modelling, Simulation and Control of Large Scale Thermal Systems Applied in Sugar Production

    DEFF Research Database (Denmark)

    Nielsen, Kirsten Mølgaard; Nielsen, Jens Frederik Dalsgaard

    The subject of this Ph.D. thesis is to investigate and develop methods for modelling, simulation and control applicable to large scale thermal industrial plants. An ambition has been to evaluate the results in a physical process, and sugar production is well suited for the purpose. In collaboration...... simulator has been developed. The simulator handles the normal working conditions relevant to control engineers. A non-linear dynamic model based on mass and energy balances has been developed, and the model parameters have been adjusted to data measured at a Danish sugar plant. The simulator consists...... of a computer, a data terminal and an electric interface corresponding to the interface at the sugar plant. The simulator operates in real time, and thus a realistic test of controllers is possible. The idiomatic control methodology has been investigated by developing a control concept for the evaporation...

  15. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  17. A pioneering healthcare model applying large-scale production concepts: Principles and performance after more than 11,000 transplants at Hospital do Rim.

    Science.gov (United States)

    Pestana, José Medina

    2016-10-01

    The kidney transplant program at Hospital do Rim (hrim) is a unique healthcare model that applies the same principles of repetition of processes used in industrial production. This model, devised by Frederick Taylor, is founded on principles of scientific management that involve planning, rational execution of work, and distribution of responsibilities. The expected result is increased efficiency, improvement of results and optimization of resources. This model, almost completely subsidized by the Unified Health System (SUS, in the Portuguese acronym), has been used at the hrim in more than 11,000 transplants over the last 18 years. The hrim model consists of eight interconnected modules: organ procurement organization, preparation for the transplant, admission for transplant, surgical procedure, post-operative period, outpatient clinic, support units, and coordination and quality control. The flow of medical activities enables organized and systematic care of all patients. The improvement of the activities in each module is constant, with full monitoring of various administrative, health care, and performance indicators. The continuous improvement in clinical results confirms the efficiency of the program. Between 1998 and 2015, an increase was noted in graft survival (77.4 vs. 90.4%, p<0.001) and patient survival (90.5 vs. 95.1%, p=0.001). The high productivity, efficiency, and progressive improvement of the results obtained with this model suggest that it could be applied to other therapeutic areas that require large-scale care, preserving the humanistic characteristic of providing health care activity.

  18. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research done so far by SINTEF Energy Research shows that the approaches to large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  19. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is the observed phenomenon that galaxies located in the same region have similar properties such as star formation rate, color, gas fraction, and so on. Conformity was first observed among galaxies within the same halos (“one-halo conformity”). One-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, have further witnessed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other (“two-halo conformity” or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  20. Performance of granular activated carbon to remove micropollutants from municipal wastewater-A meta-analysis of pilot- and large-scale studies.

    Science.gov (United States)

    Benstoem, Frank; Nahrstedt, Andreas; Boehler, Marc; Knopp, Gregor; Montag, David; Siegrist, Hansruedi; Pinnekamp, Johannes

    2017-10-01

    For reducing organic micropollutants (MP) in municipal wastewater effluents, granular activated carbon (GAC) has been tested in various studies. We performed a systematic literature search and found 44 studies dealing with the adsorption of MPs (carbamazepine, diclofenac, sulfamethoxazole) from municipal wastewater on GAC in pilot- and large-scale plants. Within our meta-analysis we plot the bed volumes (BV [m3 water/m3 GAC]) treated until the breakthrough criterion of MP-BV20% was reached, as a function of potentially relevant parameters (empty bed contact time EBCT, influent DOC (DOC0) and manufacturing method). Moreover, we performed statistical tests (ANOVAs) to check the results for significance. The operating time of single adsorbers until breakthrough of diclofenac-BV20% differed by up to 2500% (800-20,000 BV). There was still elimination of the "very well/well" adsorbable MPs such as carbamazepine and diclofenac even when the equilibrium of DOC had already been reached. No strong statistical significance of EBCT and DOC0 on MP-BV20% could be found, owing to the lack of data and the high heterogeneity of studies using GAC of different qualities. In further studies, adsorbers should be operated ≫20,000 BV for exact calculation of breakthrough curves, and the following parameters should be recorded: selected MPs; DOC0; UVA254; EBCT; product name, manufacturing method and raw material of the GAC; suspended solids (TSS); backwash interval; backwash program; and pressure drop within the adsorber. Based on our investigations we generally recommend using reactivated GAC to reduce the environmental impact and carrying out tests at pilot scale to collect reliable data for process design. Copyright © 2017 Elsevier Ltd. All rights reserved.
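
    The two operating quantities in this meta-analysis reduce to simple ratios, so a short worked example may help; the adsorber values below are purely hypothetical.

        #include <stdio.h>

        int main(void)
        {
            /* Hypothetical GAC adsorber (illustrative values only). */
            double V_gac   = 10.0;     /* GAC bed volume [m3]       */
            double Q       = 120.0;    /* water flow rate [m3/h]    */
            double V_water = 2.0e5;    /* water treated so far [m3] */

            double EBCT = V_gac / Q * 60.0;  /* empty bed contact time [min] */
            double BV   = V_water / V_gac;   /* bed volumes treated [-]      */

            /* The breakthrough criterion MP-BV20% is then the BV at which
               the effluent MP concentration first exceeds 20% of the
               influent concentration.                                      */
            printf("EBCT = %.1f min, BV = %.0f\n", EBCT, BV); /* 5.0 min, 20000 */
            return 0;
        }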

  1. Effects of cognitive design principles on user’s performance and preference : A large scale evaluation of a soccer stats display

    NARCIS (Netherlands)

    Westerbeek, Hans; van Amelsvoort, Marije; Maes, Fons; Swerts, Marc

    2014-01-01

    We present an analytic and a large scale experimental comparison of two informationally equivalent information displays of soccer statistics. Both displays were presented by the BBC during the 2010 FIFA World Cup. The displays mainly differ in terms of the number and types of cognitively natural

  2. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h⁻¹ Mpc in the distribution of the visible matter of the universe is provided. The possibility of generating a periodic distribution with the characteristic scale of 120 h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  3. Prediction of etching-shape anomaly due to distortion of ion sheath around a large-scale three-dimensional structure by means of on-wafer monitoring technique and computer simulation

    International Nuclear Information System (INIS)

    Kubota, Tomohiro; Ohtake, Hiroto; Araki, Ryosuke; Yanagisawa, Yuuki; Samukawa, Seiji; Iwasaki, Takuya; Ono, Kohei; Miwa, Kazuhiro

    2013-01-01

    A system for predicting distortion of a profile during plasma etching was developed. The system consists of a combination of measurement and simulation. An ‘on-wafer sheath-shape sensor’ for measuring the plasma-sheath parameters (sheath potential and thickness) on the stage of the plasma etcher was developed. The sensor has numerous small electrodes for measuring sheath potential and saturation ion-current density, from which sheath thickness can be calculated. The results of the measurement show reasonable dependence on source power, bias power and pressure. Based on self-consistent calculation of potential distribution and ion- and electron-density distributions, simulation of the sheath potential distribution around an arbitrary 3D structure and the trajectory of incident ions from the plasma to the structure was developed. To confirm the validity of the distortion prediction by comparing it with experimentally measured distortion, silicon trench etching under chlorine inductively coupled plasma (ICP) was performed using a sample with a vertical step. It was found that the etched trench was distorted when the distance from the step was several millimetres or less. The distortion angle was about 20° at maximum. Measurement was performed using the on-wafer sheath-shape sensor in the same plasma condition as the etching. The ion incident angle, calculated as a function of distance from the step, successfully reproduced the experimentally measured angle, indicating that the combination of measurement by the on-wafer sheath-shape sensor and simulation can predict distortion of an etched structure. This prediction system will be useful for designing devices with large-scale 3D structures (such as those in MEMS) and determining the optimum etching conditions to obtain the desired profiles. (paper)

  4. Impact of tissue atrophy on high-pass filtered MRI signal phase-based assessment in large-scale group-comparison studies: A simulation study

    Science.gov (United States)

    Schweser, Ferdinand; Dwyer, Michael G.; Deistung, Andreas; Reichenbach, Jürgen R.; Zivadinov, Robert

    2013-10-01

    The assessment of abnormal accumulation of tissue iron in the basal ganglia nuclei and in white matter plaques using the gradient echo magnetic resonance signal phase has become a research focus in many neurodegenerative diseases such as multiple sclerosis or Parkinson’s disease. A common and natural approach is to calculate the mean high-pass-filtered phase of previously delineated brain structures. Unfortunately, the interpretation of such an analysis requires caution: in this paper we demonstrate that regional gray matter atrophy, which is concomitant with many neurodegenerative diseases, may itself directly result in a phase shift seemingly indicative of increased iron concentration, even without any real change in the tissue iron concentration. Although this effect is relatively small, the results of large-scale group comparisons may be driven by anatomical changes rather than by changes in the iron concentration.

  5. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when CPUs and accelerators must collaborate to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we let CPU and GPU collaborate in HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the memory-poor GPU and the memory-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach improves performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for the ghost and singularity data of 3D grid blocks, and we overlap the collaborative computation and communication as far as possible using advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To our best knowledge, those are the largest-scale CPU–GPU collaborative simulations
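
    A common way to realize the kind of CPU-GPU load balance described above is a static split of grid blocks in proportion to measured device throughput, so both sides finish an iteration at roughly the same time. The sketch below shows only that generic heuristic; it is not the HOSTA scheme, and the throughput values are invented.

        #include <stdio.h>

        /* Split `nblocks` grid blocks between GPU and CPU in proportion to
         * their measured throughputs r_gpu, r_cpu (blocks/s), so that both
         * devices finish together:  n_gpu / n_cpu ~ r_gpu / r_cpu.        */
        static void split_blocks(int nblocks, double r_gpu, double r_cpu,
                                 int *n_gpu, int *n_cpu)
        {
            *n_gpu = (int)(nblocks * r_gpu / (r_gpu + r_cpu) + 0.5);
            *n_cpu = nblocks - *n_gpu;
        }

        int main(void)
        {
            int n_gpu, n_cpu;
            /* Illustrative rates: the GPU ~1.3x as fast as the two-socket
               CPU, echoing the GPU-only comparison quoted above (the rates
               here are assumptions, not measurements).                    */
            split_blocks(64, 1.3, 1.0, &n_gpu, &n_cpu);
            printf("GPU gets %d blocks, CPU gets %d blocks\n", n_gpu, n_cpu);
            return 0;
        }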

  7. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
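
    As a point of reference for the analytical sensitivity derivatives developed in such work, the simplest baseline is a finite-difference derivative of a structural response with respect to a design variable. The sketch below differentiates the tip deflection of a cantilever with respect to its section moment of inertia; the formula is standard beam theory, but the function choice and all numbers are illustrative assumptions.

        #include <stdio.h>

        /* Toy response: cantilever tip deflection u(I) = P L^3 / (3 E I). */
        static double deflection(double I)
        {
            const double P = 1.0e4;    /* tip load [N] (illustrative)       */
            const double L = 2.0;      /* beam length [m]                   */
            const double E = 2.0e11;   /* Young's modulus [Pa], ~steel      */
            return P * L * L * L / (3.0 * E * I);
        }

        int main(void)
        {
            double I = 1.0e-6;   /* moment of inertia [m^4], design variable */
            double h = 1.0e-9;   /* perturbation (0.1% of I)                 */

            /* Central-difference sensitivity du/dI. */
            double dudI = (deflection(I + h) - deflection(I - h)) / (2.0 * h);
            /* Analytic value for comparison: du/dI = -P L^3 / (3 E I^2). */
            double exact = -1.0e4 * 8.0 / (3.0 * 2.0e11 * I * I);
            printf("du/dI: fd = %.6e, exact = %.6e\n", dudI, exact);
            return 0;
        }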

  8. Numerical study of Tallinn storm-water system flooding conditions using CFD simulations of multi-phase flow in a large-scale inverted siphon

    Science.gov (United States)

    Kaur, K.; Laanearu, J.; Annus, I.

    2017-10-01

    Numerical experiments are carried out for qualitative and quantitative interpretation of the multi-phase flow processes associated with malfunctioning of the Tallinn storm-water system during rain storms. The investigations focus on the single-line inverted siphon, which is used as an under-road connection of pipes in the storm-water system of interest. A multi-phase flow solver of the Computational Fluid Dynamics software OpenFOAM is used for simulating the three-phase flow dynamics in the hydraulic system. The CFD simulations are performed with different inflow rates under the same initial conditions. The computational results are compared essentially for two cases, 1) the design flow rate and 2) a larger flow rate, for emptying the initially filled inverted siphon of a slurry fluid. The larger flow-rate situations are of particular interest for detecting possible flooding. In this regard, the CFD solutions are anticipated to provide important insight into the functioning of the inverted siphon under restricted water-flow conditions in the simultaneous presence of air and slurry fluid.

  9. Wind-tunnel investigation of the thrust augmentor performance of a large-scale swept wing model. [in the Ames 40 by 80 foot wind tunnel

    Science.gov (United States)

    Koenig, D. G.; Falarski, M. D.

    1979-01-01

    Tests were made in the Ames 40- by 80-foot wind tunnel to determine the forward speed effects on wing-mounted thrust augmentors. The large-scale model was powered by the compressor output of J-85 driven Viper compressors. The flap settings used were 15 deg and 30 deg, with 0 deg, 15 deg, and 30 deg aileron settings. The maximum duct pressure and wind tunnel dynamic pressure were 66 cmHg (26 in Hg) and 1190 N/sq m (25 lb/sq ft), respectively. All tests were made at zero sideslip. Test results are presented without analysis.

  10. A large-scale simulation of climate change effects on flood regime - A case study for the Alabama-Coosa-Tallapoosa River Basin

    Science.gov (United States)

    Dullo, T. T.; Gangrade, S.; Marshall, R.; Islam, S. R.; Ghafoor, S. K.; Kao, S. C.; Kalyanapu, A. J.

    2017-12-01

    The damage and cost of flooding are continuously increasing due to climate change and variability, which compels the development and advancement of global flood hazard models. However, due to computational expense, the evaluation of large-scale, high-resolution flood regimes remains a challenge. The objective of this research is to use a coupled modeling framework that consists of a dynamically downscaled suite of eleven Coupled Model Intercomparison Project Phase 5 (CMIP5) climate models, a distributed hydrologic model called DHSVM, and a computationally efficient 2-dimensional hydraulic model called Flood2D-GPU to study the impacts of climate change on the flood regime in the Alabama-Coosa-Tallapoosa (ACT) River Basin. Downscaled meteorological forcings for 40 years in the historical period (1966-2005) and 40 years in the future period (2011-2050) were used as inputs to drive the calibrated DHSVM to generate annual maximum flood hydrographs. These flood hydrographs, along with 30-m resolution digital elevation data and estimated surface roughness, were then used by Flood2D-GPU to estimate high-resolution flood depth, velocity, duration, and regime. Preliminary results for the Conasauga river basin (an upper subbasin within the ACT) indicate that seven of the eleven climate projections show an average increase of 25 km2 in flooded area (between historical and future projections). Future work will focus on illustrating the effects of climate change on flood duration and area for the entire ACT basin.

  11. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  12. Large-scale matrix-handling subroutines 'ATLAS'

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Fujita, Keiichi; Matsuura, Toshihiko; Tahara, Nobuo

    1978-03-01

    The subroutine package 'ATLAS' has been developed for handling large-scale matrices. The package is composed of four kinds of subroutines: basic arithmetic routines, routines for solving linear simultaneous equations, routines for solving general eigenvalue problems, and utility routines. The subroutines are useful in large scale plasma-fluid simulations. (auth.)

  13. Implementation of 3D spatial indexing and compression in a large-scale molecular dynamics simulation database for rapid atomic contact detection.

    Science.gov (United States)

    Toofanny, Rudesh D; Simms, Andrew M; Beck, David A C; Daggett, Valerie

    2011-08-10

    Molecular dynamics (MD) simulations offer the ability to observe the dynamics and interactions of both whole macromolecules and individual atoms as a function of time. Taken in context with experimental data, atomic interactions from simulation provide insight into the mechanics of protein folding, dynamics, and function. The calculation of atomic interactions or contacts from an MD trajectory is computationally demanding, and the work required grows exponentially with the size of the simulation system. We describe the implementation of a spatial indexing algorithm in our multi-terabyte MD simulation database that significantly reduces the run-time required for the discovery of contacts. The approach is applied to the Dynameomics project data. Spatial indexing, also known as spatial hashing, is a method that divides the simulation space into regular sized bins and attributes an index to each bin. Since the calculation of contacts is widely employed in the simulation field, we also use it as the basis for testing compression of data tables. We investigate the effects of compression of the trajectory coordinate tables with different options of data and index compression within MS SQL SERVER 2008. Our implementation of spatial indexing speeds up the calculation of contacts over a 1 nanosecond (ns) simulation window by between 14% and 90% (i.e., 1.2 and 10.3 times faster). For a 'full' simulation trajectory (51 ns), spatial indexing reduces the calculation run-time by between 31% and 81% (between 1.4 and 5.3 times faster). Compression reduced table sizes but made no significant difference to the total execution time for neighbour discovery. The greatest compression (~36%) was achieved using page level compression on both the data and the indexes. The spatial indexing scheme significantly decreases the time taken to calculate atomic contacts and could be applied to other multidimensional neighbor discovery problems. The speed up enables on-the-fly calculation
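
    A minimal sketch of the spatial-hashing idea described above: map each atom's coordinates to a regular bin index, so contact candidates are confined to an atom's own bin and its 26 neighbors. This is a generic illustration, not the Dynameomics SQL implementation; the box size, bin edge and coordinates are made up.

        #include <stdio.h>

        /* Regular grid over the simulation box: bin edge >= contact cutoff,
         * so all contacts of an atom lie in its bin or the 26 adjacent bins. */
        typedef struct { double x0, y0, z0, edge; int nx, ny, nz; } Grid;

        static int bin_index(const Grid *g, double x, double y, double z)
        {
            int ix = (int)((x - g->x0) / g->edge);
            int iy = (int)((y - g->y0) / g->edge);
            int iz = (int)((z - g->z0) / g->edge);
            return (iz * g->ny + iy) * g->nx + ix;   /* flattened 3D index */
        }

        int main(void)
        {
            /* 100 A box, 5.4 A (cutoff-sized) bins; illustrative values. */
            Grid g = { 0.0, 0.0, 0.0, 5.4, 19, 19, 19 };
            printf("atom at (12.1, 3.3, 47.9) -> bin %d\n",
                   bin_index(&g, 12.1, 3.3, 47.9));
            return 0;
        }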

  15. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales, from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high degree of efficiency in the utilization of, e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial for understanding the runtime behavior, identifying optimum model settings, and efficiently pinpointing potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but all the more important when complex coupled component models are to be analysed. Here we present our experience with the coupling, application tuning (e.g., a 5-fold speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model, in which the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed

  16. Technical Design Report for large-scale neutrino detectors prototyping and phased performance assessment in view of a long-baseline oscillation experiment

    CERN Document Server

    De Bonis, I.; Duchesneau, D.; Pessard, H.; Bordoni, S.; Ieva, M.; Lux, T.; Sanchez, F.; Jipa, A.; Lazanu, I.; Calin, M.; Esanu, T.; Ristea, O.; Ristea, C.; Nita, L.; Efthymiopoulos, I.; Nessi, M.; Asfandiyarov, R.; Blondel, A.; Bravar, A.; Cadoux, F.; Haesler, A.; Karadzhov, Y.; Korzenev, A.; Martin, C.; Noah, E.; Ravonel, M.; Rayner, M.; Scantamburlo, E.; Bayes, R.; Soler, F.J.P.; Nuijten, G.A.; Loo, K.; Maalampi, J.; Slupecki, M.; Trzaska, W.H.; Campanelli, M.; Blebea-Apostu, A.M.; Chesneanu, D.; Gomoiu, M.C; Mitrica, B.; Margineanu, R.M.; Stanca, D.L.; Colino, N.; Gil-Botella, I.; Novella, P.; Palomares, C.; Santorelli, R.; Verdugo, A.; Karpikov, I.; Khotjantsev, A.; Kudenko, Y.; Mefodiev, A.; Mineev, O.; Ovsiannikova, T.; Yershov, N.; Enqvist, T.; Kuusiniemi, P.; De La Taille, C.; Dulucq, F.; Martin-Chassard, G.; Andrieu, B.; Dumarchez, J.; Giganti, C.; Levy, J.-M.; Popov, B.; Robert, A.; Agostino, L.; Buizza-Avanzini, M.; Dawson, J.; Franco, D.; Gorodetzky, P.; Kryn, D.; Patzak, T.; Tonazzo, A.; Vannucci, F.; Bésida, O.; Bolognesi, S.; Delbart, A.; Emery, S.; Galymov, V.; Mazzucato, E.; Vasseur, G.; Zito, M.; Bogomilov, M.; Tsenov, R.; Vankova-Kirilova, G.; Friend, M.; Hasegawa, T.; Nakadaira, T.; Sakashita, K.; Zambelli, L.; Autiero, D.; Caiulo, D.; Chaussard, L.; Déclais, Y.; Franco, D.; Marteau, J.; Pennacchio, E.; Bay, F.; Cantini, C.; Crivelli, P.; Epprecht, L.; Gendotti, A.; Di Luise, S.; Horikawa, S.; Murphy, S.; Nikolics, K.; Periale, L.; Regenfus, C.; Rubbia, A.; Sgalaberna, D.; Viant, T.; Wu, S.; Sergiampietri, F.; CERN. Geneva. SPS and PS Experiments Committee; SPSC

    2014-01-01

    In June 2012, an Expression of Interest for a long-baseline experiment (LBNO, CERN-SPSC-EOI-007) was submitted to the CERN SPSC and is presently under review. LBNO considers three types of neutrino detector technologies: a double-phase liquid argon (LAr) TPC and a magnetised iron detector as far detectors. For the near detector, a high-pressure gas TPC embedded in a calorimeter and a magnet is the baseline design. A mandatory milestone in view of any future long-baseline experiment is a concrete prototyping effort towards the envisioned large-scale detectors, and an accompanying campaign of measurements aimed at assessing the systematic errors that will affect their intended physics programme. Following encouraging feedback from the 108th SPSC on the technology choices, we have defined as a priority the construction and operation of a 6x6x6 m3 (active volume) double-phase liquid argon (DLAr) demonstrator, and a parallel development of the technologies necessary for large magnetised MIN...

  17. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after the ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylmethacrylate; (2) gas dynamics and heat transport through a large scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamics and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10 μm) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generated by kerosene pool fires, probably because the fire module of the code is a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  18. Large scale chromatographic separations using continuous displacement chromatography (CDC)

    International Nuclear Information System (INIS)

    Taniguchi, V.T.; Doty, A.W.; Byers, C.H.

    1988-01-01

    A process for large scale chromatographic separations using a continuous chromatography technique is described. The process combines the advantages of large scale batch fixed column displacement chromatography with conventional analytical or elution continuous annular chromatography (CAC) to enable large scale displacement chromatography to be performed on a continuous basis (CDC). Such large scale, continuous displacement chromatography separations have not been reported in the literature. The process is demonstrated with the ion exchange separation of a binary lanthanide (Nd/Pr) mixture. The process is, however, applicable to any displacement chromatography separation that can be performed using conventional batch, fixed column chromatography

  19. A massively parallel algorithm for the solution of constrained equations of motion with applications to large-scale, long-time molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Fijany, A. [Jet Propulsion Lab., Pasadena, CA (United States); Coley, T.R. [Virtual Chemistry, Inc., San Diego, CA (United States); Cagin, T.; Goddard, W.A. III [California Institute of Technology, Pasadena, CA (United States)

    1997-12-31

    Successful molecular dynamics (MD) simulation of large systems (> million atoms) for long times (> nanoseconds) requires the integration of constrained equations of motion (CEOM). Constraints are used to eliminate high frequency degrees of freedom (DOF) and to allow the use of rigid bodies. Solving the CEOM allows for larger integration time-steps and helps focus the simulation on the important collective dynamics of chemical, biological, and materials systems. We explore advances in multibody dynamics which have resulted in O(N) algorithms for propagating the CEOM. However, because of their strictly sequential nature, the computational time required by these algorithms does not scale down with increased numbers of processors. We then present the new constraint force algorithm for solving the CEOM and show that this algorithm is fully parallelizable, leading to a computational cost of O(N/P + log P) for N DOF on P processors.
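
    For context, the classic sequential baseline that such parallel CEOM solvers compete against is the iterative SHAKE scheme, which repeatedly projects positions back onto the constraint manifold. The sketch below applies a SHAKE-style correction to a single bond-length constraint between two equal-mass particles; it is a generic textbook illustration, not the constraint force algorithm of the abstract, and the coordinates are made up.

        #include <math.h>
        #include <stdio.h>

        /* SHAKE-style correction enforcing |r1 - r2| = d after an
         * unconstrained position update (equal masses assumed).   */
        static void shake_pair(double r1[3], double r2[3], double d, int iters)
        {
            for (int it = 0; it < iters; ++it) {
                double dx[3], len2 = 0.0;
                for (int k = 0; k < 3; ++k) {
                    dx[k] = r1[k] - r2[k];
                    len2 += dx[k] * dx[k];
                }
                /* Constraint violation and first-order correction factor. */
                double diff = (len2 - d * d) / (4.0 * len2);
                for (int k = 0; k < 3; ++k) {
                    r1[k] -= diff * dx[k];   /* move both particles symmetrically */
                    r2[k] += diff * dx[k];
                }
            }
        }

        int main(void)
        {
            double r1[3] = { 0.0, 0.0, 0.0 }, r2[3] = { 1.3, 0.1, 0.0 };
            shake_pair(r1, r2, 1.0, 50);     /* target bond length d = 1.0 */
            double dx = r1[0]-r2[0], dy = r1[1]-r2[1], dz = r1[2]-r2[2];
            printf("bond length after SHAKE: %f\n", sqrt(dx*dx + dy*dy + dz*dz));
            return 0;
        }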

  20. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 +/- 5 μm. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...

  1. Large scale nuclear structure studies

    International Nuclear Information System (INIS)

    Faessler, A.

    1985-01-01

    Results of large scale nuclear structure studies are reported. The starting point is the Hartree-Fock-Bogoliubov solution with angular momentum and proton and neutron number projection after variation. This model for number- and spin-projected two-quasiparticle excitations with realistic forces yields results in sd-shell nuclei as good as the 'exact' shell-model calculations. Here the authors present results for the pf-shell nucleus 46Ti and results for the A=130 mass region, where they studied 58 different nuclei with the same single-particle energies and the same effective force derived from a meson exchange potential. They carried out a Hartree-Fock-Bogoliubov variation after mean field projection in realistic model spaces. In this way, they determine for each yrast state the optimal mean Hartree-Fock-Bogoliubov field. They apply this method to 130Ce and 128Ba using the same effective nucleon-nucleon interaction. (Auth.)

  2. Large-scale river regulation

    International Nuclear Information System (INIS)

    Petts, G.

    1994-01-01

    Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)

  3. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight into large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI to visualizing large flow-field pathline data. The goal of our work is to provide an optimized image-based method which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathline segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
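
    The central data structure is compact enough to sketch. The fragment below is a CPU emulation, in Python, of the per-pixel linked list described above; on the GPU the prepend would be realized with an atomic counter for node allocation and an atomic exchange on the head pointer. Class and member names are illustrative, not taken from the thesis.

      class PerPixelLists:
          """CPU sketch of a per-pixel linked list ("A-buffer") of pathline segments."""

          def __init__(self, width, height):
              self.width = width
              self.heads = [-1] * (width * height)  # index of newest node per pixel, -1 = empty
              self.nodes = []                       # flat node pool: (payload, next_index)

          def insert(self, x, y, payload):
              p = y * self.width + x
              self.nodes.append((payload, self.heads[p]))  # new node points at the old head
              self.heads[p] = len(self.nodes) - 1          # atomic exchange on the GPU

          def segments(self, x, y):
              """Walk one pixel's list, newest to oldest, without touching other pixels."""
              n = self.heads[y * self.width + x]
              while n != -1:
                  payload, n = self.nodes[n]
                  yield payload

    Filtering and color-coding then reduce to walking these short per-pixel lists, which is why the method stays interactive without re-reading the original flow data.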

  4. Simulations of Cyclone Sidr in the Bay of Bengal with a High-Resolution Model: Sensitivity to Large-Scale Boundary Forcing

    Science.gov (United States)

    Kumar, Anil; Done, James; Dudhia, Jimy; Niyogi, Dev

    2011-01-01

    The predictability of Cyclone Sidr in the Bay of Bengal was explored in terms of track and intensity using the Advanced Research Hurricane Weather Research Forecast (AHW) model. This constitutes the first application of the AHW over an area that lies outside the region of the North Atlantic for which this model was developed and tested. Several experiments were conducted to understand the possible contributing factors that affected Sidr's intensity and track simulation by varying the initial start time and domain size. Results show that Sidr's track was strongly controlled by the synoptic flow at the 500-hPa level, especially due to the strong mid-latitude westerly over north-central India. A 96-h forecast produced westerly winds over north-central India at the 500-hPa level that were notably weaker; this likely caused the modeled cyclone track to drift from the observed actual track. Reducing the model domain size reduced model error in the synoptic-scale winds at 500 hPa and produced an improved cyclone track. Specifically, the cyclone track appeared to be sensitive to the upstream synoptic flow, and was, therefore, sensitive to the location of the western boundary of the domain. However, cyclone intensity remained largely unaffected by this synoptic wind error at the 500-hPa level. Comparison of the high resolution, moving nested domain with a single coarser resolution domain showed little difference in tracks, but resulted in significantly different intensities. Experiments on the domain size with regard to the total precipitation simulated by the model showed that precipitation patterns and 10-m surface winds were also different. This was mainly due to the mid-latitude westerly flow across the west side of the model domain. The analysis also suggested that the total precipitation pattern and track was unchanged when the domain was extended toward the east, north, and south. Furthermore, this highlights our conclusion that Sidr was influenced from the west

  5. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  6. X-ray clusters in a cold dark matter + lambda universe: A direct, large-scale, high-resolution, hydrodynamic simulation

    Science.gov (United States)

    Cen, Renyue; Ostriker, Jeremiah P.

    1994-01-01

    A new, three-dimensional, shock-capturing, hydrodynamic code is utilized to determine the distribution of hot gas in a cold dark matter (CDM) + lambda model universe. Periodic boundary conditions are assumed: a box with size 85/h Mpc, having cell size 0.31/h Mpc, is followed in a simulation with 270^3 = 10^7.3 cells. We adopt Omega = 0.45, lambda = 0.55, h ≡ H/(100 km/s/Mpc) = 0.6, and then, from the Cosmic Background Explorer (COBE) and light element nucleosynthesis, sigma_8 = 0.77, Omega_b = 0.043. We identify the X-ray emitting clusters in the simulation box, compute the luminosity function at several wavelength bands, the temperature function and estimated sizes, as well as the evolution of these quantities with redshift. This open model succeeds in matching local observations of clusters, in contrast to the standard Omega = 1 CDM model, which fails. It predicts an order of magnitude decline in the number density of bright (hν = 2-10 keV) clusters from z = 0 to z = 2, in contrast to a slight increase in the number density for the standard Omega = 1 CDM model. This COBE-normalized CDM + lambda model produces approximately the same number of X-ray clusters having L_X > 10^43 erg/s as observed. The background radiation field at 1 keV due to clusters is approximately the observed background which, after correction for numerical effects, again indicates that the model is consistent with observations.

  7. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs optimized for capital expenditure. New concepts must ensure manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview of the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  8. Large-Scale Molecular Simulations on the Mechanical Response and Failure Behavior of a defective Graphene: Cases of 5-8-5 Defects

    Science.gov (United States)

    Wang, Shuaiwei; Yang, Baocheng; Yuan, Jinyun; Si, Yubing; Chen, Houyang

    2015-10-01

    Understanding the effect of defects on the mechanical responses and failure behaviors of a graphene membrane is important for its applications. As examples, in this paper, a family of graphene sheets with various 5-8-5 defects are designed and their mechanical responses are investigated by employing molecular dynamics simulations. The dependence of fracture strength and strain, as well as Young’s moduli, on the nearest neighbor distance and defect type is examined. By introducing the 5-8-5 defects into graphene, the fracture strength and strain become smaller. However, the Young’s moduli of DL (Linear arrangement of the repeat unit 5-8-5 defect along the zigzag direction of graphene), DS (a Slope angle between the repeat unit 5-8-5 defect and the zigzag direction of graphene) and DZ (Zigzag-like 5-8-5 defects) defects in the zigzag direction become larger than those of pristine graphene in the same direction. A maximum increase of 11.8% in Young’s modulus is obtained. Furthermore, a brittle cracking mechanism is proposed for the graphene with 5-8-5 defects. The present work may provide insights into controlling the mechanical properties by preparing defects in graphene, and gives a full picture for the applications of graphene with defects in flexible electronics and nanodevices.
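
    The quoted moduli follow from the slope of the simulated stress-strain curve at small strain. The snippet below is a generic post-processing sketch of that step on synthetic data; it is not the authors' script, and the 2% elastic window and the made-up stress response are assumptions for illustration only.

      import numpy as np

      # Hypothetical MD tensile-test output: strain and stress arrays.
      strain = np.linspace(0.0, 0.15, 151)            # synthetic data
      stress = 1.0e12 * strain - 2.0e12 * strain**2   # Pa, made-up response

      elastic = strain < 0.02                         # assumed small-strain window
      E = np.polyfit(strain[elastic], stress[elastic], 1)[0]  # slope = Young's modulus
      print(f"Young's modulus ~ {E / 1e9:.0f} GPa")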

  9. Reviving large-scale projects

    International Nuclear Information System (INIS)

    Desiront, A.

    2003-01-01

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890 foot long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both

  10. Atomistic mechanism of graphene growth on a SiC substrate: Large-scale molecular dynamics simulations based on a new charge-transfer bond-order type potential

    Science.gov (United States)

    Takamoto, So; Yamasaki, Takahiro; Nara, Jun; Ohno, Takahisa; Kaneta, Chioko; Hatano, Asuka; Izumi, Satoshi

    2018-03-01

    Thermal decomposition of silicon carbide is a promising approach for the fabrication of graphene. However, the atomistic growth mechanism of graphene remains unclear. This paper describes the development of a new charge-transfer interatomic potential. Carbon bonds with a wide variety of characteristics can be reproduced by the proposed vectorized bond-order term. A large-scale thermal decomposition simulation enables us to observe the continuous growth process of the multiring carbon structure. The annealing simulation reveals the atomistic process by which the multiring carbon structure is transformed to flat graphene involving only six-membered rings. Also, it is found that the surface atoms of the silicon carbide substrate enhance the homogeneous graphene formation.
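
    The functional form of the new potential is not given in this record. For orientation, the Tersoff-type bond-order family that such potentials extend writes the total energy as

      E = \frac{1}{2} \sum_{i \neq j} f_C(r_{ij}) \left[ f_R(r_{ij}) + b_{ij}\, f_A(r_{ij}) \right],

    where f_C is a cutoff function, f_R and f_A are repulsive and attractive pair terms, and the bond order b_ij encodes the local environment of the i-j bond. The paper's vectorized bond-order term and charge transfer are extensions of this scheme; their exact expressions are not reproduced here.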

  11. Large Scale Glazed Concrete Panels

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    Today, there is a lot of focus on the aesthetic potential of concrete surfaces, both globally and locally. World famous architects such as Herzog De Meuron, Zaha Hadid, Richard Meyer and David Chippenfield challenge the exposure of concrete in their architecture. At home, this trend can be seen in the crinkly façade of DR-Byen (the domicile of the Danish Broadcasting Company) by architect Jean Nouvel and in the black curved smooth concrete surfaces of Zaha Hadid’s Ordrupgård. Furthermore, one can point to initiatives such as “Synlig beton” (visible concrete), presented on the website www.synligbeton.dk, and spæncom’s aesthetic relief effects by the designer Line Kramhøft (www.spaencom.com). It is my hope that the research-development project “Lasting large scale glazed concrete formwork,” which I am working on at DTU, Department of Architectural Engineering, will be able to complement these. It is a project where I...

  12. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near term projects within High Energy Physics and other computing communities will deploy clusters of scale 1000s of processors and be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interest within HENP and the larger clustering community

  13. Large scale cross hole testing

    International Nuclear Information System (INIS)

    Ball, J.K.; Black, J.H.; Doe, T.

    1991-05-01

    As part of the Site Characterisation and Validation programme the results of the large scale cross hole testing have been used to document hydraulic connections across the SCV block, to test conceptual models of fracture zones and to obtain hydrogeological properties of the major hydrogeological features. The SCV block is highly heterogeneous. This heterogeneity is not smoothed out even over scales of hundreds of meters. Results of the interpretation validate the hypothesis of the major fracture zones, A, B and H; not much evidence of minor fracture zones is found. The uncertainty in the flow path through the fractured rock causes severe problems in interpretation. Derived values of hydraulic conductivity were found to be in a narrow range of two to three orders of magnitude. Test design did not allow fracture zones to be tested individually. This could be improved by testing the high hydraulic conductivity regions specifically. The Piezomac and single hole equipment worked well. Few, if any, of the tests ran long enough to approach equilibrium. Many observation boreholes showed no response. This could either be because there is no hydraulic connection, or because there is a connection but a response is not seen within the time scale of the pumping test. The fractional dimension analysis yielded credible results, and the sinusoidal testing procedure provided an effective means of identifying the dominant hydraulic connections. (10 refs.) (au)

  14. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction; whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.

  15. Quantum-chemistry based calibration of the alkali metal cation series (Li(+)-Cs(+)) for large-scale polarizable molecular mechanics/dynamics simulations.

    Science.gov (United States)

    Dudev, Todor; Devereux, Mike; Meuwly, Markus; Lim, Carmay; Piquemal, Jean-Philip; Gresh, Nohad

    2015-02-15

    The alkali metal cations in the series Li(+)-Cs(+) act as major partners in a diversity of biological processes and in bioinorganic chemistry. In this article, we present the results of their calibration in the context of the SIBFA polarizable molecular mechanics/dynamics procedure. It relies on quantum-chemistry (QC) energy-decomposition analyses of their monoligated complexes with representative O-, N-, S-, and Se- ligands, performed with the aug-cc-pVTZ(-f) basis set at the Hartree-Fock level. Close agreement with QC is obtained for each individual contribution, even though the calibration involves only a limited set of cation-specific parameters. This agreement is preserved in tests on polyligated complexes with four and six O- ligands, water and formamide, indicating the transferability of the procedure. Preliminary extensions to density functional theory calculations are reported. © 2014 Wiley Periodicals, Inc.

  16. Coupling of latent heat flux and the greenhouse effect by large-scale tropical/subtropical dynamics diagnosed in a set of observations and model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gershunov, A. [Climate Research Division, Scripps Institution of Oceanography, La Jolla, CA 92093-0224 (United States); Roca, R. [Laboratoire de Meteorologie Dynamique, Ecole Polytechnique, 91128 Palaiseau (France)

    2004-03-01

    Coupled variability of the greenhouse effect (GH) and latent heat flux (LHF) over the tropical - subtropical oceans is described, summarized and compared in observations and a coupled ocean-atmosphere general circulation model (CGCM). Coupled seasonal and interannual modes account for much of the total variability in both GH and LHF. In both observations and model, seasonal coupled variability is locally 180° out of phase throughout the tropics. Moisture is brought into convergent/convective regions from remote source areas located partly in the opposite, non-convective hemisphere. On interannual time scales, the tropical Pacific GH in the ENSO region of largest interannual variance is 180° out of phase with local LHF in observations but in phase in the model. A local source of moisture is thus present in the model on interannual time scales while in observations, moisture is mostly advected from remote source regions. The latent cooling and radiative heating of the surface as manifested in the interplay of LHF and GH is an important determinant of the current climate. Moreover, the hydrodynamic processes involved in the GH-LHF interplay determine in large part the climate response to external perturbations mainly through influencing the water vapor feedback but also through their intimate connection to the hydrological cycle. The diagnostic process proposed here can be performed on other CGCMs. Similarly, it should be repeated using a number of observational latent heat flux datasets to account for the variability in the different satellite retrievals. A realistic CGCM could be used to further study these coupled dynamics in natural and anthropogenically altered climate conditions. (orig.)

  17. The solar noise barrier project: 1. Effect of incident light orientation on the performance of a large-scale luminescent solar concentrator noise barrier

    NARCIS (Netherlands)

    Kanellis, M.; de Jong, M.; Slooff, L.H.; Debije, M.G.

    2017-01-01

    In this work we describe the relative performance of the largest luminescent solar concentrator (LSC) constructed to date. Comparisons are made for performance of North/South and East/West facing panels during a sunny day. It is shown that the East/West panels display much more varied performance

  18. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.

    2016-02-15

    © 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas, they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small scales) and from low pass filtered (large scales) velocity fields tend to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.
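
    The diagnostic itself is simple to state: low-pass filter the velocity to isolate the large scales, take the local vorticity r.m.s. as the measure of small-scale activity, and correlate the two fields. The sketch below illustrates this on a synthetic 2D snapshot; the random field and filter widths are placeholders, not the DNS of the paper.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(0)
      u = gaussian_filter(rng.standard_normal((256, 256)), 2)  # synthetic velocity components
      v = gaussian_filter(rng.standard_normal((256, 256)), 2)

      omega = np.gradient(v, axis=1) - np.gradient(u, axis=0)  # vorticity (arrays are [y, x])
      small = omega - gaussian_filter(omega, 8)                # small-scale part
      activity = np.sqrt(gaussian_filter(small**2, 8))         # local vorticity r.m.s.

      dudy = np.gradient(gaussian_filter(u, 8), axis=0)        # large-scale velocity gradient
      r = np.corrcoef(activity.ravel(), dudy.ravel())[0, 1]
      print(f"correlation coefficient: {r:+.2f}")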

  19. Large-scale galaxy bias

    Science.gov (United States)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.
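
    Schematically, the bias expansion discussed in the review relates the galaxy density contrast to the leading local gravitational observables; truncated at second order it reads

      \delta_g(\mathbf{x},\tau) = b_1\,\delta(\mathbf{x},\tau) + \frac{b_2}{2}\,\delta^2(\mathbf{x},\tau) + b_{K^2}\,(K_{ij}K^{ij})(\mathbf{x},\tau) + \dots + \epsilon(\mathbf{x},\tau),

    where δ is the matter density contrast, K_ij the tidal field, b_n the bias parameters, and ε the stochastic contribution absorbing the small-scale physics of galaxy formation.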

  20. Large-scale galaxy bias

    Science.gov (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  1. Concepts for Large Scale Hydrogen Production

    OpenAIRE

    Jakobsen, Daniel; Åtland, Vegar

    2016-01-01

    The objective of this thesis is to perform a techno-economic analysis of large-scale, carbon-lean hydrogen production in Norway, in order to evaluate various production methods and estimate a breakeven price level. Norway possesses vast energy resources and the export of oil and gas is vital to the country's economy. The results of this thesis indicate that hydrogen represents a viable, carbon-lean opportunity to utilize these resources, which can prove key in the future of Norwegian energy e...

  2. Performance and Feasibility Analysis of a Grid Interactive Large Scale Wind/PV Hybrid System based on Smart Grid Methodology Case Study South Part – Jordan

    Directory of Open Access Journals (Sweden)

    Qais H. Alsafasfeh

    2015-02-01

    Most recent research on renewable energy resources shares one main goal: to make Jordan less dependent on imported energy through locally developed and produced solar power. This paper discusses an efficient wind/PV hybrid system as the main power source for the southern part of Jordan. The proposed hybrid system design is based on a smart grid methodology: the solar panels would be installed on the rooftops of electricity subscribers across the governorates of Maan, Tafila, Karak and Aqaba, while the wind turbines would be sited at a single location; in this way the capital cost of the project is reduced. The simulation results also show that the cost is very competitive and the system feasible. An economic analysis of the proposed renewable energy system was made using HOMER simulation, and the evaluation was completed with the per-kilowatt cost of the EDCO utility: the net present cost is $2,551,676,416, and the cost of energy is $0.07/kWh with a renewable fraction of 86.6%.
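
    As a cross-check on how such figures relate, HOMER-style economics ties the net present cost (NPC) and the cost of energy (COE) together through the capital recovery factor; the symbols, discount rate i and project lifetime N below are generic conventions, not values quoted by the paper:

      \mathrm{CRF}(i,N) = \frac{i\,(1+i)^{N}}{(1+i)^{N} - 1}, \qquad
      \mathrm{COE} = \frac{\mathrm{NPC} \cdot \mathrm{CRF}(i,N)}{E_{\mathrm{served}}},

    where E_served is the annual electrical load served.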

  3. Low-Complexity Transmit Antenna Selection and Beamforming for Large-Scale MIMO Communications

    Directory of Open Access Journals (Sweden)

    Kun Qian

    2014-01-01

    Transmit antenna selection plays an important role in large-scale multiple-input multiple-output (MIMO) communications, but optimal large-scale MIMO antenna selection is a technical challenge. Exhaustive search is often employed in antenna selection, but it cannot be efficiently implemented in large-scale MIMO communication systems due to its prohibitively high computation complexity. This paper proposes a low-complexity interactive multiple-parameter optimization method for joint transmit antenna selection and beamforming in large-scale MIMO communication systems. The objective is to jointly maximize the channel outage capacity and signal-to-noise ratio (SNR) performance and minimize the mean square error in transmit antenna selection and minimum variance distortionless response (MVDR) beamforming without exhaustive search. The effectiveness of all the proposed methods is verified by extensive simulation results. It is shown that the required antenna selection processing time of the proposed method does not increase with the number of selected antennas, whereas the computation complexity of the conventional exhaustive search method increases significantly when large-scale antenna arrays are employed in the system. This is particularly useful in antenna selection for large-scale MIMO communication systems.
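
    For reference, the MVDR beamformer named above is the standard solution of minimizing output power under a distortionless-response constraint toward the steering vector a(θ); this is textbook material, not a result specific to the paper:

      \mathbf{w}_{\mathrm{MVDR}} = \arg\min_{\mathbf{w}}\ \mathbf{w}^{H}\mathbf{R}\,\mathbf{w}
      \quad \text{s.t.} \quad \mathbf{w}^{H}\mathbf{a}(\theta) = 1
      \quad \Longrightarrow \quad
      \mathbf{w}_{\mathrm{MVDR}} = \frac{\mathbf{R}^{-1}\mathbf{a}(\theta)}{\mathbf{a}^{H}(\theta)\,\mathbf{R}^{-1}\,\mathbf{a}(\theta)},

    where R is the interference-plus-noise covariance matrix.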

  4. The solar noise barrier project : 2. The effect of street art on performance of a large scale luminescent solar concentrator prototype

    NARCIS (Netherlands)

    Debije, M.G.; Tzikas, C.; Rajkumar, V.A.; de Jong, M.

    2017-01-01

    Noise barriers have been used worldwide to reduce the impact of sound generated by traffic on nearby areas. A common feature appearing on these noise barriers is all manner of graffiti and street art. In this work we describe the relative performance of a large area luminescent solar concentrator

  5. Computational Typologies of Multidimensional End-of-Primary-School Performance Profiles from an Educational Perspective of Large-Scale TIMSS and PIRLS Surveys

    Science.gov (United States)

    Unlu, Ali; Schurig, Michael

    2015-01-01

    Recently, performance profiles in reading, mathematics and science were created using the data collectively available in the Trends in International Mathematics and Science Study (TIMSS) and the Progress in International Reading Literacy Study (PIRLS) 2011. In addition, a classification of children to the end of their primary school years was…

  6. Foster Wheeler's Solutions for Large Scale CFB Boiler Technology: Features and Operational Performance of Łagisza 460 MWe CFB Boiler

    Science.gov (United States)

    Hotta, Arto

    During recent years, once-through supercritical (OTSC) CFB technology has been developed, enabling CFB technology to proceed to medium-scale (500 MWe) utility projects such as the Łagisza Power Plant in Poland, owned by Poludniowy Koncern Energetyczny SA (PKE), with a net efficiency of nearly 44%. The Łagisza power plant is currently being commissioned and reached full load operation in March 2009. The initial operation shows very good performance and confirms that the CFB process has no problems with scaling up to this size. The once-through steam cycle utilizing Siemens' vertical tube Benson technology has also performed as predicted in the CFB process. Foster Wheeler has developed the CFB design further, up to 800 MWe with a net efficiency of ≥45%.

  7. Detecting differential protein expression in large-scale population proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues/challenges unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values contain information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model has robust performance in both simulated data and proteomics data from a large clinical study. Because variation in patient sample quality and drift in instrument performance are unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.

  8. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
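
    The record does not spell out the hierarchical model, so the sketch below shows only the standard baseline such estimators improve on: linear shrinkage of the sample covariance toward a scaled identity (Ledoit-Wolf style). It is a contrast case, not the paper's Bayesian estimator, and the shrinkage weight is a placeholder.

      import numpy as np

      def shrink_cov(X, alpha=0.2):
          """Linear shrinkage of the sample covariance toward a scaled identity."""
          S = np.cov(X, rowvar=False)       # p x p sample covariance
          mu = np.trace(S) / S.shape[0]     # average variance sets the target scale
          return (1.0 - alpha) * S + alpha * mu * np.eye(S.shape[0])

      # n = 30 samples of p = 200 variables: S alone is singular, the shrunk matrix is not.
      X = np.random.default_rng(1).standard_normal((30, 200))
      Sigma = shrink_cov(X)
      print(np.linalg.cond(Sigma))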

  9. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding the gas flow processes through clay barriers in schemes of radioactive waste disposal, the Lasgit in situ experiment was planned and is currently in progress. Modelling the experiment will permit a better understanding of the responses, confirm hypotheses about mechanisms and processes, and provide lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed along the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings, as gas penetrates after the gas entry pressure is achieved and may produce deformations which in turn lead to permeability increases. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug

  10. Direct large-scale synthesis of 3D hierarchical mesoporous NiO microspheres as high-performance anode materials for lithium ion batteries.

    Science.gov (United States)

    Bai, Zhongchao; Ju, Zhicheng; Guo, Chunli; Qian, Yitai; Tang, Bin; Xiong, Shenglin

    2014-03-21

    Hierarchically porous materials are an ideal platform for constructing high performance Li-ion batteries (LIBs), offering advantages such as a large contact area between the electrode and the electrolyte, fast and flexible transport pathways for the electrolyte ions, and space for buffering the strain caused by repeated Li insertion/extraction. In this work, NiO microspheres with hierarchically porous structures have been synthesized via a facile thermal decomposition method using only a simple precursor. The superstructures are composed of nanocrystals with high specific surface area, large pore volume, and broad pore size distribution. The electrochemical properties of the 3D hierarchical mesoporous NiO microspheres were examined by cyclic voltammetry and galvanostatic charge-discharge studies. The results demonstrate that the as-prepared NiO microspheres are excellent electrode materials for LIBs, with high specific capacity and good retention and rate performance. The 3D hierarchical mesoporous NiO microspheres retain a reversible capacity of 800.2 mA h g(-1) after 100 cycles at a high current density of 500 mA g(-1).

  11. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulics & Coastal Engineering Laboratory, Aalborg University, Denmark, and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping scale effects...

  12. Technology for Large-Scale Translation of Clinical Practice Guidelines: A Pilot Study of the Performance of a Hybrid Human and Computer-Assisted Approach.

    Science.gov (United States)

    Van de Velde, Stijn; Macken, Lieve; Vanneste, Koen; Goossens, Martine; Vanschoenbeek, Jan; Aertgeerts, Bert; Vanopstal, Klaar; Vander Stichele, Robert; Buysschaert, Joost

    2015-10-09

    The construction of EBMPracticeNet, a national electronic point-of-care information platform in Belgium, began in 2011 to optimize quality of care by promoting evidence-based decision making. The project involved, among other tasks, the translation of 940 EBM Guidelines of Duodecim Medical Publications from English into Dutch and French. Considering the scale of the translation process, it was decided to make use of computer-aided translation performed by certificated translators with limited expertise in medical translation. Our consortium used a hybrid approach, involving a human translator supported by a translation memory (using SDL Trados Studio), terminology recognition (using SDL MultiTerm) from medical terminology databases, and support from online machine translation. This resulted in a validated translation memory, which is now in use for the translation of new and updated guidelines. The objective of this experiment was to evaluate the performance of the hybrid human and computer-assisted approach in comparison with translation unsupported by translation memory and terminology recognition. A comparison was also made with the translation efficiency of an expert medical translator. We conducted a pilot study in which two sets of 30 new and 30 updated guidelines were randomized to one of three groups. Comparable guidelines were translated (1) by certificated junior translators without medical specialization using the hybrid method, (2) by an experienced medical translator without this support, and (3) by the same junior translators without the support of the validated translation memory. A medical proofreader, blinded to the translation procedure, evaluated the translated guidelines for acceptability and adequacy. Translation speed was measured by recording translation and post-editing time. The human translation edit rate was calculated as a metric to evaluate the quality of the translation. A further evaluation was made of
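
    The human translation edit rate mentioned as the quality metric is conventionally defined as the minimum number of post-edits needed to turn the system output into the final human reference, normalized by reference length:

      \mathrm{HTER} = \frac{\#\text{insertions} + \#\text{deletions} + \#\text{substitutions} + \#\text{shifts}}{\#\text{reference words}}.

    This is the standard definition; the record does not state whether any variant was used.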

  13. Experiments performed on a man-made crack in the flat low-permeability basement as a basis for large-scale technical extraction of terrestrial heat

    Energy Technology Data Exchange (ETDEWEB)

    Kappelmeyer, O.; Jung, R.; Rummel, F.

    1984-01-01

    Research work is being performed on an in-situ experimental field in the crystalline subsoil near Falkenberg in East Bavaria, which is to help develop new technologies for exploiting geothermal energy. The aim is to make terrestrial heat available for technical utilization even with a relatively normal geologic structure of the subsoil - i.e. far away from volcanoes and outside of layers carrying water or steam. To achieve this objective, artificial heat exchange systems were produced by hydraulic fracturing of crystalline rocks at a depth of 250 m. The geometric positions of these cracks were located by means of seismic and geo-electric methods. Seismic observations allowed the derivation of a crack model which helped with penetrating the man-made crack by sectional drilling. The circulation system, consisting of production drill-hole, crack system and sectional drill-hole, was studied for hydraulic parameters (e.g. flow resistance) and thermal efficiency at various pressure levels in the crack. Crack width was measured at different pressure stages for the first time. Thermal model calculations allow the results gained from the flat, relatively cool basement to be transferred to basement areas of elevated temperature. A number of rock parameters which are relevant for assessing whether or not the subsoil is suitable for creating artificial heat exchange systems were examined on-site and at bench scale.

  14. Internationalization Measures in Large Scale Research Projects

    Science.gov (United States)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities are challenged with little experience on how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, such undertakings constantly have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to support universities in international recruitment. These include, e.g., institutional partnerships, research marketing, a welcome culture, support for science mobility, and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups and identify interfaces between project management, university administration, researchers, and international partners to work together, exchange information, and improve processes in order to be able to recruit, support, and keep the brightest heads for your project.

  15. Coordinated SLNR based Precoding in Large-Scale Heterogeneous Networks

    KAUST Repository

    Boukhedimi, Ikram; Kammoun, Abla; Alouini, Mohamed-Slim

    2017-01-01

    This work focuses on the downlink of large-scale two-tier heterogeneous networks composed of a macro-cell overlaid by micro-cell networks. Our interest is in the design of coordinated beamforming techniques that allow mitigation of the inter-cell interference. Particularly, we consider the case in which the coordinating base stations (BSs) have imperfect knowledge of the channel state information. Under this setting, we propose a regularized SLNR based precoding design in which the regularization factor is used to allow better resilience with respect to the channel estimation errors. Based on tools from random matrix theory, we provide an analytical characterization of the SINR and SLNR performance. These results are then exploited to propose a proper setting of the regularization factor. Simulation results are finally provided in order to validate our findings and to confirm the performance of the proposed precoding scheme.
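
    Schematically, for user k with channel h_k and precoder w_k, the SLNR criterion and a regularized precoder of the kind described read (estimated channels ĥ and the regularization factor α stand in for the paper's exact design, which is not reproduced in this record):

      \mathrm{SLNR}_k = \frac{|\mathbf{h}_k^{H}\mathbf{w}_k|^{2}}{\sum_{j \neq k} |\mathbf{h}_j^{H}\mathbf{w}_k|^{2} + \sigma^{2}},
      \qquad
      \mathbf{w}_k \propto \Big( \sum_{j \neq k} \hat{\mathbf{h}}_j \hat{\mathbf{h}}_j^{H} + \alpha \mathbf{I} \Big)^{-1} \hat{\mathbf{h}}_k,

    where tuning α trades off leakage suppression against robustness to channel estimation errors.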

  17. Breaking wave impact on offshore tripod structures. Comparison of large scale experiments, CFD simulations, and DIN recommended practice; Wellenbrechen an Offshore Tripod-Gruendungen. Versuche und Simulationen im Vergleich zu Richtlinien

    Energy Technology Data Exchange (ETDEWEB)

    Hildebrandt, Arndt; Schlurmann, Torsten [Hannover Univ. (Germany). Franzius-Institut fuer Wasserbau und Kuesteningenieurwesen

    2012-05-15

    Coastal and near shore areas offer a large potential for offshore wind energy production due to strong and steady wind conditions. Thousands of offshore wind energy converters are projected for mass production within the next years. Detailed understanding of the extreme, dynamic wave loads on offshore structures is essential for an efficient design. The impact on structures is a complex process and further studies are required for more detailed load assessments, which is why breaking wave loads were investigated by the research project "GIGAWIND alpha ventus - Subproject 1" within the network "Research at alpha ventus" (RAVE). Large scale laboratory tests (1:12) with breaking waves have been carried out at the Large Wave Flume of the "Forschungszentrum Kueste" (FZK, Hanover) to reveal more detailed insights into the impact area, the duration and development of the wave-induced momentum, and the intensity of pressures. In addition, local pressures calculated by a three-dimensional numerical impact simulation are compared to the Large Wave Flume experiments. Slamming coefficients have been derived from the physical model tests and CFD simulations for comparison to load calculations based on guidelines. (orig.)
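
    For context, in guideline-style load calculations of the kind the experiments are compared against, the slamming contribution to the force on a vertical cylinder is commonly written in the form

      F_s = \frac{1}{2}\,\rho\, C_s\, D\, \lambda\, \eta_b\, C_b^{2},

    where ρ is the water density, D the cylinder diameter, C_b the wave celerity at breaking, η_b the breaking crest elevation, λ the curling factor, and C_s the slamming coefficient, ranging from π (von Karman) to 2π (Wagner-type solutions). This is the generic form only; the slamming coefficients derived in the paper are not reproduced here.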

  18. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  19. Development of a transverse mixing model for large scale impulsion phenomenon in tight lattice

    International Nuclear Information System (INIS)

    Liu, Xiaojing; Ren, Shuo; Cheng, Xu

    2017-01-01

    Highlights: • Experimental data of Krauss are used to validate the feasibility of the CFD simulation method. • CFD simulation is performed to simulate the large scale impulsion phenomenon for a tight-lattice bundle. • A mixing model to simulate the large scale impulsion phenomenon is proposed based on fitting the CFD results. • The newly developed mixing model has been added to the subchannel code. - Abstract: Tight lattices are widely adopted in innovative reactor fuel bundle designs since they can increase the conversion ratio and improve the heat transfer between fuel bundles and coolant. It has been noticed that a large scale impulsion of the cross-velocity exists in the gap region, which plays an important role in the transverse mixing flow and heat transfer. Although many experiments and numerical simulations have been carried out to study the impulsion of velocity, a model to describe the wave length, amplitude and frequency of the mixing coefficient is still missing. This research work takes advantage of the CFD method to simulate the experiment of Krauss and to compare experimental data and simulation results in order to demonstrate the feasibility of the simulation method and turbulence model. Then, based on this verified method and model, several simulations are performed with different Reynolds numbers and different pitch-to-diameter ratios. By fitting the CFD results achieved, a mixing model to simulate the large scale impulsion phenomenon is proposed and adopted in the current subchannel code. The new mixing model is applied to some fuel assembly analyses by subchannel calculation; it can be noticed that the new mixing model reduces the hot channel factor and contributes to a uniform distribution of outlet temperature.
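
    The fitted correlation itself is not given in this record. Purely as an illustration of the form such a model takes, subchannel codes express the turbulent crossflow per unit length between subchannels i and j through a mixing coefficient β, which a large scale impulsion model would modulate periodically, e.g.

      w'_{ij} = \beta\,\bar{s}\,\bar{G}, \qquad
      \beta(z,t) = \beta_0 \left[ 1 + a \sin\!\big( 2\pi ( z/\lambda - f t ) \big) \right],

    where s̄ is the gap width, Ḡ the mean axial mass flux, and a, λ and f the amplitude, wave length and frequency of the impulsion. The paper's fitted coefficients and exact functional form are not reproduced here.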

  20. Radiations: large scale monitoring in Japan

    International Nuclear Information System (INIS)

    Linton, M.; Khalatbari, A.

    2011-01-01

    As the consequences of radioactive leaks on their health are a matter of concern for Japanese people, a large scale epidemiological study has been launched by the Fukushima Medical University. It concerns the two million inhabitants of the Fukushima Prefecture. On the national level, and with the support of public funds, medical care and follow-up as well as systematic controls are foreseen, notably to check the thyroids of 360,000 young people under 18 years old and of 20,000 pregnant women in the Fukushima Prefecture. Some measurements have already been performed on young children. Despite the sometimes rather low measured values, and because they know that some parts of the area are at least as contaminated as the surroundings of Chernobyl, some people are reluctant to go back home.

  1. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  2. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  3. Comparative Analysis of Different Protocols to Manage Large Scale Networks

    OpenAIRE

    Anil Rao Pimplapure; Dr Jayant Dubey; Prashant Sen

    2013-01-01

    In recent years the number, complexity and size of large-scale networks have increased. The best example of a large-scale network is the Internet, and more recently data centers in cloud environments. Managing such networks involves several tasks, such as traffic monitoring, security and performance optimization, which is a big burden for the network administrator. This research report studies different protocols, i.e. conventional protocols like the Simple Network Management Protocol and the newer Gossip bas...
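
    As a minimal sketch of why gossip-based management scales better than central polling, the following push-pull averaging routine lets every node learn a global statistic through random peer exchanges alone; the names and values are illustrative and not part of any protocol surveyed in the report.

        import random

        def gossip_average(values, rounds=50):
            # Each round, every node averages its state with one random peer;
            # all states converge to the global mean without a central poller.
            state = list(values)
            n = len(state)
            for _ in range(rounds):
                for i in range(n):
                    j = random.randrange(n)
                    mean = (state[i] + state[j]) / 2.0
                    state[i] = state[j] = mean
            return state

        loads = [10.0, 40.0, 70.0, 100.0]   # e.g. per-node traffic counters
        print(gossip_average(loads))        # every entry approaches 55.0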

  4. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  5. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  6. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    The problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...

  7. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological

  8. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  9. Ultra-large scale synthesis of high electrochemical performance SnO{sub 2} quantum dots within 5 min at room temperature following a growth self-termination mechanism

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Hongtao, E-mail: htcui@ytu.edu.cn; Xue, Junying; Ren, Wanzhong; Wang, Minmin

    2015-10-05

    Highlights: • SnO{sub 2} quantum dots were prepared at an ultra-large scale at room temperature within 5 min. • The grinding of SnCl{sub 2}⋅2H{sub 2}O and ammonium persulphate with morpholine produces quantum dots. • The reactions were self-terminated through the rapid consumption of water. • The obtained SnO{sub 2} quantum dots show high electrochemical performance. - Abstract: SnO{sub 2} quantum dots are prepared at an ultra-large scale by a productive synthetic procedure without using any organic ligand. The grinding of a solid mixture of SnCl{sub 2}⋅2H{sub 2}O and ammonium persulphate with morpholine in a mortar at room temperature produces 1.2 nm SnO{sub 2} quantum dots within 5 min. The formation of SnO{sub 2} is initiated by the reaction between tin ions and the hydroxyl groups generated by hydrolysis of morpholine in the hydrate water released from SnCl{sub 2}⋅2H{sub 2}O. As water is rapidly consumed by the hydrolysis reaction of morpholine, the growth of the particles is self-terminated immediately after their transitory period of nucleation and growth. As a result of the simple procedure and its high tolerance to scale-up, at least 50 g of SnO{sub 2} quantum dots can be produced in one batch in our laboratory. The as-prepared quantum dots present high electrochemical performance due to the effective faradaic reaction and the alternative trapping of electrons and holes.

  10. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the tool for design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements on the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economic and environmental performance are reported, based on the experiences from the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed design and energy planning of CSDHPs, a computer simulation model is developed and validated on the measurements from the Marstal case. The final model is then generalised into a 'generic' model for CSDHPs. The meteorological reference data, the Danish Reference Year, are applied to find the mean performance of the plant designs. To find the expected variability of the thermal performance of such plants, a method is proposed in which data from a year with poor solar irradiation and a year with strong solar irradiation are applied. With this simulation tool, design studies are carried out, ranging from parameter analysis and energy planning for a new settlement to a proposal for combining flat-plate solar collectors with high-performance solar collectors, exemplified by a trough collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also revealed the need for computer models of the more advanced solar collector designs and especially of the control operation of CSDHPs. In the final chapter, the CSDHP technology is put into perspective with respect to other possible technologies to assess the relevance of the application.

  11. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a Nuclear Power Plant. The consequences of combustion processes therefore have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need to take hydrogen explosion phenomena into account in risk management. Combustion modelling in large-scale geometries thus remains one of the open severe accident safety issues. At present, no combustion model exists that can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. The major effort in model development therefore has to be devoted to adapting existing approaches, or creating new ones, capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of the numerical simulations are presented together with comparisons, critical discussion and conclusions. (authors)
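
    For orientation, a generic flame-wrinkling closure of the kind described, written here in LaTeX notation, transports a progress variable c (0 in the reactants, 1 in the products) with the burning rate enhanced by a wrinkling factor; this is the standard textbook form, not necessarily the exact RDEM formulation used in the paper:

        \frac{\partial(\rho c)}{\partial t} + \nabla \cdot (\rho \mathbf{u} c)
            = \rho_u \, \Xi \, S_L \, \lvert \nabla c \rvert

    Here \rho_u is the unburnt gas density, S_L the laminar flame speed and \Xi the wrinkling factor; the choice of \Xi is what distinguishes slow deflagrations from fast, accelerated flames.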

  12. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  13. Large-scale Homogenization of Bulk Materials in Mammoth Silos

    NARCIS (Netherlands)

    Schott, D.L.

    2004-01-01

    This doctoral thesis concerns the large-scale homogenization of bulk materials in mammoth silos. The objective of this research was to determine the best stacking and reclaiming method for homogenization in mammoth silos. For this purpose a simulation program was developed to estimate the

  14. Superconducting materials for large scale applications

    International Nuclear Information System (INIS)

    Dew-Hughes, D.

    1975-01-01

    Applications of superconductors capable of carrying large current densities in large-scale electrical devices are examined. Discussions are included on critical current density, superconducting materials available, and future prospects for improved superconducting materials. (JRD)

  15. Large-scale regions of antimatter

    International Nuclear Information System (INIS)

    Grobov, A. V.; Rubin, S. G.

    2015-01-01

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  16. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru [National Research Nuclear University MEPhI (Russian Federation)

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  17. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  18. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting occupy an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy and the political sphere. In this very position, large-scale research and policy consulting lack the institutional guarantees and rational background guarantees that are characteristic of their sociological environment. Large-scale research can neither deal with the production of innovative goods under considerations of profitability, nor can it hope for full recognition by the basis-oriented scientific community. Policy consulting neither has the political system's competence to make decisions, nor can it judge successfully by the critical standards of the established social sciences, at least as far as the present situation is concerned. This intermediary position of large-scale research and policy consulting supports, in three respects, the thesis that this is a new form of institutionalization of science: (1) external control, (2) the organization form, and (3) the theoretical conception of large-scale research and policy consulting. (orig.) [de

  19. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available The paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. It establishes a finite element model of the top flange connection system with the finite element analysis software MSC. Marc/Mentat, analyzes its fatigue strain, simulates the fatigue loading conditions of the flange with the Bladed software, acquires the flange fatigue load spectrum with the rain-flow counting method and, finally, performs the fatigue analysis of the top flange with the fatigue analysis software MSC. Fatigue and the Palmgren-Miner linear cumulative damage theory. The result provides new thinking for the flange fatigue analysis of large-scale wind turbine generators and possesses practical engineering value.
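
    The Palmgren-Miner rule named above is simple enough to state in a few lines: the damage contributions n_i/N_i of all stress-range bins of the rain-flow-counted spectrum are summed, and failure is predicted when the sum reaches one. The sketch below assumes a hypothetical S-N curve; neither the constants nor the spectrum are taken from the flange analysis itself.

        def sn_cycles(stress_range):
            # Hypothetical S-N curve N = C / S^m (constants for illustration only).
            C, m = 2.0e12, 3.0
            return C / stress_range ** m

        def miner_damage(load_spectrum):
            # Palmgren-Miner linear cumulative damage: D = sum(n_i / N_i).
            return sum(n / sn_cycles(s) for s, n in load_spectrum)

        # (stress range [MPa], cycle count) pairs, e.g. from rain-flow counting.
        spectrum = [(80.0, 2.0e5), (120.0, 5.0e4), (160.0, 1.0e4)]
        damage = miner_damage(spectrum)
        print(f"cumulative damage D = {damage:.3f}")   # D >= 1.0 predicts failure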

  20. Iodine oxides in large-scale THAI tests

    International Nuclear Information System (INIS)

    Funke, F.; Langrock, G.; Kanzleiter, T.; Poss, G.; Fischer, K.; Kühnel, A.; Weber, G.; Allelein, H.-J.

    2012-01-01

    Highlights: ► Iodine oxide particles were produced from gaseous iodine and ozone. ► Ozone replaced the effect of ionizing radiation in the large-scale THAI facility. ► The mean diameter of the iodine oxide particles was about 0.35 μm. ► Particle formation was faster than the chemical reaction between iodine and ozone. ► Deposition of iodine oxide particles was slow in the absence of other aerosols. - Abstract: The conversion of gaseous molecular iodine into iodine oxide aerosols has significant relevance in the understanding of the fission product iodine volatility in a LWR containment during severe accidents. In containment, the high radiation field caused by fission products released from the reactor core induces radiolytic oxidation into iodine oxides. To study the characteristics and the behaviour of iodine oxides at large scale, two THAI tests, Iod-13 and Iod-14, were performed, simulating radiolytic oxidation of molecular iodine by reaction of iodine with ozone, with the ozone injected from an ozone generator. The observed iodine oxides form submicron particles with mean volume-related diameters of about 0.35 μm and show low deposition rates in the THAI tests performed in the absence of other nuclear aerosols. Formation of iodine aerosols from the gaseous precursors iodine and ozone is fast compared to their chemical interaction. The current approach in empirical iodine containment behaviour models in severe accidents, including the radiolytic production of I2-oxidizing agents followed by the I2 oxidation itself, is confirmed by these THAI tests.

  1. DECOVALEX III/BENCHPAR PROJECTS. Approaches to Upscaling Thermal-Hydro-Mechanical Processes in a Fractured Rock Mass and its Significance for Large-Scale Repository Performance Assessment. Summary of Findings. Report of BMT2/WP3

    International Nuclear Information System (INIS)

    Andersson, Johan; Staub, Isabelle; Knight, Les

    2005-02-01

    The Benchmark Test 2 of DECOVALEX III and Work Package 3 of BENCHPAR concern the upscaling of Thermal (T), Hydrological (H) and Mechanical (M) processes in a fractured rock mass and its significance for large-scale repository performance assessment. The work is primarily concerned with the extent to which the various thermo-hydro-mechanical couplings in a fractured rock mass adjacent to a repository are significant for the solute transport typically calculated in large-scale repository performance assessments. Since the presence of even quite small fractures may control the hydraulic, mechanical and coupled hydromechanical behaviour of the rock mass, a key aim of the work has been to explore the extent to which these can be upscaled and represented by 'equivalent' continuum properties appropriate for PA calculations. From these general aims, the BMT was set up as a numerical study of a large-scale reference problem. Analysing this reference problem should: help explore how different means of simplifying the geometrical detail of a site, with their implications for model parameters ('upscaling'), impact model predictions of relevance to repository performance; explore to what extent the THM coupling needs to be considered in relation to PA measures; and compare the uncertainties in upscaling (both uncertainty about how to upscale and uncertainty that arises from the upscaling process) and in the consideration of THM couplings with the inherent uncertainty and spatial variability of the site-specific data. Furthermore, it has been an essential component of the work that the individual teams not only produce numerical results but also make their own judgements and provide proper justification for their conclusions based on their analysis. It should also be understood that the conclusions drawn are partly specific to the problem analysed, in particular as it mainly concerns a 2D application. This means that specific conclusions may have limited applicability to real problems in

  2. DECOVALEX III/BENCHPAR PROJECTS. Approaches to Upscaling Thermal-Hydro-Mechanical Processes in a Fractured Rock Mass and its Significance for Large-Scale Repository Performance Assessment. Summary of Findings. Report of BMT2/WP3

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Johan (comp.) [JA Streamflow AB, Aelvsjoe (Sweden); Staub, Isabelle (comp.) [Golder Associates AB, Stockholm (Sweden); Knight, Les (comp.) [Nirex UK Ltd, Oxon (United Kingdom)

    2005-02-15

    The Benchmark Test 2 of DECOVALEX III and Work Package 3 of BENCHPAR concern the upscaling of Thermal (T), Hydrological (H) and Mechanical (M) processes in a fractured rock mass and its significance for large-scale repository performance assessment. The work is primarily concerned with the extent to which the various thermo-hydro-mechanical couplings in a fractured rock mass adjacent to a repository are significant for the solute transport typically calculated in large-scale repository performance assessments. Since the presence of even quite small fractures may control the hydraulic, mechanical and coupled hydromechanical behaviour of the rock mass, a key aim of the work has been to explore the extent to which these can be upscaled and represented by 'equivalent' continuum properties appropriate for PA calculations. From these general aims, the BMT was set up as a numerical study of a large-scale reference problem. Analysing this reference problem should: help explore how different means of simplifying the geometrical detail of a site, with their implications for model parameters ('upscaling'), impact model predictions of relevance to repository performance; explore to what extent the THM coupling needs to be considered in relation to PA measures; and compare the uncertainties in upscaling (both uncertainty about how to upscale and uncertainty that arises from the upscaling process) and in the consideration of THM couplings with the inherent uncertainty and spatial variability of the site-specific data. Furthermore, it has been an essential component of the work that the individual teams not only produce numerical results but also make their own judgements and provide proper justification for their conclusions based on their analysis. It should also be understood that the conclusions drawn are partly specific to the problem analysed, in particular as it mainly concerns a 2D application. This means that specific conclusions may have limited applicability

  3. The role of large scale motions on passive scalar transport

    Science.gov (United States)

    Dharmarathne, Suranga; Araya, Guillermo; Tutkun, Murat; Leonardi, Stefano; Castillo, Luciano

    2014-11-01

    We study direct numerical simulation (DNS) of turbulent channel flow at Reτ = 394 to investigate the effect of large-scale motions on the fluctuating temperature field, which forms a passive scalar field. A statistical description of the large-scale features of the turbulent channel flow is obtained using two-point correlations of the velocity components. Two-point correlations of the fluctuating temperature field are also examined in order to identify possible similarities between the velocity and temperature fields. The two-point cross-correlations between the velocity and temperature fluctuations are further analyzed to establish connections between these two fields. In addition, we use proper orthogonal decomposition (POD) to extract the most dominant modes of the fields and discuss the coupling of the large-scale features of the turbulence and the temperature field.
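
    A minimal sketch of the two-point correlation used here, assuming homogeneity along one direction; the synthetic signals merely stand in for DNS velocity and temperature fluctuations:

        import numpy as np

        def two_point_correlation(f, g):
            # Normalized two-point correlation R_fg(dx) along a homogeneous
            # direction (biased estimate: no correction for shrinking overlap).
            f = f - f.mean()
            g = g - g.mean()
            n = f.size
            corr = np.correlate(f, g, mode="full")[n - 1:] / n
            return corr / (f.std() * g.std())

        x = np.linspace(0.0, 8.0 * np.pi, 4096)
        u = np.sin(x) + 0.3 * np.random.default_rng(1).normal(size=x.size)
        theta = np.sin(x - 0.5) + 0.3 * np.random.default_rng(2).normal(size=x.size)

        R_u_theta = two_point_correlation(u, theta)
        print(R_u_theta[:5])   # correlation decays with increasing separation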

  4. Large-Scale Structure and Hyperuniformity of Amorphous Ices

    Science.gov (United States)

    Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto

    2017-09-01

    We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.

  5. In situ vitrification large-scale operational acceptance test analysis

    International Nuclear Information System (INIS)

    Buelt, J.L.; Carter, J.G.

    1986-05-01

    A thermal treatment process is currently under study to provide possible enhancement of in-place stabilization of transuranic and chemically contaminated soil sites. The process is known as in situ vitrification (ISV). In situ vitrification is a remedial action process that destroys solid and liquid organic contaminants and incorporates radionuclides into a glass-like material that renders contaminants substantially less mobile and less likely to impact the environment. A large-scale operational acceptance test (LSOAT) was recently completed in which more than 180 t of vitrified soil were produced in each of three adjacent settings. The LSOAT demonstrated that the process conforms to the functional design criteria necessary for the large-scale radioactive test (LSRT) to be conducted following verification of the performance capabilities of the process. The energy requirements and vitrified block size, shape, and mass are sufficiently equivalent to those predicted by the ISV mathematical model to confirm its usefulness as a predictive tool. The LSOAT demonstrated an electrode replacement technique, which can be used if an electrode fails, and techniques have been identified to minimize air oxidation, thereby extending electrode life. A statistical analysis was employed during the LSOAT to identify graphite collars and an insulative surface as successful cold cap subsidence techniques. The LSOAT also showed that even under worst-case conditions, the off-gas system exceeds the flow requirements necessary to maintain a negative pressure on the hood covering the area being vitrified. The retention of simulated radionuclides and chemicals in the soil and off-gas system exceeds requirements so that projected emissions are one to two orders of magnitude below the maximum permissible concentrations of contaminants at the stack

  6. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    The subject of large scale networks is approached from the perspective of the network planner. An analysis of the long term planning problems is presented with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems... the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its...

  7. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large-scale model and data base system is presented. Experience in operating and developing a large-scale computerized system shows that the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified; then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large-scale models and data bases

  8. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed at the first workshop on energy management for large-scale scientific infrastructures, held in Lund, Sweden, on 13-14 October. [Photo caption: Participants at the energy management for large-scale scientific infrastructures workshop.] The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need to address energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  9. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo...
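
    The underlying computation is no more than an aspect-ratio statistic over the image dimensions; a toy version, with made-up dimensions standing in for the roughly 200,000 items:

        import numpy as np

        # Hypothetical (width, height) pairs standing in for the catalogued items.
        dims = np.array([[1200, 900], [800, 1000], [1600, 1200], [900, 900]])
        proportions = dims[:, 0] / dims[:, 1]          # width/height per item
        print(np.median(proportions))                  # rarely, if ever, 1.0
        print(np.histogram(proportions, bins=5)[0])    # shape of the distribution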

  10. The Expanded Large Scale Gap Test

    Science.gov (United States)

    1987-03-01

    NSWC TR 86-32, DTIC: The Expanded Large Scale Gap Test, by T. P. Liddiard and D. Price, Research and Technology Department, March 1987; approved for public release. ... arises, to reduce the spread in the LSGT 50% gap value.) The worst charges, such as those with the highest or lowest densities, the largest re-pressed...

  11. EFT of large scale structures in redshift space

    Science.gov (United States)

    Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; Zhao, Cheng; Chuang, Chia-Hsun

    2018-03-01

    We further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach (or, equivalently, the precision for a given k) depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ∼ 0.13 h Mpc^-1 or k ∼ 0.18 h Mpc^-1, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.

  12. Impact of large scale flows on turbulent transport

    Energy Technology Data Exchange (ETDEWEB)

    Sarazin, Y [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Grandgirard, V [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Dif-Pradalier, G [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Fleurence, E [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Garbet, X [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Ghendrih, Ph [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Bertrand, P [LPMIA-Universite Henri Poincare Nancy I, Boulevard des Aiguillettes BP239, 54506 Vandoe uvre-les-Nancy (France); Besse, N [LPMIA-Universite Henri Poincare Nancy I, Boulevard des Aiguillettes BP239, 54506 Vandoe uvre-les-Nancy (France); Crouseilles, N [IRMA, UMR 7501 CNRS/Universite Louis Pasteur, 7 rue Rene Descartes, 67084 Strasbourg (France); Sonnendruecker, E [IRMA, UMR 7501 CNRS/Universite Louis Pasteur, 7 rue Rene Descartes, 67084 Strasbourg (France); Latu, G [LSIIT, UMR 7005 CNRS/Universite Louis Pasteur, Bd Sebastien Brant BP10413, 67412 Illkirch (France); Violard, E [LSIIT, UMR 7005 CNRS/Universite Louis Pasteur, Bd Sebastien Brant BP10413, 67412 Illkirch (France)

    2006-12-15

    The impact of large scale flows on turbulent transport in magnetized plasmas is explored by means of various kinetic models. Zonal flows are found to lead to a non-linear upshift of turbulent transport in a 3D kinetic model for interchange turbulence. Such a transition is absent from fluid simulations, performed with the same numerical tool, which also predict a much larger transport. The discrepancy cannot be explained by zonal flows only, despite they being overdamped in fluids. Indeed, some difference remains, although reduced, when they are artificially suppressed. Zonal flows are also reported to trigger transport barriers in a 4D drift-kinetic model for slab ion temperature gradient (ITG) turbulence. The density gradient acts as a source drive for zonal flows, while their curvature back stabilizes the turbulence. Finally, 5D simulations of toroidal ITG modes with the global and full-f GYSELA code require the equilibrium density function to depend on the motion invariants only. If not, the generated strong mean flows can completely quench turbulent transport.

  13. Impact of large scale flows on turbulent transport

    International Nuclear Information System (INIS)

    Sarazin, Y; Grandgirard, V; Dif-Pradalier, G; Fleurence, E; Garbet, X; Ghendrih, Ph; Bertrand, P; Besse, N; Crouseilles, N; Sonnendruecker, E; Latu, G; Violard, E

    2006-01-01

    The impact of large scale flows on turbulent transport in magnetized plasmas is explored by means of various kinetic models. Zonal flows are found to lead to a non-linear upshift of turbulent transport in a 3D kinetic model for interchange turbulence. Such a transition is absent from fluid simulations, performed with the same numerical tool, which also predict a much larger transport. The discrepancy cannot be explained by zonal flows only, despite they being overdamped in fluids. Indeed, some difference remains, although reduced, when they are artificially suppressed. Zonal flows are also reported to trigger transport barriers in a 4D drift-kinetic model for slab ion temperature gradient (ITG) turbulence. The density gradient acts as a source drive for zonal flows, while their curvature back stabilizes the turbulence. Finally, 5D simulations of toroidal ITG modes with the global and full-f GYSELA code require the equilibrium density function to depend on the motion invariants only. If not, the generated strong mean flows can completely quench turbulent transport

  14. Relativistic jets without large-scale magnetic fields

    Science.gov (United States)

    Parfrey, K.; Giannios, D.; Beloborodov, A.

    2014-07-01

    The canonical model of relativistic jets from black holes requires a large-scale ordered magnetic field to provide a significant magnetic flux through the ergosphere--in the Blandford-Znajek process, the jet power scales with the square of the magnetic flux. In many jet systems the presence of the required flux in the environment of the central engine is questionable. I will describe an alternative scenario, in which jets are produced by the continuous sequential accretion of small magnetic loops. The magnetic energy stored in these coronal flux systems is amplified by the differential rotation of the accretion disc and by the rotating spacetime of the black hole, leading to runaway field line inflation, magnetic reconnection in thin current layers, and the ejection of discrete bubbles of Poynting-flux-dominated plasma. For illustration I will show the results of general-relativistic force-free electrodynamic simulations of rotating black hole coronae, performed using a new resistivity model. The dissipation of magnetic energy by coronal reconnection events, as demonstrated in these simulations, is a potential source of the observed high-energy emission from accreting compact objects.

  15. Reorganizing Complex Network to Improve Large-Scale Multiagent Teamwork

    Directory of Open Access Journals (Sweden)

    Yang Xu

    2014-01-01

    Full Text Available Large-scale multiagent teamwork is popular in various domains. As in human society's infrastructure, each agent coordinates with only some of the others, forming a peer-to-peer complex network structure. This organization has been proven to be a key factor influencing team performance. Our analysis identifies three key factors for expediting team performance. First, complex network effects can promote team performance. Second, coordination interactions must be routed from their sources to capable agents; although they can be transferred across the network via different paths, their sources and sinks depend on the intrinsic nature of the team, which is independent of the network connections. Third, agents involved in the same plan often form a subteam and communicate with each other more frequently. Therefore, if the interactions between agents are statistically recorded, an integrated network adjustment algorithm can be set up by combining the three key factors. Based on our abstracted teamwork simulations and the coordination statistics, we implemented the adaptive reorganization algorithm. The experimental results support our design: the reorganized network is more capable of coordinating heterogeneous agents.

  16. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling and blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effects of large-scale blasts increase. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron ore enterprises. The magnitude of the surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-digital converter recording to a laptop. The results of recording surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth’s surface vibrations are determined. The safety evaluation of the seismic effect was carried out against the permissible value of the vibration velocity. For cases where permissible values were exceeded, recommendations were developed to reduce the level of seismic impact.
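
    The safety criterion described reduces to comparing the peak particle velocity of each vibration record against a permissible limit; a minimal sketch with a synthetic trace and an illustrative limit (not values taken from the study):

        import numpy as np

        def peak_particle_velocity(trace):
            # Seismic safety is commonly judged by the peak particle velocity
            # (PPV), the maximum absolute ground velocity at the structure.
            return np.max(np.abs(trace))

        # Hypothetical decaying velocity record (mm/s) from a seismic receiver.
        t = np.linspace(0.0, 2.0, 2000)
        v = 18.0 * np.exp(-2.0 * t) * np.sin(2.0 * np.pi * 15.0 * t)

        ppv = peak_particle_velocity(v)
        permissible = 20.0   # illustrative limit, not a value from the study
        print(f"PPV = {ppv:.1f} mm/s -> {'OK' if ppv <= permissible else 'exceeds limit'}")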

  17. Configuration management in large scale infrastructure development

    NARCIS (Netherlands)

    Rijn, T.P.J. van; Belt, H. van de; Los, R.H.

    2000-01-01

    Large Scale Infrastructure (LSI) development projects, such as the construction of roads, railways and other civil engineering (water)works, are tendered differently today than a decade ago. The traditional workflow requested quotes from construction companies for construction works where the works to be

  18. Large-scale Motion of Solar Filaments

    Indian Academy of Sciences (India)

    tribpo

    Large-scale Motion of Solar Filaments. Pavel Ambrož, Astronomical Institute of the Acad. Sci. of the Czech Republic, CZ-25165 Ondrejov, The Czech Republic. e-mail: pambroz@asu.cas.cz. Alfred Schroll, Kanzelhöehe Solar Observatory of the University of Graz, A-9521 Treffen, Austria. e-mail: schroll@solobskh.ac.at.

  19. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  20. The origin of large scale cosmic structure

    International Nuclear Information System (INIS)

    Jones, B.J.T.; Palmer, P.L.

    1985-01-01

    The paper concerns the origin of large-scale cosmic structure. The evolution of density perturbations, the nonlinear regime (Zel'dovich's solution and others), the Gott and Rees clustering hierarchy, the spectrum of condensations, and biased galaxy formation are all discussed. (UK)
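
    For reference, the two standard textbook relations behind the topics listed, in LaTeX notation (these are the usual forms, not equations quoted from the paper): the linear growth of the density contrast and Zel'dovich's ballistic mapping from Lagrangian coordinates q to Eulerian positions x,

        \ddot{\delta} + 2H\dot{\delta} = 4\pi G \bar{\rho}\,\delta ,
        \qquad
        \mathbf{x}(\mathbf{q},t) = \mathbf{q} + D(t)\,\boldsymbol{\Psi}(\mathbf{q}) ,

    where \delta is the density contrast, H the Hubble rate, D(t) the linear growth factor and \Psi the initial displacement field.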

  1. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  2. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  3. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  4. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    \\catcode`\\@=11 \\ialign{m @th#1hfil ##hfil \\crcr#2\\crcr\\sim\\crcr}}} \\catcode`\\@=12 Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (>~1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in >~{{1} /{4}} of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  5. Stability of large scale interconnected dynamical systems

    International Nuclear Information System (INIS)

    Akpan, E.P.

    1993-07-01

    Large scale systems modelled by a system of ordinary differential equations are considered and necessary and sufficient conditions are obtained for the uniform asymptotic connective stability of the systems using the method of cone-valued Lyapunov functions. It is shown that this model significantly improves the existing models. (author). 9 refs

  6. On soft limits of large-scale structure correlation functions

    International Nuclear Information System (INIS)

    Sagunski, Laura

    2016-08-01

    background method to the case of a directional soft mode, being absorbed into a locally curved anisotropic background cosmology. The resulting non-perturbative power spectrum equation encodes the coupling to ultraviolet (UV) modes in two time-dependent coefficients. These can most generally be inferred from response functions to geometrical parameters, such as spatial curvature, in the locally curved anisotropic background cosmology. However, we can determine one coefficient by use of the angular-averaged bispectrum consistency condition together with the generalized VKPR proposal, and we show that the impact of the other one is subleading. Neglecting the latter in consequence, we confront the non-perturbative power spectrum equation against numerical simulations and find indeed a very good agreement within the expected error bars. Moreover, we argue that both coefficients and thus the non-perturbative power spectrum in the soft limit depend only weakly on UV modes deep in the non-linear regime. This non-perturbative finding allows us in turn to derive important implications for perturbative approaches to large-scale structure formation. First, it leads to the conclusion that the UV dependence of the power spectrum found in explicit computations within standard perturbation theory is an artifact. Second, it implies that in the Eulerian (Lagrangian) effective field theory (EFT) approach, where UV divergences are canceled by counter-terms, the renormalized leading-order coefficient(s) receive most contributions from modes close to the non-linear scale. The non-perturbative approach we developed can in principle be used to precisely infer the size of these renormalized leading-order EFT coefficient(s) by performing small-volume numerical simulations within an anisotropic 'separate universe' framework. Our results suggest that the importance of these coefficient(s) is a ∼10% effect at most.

  7. On soft limits of large-scale structure correlation functions

    Energy Technology Data Exchange (ETDEWEB)

    Sagunski, Laura

    2016-08-15

    background method to the case of a directional soft mode, being absorbed into a locally curved anisotropic background cosmology. The resulting non-perturbative power spectrum equation encodes the coupling to ultraviolet (UV) modes in two time-dependent coefficients. These can most generally be inferred from response functions to geometrical parameters, such as spatial curvature, in the locally curved anisotropic background cosmology. However, we can determine one coefficient by use of the angular-averaged bispectrum consistency condition together with the generalized VKPR proposal, and we show that the impact of the other one is subleading. Neglecting the latter in consequence, we confront the non-perturbative power spectrum equation against numerical simulations and find indeed a very good agreement within the expected error bars. Moreover, we argue that both coefficients and thus the non-perturbative power spectrum in the soft limit depend only weakly on UV modes deep in the non-linear regime. This non-perturbative finding allows us in turn to derive important implications for perturbative approaches to large-scale structure formation. First, it leads to the conclusion that the UV dependence of the power spectrum found in explicit computations within standard perturbation theory is an artifact. Second, it implies that in the Eulerian (Lagrangian) effective field theory (EFT) approach, where UV divergences are canceled by counter-terms, the renormalized leading-order coefficient(s) receive most contributions from modes close to the non-linear scale. The non-perturbative approach we developed can in principle be used to precisely infer the size of these renormalized leading-order EFT coefficient(s) by performing small-volume numerical simulations within an anisotropic 'separate universe' framework. Our results suggest that the importance of these coefficient(s) is a ∼10% effect at most.

  8. 24th & 25th Joint Workshop on Sustained Simulation Performance

    CERN Document Server

    Bez, Wolfgang; Focht, Erich; Gienger, Michael; Kobayashi, Hiroaki

    2017-01-01

    This book presents the state of the art in High Performance Computing on modern supercomputer architectures. It addresses trends in hardware and software development in general, as well as the future of High Performance Computing systems and heterogeneous architectures. The contributions cover a broad range of topics, from improved system management to Computational Fluid Dynamics, High Performance Data Analytics, and novel mathematical approaches for large-scale systems. In addition, they explore innovative fields like coupled multi-physics and multi-scale simulations. All contributions are based on selected papers presented at the 24th Workshop on Sustained Simulation Performance, held at the University of Stuttgart’s High Performance Computing Center in Stuttgart, Germany in December 2016 and the subsequent Workshop on Sustained Simulation Performance, held at the Cyberscience Center, Tohoku University, Japan in March 2017.

  9. Large scale particle image velocimetry with helium filled soap bubbles

    Energy Technology Data Exchange (ETDEWEB)

    Bosbach, Johannes; Kuehn, Matthias; Wagner, Claus [German Aerospace Center (DLR), Institute of Aerodynamics and Flow Technology, Goettingen (Germany)

    2009-03-15

    The application of particle image velocimetry (PIV) to measurement of flows on large scales is a challenging necessity especially for the investigation of convective air flows. Combining helium filled soap bubbles as tracer particles with high power quality switched solid state lasers as light sources allows conducting PIV on scales of the order of several square meters. The technique was applied to mixed convection in a full scale double aisle aircraft cabin mock-up for validation of computational fluid dynamics simulations. (orig.)
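
    At its core, PIV recovers the displacement of tracer patterns between two frames from the peak of a cross-correlation map; a self-contained sketch using the common FFT formulation, with synthetic frames standing in for the bubble images:

        import numpy as np

        def piv_displacement(window_a, window_b):
            # Locate the cross-correlation peak of two interrogation windows
            # via FFTs, the standard approach in PIV evaluation.
            a = window_a - window_a.mean()
            b = window_b - window_b.mean()
            corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Map the wrapped peak index to a signed pixel displacement.
            return tuple(int(p if p <= s // 2 else p - s)
                         for p, s in zip(peak, corr.shape))

        rng = np.random.default_rng(3)
        frame_a = rng.random((64, 64))
        frame_b = np.roll(frame_a, shift=(3, 5), axis=(0, 1))   # known shift
        print(piv_displacement(frame_a, frame_b))               # -> (3, 5)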

  10. Large scale particle image velocimetry with helium filled soap bubbles

    Science.gov (United States)

    Bosbach, Johannes; Kühn, Matthias; Wagner, Claus

    2009-03-01

    The application of Particle Image Velocimetry (PIV) to measurement of flows on large scales is a challenging necessity especially for the investigation of convective air flows. Combining helium filled soap bubbles as tracer particles with high power quality switched solid state lasers as light sources allows conducting PIV on scales of the order of several square meters. The technique was applied to mixed convection in a full scale double aisle aircraft cabin mock-up for validation of Computational Fluid Dynamics simulations.

  11. Stereotype Threat, Inquiring about Test Takers' Race and Gender, and Performance on Low-Stakes Tests in a Large-Scale Assessment. Research Report. ETS RR-15-02

    Science.gov (United States)

    Stricker, Lawrence J.; Rock, Donald A.; Bridgeman, Brent

    2015-01-01

    This study explores stereotype threat on low-stakes tests used in a large-scale assessment, math and reading tests in the Education Longitudinal Study of 2002 (ELS). Issues identified in laboratory research (though not observed in studies of high-stakes tests) were assessed: whether inquiring about their race and gender is related to the…

  12. Lasertron performance simulation

    International Nuclear Information System (INIS)

    Dubrovin, A.; Coulon, J.P.

    1987-05-01

    This report presents a comparative simulation study of the Lasertron under different frequency and emission conditions, with a view to establishing selection criteria for future experiments. The RING program used for these simulations is an improved version of the one presented in another report. A self-consistent treatment of the R.F. extraction zone has been added to it, together with the possibility of varying the initial conditions to better describe the laser illumination and the electron extraction from the cathode. Plane or curved cathodes are used. [fr

  13. Large-scale structure of the Universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1978-01-01

    The problems discussed at the ''Large-scale Structure of the Universe'' symposium are considered at a popular level. Described are the cell structure of the galaxy distribution in the Universe and the principles of mathematical modelling of the galaxy distribution. Images of cell structures, obtained after reprocessing with the computer, are given. Three hypotheses - vortical, entropic and adiabatic - suggesting various processes for the origin of galaxies and galaxy clusters are discussed. A considerable advantage of the adiabatic hypothesis is recognized. The relict radiation is considered as a means of directly studying the processes taking place in the Universe. The large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the properties of the disturbances at the pre-galaxy stage. The discussion of problems pertaining to the study of the hot gas contained in galaxy clusters, and of the interactions within galaxy clusters and with the inter-galaxy medium, is recognized as a notable contribution to the development of theoretical and observational cosmology

  14. Emerging large-scale solar heating applications

    International Nuclear Information System (INIS)

    Wong, W.P.; McClung, J.L.

    2009-01-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  15. Emerging large-scale solar heating applications

    Energy Technology Data Exchange (ETDEWEB)

    Wong, W.P.; McClung, J.L. [Science Applications International Corporation (SAIC Canada), Ottawa, Ontario (Canada)

    2009-07-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  16. Challenges for Large Scale Structure Theory

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    I will describe some of the outstanding questions in Cosmology where answers could be provided by observations of the Large Scale Structure of the Universe at late times. I will discuss some of the theoretical challenges which will have to be overcome to extract this information from the observations. I will describe some of the theoretical tools that might be useful to achieve this goal.

  17. Methods for Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1980-05-01

    Stanford, California 94305. Methods for Large-Scale Nonlinear Optimization, by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright. ... the typical iteration can be partitioned so that ... where B is an m x m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library
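
    The partition the fragment refers to is the standard basis decomposition used in large-scale reduced-gradient codes of this school (e.g. MINOS-style methods); in LaTeX notation, and as a generic sketch rather than the report's exact formulation:

        \min_{x} \; f(x) \quad \text{subject to} \quad Ax = b, \;\; l \le x \le u,
        \qquad
        A = \begin{pmatrix} B & S & N \end{pmatrix}

    where the m x m nonsingular basis matrix B corresponds to the basic variables, S to the superbasic variables that drive the optimization, and N to the nonbasic variables held at their bounds.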

  18. Large scale inhomogeneities and the cosmological principle

    International Nuclear Information System (INIS)

    Lukacs, B.; Meszaros, A.

    1984-12-01

    The compatibility of cosmological principles and possible large-scale inhomogeneities of the Universe is discussed. It seems that the strongest symmetry principle which is still compatible with reasonable inhomogeneities is a full conformal symmetry in the 3-space defined by the co