WorldWideScience

Sample records for large space simulation

  1. Large size space construction for space exploitation

    Science.gov (United States)

    Kondyurin, Alexey

    2016-07-01

    Space exploitation is impossible without large space structures. We need to make sufficiently large volumes of pressurized protective frames for crews, passengers, and space processing equipment; we should not be limited in space. At present, the size and mass of space constructions are limited by the capacity of the launch vehicle, which constrains the human exploitation of space and the development of the space industry. Large-size space constructions can be made using the curing technology of fiber-filled composites with a reactive matrix applied directly in free space. For curing, a fabric impregnated with a liquid matrix (prepreg) is prepared in terrestrial conditions and shipped in a container to orbit. In due time the prepreg is unfolded by inflation. After the polymerization reaction, the durable construction can be fitted out with air, apparatus, and life support systems. Our experimental studies of the curing processes in a simulated free-space environment showed that the curing of composites in free space is possible and that large-size space constructions can be developed. Projects for a space station, a Moon base, a Mars base, a mining station, an interplanetary spaceship, a telecommunication station, a space observatory, a space factory, an antenna dish, a radiation shield, and a solar sail are proposed and overviewed. The study was supported by the Humboldt Foundation, ESA (contract 17083/03/NL/SFe), the NASA stratospheric balloon program, and RFBR grants (05-08-18277, 12-08-00970 and 14-08-96011).

  2. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    Science.gov (United States)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured meshes or unstructured hexahedral meshes using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems, including supersonic jet/shock interaction and its impact on launch vehicle acoustics, as well as direct numerical simulations of turbulent flows on tetrahedral meshes. This paper provides a status report for the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow-physics simulations.

  3. Thermography During Thermal Test of the Gaia Deployable Sunshield Assembly Qualification Model in the ESTEC Large Space Simulator

    Science.gov (United States)

    Simpson, R.; Broussely, M.; Edwards, G.; Robinson, D.; Cozzani, A.; Casarosa, G.

    2012-07-01

    The National Physical Laboratory (NPL) and the European Space Research and Technology Centre (ESTEC) have performed, for the first time, successful surface temperature measurements using infrared thermal imaging in the ESTEC Large Space Simulator (LSS), under vacuum and with the Sun Simulator (SUSI) switched on, during thermal qualification tests of the Gaia Deployable Sunshield Assembly (DSA). The thermal-imager temperature measurements, with radiosity model corrections, show good agreement with thermocouple readings on well-characterised regions of the spacecraft. In addition, the thermal imaging measurements identified potentially misleading thermocouple temperature readings and provided qualitative real-time observations of the thermal and spatial evolution of surface structure changes and heat dissipation during hot test loadings, which may yield additional thermal and physical measurement information through further research.

  4. Time simulation of flutter with large stiffness changes

    Science.gov (United States)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
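
    As a rough illustration of the strategy described above — a state-space model whose coupling terms are switched when the structural change occurs — the following Python sketch time-marches a small linear model and adds a pre-computed coupling increment at the switch time. All matrices, values, and names are hypothetical, not taken from the paper.

```python
import numpy as np

# Minimal sketch (hypothetical values): integrate dx/dt = A x and swap in a
# modified system matrix when the structural change (e.g. the tip-ballast
# decoupling) occurs, mimicking the "change the coupling terms" idea above.
def simulate(A_nominal, dA_change, t_switch, x0, dt=1e-3, t_end=5.0):
    n_steps = int(t_end / dt)
    x = np.asarray(x0, dtype=float).copy()
    history = np.empty((n_steps, x.size))
    for i in range(n_steps):
        A = A_nominal if i * dt < t_switch else A_nominal + dA_change
        x = x + dt * (A @ x)      # forward Euler; RK4 preferred in practice
        history[i] = x
    return history

A0 = np.array([[0.0, 1.0], [-4.0, 0.05]])   # toy oscillator, slightly unstable
dA = np.array([[0.0, 0.0], [0.0, -0.60]])   # extra damping after decoupling
traj = simulate(A0, dA, t_switch=2.0, x0=[1.0, 0.0])
```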

  5. A Steam Jet Plume Simulation in a Large Bulk Space with a System Code MARS

    International Nuclear Information System (INIS)

    Bae, Sung Won; Chung, Bub Dong

    2006-01-01

    In May 2002, the OECD-SETH group launched the PANDA Project in order to provide an experimental database for multi-dimensional code assessment. The OECD-SETH group expects the PANDA Project to meet the increasing need for adequate experimental data on the 3D distribution of relevant variables, such as temperature, velocity and steam-air concentrations, measured with sufficient resolution and accuracy. The scope of the PANDA Project is the mixture stratification and mixing phenomena in a large bulk space. A total of 24 test series are being performed at PSI, Switzerland. The PANDA facility consists of two large main vessels and one connecting pipe. Within the large vessels, a steam injection nozzle and an outlet vent are arranged for each test case. These tests are categorized into three modes, i.e. the high-momentum, near-wall plume, and free plume tests. KAERI has participated in the SETH group since 1997 so that the multi-dimensional capability of the MARS code could be assessed and developed. Test 17, the high steam jet injection test, has already been simulated by MARS and shows promising results. Now the test 9 and 9bis cases, which use a low-speed horizontal steam jet flow, have been simulated and investigated.

  6. Large-Scale Testing and High-Fidelity Simulation Capabilities at Sandia National Laboratories to Support Space Power and Propulsion

    International Nuclear Information System (INIS)

    Dobranich, Dean; Blanchat, Thomas K.

    2008-01-01

    Sandia National Laboratories, as a Department of Energy / National Nuclear Security Administration laboratory, has major responsibility for ensuring the safety and security of nuclear weapons. As such, with an experienced research staff, Sandia maintains a spectrum of modeling and simulation capabilities integrated with experimental and large-scale test capabilities. This expertise and these capabilities offer considerable resources for addressing issues of interest to the space power and propulsion communities. This paper presents Sandia's capability to perform thermal qualification (analysis, test, modeling and simulation) using a representative weapon system as an example, demonstrating the potential to support NASA's Lunar Reactor System.

  7. Indoor Climate of Large Glazed Spaces

    DEFF Research Database (Denmark)

    Hendriksen, Ole Juhl; Madsen, Christina E.; Heiselberg, Per

    In recent years large glazed spaces have found increased use, both in connection with the renovation of buildings and as part of new buildings. One of the objectives is to add an architectural element which combines indoor and outdoor climate. In order to obtain a satisfying indoor climate, it is crucial at the design stage to be able to predict the performance regarding thermal comfort and energy consumption. This paper focuses on the practical implementation of Computational Fluid Dynamics (CFD) and its relation to other simulation tools regarding indoor climate.

  8. Effects of Turbine Spacings in Very Large Wind Farms

    DEFF Research Database (Denmark)

    LES simulations of large wind farms are performed with full aero-elastic actuator lines to investigate the inherent dynamics inside wind farms in the absence of atmospheric turbulence, compared to cases with atmospheric turbulence. The resulting low-frequency structures are inherent in wind farms for certain turbine spacings and affect both power production and loads.

  9. Simulation of reflecting surface deviations of centimeter-band parabolic space radiotelescope (SRT) with the large-size mirror

    Science.gov (United States)

    Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.

    2017-11-01

    The realization of astrophysical research requires the development of high-sensitivity centimeter-band parabolic space radiotelescopes (SRT) with large-size mirrors. Structurally, an SRT with a mirror size of more than 10 m can be realized as a deployable rigid structure. Mesh structures of such size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope with a 10 m diameter mirror is currently being developed in Russia in the framework of the "SPECTR-R" program. The external dimensions of the telescope exceed the size of the existing thermal-vacuum chambers used to verify the SRT reflecting-surface accuracy under the action of space environment factors. Numerical simulation therefore becomes the basis required to validate the adopted designs. Such modeling should be based on experimental characterization of the basic structural materials and elements of the future reflector. This article considers computational modeling of the reflecting-surface deviations of a centimeter-band, large-size deployable space reflector during its orbital operation. The factors that determine the deviations are analyzed, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation faults; deformations caused by the behavior of composite materials in space). A finite-element model and a complex of methods are developed that allow computational modeling of the reflecting-surface deviations caused by the influence of all of these factors, and that take into account the correction of deviations by the spacecraft orientation system. The results of modeling for two modes of SRT operation (orientation relative to the Sun) are presented.

  10. Laboratory simulation of space plasma phenomena*

    Science.gov (United States)

    Amatucci, B.; Tejero, E. M.; Ganguli, G.; Blackwell, D.; Enloe, C. L.; Gillman, E.; Walker, D.; Gatling, G.

    2017-12-01

    Laboratory devices, such as the Naval Research Laboratory's Space Physics Simulation Chamber, are large-scale experiments dedicated to the creation of large-volume plasmas with parameters realistically scaled to those found in various regions of the near-Earth space plasma environment. Such devices make valuable contributions to the understanding of space plasmas by investigating phenomena under carefully controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. By working in collaboration with in situ experimentalists to create realistic conditions scaled to those found during the observations of interest, the microphysics responsible for the observed events can be investigated in detail not possible in space. To date, numerous investigations of phenomena such as plasma waves, wave-particle interactions, and particle energization have been successfully performed in the laboratory. In addition to investigations such as plasma wave and instability studies, the laboratory devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this presentation, we will describe several examples of the laboratory investigation of space plasma waves and instabilities and diagnostic development. *This work supported by the NRL Base Program.

  11. Thermally Induced Vibrations of the Hubble Space Telescope's Solar Array 3 in a Test Simulated Space Environment

    Science.gov (United States)

    Early, Derrick A.; Haile, William B.; Turczyn, Mark T.; Griffin, Thomas J. (Technical Monitor)

    2001-01-01

    NASA Goddard Space Flight Center and the European Space Agency (ESA) conducted a disturbance verification test on a flight Solar Array 3 (SA3) for the Hubble Space Telescope using the ESA Large Space Simulator (LSS) in Noordwijk, the Netherlands. The LSS cyclically illuminated the SA3 to simulate orbital temperature changes in a vacuum environment. Data acquisition systems measured signals from force transducers and accelerometers resulting from thermally induced vibrations of the SA3. The LSS, with its seismic mass boundary, provided an excellent background environment for this test. This paper discusses the analysis performed on the measured transient SA3 responses and provides a summary of the results.

  12. 26th Space Simulation Conference Proceedings. Environmental Testing: The Path Forward

    Science.gov (United States)

    Packard, Edward A.

    2010-01-01

    Topics covered include: A Multifunctional Space Environment Simulation Facility for Accelerated Spacecraft Materials Testing; Exposure of Spacecraft Surface Coatings in a Simulated GEO Radiation Environment; Gravity-Offloading System for Large-Displacement Ground Testing of Spacecraft Mechanisms; Microscopic Shutters Controlled by cRIO in Sounding Rocket; Application of a Physics-Based Stabilization Criterion to Flight System Thermal Testing; Upgrade of a Thermal Vacuum Chamber for 20 Kelvin Operations; A New Approach to Improve the Uniformity of Solar Simulator; A Perfect Space Simulation Storm; A Planetary Environmental Simulator/Test Facility; Collimation Mirror Segment Refurbishment inside ESA's Large Space Simulator; Space Simulation of the CBERS 3 and 4 Satellite Thermal Model in the New Brazilian 6x8m Thermal Vacuum Chamber; The Certification of Environmental Chambers for Testing Flight Hardware; Space Systems Environmental Test Facility Database (SSETFD), Website Development Status; Wallops Flight Facility: Current and Future Test Capabilities for Suborbital and Orbital Projects; Force Limited Vibration Testing of JWST NIRSpec Instrument Using Strain Gages; Investigation of Acoustic Field Uniformity in Direct Field Acoustic Testing; Recent Developments in Direct Field Acoustic Testing; Assembly, Integration and Test Centre in Malaysia: Integration between Building Construction Works and Equipment Installation; Complex Ground Support Equipment for Satellite Thermal Vacuum Test; Effect of Charging Electron Exposure on 1064nm Transmission through Bare Sapphire Optics and SiO2 over HfO2 AR-Coated Sapphire Optics; Environmental Testing Activities and Capabilities for Turkish Space Industry; Integrated Circuit Reliability Simulation in Space Environments; Micrometeoroid Impacts and Optical Scatter in Space Environment; Overcoming Unintended Consequences of Ambient Pressure Thermal Cycling Environmental Tests; Performance and Functionality Improvements to Next Generation

  13. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time. (orig.)
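
    A minimal Python sketch of the sorting idea described above (the 1D cell layout and names are illustrative): periodically reordering the particle arrays so that particles in the same grid cell are contiguous keeps the charge accumulation and particle push working on nearby addresses, which is exactly what reduces paging to slow memory.

```python
import numpy as np

def sort_particles_by_cell(x, v, cell_size):
    """Reorder particles so those in the same cell are adjacent in memory."""
    cell_index = (x // cell_size).astype(np.int64)
    order = np.argsort(cell_index, kind="stable")  # a nominal amount of sorting
    return x[order], v[order]

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 64.0, size=100_000)   # positions on a 64-cell 1D grid
v = rng.normal(size=100_000)
x, v = sort_particles_by_cell(x, v, cell_size=1.0)
```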

  14. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time

  15. Space plasma simulation chamber

    International Nuclear Information System (INIS)

    1986-01-01

    Scientific results of experiments and tests of instruments performed with the Space Plasma Simulation Chamber and its facility are reviewed in the following six categories. 1. Tests of instruments on board rockets, satellites and balloons. 2. Plasma wave experiments. 3. Measurements of plasma particles. 4. Optical measurements. 5. Plasma production. 6. Space plasma simulations. This facility has been managed under the Laboratory Space Plasma Committee since 1969 and used by scientists in cooperative programs with universities and institutes all over the country. A list of publications is attached. (author)

  16. GPU-Accelerated Sparse Matrix Solvers for Large-Scale Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Many large-scale numerical simulations can be broken down into common mathematical routines. While the applications may differ, the need to perform functions such as...

  17. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    Science.gov (United States)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4·10⁴, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Ro_t = −0.0909 to Ro_t = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
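
    For reference, the static closure benchmarked above computes a turbulent eddy viscosity from the resolved strain rate. A minimal Python sketch follows (array shapes and finite differences are illustrative, with c_s = 0.1 as in the abstract); raising c_s, as in the "over-damped" runs, simply scales the eddy viscosity quadratically.

```python
import numpy as np

def smagorinsky_viscosity(u, v, w, dx, cs=0.1):
    """Static Smagorinsky: nu_t = (cs*dx)^2 * |S|, |S| = sqrt(2 S_ij S_ij)."""
    grads = [np.gradient(f, dx) for f in (u, v, w)]   # grads[i][j] = du_i/dx_j
    s_mag_sq = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            s_ij = 0.5 * (grads[i][j] + grads[j][i])  # resolved strain rate
            s_mag_sq += 2.0 * s_ij ** 2
    return (cs * dx) ** 2 * np.sqrt(s_mag_sq)

u, v, w = (np.random.rand(32, 32, 32) for _ in range(3))
nu_t = smagorinsky_viscosity(u, v, w, dx=1.0 / 32)
```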

  18. 25th Space Simulation Conference. Environmental Testing: The Earth-Space Connection

    Science.gov (United States)

    Packard, Edward

    2008-01-01

    Topics covered include: Methods of Helium Injection and Removal for Heat Transfer Augmentation; The ESA Large Space Simulator Mechanical Ground Support Equipment for Spacecraft Testing; Temperature Stability and Control Requirements for Thermal Vacuum/Thermal Balance Testing of the Aquarius Radiometer; The Liquid Nitrogen System for Chamber A: A Change from Original Forced Flow Design to a Natural Flow (Thermo Siphon) System; Return to Mercury: A Comparison of Solar Simulation and Flight Data for the MESSENGER Spacecraft; Floating Pressure Conversion and Equipment Upgrades of Two 3.5kW, 20K Helium Refrigerators; Effect of Air Leakage into a Thermal-Vacuum Chamber on Helium Refrigeration Heat Load; Special ISO Class 6 Cleanroom for the Lunar Reconnaissance Orbiter (LRO) Project; A State-of-the-Art Contamination Effects Research and Test Facility Martian Dust Simulator; Cleanroom Design Practices and Their Influence on Particle Counts; Extra Terrestrial Environmental Chamber Design; Contamination Sources Effects Analysis (CSEA) - A Tool to Balance Cost/Schedule While Managing Facility Availability; SES and Acoustics at GSFC; HST Super Lightweight Interchangeable Carrier (SLIC) Static Test; Virtual Shaker Testing: Simulation Technology Improves Vibration Test Performance; Estimating Shock Spectra: Extensions beyond GEVS; Structural Dynamic Analysis of a Spacecraft Multi-DOF Shaker Table; Direct Field Acoustic Testing; Manufacture of Cryoshroud Surfaces for Space Simulation Chambers; The New LOTIS Test Facility; Thermal Vacuum Control Systems Options for Test Facilities; Extremely High Vacuum Chamber for Low Outgassing Processing at NASA Goddard; Precision Cleaning - Path to Premier; The New Anechoic Shielded Chambers Designed for Space and Commercial Applications at LIT; Extraction of Thermal Performance Values from Samples in the Lunar Dust Adhesion Bell Jar; Thermal (Silicon Diode) Data Acquisition System; Aquarius's Instrument Science Data System (ISDS) Automated

  19. The Space Station as a Construction Base for Large Space Structures

    Science.gov (United States)

    Gates, R. M.

    1985-01-01

    The feasibility of using the Space Station as a construction site for large space structures is examined. An overview is presented of the results of a program entitled Definition of Technology Development Missions (TDM's) for Early Space Stations - Large Space Structures. The definition of LSS technology development missions must be responsive to the needs of future space missions which require large space structures. Long range plans for space were assembled by reviewing Space System Technology Models (SSTM) and other published sources. Those missions which will use large space structures were reviewed to determine the objectives which must be demonstrated by technology development missions. The three TDM's defined during this study are: (1) a construction storage/hangar facility; (2) a passive microwave radiometer; and (3) a precision optical system.

  20. Modeling and Simulation of DC Power Electronics Systems Using Harmonic State Space (HSS) Method

    DEFF Research Database (Denmark)

    Kwon, Jun Bum; Wang, Xiongfei; Bak, Claus Leth

    2015-01-01

    For the efficiency and simplicity of electric systems, dc-based power electronics systems are widely used in a variety of applications such as electric vehicles, ships, aircraft, and also in homes. In these systems there can be a number of dynamic interactions between loads and other dc-dc converters. Conventional analyses based on state-space averaging and generalized averaging have limitations in reproducing the results of non-linear time-domain simulations. This paper presents a modeling and simulation method for a large dc power electronics system using Harmonic State Space (HSS) modeling. Through this method, the required computation time and CPU memory for large dc power electronics systems can be reduced, while the achieved results match those of the non-linear time-domain simulation, with a faster simulation time that is beneficial in a large network.
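
    As a sketch of what an HSS model looks like in practice, the following Python fragment assembles the truncated harmonic-domain operator for a scalar linear time-periodic system x' = a(t)x + b(t)u and solves directly for the steady-state harmonic spectrum, which is the step that replaces long time-domain transients. The scalar setting and all values are illustrative, not the paper's converter model.

```python
import numpy as np

def hss_steady_state(a, b, u_harm, w0, H):
    """Solve (N - Toep[a]) X = Toep[b] U for harmonics k = -H..H."""
    k = np.arange(-H, H + 1)
    toeplitz = lambda c: np.array([[c[int(i - j)] for j in k] for i in k])
    N = np.diag(1j * k * w0)            # frequency-shift operator
    return np.linalg.solve(N - toeplitz(a), toeplitz(b) @ u_harm)

H = 2
a = {n: 0.0 for n in range(-2 * H, 2 * H + 1)}
a.update({0: -1.0, 1: 0.3, -1: 0.3})    # a(t) = -1 + 0.6 cos(w0 t)
b = {n: (1.0 if n == 0 else 0.0) for n in range(-2 * H, 2 * H + 1)}
u = np.zeros(2 * H + 1, dtype=complex)
u[H + 1] = 1.0                           # unit input at +w0
X = hss_steady_state(a, b, u, w0=2 * np.pi * 50, H=H)
```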

  1. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data

    KAUST Repository

    Ibrahim, Mohamed

    2017-08-28

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
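
    To make the S-NDF idea concrete, here is a small Python sketch (the bin layout, resolution, and names are illustrative, not the paper's implementation): every rasterized fragment deposits its surface normal into a per-pixel histogram over quantized normal directions, and relighting later integrates a BRDF against that histogram instead of re-rendering the particles.

```python
import numpy as np

N_BINS = 16  # quantized normal directions per pixel (illustrative)

def accumulate_sndf(fragments, width, height):
    """fragments: rows of (px, py, nx, ny, nz) for rasterized sphere samples."""
    sndf = np.zeros((height, width, N_BINS), dtype=np.float32)
    n = fragments[:, 2:5]
    octant = (n[:, 0] > 0) * 1 + (n[:, 1] > 0) * 2 + (n[:, 2] > 0) * 4
    axis = np.argmax(np.abs(n), axis=1) % 2          # crude sub-octant split
    bins = (octant * 2 + axis).astype(np.int64)
    px, py = fragments[:, 0].astype(int), fragments[:, 1].astype(int)
    np.add.at(sndf, (py, px, bins), 1.0)             # scatter-add counts
    return sndf / np.maximum(sndf.sum(axis=2, keepdims=True), 1.0)

frags = np.array([[3.0, 2.0, 0.0, 0.7, 0.7],
                  [3.0, 2.0, 0.1, -0.9, 0.4]])
hist = accumulate_sndf(frags, width=8, height=8)     # normalized per-pixel NDF
```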

  2. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data

    KAUST Repository

    Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus

    2017-01-01

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.

  3. Laboratory simulation of erosion by space plasma

    International Nuclear Information System (INIS)

    Kristoferson, L.; Fredga, K.

    1976-04-01

    A laboratory experiment has been made in which a plasma stream collides with targets made of different materials of cosmic interest. The experiment can be viewed as a process simulation of the solar wind particle interaction with solid surfaces in space, e.g. cometary dust. Special interest is given to the sputtering of OH and Na. It is shown that the erosion of solid particles in interplanetary space is most likely dominated by sputtering at large heliocentric distances and by sublimation near the Sun. The heliocentric distance of the boundary between the two regions is determined mainly by the material properties of the eroded surface, e.g. heat of sublimation and sputtering yield, a typical distance being 0.5 a.u. It is concluded that the observations of Na in comets at large solar distances, and in some cases also near the Sun, are most likely explained by solar wind sputtering. OH emission in space could also originate from 'dry', water-free matter by means of molecule sputtering. The observed OH production rates in comets are, however, too large to be explained in this way and are certainly the result of sublimation and dissociation of H₂O from an icy nucleus. (Auth.)
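
    The sputtering-versus-sublimation crossover can be illustrated with a back-of-the-envelope Python comparison: sputtering mass loss follows the solar-wind proton flux (∝ r⁻²), while sublimation falls off exponentially with the equilibrium temperature (∝ r⁻¹ᐟ²). All constants below are rough illustrative magnitudes, not the paper's values; with these toy numbers the crossover indeed lands near 0.5 a.u.

```python
import numpy as np

def sputter_rate(r_au, yield_per_ion=0.1, flux_1au=2e12):
    """Sputtered atoms m^-2 s^-1; solar-wind proton flux scales as r^-2."""
    return yield_per_ion * flux_1au / r_au**2

def sublimation_rate(r_au, L_over_k=5000.0, prefactor=1e17):
    """Arrhenius-like sublimation with a grey-body equilibrium temperature."""
    T = 280.0 / np.sqrt(r_au)
    return prefactor * np.exp(-L_over_k / T)

for r in (0.3, 0.5, 1.0, 3.0):
    regime = "sputtering" if sputter_rate(r) > sublimation_rate(r) else "sublimation"
    print(f"r = {r:>3} a.u.: {regime} dominates")
```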

  4. DataSpaces: An Interaction and Coordination Framework for Coupled Simulation Workflows

    International Nuclear Information System (INIS)

    Docan, Ciprian; Klasky, Scott A.; Parashar, Manish

    2010-01-01

    Emerging high-performance distributed computing environments are enabling new end-to-end formulations in science and engineering that involve multiple interacting processes and data-intensive application workflows. For example, current fusion simulation efforts are exploring coupled models and codes that simultaneously simulate separate application processes, such as the core and the edge turbulence, and run on different high performance computing resources. These components need to interact, at runtime, with each other and with services for data monitoring, data analysis and visualization, and data archiving. As a result, they require efficient support for dynamic and flexible couplings and interactions, which remains a challenge. This paper presents DataSpaces, a flexible interaction and coordination substrate that addresses this challenge. DataSpaces essentially implements a semantically specialized virtual shared space abstraction that can be associatively accessed by all components and services in the application workflow. It enables live data to be extracted from running simulation components, indexes this data online, and then allows it to be monitored, queried and accessed by other components and services via the space using semantically meaningful operators. The underlying data transport is asynchronous, low-overhead and largely memory-to-memory. The design, implementation, and experimental evaluation of DataSpaces using a coupled fusion simulation workflow is presented.

  5. EFT of large scale structures in redshift space

    Science.gov (United States)

    Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; Zhao, Cheng; Chuang, Chia-Hsun

    2018-03-01

    We further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach (or, equivalently, the precision for a given k) depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ∼ 0.13 h Mpc⁻¹ or k ∼ 0.18 h Mpc⁻¹, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.
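
    Schematically (a standard EFT-of-LSS form, assumed here for illustration rather than quoted from the paper), the quantity being matched to the simulations is the one-loop redshift-space power spectrum plus μ-dependent counterterms, with the minimal and nonminimal sets keeping different numbers of the c_{2n} coefficients:

```latex
% Illustrative counterterm structure (assumed, not excerpted from the paper):
P(k,\mu) \simeq P_{\text{1-loop}}(k,\mu)
         - 2\left(c_0 + c_2\,\mu^2 + c_4\,\mu^4\right) k^2 P_{11}(k),
\qquad
P_\ell(k) = \frac{2\ell+1}{2}\int_{-1}^{1} d\mu\, \mathcal{L}_\ell(\mu)\, P(k,\mu).
```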

  6. Large-eddy simulation of maritime deep tropical convection

    Directory of Open Access Journals (Sweden)

    Peter A Bogenschutz

    2009-12-01

    This study represents an attempt to apply Large-Eddy Simulation (LES) resolution to simulate deep tropical convection in near equilibrium for 24 hours over an area of about 205 × 205 km², which is comparable to that of a typical horizontal grid cell in a global climate model. The simulation is driven by large-scale thermodynamic tendencies derived from mean conditions during the GATE Phase III field experiment. The LES uses 2048 × 2048 × 256 grid points with horizontal grid spacing of 100 m and vertical grid spacing ranging from 50 m in the boundary layer to 100 m in the free troposphere. The simulation reaches a near-equilibrium deep convection regime in 12 hours. The simulated cloud field exhibits a trimodal vertical distribution of deep, middle and shallow clouds similar to that often observed in the Tropics. A sensitivity experiment in which cold pools are suppressed by switching off the evaporation of precipitation results in much lower amounts of shallow and congestus clouds. Unlike the benchmark LES, where the new deep clouds tend to appear along the edges of spreading cold pools, the deep clouds in the no-cold-pool experiment tend to reappear at the sites of the previous deep clouds and tend to be surrounded by extensive areas of sporadic shallow clouds. The vertical velocity statistics of updraft and downdraft cores below 6 km height are compared to aircraft observations made during GATE. The comparison shows generally good agreement, and strongly suggests that the LES simulation can be used as a benchmark to represent the dynamics of tropical deep convection on scales ranging from large turbulent eddies to mesoscale convective systems. The effect of horizontal grid resolution is examined by running the same case with progressively larger grid sizes of 200, 400, 800, and 1600 m. These runs show a reasonable agreement with the benchmark LES in statistics such as convective available potential energy, convective inhibition

  7. Tool Support for Parametric Analysis of Large Software Simulation Systems

    Science.gov (United States)

    Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony

    2008-01-01

    The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points, rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test cases, automatically generated from models (e.g., UML, Simulink, Stateflow), improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
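
    The case-limiting step can be made concrete with a small Python sketch of n-factor (here pairwise, n = 2) combinatorial generation; parameter names and values are invented for illustration, and the greedy selection is a nominal stand-in, not the tool's actual algorithm.

```python
import itertools

def pairwise_cases(params):
    """Greedily keep cases until every value pair of every two parameters occurs."""
    names = list(params)
    uncovered = {((a, va), (b, vb))
                 for a, b in itertools.combinations(names, 2)
                 for va in params[a] for vb in params[b]}
    cases = []
    for values in itertools.product(*params.values()):
        case = dict(zip(names, values))
        pairs = {((a, case[a]), (b, case[b]))
                 for a, b in itertools.combinations(names, 2)}
        if pairs & uncovered:          # keep only cases covering new pairs
            uncovered -= pairs
            cases.append(case)
        if not uncovered:
            break
    return cases

tests = pairwise_cases({"gain": [0.5, 1.0, 2.0],
                        "sensor_noise": ["low", "high"],
                        "actuator": ["nominal", "degraded"]})
print(f"{len(tests)} cases instead of {3 * 2 * 2}")
```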

  8. Precision Optical Coatings for Large Space Telescope Mirrors

    Science.gov (United States)

    Sheikh, David

    This proposal, “Precision Optical Coatings for Large Space Telescope Mirrors,” addresses the need to develop and advance the state of the art in optical coating technology. NASA is considering large monolithic mirrors 1 to 8 meters in diameter for future telescopes such as HabEx and LUVOIR. Improved large-area coating processes are needed to meet the future requirements of large astronomical mirrors. In this project, we will demonstrate a broadband reflective coating process for achieving high reflectivity from 90 nm to 2500 nm over a 2.3-meter-diameter coating area. The coating process is scalable to larger mirrors, 6+ meters in diameter. We will use a battery-driven coating process to make an aluminum reflector, and a motion-controlled coating technology for depositing protective layers. We will advance the state of the art for coating technology and manufacturing infrastructure to meet the reflectance and wavefront requirements of both HabEx and LUVOIR. Specifically, we will combine the broadband reflective coating designs and processes developed at GSFC and JPL with large-area manufacturing technologies developed at ZeCoat Corporation. Our primary objectives are to: (1) demonstrate an aluminum coating process that creates uniform coatings over large areas with near-theoretical aluminum reflectance; (2) demonstrate a motion-controlled coating process that applies very precise 2-nm to 5-nm thick protective/interference layers to large areas; and (3) demonstrate a broadband coating system (90 nm to 2500 nm) over a 2.3-meter coating area and test it against the current coating specifications for LUVOIR/HabEx. We will perform simulated space-environment testing, and we expect to advance the TRL from 3 to >5 in 3 years.

  9. Space robot simulator vehicle

    Science.gov (United States)

    Cannon, R. H., Jr.; Alexander, H.

    1985-01-01

    A Space Robot Simulator Vehicle (SRSV) was constructed to model a free-flying robot capable of doing construction, manipulation and repair work in space. The SRSV is intended as a test bed for development of dynamic and static control methods for space robots. The vehicle is built around a two-foot-diameter air-cushion vehicle that carries batteries, power supplies, gas tanks, computer, reaction jets and radio equipment. It is fitted with one or two two-link manipulators, which may be of many possible designs, including flexible-link versions. Both the vehicle body and its first arm are nearly complete. Inverse dynamic control of the robot's manipulator has been successfully simulated using equations generated by the dynamic simulation package SDEXACT. In this mode, the position of the manipulator tip is controlled not by fixing the vehicle base through thruster operation, but by controlling the manipulator joint torques to achieve the desired tip motion, while allowing for the free motion of the vehicle base. One of the primary goals is to minimize use of the thrusters in favor of intelligent control of the manipulator. Ways to reduce the computational burden of control are described.

  10. Heating of large format filters in sub-mm and fir space optics

    Science.gov (United States)

    Baccichet, N.; Savini, G.

    2017-11-01

    Most FIR and sub-mm spaceborne observatories use polymer-based quasi-optical elements such as filters and lenses, due to their high transparency and low absorption in these wavelength ranges. Nevertheless, data from those missions have proven that thermal imbalances in the instrument (not caused by filters) can complicate the data analysis. Consequently, for future higher-precision instrumentation, further investigation is required of any thermal imbalances embedded in such polymer-based filters. In particular, this paper studies the heating of polymers operating at cryogenic temperature in space. Such heating is an important aspect of their functioning, since the transient emission of unwanted thermal radiation may affect the scientific measurements. To assess this effect, a computer model was developed for polypropylene-based filters and PTFE-based coatings. Specifically, a theoretical model of their thermal properties was created and used in a multi-physics simulation that accounts for conductive and radiative heating effects of large optical elements, the geometry of which was suggested by the large-format array instruments designed for future space missions. It was found that, in the simulated conditions, the filter temperature was characterized by a time-dependent behaviour modulated by a small-scale fluctuation. Moreover, it was noticed that thermalization was reached only when a low power input was present.
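
    A minimal lumped-parameter Python sketch of the conductive-plus-radiative balance such a model accounts for (all values are illustrative assumptions, not the paper's): a filter node at temperature T conductively coupled to a cryostat stage and radiating to cold surroundings while absorbing a small optical load. The slow relaxation toward the steady state is the kind of time-dependent behaviour the abstract reports.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def step_temperature(T, dt, Q=1e-4, G=1e-4, T_sink=4.0, eps=0.05,
                     A=1e-2, C=0.5):
    """One explicit Euler step of C dT/dt = Q - G (T - T_sink) - eps A sigma T^4."""
    dTdt = (Q - G * (T - T_sink) - eps * A * SIGMA * T**4) / C
    return T + dt * dTdt

T = 4.0
for _ in range(200_000):               # relax toward the steady state
    T = step_temperature(T, dt=0.1)
print(f"steady filter temperature ~ {T:.2f} K")
```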

  11. Large-signal, dynamic simulation of the slowpoke-3 nuclear heating reactor

    International Nuclear Information System (INIS)

    Tseng, C.M.; Lepp, R.M.

    1983-07-01

    A 2 MWt nuclear reactor, called SLOWPOKE-3, is being developed at the Chalk River Nuclear Laboratories (CRNL). This reactor, which is cooled by natural circulation, is designed to produce hot water for commercial space heating and perhaps to generate some electricity in remote locations where the costs of alternate forms of energy are high. A large-signal, dynamic simulation of this reactor, without closed-loop control, was developed and implemented on a hybrid computer, using the basic equations of conservation of mass, energy and momentum. The natural circulation of downcomer flow in the pool was simulated using a special filter capable of modelling various flow conditions. The simulation was then used to study the intermediate- and long-term transient response of SLOWPOKE-3 to large disturbances, such as loss of heat sink, loss of regulation, daily load following, and overcooling of the reactor coolant. Results of the simulation show that none of these disturbances produces hazardous transients.

  12. Development of space simulation / net-laboratory system

    Science.gov (United States)

    Usui, H.; Matsumoto, H.; Ogino, T.; Fujimoto, M.; Omura, Y.; Okada, M.; Ueda, H. O.; Murata, T.; Kamide, Y.; Shinagawa, H.; Watanabe, S.; Machida, S.; Hada, T.

    A research project for the development of a space simulation / net-laboratory system was approved by the Japan Science and Technology Corporation (JST) in the category of Research and Development for Applying Advanced Computational Science and Technology (ACT-JST) in 2000. This research project, which continues for three years, is a collaboration with an astrophysical simulation group as well as other space simulation groups which use MHD and hybrid models. In this project, we are developing a prototype of a unique simulation system which enables us to perform simulation runs by providing or selecting plasma parameters through a Web-based interface on the internet. We are also developing an on-line database system for space simulation, from which we will be able to search and extract various information such as simulation methods and programs, manuals, and typical simulation results in graphic or ascii format. This unique system will help simulation beginners to start simulation studies without much difficulty or effort, and contribute to the promotion of simulation studies in the STP field. In this presentation, we will report the overview and the current status of the project.

  13. Large Atmospheric Computation on the Earth Simulator: The LACES Project

    Directory of Open Access Journals (Sweden)

    Michel Desgagné

    2006-01-01

    The Large Atmospheric Computation on the Earth Simulator (LACES) project is a joint initiative between Canadian and Japanese meteorological services and academic institutions that focuses on the high-resolution simulation of Hurricane Earl (1998). The unique aspect of this effort is the extent of the computational domain, which covers all of North America and Europe with a grid spacing of 1 km. The Canadian Mesoscale Compressible Community (MC2) model is shown to parallelize effectively on the Japanese Earth Simulator (ES) supercomputer; however, even using the extensive computing resources of the ES Center (ESC), the full simulation for the majority of Hurricane Earl's lifecycle takes over eight days to perform and produces over 5.2 TB of raw data. Preliminary diagnostics show that the results of the LACES simulation for the tropical stage of Hurricane Earl's lifecycle compare well with available observations for the storm. Further studies involving advanced diagnostics have commenced, taking advantage of the uniquely large spatial extent of the high-resolution LACES simulation to investigate multiscale interactions in the hurricane and its environment. It is hoped that these studies will enhance our understanding of processes occurring within the hurricane and between the hurricane and its planetary-scale environment.

  14. On the possibility of large axion moduli spaces

    Energy Technology Data Exchange (ETDEWEB)

    Rudelius, Tom [Jefferson Physical Laboratory, Harvard University,Cambridge, MA 02138 (United States)

    2015-04-28

    We study the diameters of axion moduli spaces, focusing primarily on type IIB compactifications on Calabi-Yau three-folds. In this case, we derive a stringent bound on the diameter in the large volume region of parameter space for Calabi-Yaus with simplicial Kähler cone. This bound can be violated by Calabi-Yaus with non-simplicial Kähler cones, but additional contributions are introduced to the effective action which can restrict the field range accessible to the axions. We perform a statistical analysis of simulated moduli spaces, finding in all cases that these additional contributions restrict the diameter so that these moduli spaces are no more likely to yield successful inflation than those with simplicial Kähler cone or with far fewer axions. Further heuristic arguments for axions in other corners of the duality web suggest that the difficulty observed in http://dx.doi.org/10.1088/1475-7516/2003/06/001 of finding an axion decay constant parametrically larger than M_p applies not only to individual axions, but to the diagonals of axion moduli space as well. This observation is shown to follow from the weak gravity conjecture of http://dx.doi.org/10.1088/1126-6708/2007/06/060, so it likely applies not only to axions in string theory, but also to axions in any consistent theory of quantum gravity.
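
    The geometric intuition can be made explicit (illustrative notation, not the paper's derivation): for N axions with decay constants f_i, the naive field range along the diagonal of the fundamental domain is

```latex
d_{\mathrm{diag}} \;=\; 2\pi\sqrt{\sum_{i=1}^{N} f_i^{\,2}}
\;\le\; 2\pi\sqrt{N}\, f_{\mathrm{max}},
```

    so aligned axions could naively enhance the range by a factor of √N; the point of the heuristic arguments above is that the weak gravity conjecture bounds the relevant instanton actions so that this effective diagonal decay constant still cannot be made parametrically larger than M_p.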

  15. Planetary and Space Simulation Facilities (PSI) at DLR

    Science.gov (United States)

    Panitz, Corinna; Rabbow, E.; Rettberg, P.; Kloss, M.; Reitz, G.; Horneck, G.

    2010-05-01

    The Planetary and Space Simulation facilities at DLR offer the possibility to expose biological and physical samples, individually or integrated into space hardware, to defined and controlled space conditions such as ultra-high vacuum, low temperature and extraterrestrial UV radiation. An X-ray facility is at the disposal for the simulation of the ionizing component. All of the simulation facilities are required for the preparation of space experiments: for testing newly developed space hardware; for investigating the effect of different space parameters on biological systems in preparation for the flight experiment; for performing the 'Experiment Verification Tests' (EVTs) for the specification of the test parameters; and for 'Experiment Sequence Tests' (ESTs), simulating sample assembly, exposure to selected space parameters, and sample disassembly. To test the compatibility of the different biological and chemical systems and their adaptation to the opportunities and constraints of space conditions, a profound ground-support program has been developed, among many others for the ESA facilities of the ongoing missions EXPOSE-R and EXPOSE-E on board the International Space Station (ISS). Several experiment verification tests (EVTs) and an experiment sequence test (EST) have been conducted in the carefully equipped and monitored planetary and space simulation facilities (PSI) of the Institute of Aerospace Medicine at DLR in Cologne, Germany. These ground-based pre-flight studies allowed the investigation of a much wider variety of samples and the selection of the most promising organisms for the flight experiment. EXPOSE-E was attached to the outer balcony of the European Columbus module of the ISS in February 2008 and stayed in space for 1.5 years; EXPOSE-R was attached to the Russian Zvezda module of the ISS in spring 2009, and the mission duration will be approximately 1.5 years. The missions will give new insights into the survivability of terrestrial

  16. Distributed simulation of large computer systems

    International Nuclear Information System (INIS)

    Marzolla, M.

    2001-01-01

    Sequential simulation of large complex physical systems is often regarded as a computationally expensive task. In order to speed up complex discrete-event simulations, the paradigm of Parallel and Distributed Discrete Event Simulation (PDES) has been developed since the late 70s. The authors analyze the applicability of PDES to the modeling and analysis of large computer systems; such systems are increasingly common in the area of High Energy and Nuclear Physics, because many modern experiments make use of large 'compute farms'. Some feasibility tests have been performed on a prototype distributed simulator.

  17. Scalable space-time adaptive simulation tools for computational electrocardiology

    OpenAIRE

    Krause, Dorian; Krause, Rolf

    2013-01-01

    This work is concerned with the development of computational tools for the solution of reaction-diffusion equations from the field of computational electrocardiology. We designed lightweight spatially and space-time adaptive schemes for large-scale parallel simulations. We propose two different adaptive schemes based on locally structured meshes, managed either via a conforming coarse tessellation or a forest of shallow trees. A crucial ingredient of our approach is a non-conforming morta...

  18. Visualization of the Flux Rope Generation Process Using Large Quantities of MHD Simulation Data

    Directory of Open Access Journals (Sweden)

    Y Kubota

    2013-03-01

    We present a new concept of analysis using visualization of large quantities of simulation data. The time development of 3D objects with high temporal resolution provides the opportunity for scientific discovery. We visualize large quantities of simulation data using the visualization application 'Virtual Aurora', based on AVS (Advanced Visual Systems), and parallel distributed processing on the NICT "Space Weather Cloud", based on Gfarm technology. We introduce two results of high-temporal-resolution visualization: the magnetic flux rope generation process and dayside reconnection, using a system of magnetic field line tracing.

  19. Integrated visualization of simulation results and experimental devices in virtual-reality space

    International Nuclear Information System (INIS)

    Ohtani, Hiroaki; Ishiguro, Seiji; Shohji, Mamoru; Kageyama, Akira; Tamura, Yuichi

    2011-01-01

    We succeeded in integrating the visualization of both simulation results and experimental device data in virtual-reality (VR) space using a CAVE system. Simulation results are shown using the Virtual LHD software, which can show magnetic field lines, particle trajectories, and isosurfaces of plasma pressure of the Large Helical Device (LHD), based on data from the magnetohydrodynamics equilibrium simulation. A three-dimensional mouse, or wand, interactively determines the initial position and pitch angle of a drift particle, or the starting point of a magnetic field line, in the VR space. The trajectory of a particle and the streamline of the magnetic field are calculated using the Runge-Kutta-Huta integration method from the specified initial condition. The LHD vessel is visualized based on CAD data. Using these results and data, the simulated LHD plasma can be drawn interactively within the objective description of the LHD experimental vessel. Through this integrated visualization, it is possible to grasp the three-dimensional relationship between the positions of the device and the plasma in VR space, opening a new path for contributions to future research. (author)
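
    The tracing step itself is standard: integrate dx/ds = B(x)/|B(x)| from the wand-selected seed point. Below is a minimal Python sketch with a classical fourth-order Runge-Kutta step; the dipole stand-in field and all names are illustrative, since the real system samples the LHD equilibrium data and uses the Runge-Kutta-Huta variant.

```python
import numpy as np

def b_field(x):
    """Stand-in dipole field; the application interpolates simulation data."""
    m = np.array([0.0, 0.0, 1.0])
    r = np.linalg.norm(x)
    return 3.0 * x * np.dot(m, x) / r**5 - m / r**3

def trace_field_line(seed, ds=0.01, n_steps=2000):
    f = lambda x: b_field(x) / np.linalg.norm(b_field(x))  # unit tangent
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        x = pts[-1]
        k1 = f(x)
        k2 = f(x + 0.5 * ds * k1)
        k3 = f(x + 0.5 * ds * k2)
        k4 = f(x + ds * k3)
        pts.append(x + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(pts)

line = trace_field_line([1.0, 0.0, 0.2])   # seed chosen by the wand in VR
```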

  20. Large-Eddy Simulation of Internal Flow through Human Vocal Folds

    Science.gov (United States)

    Lasota, Martin; Šidlof, Petr

    2018-06-01

    The phonatory process occurs when air is expelled from the lungs through the glottis and the pressure drop causes flow-induced oscillations of the vocal folds. The flow fields created in phonation are highly unsteady, and coherent vortex structures are also generated. For accuracy, it is essential to compute on a humanlike computational domain with an appropriate mathematical model. This work deals with the numerical simulation of air flow within the space between the plicae vocales and plicae vestibulares. In addition to the dynamic width of the rima glottidis, where the sound is generated, the lateral ventriculus laryngis and sacculus laryngis are included in the computational domain as well. The paper presents results from OpenFOAM obtained with large-eddy simulation using second-order finite-volume discretization of the incompressible Navier-Stokes equations. Large-eddy simulations with different subgrid-scale models are executed on a structured mesh. In these cases, only subgrid-scale models are used that model turbulence via a turbulent viscosity and the Boussinesq approximation in the subglottal and supraglottal areas of the larynx.

  1. Large-Eddy Simulation of Internal Flow through Human Vocal Folds

    Directory of Open Access Journals (Sweden)

    Lasota Martin

    2018-01-01

    The phonatory process occurs when air is expelled from the lungs through the glottis and the pressure drop causes flow-induced oscillations of the vocal folds. The flow fields created in phonation are highly unsteady, and coherent vortex structures are also generated. For accuracy, it is essential to compute on a humanlike computational domain with an appropriate mathematical model. This work deals with the numerical simulation of air flow within the space between the plicae vocales and plicae vestibulares. In addition to the dynamic width of the rima glottidis, where the sound is generated, the lateral ventriculus laryngis and sacculus laryngis are included in the computational domain as well. The paper presents results from OpenFOAM obtained with large-eddy simulation using second-order finite-volume discretization of the incompressible Navier-Stokes equations. Large-eddy simulations with different subgrid-scale models are executed on a structured mesh. In these cases, only subgrid-scale models are used that model turbulence via a turbulent viscosity and the Boussinesq approximation in the subglottal and supraglottal areas of the larynx.

  2. Status Report of Simulated Space Radiation Environment Facility

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Phil Hyun; Nho, Young Chang; Jeun, Joon Pyo; Choi, Jae Hak; Lim, Youn Mook; Jung, Chan Hee; Jeon, Young Kyu

    2007-11-15

    The technology for performance testing and improvement of materials that are durable in the space environment is a military-related technology, veiled and tightly regulated in advanced countries such as the US and Russia. This core technology cannot easily be transferred to other countries either. It is therefore a fundamental and necessary research area for the successful establishment of a space environment system. Since the task of evaluating the effects of space radiation on space materials and components plays an important role in extending satellite lifetime and decreasing the failure rate, it is necessary to establish a simulated space radiation facility and a systematic testing procedure. This report deals with the status of the technology to enable the simulation of space environment effects, including the effect of space radiation on space materials. This information, such as the fundamental knowledge of the space environment and the research status of various countries regarding the simulation of space environment effects on space materials, will be useful for research on the radiation hardness of materials. Furthermore, it will help developers of space materials derive a better choice of materials, reduce the design cycle time, and improve safety.

  3. Status Report of Simulated Space Radiation Environment Facility

    International Nuclear Information System (INIS)

    Kang, Phil Hyun; Nho, Young Chang; Jeun, Joon Pyo; Choi, Jae Hak; Lim, Youn Mook; Jung, Chan Hee; Jeon, Young Kyu

    2007-11-01

    The technology for performance testing and improvement of materials that are durable in the space environment is a militarily related technology, veiled and tightly regulated in advanced countries such as the US and Russia. This core technology cannot easily be transferred to other countries either. It is therefore the most fundamental and necessary research area for the successful establishment of a space environment system. Since evaluating the effects of space radiation on space materials and components plays an important role in extending satellite lifetime and decreasing in-service failure rates, it is necessary to establish a simulated space radiation facility and a systematic testing procedure. This report deals with the status of the technology for simulating space environment effects, including the effect of space radiation on space materials. This information, covering fundamental knowledge of the space environment and the research status of various countries regarding the simulation of space environment effects on space materials, will be useful for research on the radiation hardness of such materials. Furthermore, it will help developers of space materials make better material choices, reduce design cycle time, and improve safety.

  4. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    Directory of Open Access Journals (Sweden)

    Qianghui Zhang

    2016-07-01

    Full Text Available Free of the constraints of orbital mechanics, weather conditions and minimum antenna area, synthetic aperture radar (SAR) carried on a near-space platform is better suited to sustained large-scene imaging than its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), a novel wide-swath imaging mode that allows the SAR beam to scan along the azimuth, can reduce the echo acquisition time for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, two-step processing (TSP) is first adopted to eliminate the Doppler aliasing of the echo. The data is then focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications.

  5. Large eddy simulations of compressible magnetohydrodynamic turbulence

    International Nuclear Information System (INIS)

    Grete, Philipp

    2016-01-01

    subsonic (sonic Mach number M_s ∼ 0.2) to the highly supersonic (M_s ∼ 20) regime, and against other SGS closures. The latter include established closures of eddy-viscosity and scale-similarity type. In all tests and over the entire parameter space, we find that the proposed closures are (significantly) closer to the reference data than the other closures. In the a posteriori tests, we perform large eddy simulations of decaying, supersonic MHD turbulence with initial M_s ∼ 3. We implemented closures of all types, i.e. of eddy-viscosity, scale-similarity and nonlinear type, as an SGS model and evaluated their performance in comparison to simulations without a model (and at higher resolution). We find that the models need to be calculated on a scale larger than the grid scale, e.g. by an explicit filter, to have an influence on the dynamics at all. Furthermore, we show that only the proposed nonlinear closure improves higher-order statistics.

  6. Large eddy simulations of compressible magnetohydrodynamic turbulence

    Science.gov (United States)

    Grete, Philipp

    2017-02-01

    subsonic (sonic Mach number M_s ≈ 0.2) to the highly supersonic (M_s ≈ 20) regime, and against other SGS closures. The latter include established closures of eddy-viscosity and scale-similarity type. In all tests and over the entire parameter space, we find that the proposed closures are (significantly) closer to the reference data than the other closures. In the a posteriori tests, we perform large eddy simulations of decaying, supersonic MHD turbulence with initial M_s ≈ 3. We implemented closures of all types, i.e. of eddy-viscosity, scale-similarity and nonlinear type, as an SGS model and evaluated their performance in comparison to simulations without a model (and at higher resolution). We find that the models need to be calculated on a scale larger than the grid scale, e.g. by an explicit filter, to have an influence on the dynamics at all. Furthermore, we show that only the proposed nonlinear closure improves higher-order statistics.
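
    The observation that the SGS models "need to be calculated on a scale larger than the grid scale, e.g. by an explicit filter" can be illustrated with a simple top-hat test filter. The sketch below is a generic box filter with periodic wrap-around, not the filter used in the cited study:

```python
import numpy as np

def box_filter(field, width=3):
    """Explicit top-hat test filter of odd width (in grid cells).

    Averages over a cube of width**3 neighbouring cells with periodic
    wrap-around, so an SGS closure can be evaluated on a scale larger
    than the grid scale.
    """
    assert width % 2 == 1, "use an odd filter width"
    r = width // 2
    out = np.zeros_like(field)
    for i in range(-r, r + 1):
        for j in range(-r, r + 1):
            for k in range(-r, r + 1):
                out += np.roll(field, (i, j, k), axis=(0, 1, 2))
    return out / width**3

filtered = box_filter(np.random.default_rng(0).standard_normal((32, 32, 32)))
```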

  7. Optimal control of large space structures via generalized inverse matrix

    Science.gov (United States)

    Nguyen, Charles C.; Fang, Xiaowen

    1987-01-01

    Independent Modal Space Control (IMSC) is a control scheme that decouples the space structure into n independent second-order subsystems according to n controlled modes and controls each mode independently. It is well known that IMSC eliminates the control and observation spillover caused when a conventional coupled modal control scheme is employed. Independent control of each mode requires that the number of actuators equal the number of modelled modes, which is very high for a faithful model of a large space structure. A control scheme is proposed that allows one to use a reduced number of actuators to control all modelled modes suboptimally. In particular, the method of generalized inverse matrices is employed to implement the actuators such that the eigenvalues of the closed-loop system are as close as possible to those specified by the optimal IMSC. A computer simulation of the proposed control scheme on a simply supported beam is given.
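
    The central numerical step described here, driving n modelled modes with m < n actuators so that the closed-loop eigenvalues stay close to the optimal IMSC ones, reduces to a least-squares problem solvable with the Moore-Penrose generalized inverse. A hypothetical sketch (the matrix Phi, the dimensions and the random data are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical sizes: n modelled modes, m < n actuators
n, m = 10, 4
rng = np.random.default_rng(0)

Phi = rng.standard_normal((n, m))   # modal influence matrix of the actuators
f_opt = rng.standard_normal(n)      # modal forces demanded by optimal IMSC

# Least-squares actuator commands via the Moore-Penrose generalized inverse:
# u minimizes ||Phi u - f_opt||, so the realized closed-loop eigenvalues are
# "as close as possible" to those specified by the optimal IMSC.
u = np.linalg.pinv(Phi) @ f_opt
print("modal force residual:", np.linalg.norm(Phi @ u - f_opt))
```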

  8. Space-time relationship in continuously moving table method for large FOV peripheral contrast-enhanced magnetic resonance angiography

    International Nuclear Information System (INIS)

    Sabati, M; Lauzon, M L; Frayne, R

    2003-01-01

    Data acquisition using a continuously moving table approach is a method capable of generating large field-of-view (FOV) 3D MR angiograms. However, in order to obtain venous contamination-free contrast-enhanced (CE) MR angiograms in the lower limbs, one of the major challenges is to acquire all necessary k-space data during the restricted arterial phase of the contrast agent. A preliminary investigation of the space-time relationship of continuously acquired peripheral angiography is performed in this work. Deterministic and stochastic undersampled hybrid-space (x, k_y, k_z) acquisitions are simulated for large FOV peripheral runoff studies. Initial results show the possibility of acquiring isotropic large FOV images of the entire peripheral vascular system. An optimal trade-off between the spatial and temporal sampling properties was found that produced a high-spatial-resolution peripheral CE-MR angiogram. The deterministic sampling pattern was capable of reconstructing the global structure of the peripheral arterial tree and showed slightly better global quantitative results than the stochastic patterns. Optimal stochastic sampling patterns, on the other hand, enhanced small vessels and had more favourable local quantitative results. These simulations demonstrate the complex spatial-temporal relationship when sampling large FOV peripheral runoff studies. They also suggest that more investigation is required to maximize image quality as a function of hybrid-space coverage, acquisition repetition time and sampling pattern parameters.
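
    A stochastic undersampled hybrid-space acquisition of the kind simulated here can be mimicked by sampling (k_y, k_z) phase encodes with a density that decays away from the centre of k-space. The sketch below is a generic variable-density random mask; the decay law and its parameters are assumptions, not the authors' sampling patterns:

```python
import numpy as np

def variable_density_mask(ny, nz, undersampling=4.0, decay=2.0, seed=0):
    """Random (k_y, k_z) sampling mask, denser near the k-space centre.

    The probability of acquiring a phase encode falls off as a power of
    its normalized distance from the centre; on average, roughly a
    fraction 1/undersampling of the plane is acquired.
    """
    rng = np.random.default_rng(seed)
    ky = np.linspace(-1.0, 1.0, ny)[:, None]
    kz = np.linspace(-1.0, 1.0, nz)[None, :]
    radius = np.sqrt(ky**2 + kz**2) / np.sqrt(2.0)
    density = (1.0 - radius) ** decay            # peaked at the centre
    density *= (ny * nz / undersampling) / density.sum()
    return rng.random((ny, nz)) < np.clip(density, 0.0, 1.0)

mask = variable_density_mask(192, 128)
print("sampled fraction:", mask.mean())
```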

  9. Simulation of Martian surface-atmosphere interaction in a space-simulator: Technical considerations and feasibility

    Science.gov (United States)

    Moehlmann, D.; Kochan, H.

    1992-01-01

    The Space Simulator of the German Aerospace Research Establishment (DLR) at Cologne, formerly used for testing satellites, has been the central unit within the research sub-program 'Comet-Simulation' (KOSI) since 1987. The KOSI team has investigated physical processes relevant to comets and their surfaces. As a byproduct, we gained experience in sample handling under simulated space conditions. To broaden the scope of the research activities of the DLR Institute of Space Simulation, an extension to 'Laboratory Planetology' is planned. Following the KOSI experiments, a Mars surface simulation with realistic minerals and surface soil in a suitable environment (temperature, pressure, and CO2 atmosphere) is foreseen as the next step. Here, our main interest is centered on the thermophysical properties of the Martian surface and on energy transport (and related gas transport) through the surface. These laboratory simulation activities can be related to space missions as typical pre-mission and during-the-mission support of experiment design and operations (simulation in parallel); post-mission experiments for confirmation and interpretation of results are of great value. The physical dimensions of the Space Simulator (a cylinder of about 2.5 m diameter and 5 m length) allow for testing and qualification of experimental hardware under realistic Martian conditions.

  10. A Process for Comparing Dynamics of Distributed Space Systems Simulations

    Science.gov (United States)

    Cures, Edwin Z.; Jackson, Albert A.; Morris, Jeffery C.

    2009-01-01

    The paper describes a process that was developed for comparing the primary orbital dynamics behavior between space systems distributed simulations. This process is used to characterize and understand the fundamental fidelities and compatibilities of the modeling of orbital dynamics between spacecraft simulations. This is required for high-latency distributed simulations such as NASA's Integrated Mission Simulation and must be understood when reporting results from simulation executions. This paper presents 10 principal comparison tests along with their rationale and examples of the results. The Integrated Mission Simulation (IMSim) (formerly known as the Distributed Space Exploration Simulation (DSES)) is a NASA research and development project focusing on the technologies and processes that are related to the collaborative simulation of complex space systems involved in the exploration of our solar system. Currently, the NASA centers that are actively participating in the IMSim project are the Ames Research Center, the Jet Propulsion Laboratory (JPL), the Johnson Space Center (JSC), the Kennedy Space Center, the Langley Research Center and the Marshall Space Flight Center. In concept, each center participating in IMSim has its own set of simulation models and environment(s). These simulation tools are used to build the various simulation products that are used for scientific investigation, engineering analysis, system design, training, planning, operations and more. Working individually, these production simulations provide important data to various NASA projects.

  11. Large Scale Beam-beam Simulations for the CERN LHC using Distributed Computing

    CERN Document Server

    Herr, Werner; McIntosh, E; Schmidt, F

    2006-01-01

    We report on a large scale simulation of beam-beam effects for the CERN Large Hadron Collider (LHC). The stability of particles which experience head-on and long-range beam-beam effects was investigated for different optical configurations and machine imperfections. To cover the interesting parameter space required computing resources not available at CERN. The necessary resources were available in the LHC@home project, based on the BOINC platform. At present, this project makes more than 60000 hosts available for distributed computing. We shall discuss our experience using this system during a simulation campaign of more than six months and describe the tools and procedures necessary to ensure consistent results. The results from this extended study are presented and future plans are discussed.

  12. Environmental effects and large space systems

    Science.gov (United States)

    Garrett, H. B.

    1981-01-01

    When planning large-scale operations in space, environmental impact must be considered in addition to radiation, spacecraft charging, contamination, high power and size. Pollution of the atmosphere and of space is caused by rocket effluents and by photoelectrons generated by sunlight falling on satellite surfaces; even light pollution may result (the SPS may reflect so much light as to be a nuisance to astronomers). Large (100 km²) structures will also absorb the high-energy particles that impinge on them. Altogether, these effects may drastically alter the Earth's magnetosphere. It is not clear whether these alterations will in any way affect the Earth's surface climate. Large structures will also generate large plasma wakes and waves, which may cause interference with communications to the vehicle. A high-energy microwave beam from the SPS will cause ionospheric turbulence, affecting UHF and VHF communications. Although none of these effects may ultimately prove critical, they must be considered in the design of large structures.

  13. Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales

    Data.gov (United States)

    National Aeronautics and Space Administration — Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales A move is currently...

  14. Mechanical Behavior Analysis of Y-Type S-SRC Column in a Large-Space Vertical Hybrid Structure Using Local Fine Numerical Simulation Method

    Directory of Open Access Journals (Sweden)

    Jianguang Yue

    2018-01-01

    Full Text Available In a large spatial structure, the most important members are normally of a special type and are key to the safety of the global structure. When studying the detailed mechanical behavior of such a local member, it is difficult for common test methods to reproduce the complex spatial loading state of the member. Therefore, a local-fine finite element model was proposed and a large-space vertical hybrid structure was numerically simulated. The seismic responses of the global structure and of the Y-type S-SRC column were analyzed under El Centro seismic motions with peak accelerations of 35 gal and 220 gal. The numerical model was verified against the results of a seismic shaking table test of the structure model. The failure mechanism and stiffness damage evolution of the Y-type S-SRC column were analyzed. The calculated results agreed well with the test results, indicating that the local-fine FEM can reflect the mechanical details of local members in a large spatial structure.

  15. Large eddy simulations of compressible magnetohydrodynamic turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Grete, Philipp

    2016-09-09

    subsonic (sonic Mach number M_s ∼ 0.2) to the highly supersonic (M_s ∼ 20) regime, and against other SGS closures. The latter include established closures of eddy-viscosity and scale-similarity type. In all tests and over the entire parameter space, we find that the proposed closures are (significantly) closer to the reference data than the other closures. In the a posteriori tests, we perform large eddy simulations of decaying, supersonic MHD turbulence with initial M_s ∼ 3. We implemented closures of all types, i.e. of eddy-viscosity, scale-similarity and nonlinear type, as an SGS model and evaluated their performance in comparison to simulations without a model (and at higher resolution). We find that the models need to be calculated on a scale larger than the grid scale, e.g. by an explicit filter, to have an influence on the dynamics at all. Furthermore, we show that only the proposed nonlinear closure improves higher-order statistics.

  16. Modeling extreme (Carrington-type) space weather events using three-dimensional MHD code simulations

    Science.gov (United States)

    Ngwira, C. M.; Pulkkinen, A. A.; Kuznetsova, M. M.; Glocer, A.

    2013-12-01

    There is growing concern over possible severe societal consequences related to adverse space weather impacts on man-made technological infrastructure and systems. In the last two decades, significant progress has been made towards the modeling of space weather events. Three-dimensional (3-D) global magnetohydrodynamics (MHD) models have been at the forefront of this transition and have played a critical role in advancing our understanding of space weather. However, the modeling of extreme space weather events is still a major challenge even for existing global MHD models. In this study, we introduce a specially adapted University of Michigan 3-D global MHD model for simulating extreme space weather events with a ground footprint comparable to, or larger than, that of the Carrington superstorm. Results are presented for an initial simulation run with 'very extreme' constructed/idealized solar wind boundary conditions driving the magnetosphere. In particular, we describe the reaction of the magnetosphere-ionosphere system and the associated ground-induced geoelectric field to such extreme driving conditions. We also discuss the results and what they might mean for the accuracy of the simulations. The model is further tested using input data for an observed space weather event to verify the MHD model's consistency and to draw guidance for future work. This extreme space weather MHD model is designed specifically for practical application to the modeling of extreme geomagnetically induced electric fields, which can drive large currents in earth conductors such as power transmission grids.

  17. Modeling and simulation of large HVDC systems

    Energy Technology Data Exchange (ETDEWEB)

    Jin, H.; Sood, V.K.

    1993-01-01

    This paper addresses the complexity of, and the amount of work involved in, preparing simulation data and implementing various converter control schemes, as well as the excessive simulation time required in the modelling and simulation of large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems, and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation times and results are provided in the paper.

  18. Investigation of Secondary Neutron Production in Large Space Vehicles for Deep Space

    Science.gov (United States)

    Rojdev, Kristina; Koontz, Steve; Reddell, Brandon; Atwell, William; Boeder, Paul

    2016-01-01

    Future NASA missions will focus on deep space and Mars surface operations, with large structures necessary for the transportation of crew and cargo. In addition to the challenges of manufacturing these large structures, there are added challenges from the space radiation environment and its impacts on the crew, electronics, and vehicle materials. Primary radiation from the sun (solar particle events) and from outside the solar system (galactic cosmic rays) interacts with the materials of the vehicle and the elements inside it. These interactions lead to the primary radiation being absorbed or producing secondary radiation (primarily neutrons). For all vehicles, the high-energy primary radiation is of most concern. However, with larger vehicles there is more opportunity for secondary radiation production, which can be significant enough to cause concern. In a previous paper, we took our first steps toward studying neutron production from large vehicles by validating our radiation transport codes for neutron environments against flight data. The present paper extends that work to focus on the deep space environment and the resulting neutron flux from large vehicles in this environment.

  19. Analysis of the Thermo-Elastic Response of Space Reflectors to Simulated Space Environment

    Science.gov (United States)

    Allegri, G.; Ivagnes, M. M.; Marchetti, M.; Poscente, F.

    2002-01-01

    The evaluation of space environment effects on materials and structures is key to the proper design of long-duration missions: since a large share of the satellites operating in the Earth orbital environment are employed for telecommunications, the development of space antennas and reflectors characterized by high dimensional stability under space environment interactions represents a major challenge for designers. The structural layout of state-of-the-art space antennas and reflectors is very complex, since several different sensitive elements and materials are employed: particular care must be taken in evaluating the actual geometrical configuration of reflectors operating in the space environment, since very limited distortions of the designed layout can severely degrade the quality of the signal both received and transmitted, especially for antennas operating at high frequencies. The effects of thermal loads due to direct sunlight exposure and to the Earth and Moon albedo can easily be taken into account using standard methods of structural analysis; on the other hand, thermal cycling and exposure to the vacuum environment produce a long-term damage accumulation that affects the whole structure. Typical effects of such exposure are the outgassing of polymeric materials and the contamination of the exposed surfaces, which can significantly affect the thermo-mechanical properties of the materials themselves and, therefore, the global structural response. The main aim of the present paper is to evaluate the synergistic effects of thermal cycling and of exposure to a high-vacuum environment on an innovative antenna developed by Alenia Spazio S.p.A.: to this purpose, both an experimental and a numerical research activity has been developed. A complete prototype of the antenna has been exposed to the space environment simulated by the SAS facility: this latter consists of a high-vacuum chamber, equipped by

  20. An FPGA computing demo core for space charge simulation

    International Nuclear Information System (INIS)

    Wu, Jinyuan; Huang, Yifei

    2009-01-01

    In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time- and resource-consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated the non-calculating operations that are indispensable in regular micro-processors (e.g. instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. The temperature and power consumption of FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.
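
    The look-up-table scheme described in the abstract, addressing a table with the most significant non-zero bits to approximate the inverse square-root cube d^(-3/2) of a squared distance d, can be emulated in floating point. This is a software sketch of the addressing idea only; the actual core uses 16-bit fixed-point arithmetic and hardware details not reproduced here:

```python
import numpy as np

def make_lut(bits=10):
    """Look-up table for f(x) = x**(-1.5) over one binade, x in [1, 2).

    A hardware version normalizes d = 2**e * x, addresses the table with
    the top `bits` bits of the mantissa x, and rescales by 2**(-1.5 * e);
    this sketch emulates the same scheme in floating point.
    """
    x = 1.0 + (np.arange(2**bits) + 0.5) / 2**bits   # bin centres in [1, 2)
    return x ** -1.5

def inv_sqrt_cube(d, lut, bits=10):
    """Approximate d**(-1.5) via table look-up, as in the FPGA demo core."""
    e = np.floor(np.log2(d)).astype(int)              # exponent: d = 2**e * x
    x = d / np.exp2(e)                                # mantissa in [1, 2)
    idx = np.minimum((x - 1.0) * 2**bits, 2**bits - 1).astype(int)
    return lut[idx] * np.exp2(-1.5 * e)

lut = make_lut()
d = np.array([0.37, 1.0, 5.5, 1234.0])
print(inv_sqrt_cube(d, lut))   # compare against the exact d**-1.5
```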

  1. An FPGA computing demo core for space charge simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jinyuan; Huang, Yifei; /Fermilab

    2009-01-01

    In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time- and resource-consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated the non-calculating operations that are indispensable in regular micro-processors (e.g. instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. The temperature and power consumption of FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.

  2. A dynamic globalization model for large eddy simulation of complex turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hae Cheon; Park, No Ma; Kim, Jin Seok [Seoul National Univ., Seoul (Korea, Republic of)

    2005-07-01

    A dynamic subgrid-scale model is proposed for large eddy simulation of turbulent flows in complex geometry. The eddy viscosity model of Vreman [Phys. Fluids, 16, 3670 (2004)] is considered as a base model. A priori tests with the original Vreman model show that it predicts the correct profile of subgrid-scale dissipation in turbulent channel flow, but the optimal model coefficient is far from universal. Dynamic procedures for determining the model coefficient are proposed based on the 'global equilibrium' between the subgrid-scale dissipation and the viscous dissipation. An important feature of the proposed procedures is that the model coefficient is globally constant in space but varies in time. Large eddy simulations with the present dynamic model are conducted for forced isotropic turbulence, turbulent channel flow and flow over a sphere, showing excellent agreement with previous results.
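
    The 'global equilibrium' idea, a model coefficient that is constant in space but adapts in time by balancing volume-averaged dissipations, can be caricatured in a few lines. The balance used below (and the factor alpha) is an assumption standing in for the paper's actual dynamic procedure, not a reproduction of it:

```python
import numpy as np

def global_dynamic_coefficient(nu_base, S_mag, nu_molecular, alpha=1.0):
    """Spatially constant, time-varying coefficient C(t) (schematic).

    Balances the volume-averaged modelled SGS dissipation
    <C * nu_base * |S|^2> against alpha times the volume-averaged
    viscous dissipation <nu * |S|^2>; alpha = 1 is purely illustrative.
    """
    eps_sgs_per_C = np.mean(nu_base * S_mag**2)
    eps_visc = np.mean(nu_molecular * S_mag**2)
    return alpha * eps_visc / max(eps_sgs_per_C, 1e-30)

# Synthetic fields, only to exercise the function
S = np.abs(np.random.default_rng(3).standard_normal((32, 32, 32)))
C = global_dynamic_coefficient(0.01 * np.ones_like(S), S, nu_molecular=1e-3)
```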

  3. Laboratory Spectroscopy of Large Carbon Molecules and Ions in Support of Space Missions. A New Generation of Laboratory & Space Studies

    Science.gov (United States)

    Salama, Farid; Tan, Xiaofeng; Cami, Jan; Biennier, Ludovic; Remy, Jerome

    2006-01-01

    Polycyclic Aromatic Hydrocarbons (PAHs) are an important and ubiquitous component of carbon-bearing materials in space. A long-standing and major challenge for laboratory astrophysics has been to measure the spectra of large carbon molecules in laboratory environments that mimic, in a realistic way, the physical conditions associated with the interstellar emission and absorption regions [1]. This objective has been identified as one of the critical laboratory astrophysics objectives for optimizing the data return from space missions [2]. An extensive laboratory program has been developed to assess the properties of PAHs in such environments and to describe how they influence the radiation and energy balance in space. We present and discuss the gas-phase electronic absorption spectra of neutral and ionized PAHs measured in the UV-Visible-NIR range in astrophysically relevant environments and discuss the implications for astrophysics [1]. The harsh physical conditions of the interstellar medium (low temperature, absence of collisions and strong VUV radiation fields) have been simulated in the laboratory by associating a pulsed cavity ringdown spectrometer (CRDS) with a supersonic slit jet seeded with PAHs and an ionizing, Penning-type, electronic discharge. We have measured for the first time the spectra of a series of neutral [3,4] and ionized [5,6] interstellar PAH analogs in the laboratory. An effort has also been made to quantify the mechanisms of ion and carbon nanoparticle production in the free jet expansion and to model our simulation of the diffuse interstellar medium in the laboratory [7]. These experiments provide unique information on the spectra of free, large carbon-containing molecules and ions in the gas phase. We are now, for the first time, in a position to directly compare laboratory spectral data on free, cold PAH ions and nano-sized carbon particles with astronomical observations in the

  4. Large eddy simulation of bundle turbulent flows

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Barsamian, H.R.

    1995-01-01

    Large eddy simulation may be defined as the simulation of a turbulent flow in which the large-scale motions are explicitly resolved while the small-scale motions are modeled. This results in a system of equations that requires closure models, which relate the effects of the small-scale motions to the large-scale motions. Several models have been developed; the most popular is the Smagorinsky eddy viscosity model. A new model that modifies the Smagorinsky model has recently been introduced by Lee. Using both of these closure models, two different geometric arrangements were used in the simulation of turbulent cross flow within rigid tube bundles: an in-lined array simulation for a deep bundle (10,816 nodes) and an inlet/outlet simulation (57,600 nodes). Comparisons were made to available experimental data. Flow visualization enabled the distinction of different flow characteristics, such as jet switching effects in the wake of the bundle for the inlet/outlet simulation case, as well as within the tube bundles. The results indicate that the large eddy simulation technique is capable of turbulence prediction and may be used as a viable engineering tool with careful consideration of the subgrid scale model. (author)

  5. Potential large missions enabled by NASA's space launch system

    Science.gov (United States)

    Stahl, H. Philip; Hopkins, Randall C.; Schnell, Andrew; Smith, David A.; Jackman, Angela; Warfield, Keith R.

    2016-07-01

    Large space telescope missions have always been limited by their launch vehicle's mass and volume capacities. The Hubble Space Telescope (HST) was specifically designed to fit inside the Space Shuttle and the James Webb Space Telescope (JWST) is specifically designed to fit inside an Ariane 5. Astrophysicists desire even larger space telescopes. NASA's "Enduring Quests Daring Visions" report calls for an 8- to 16-m Large UV-Optical-IR (LUVOIR) Surveyor mission to enable ultra-high-contrast spectroscopy and coronagraphy. AURA's "From Cosmic Birth to Living Earth" report calls for a 12-m class High-Definition Space Telescope to pursue transformational scientific discoveries. NASA's "Planning for the 2020 Decadal Survey" calls for a Habitable Exoplanet Imaging (HabEx) and a LUVOIR as well as Far-IR and an X-Ray Surveyor missions. Packaging larger space telescopes into existing launch vehicles is a significant engineering complexity challenge that drives cost and risk. NASA's planned Space Launch System (SLS), with its 8 or 10-m diameter fairings and ability to deliver 35 to 45-mt of payload to Sun-Earth-Lagrange-2, mitigates this challenge by fundamentally changing the design paradigm for large space telescopes. This paper reviews the mass and volume capacities of the planned SLS, discusses potential implications of these capacities for designing large space telescope missions, and gives three specific mission concept implementation examples: a 4-m monolithic off-axis telescope, an 8-m monolithic on-axis telescope and a 12-m segmented on-axis telescope.

  6. Galactic cosmic ray simulation at the NASA Space Radiation Laboratory

    Science.gov (United States)

    Norbury, John W.; Schimmerling, Walter; Slaba, Tony C.; Azzam, Edouard I.; Badavi, Francis F.; Baiocco, Giorgio; Benton, Eric; Bindi, Veronica; Blakely, Eleanor A.; Blattnig, Steve R.; Boothman, David A.; Borak, Thomas B.; Britten, Richard A.; Curtis, Stan; Dingfelder, Michael; Durante, Marco; Dynan, William S.; Eisch, Amelia J.; Elgart, S. Robin; Goodhead, Dudley T.; Guida, Peter M.; Heilbronn, Lawrence H.; Hellweg, Christine E.; Huff, Janice L.; Kronenberg, Amy; La Tessa, Chiara; Lowenstein, Derek I.; Miller, Jack; Morita, Takashi; Narici, Livio; Nelson, Gregory A.; Norman, Ryan B.; Ottolenghi, Andrea; Patel, Zarana S.; Reitz, Guenther; Rusek, Adam; Schreurs, Ann-Sofie; Scott-Carnell, Lisa A.; Semones, Edward; Shay, Jerry W.; Shurshakov, Vyacheslav A.; Sihver, Lembit; Simonsen, Lisa C.; Story, Michael D.; Turker, Mitchell S.; Uchihori, Yukio; Williams, Jacqueline; Zeitlin, Cary J.

    2017-01-01

    Most accelerator-based space radiation experiments have been performed with single ion beams at fixed energies. However, the space radiation environment consists of a wide variety of ion species with a continuous range of energies. Due to recent developments in beam switching technology implemented at the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL), it is now possible to rapidly switch ion species and energies, allowing for the possibility to more realistically simulate the actual radiation environment found in space. The present paper discusses a variety of issues related to implementation of galactic cosmic ray (GCR) simulation at NSRL, especially for experiments in radiobiology. Advantages and disadvantages of different approaches to developing a GCR simulator are presented. In addition, issues common to both GCR simulation and single beam experiments are compared to issues unique to GCR simulation studies. A set of conclusions is presented as well as a discussion of the technical implementation of GCR simulation. PMID:26948012

  7. Definition of technology development missions for early space stations. Large space structures, phase 2, midterm review

    Science.gov (United States)

    1984-01-01

    The large space structures technology development missions to be performed on an early manned space station were studied and defined, and the resources needed, as well as the design implications for an early space station to carry out these missions, were determined. Emphasis is placed on greater detail in mission designs and space station resource requirements.

  8. Solid State Large Area Pulsed Solar Simulator for 3-, 4- and 6-Junction Solar Cell Arrays, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The Phase I was successful in delivering a complete prototype of the proposed innovation, an LED-based, solid state, large area, pulsed, solar simulator (ssLAPSS)....

  9. An optimal beam alignment method for large-scale distributed space surveillance radar system

    Science.gov (United States)

    Huang, Jian; Wang, Dongya; Xia, Shuangzhi

    2018-06-01

    Large-scale distributed space surveillance radar is very important ground-based equipment for maintaining a complete catalogue of Low Earth Orbit (LEO) space debris. However, because the individual sites of the distributed radar system are separated by thousands of kilometers, optimally aligning the narrow Transmitting/Receiving (T/R) beams across such a large volume of space poses a special and considerable technical challenge in the space surveillance area. Based on a common coordinate transformation model and a radar beam space model, we present a two-dimensional projection algorithm for the T/R beams using direction angles, which can visually describe and assess the beam alignment performance. Subsequently, optimal mathematical models for the orientation angle of the antenna array, the site location and the T/R beam coverage are constructed, and the beam alignment parameters are precisely solved. Finally, we conducted optimal beam alignment experiments based on the site parameters of the Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of our novel method, which can significantly support the construction of LEO space debris surveillance equipment.
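
    The 'common coordinate transformation model' underlying such a projection starts by converting each site's geodetic position to Earth-centred (ECEF) coordinates, from which the direction angles of a T/R line of sight follow. A sketch using standard WGS-84 textbook formulas (the site and target coordinates below are hypothetical, not AFSSS parameters):

```python
import numpy as np

A = 6378137.0               # WGS-84 semi-major axis [m]
E2 = 6.69437999014e-3       # WGS-84 first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Geodetic latitude/longitude/height -> ECEF x, y, z (metres)."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)   # prime vertical radius
    x = (n + h) * np.cos(lat) * np.cos(lon)
    y = (n + h) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - E2) + h) * np.sin(lat)
    return np.array([x, y, z])

# Direction cosines of the line of sight from a site to a point target,
# the quantities from which beam direction angles are projected.
site = geodetic_to_ecef(30.0, 110.0, 0.0)
target = geodetic_to_ecef(30.5, 112.0, 800e3)   # hypothetical LEO object
u = (target - site) / np.linalg.norm(target - site)
print("direction cosines:", u)
```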

  10. Monte Carlo simulation of continuous-space crystal growth

    International Nuclear Information System (INIS)

    Dodson, B.W.; Taylor, P.A.

    1986-01-01

    We describe a method, based on Monte Carlo techniques, of simulating the atomic growth of crystals without the discrete lattice space assumed by conventional Monte Carlo growth simulations. Since no lattice space is assumed, problems involving epitaxial growth, heteroepitaxy, phonon-driven mechanisms, surface reconstruction, and many other phenomena incompatible with the lattice-space approximation can be studied. Also, use of the Monte Carlo method circumvents to some extent the extreme limitations on simulated timescale inherent in crystal-growth techniques which might be proposed using molecular dynamics. The implementation of the new method is illustrated by studying the growth of strained-layer superlattice (SLS) interfaces in two-dimensional Lennard-Jones atomic systems. Despite the extreme simplicity of such systems, the qualitative features of SLS growth seen here are similar to those observed experimentally in real semiconductor systems
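
    In a lattice-free growth simulation of this kind, each Monte Carlo move displaces an atom in continuous coordinates and is accepted with the Metropolis rule under the interatomic potential. A minimal 2-D Lennard-Jones Metropolis step, far simpler than the published method (no deposition events, walls or substrate), might read:

```python
import numpy as np

rng = np.random.default_rng(1)

def lj_energy(pos, eps=1.0, sigma=1.0):
    """Total 2-D Lennard-Jones energy of an N x 2 configuration."""
    d = pos[:, None, :] - pos[None, :, :]
    r2 = (d ** 2).sum(-1)
    iu = np.triu_indices(len(pos), k=1)      # each pair counted once
    r6 = (sigma**2 / r2[iu]) ** 3
    return float(np.sum(4.0 * eps * (r6**2 - r6)))

def metropolis_step(pos, beta=2.0, step=0.1):
    """Displace one atom in continuous space; accept with Metropolis rule."""
    i = rng.integers(len(pos))
    trial = pos.copy()
    trial[i] += rng.normal(scale=step, size=2)
    dE = lj_energy(trial) - lj_energy(pos)
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        return trial
    return pos

pos = rng.random((20, 2)) * 5.0   # 20 atoms scattered in a 5 x 5 region
for _ in range(100):
    pos = metropolis_step(pos)
```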

  11. Navigation simulator for the Space Tug vehicle

    Science.gov (United States)

    Colburn, B. K.; Boland, J. S., III; Peters, E. G.

    1977-01-01

    A general simulation program (GSP) for state estimation of a nonlinear space vehicle flight navigation system is developed and used as a basis for evaluating the performance of a Space Tug navigation system. An explanation of the iterative guidance mode (IGM) guidance law, derivation of the dynamics, coordinate frames and state estimation routines are given in order to clarify the assumptions and approximations made. A number of simulation and analytical studies are used to demonstrate the operation of the Tug system. Included in the simulation studies are (1) initial offset vector parameter study; (2) propagation time vs accuracy; (3) measurement noise parametric study and (4) reduction in computational burden of an on-board implementable scheme. From the results of these studies, conclusions and recommendations concerning future areas of practical and theoretical work are presented.

  12. Environmental Disturbance Modeling for Large Inflatable Space Structures

    National Research Council Canada - National Science Library

    Davis, Donald

    2001-01-01

    Tightening space budgets and stagnating spacelift capabilities are driving the Air Force and other space agencies to focus on inflatable technology as a reliable, inexpensive means of deploying large structures in orbit...

  13. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    The recent trend toward large-scale simulations of fusion plasmas and processing plasmas is briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to analyses of processing plasmas. (author)

  14. Extra-large letter spacing improves reading in dyslexia

    Science.gov (United States)

    Zorzi, Marco; Barbiero, Chiara; Facoetti, Andrea; Lonciari, Isabella; Carrozzi, Marco; Montico, Marcella; Bravar, Laura; George, Florence; Pech-Georgel, Catherine; Ziegler, Johannes C.

    2012-01-01

    Although the causes of dyslexia are still debated, all researchers agree that the main challenge is to find ways that allow a child with dyslexia to read more words in less time, because reading more is undisputedly the most efficient intervention for dyslexia. Sophisticated training programs exist, but they typically target the component skills of reading, such as phonological awareness. After the component skills have improved, the main challenge remains (that is, reading deficits must be treated by reading more—a vicious circle for a dyslexic child). Here, we show that a simple manipulation of letter spacing substantially improved text reading performance on the fly (without any training) in a large, unselected sample of Italian and French dyslexic children. Extra-large letter spacing helps reading, because dyslexics are abnormally affected by crowding, a perceptual phenomenon with detrimental effects on letter recognition that is modulated by the spacing between letters. Extra-large letter spacing may help to break the vicious circle by rendering the reading material more easily accessible. PMID:22665803

  15. Simulation of space charge effects in a synchrotron

    International Nuclear Information System (INIS)

    Machida, Shinji; Ikegami, Masanori

    1998-01-01

    We have studied space charge effects in a synchrotron with multi-particle tracking in 2-D and 3-D configuration space (4-D and 6-D phase space, respectively). First, we describe the modelling of space charge fields in the simulation and the tracking procedure. Several ways of presenting the tracking results are also mentioned. Secondly, as a demonstration of the simulation study, we discuss how the coherent modes of a beam play a major role in beam stability and the intensity limit; the incoherent tune in a resonance condition should be replaced by the coherent tune. Finally, we consider the coherent motion of a beam core as a driving force for halo formation. The mechanism is familiar in linacs, and we apply it to a synchrotron.
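
    Multi-particle tracking with space charge typically alternates single-particle transport with a space-charge kick computed from the bunch's own charge distribution. A crude 2-D cloud-in-cell deposition step, a generic sketch rather than the authors' field solver, is:

```python
import numpy as np

def deposit_cic(x, y, ngrid, extent):
    """Cloud-in-cell deposition of macro-particle charge onto a 2-D grid.

    Each particle is shared bilinearly between its four surrounding grid
    nodes; the resulting density would feed a Poisson solve for the
    space-charge fields that kick the particles between transport steps.
    """
    h = 2.0 * extent / (ngrid - 1)
    # Fractional grid coordinates, clipped so all particles stay in range
    gx = np.clip((x + extent) / h, 0.0, ngrid - 1.0 - 1e-9)
    gy = np.clip((y + extent) / h, 0.0, ngrid - 1.0 - 1e-9)
    i, j = gx.astype(int), gy.astype(int)
    fx, fy = gx - i, gy - j
    rho = np.zeros((ngrid, ngrid))
    np.add.at(rho, (i, j), (1.0 - fx) * (1.0 - fy))
    np.add.at(rho, (i + 1, j), fx * (1.0 - fy))
    np.add.at(rho, (i, j + 1), (1.0 - fx) * fy)
    np.add.at(rho, (i + 1, j + 1), fx * fy)
    return rho / h**2

x, y = np.random.default_rng(2).normal(scale=0.2, size=(2, 10000))
rho = deposit_cic(x, y, ngrid=64, extent=1.0)
```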

  16. Density-functional theory simulation of large quantum dots

    Science.gov (United States)

    Jiang, Hong; Baranger, Harold U.; Yang, Weitao

    2003-10-01

    Kohn-Sham spin-density functional theory provides an efficient and accurate model to study electron-electron interaction effects in quantum dots, but its application to large systems is a challenge. Here an efficient method for the simulation of quantum dots using density-function theory is developed; it includes the particle-in-the-box representation of the Kohn-Sham orbitals, an efficient conjugate-gradient method to directly minimize the total energy, a Fourier convolution approach for the calculation of the Hartree potential, and a simplified multigrid technique to accelerate the convergence. We test the methodology in a two-dimensional model system and show that numerical studies of large quantum dots with several hundred electrons become computationally affordable. In the noninteracting limit, the classical dynamics of the system we study can be continuously varied from integrable to fully chaotic. The qualitative difference in the noninteracting classical dynamics has an effect on the quantum properties of the interacting system: integrable classical dynamics leads to higher-spin states and a broader distribution of spacing between Coulomb blockade peaks.
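
    The Fourier convolution approach to the Hartree potential mentioned here can be illustrated in a 2-D setting: the density is zero-padded onto a doubled grid and convolved with the 1/r Coulomb kernel via FFTs, so the periodic convolution reproduces the aperiodic integral of rho(r') / |r - r'|. The regularization of the kernel at r = 0 below is a crude assumption, not the paper's prescription:

```python
import numpy as np

def hartree_potential_2d(rho, L):
    """Hartree potential of a 2-D density via FFT convolution (atomic units)."""
    n = rho.shape[0]
    h = L / n
    # 1/r Coulomb kernel on a doubled grid, centred at index (n, n)
    coords = (np.arange(2 * n) - n) * h
    X, Y = np.meshgrid(coords, coords, indexing="ij")
    r = np.hypot(X, Y)
    kernel = np.full_like(r, 2.0 / h)      # crude regularization at r = 0
    np.divide(1.0, r, out=kernel, where=r > 0)
    kernel = np.fft.ifftshift(kernel)      # move r = 0 to index (0, 0)
    pad = np.zeros((2 * n, 2 * n))
    pad[:n, :n] = rho                      # zero padding kills wrap-around
    V = np.fft.ifft2(np.fft.fft2(pad) * np.fft.fft2(kernel)).real * h**2
    return V[:n, :n]

# Gaussian charge blob on a 64 x 64 grid, only to exercise the function
n, L = 64, 10.0
xs = (np.arange(n) - n / 2) * (L / n)
rho = np.exp(-(xs[:, None] ** 2 + xs[None, :] ** 2))
V = hartree_potential_2d(rho, L)
```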

  17. Implementation of an Open-Scenario, Long-Term Space Debris Simulation Approach

    Science.gov (United States)

    Nelson, Bron; Yang Yang, Fan; Carlino, Roberto; Dono Perez, Andres; Faber, Nicolas; Henze, Chris; Karacalioglu, Arif Goktug; O'Toole, Conor; Swenson, Jason; Stupl, Jan

    2015-01-01

    This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris-remediation, including the LightForce space debris collision avoidance concept that diverts objects using photon pressure [9]. State-of-the-art simulation approaches that assess the long-term development of the debris environment use either completely statistical approaches, or they rely on large time steps on the order of several days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance schemes or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions and orbital and physical parameters of the objects involved in close encounters (conjunctions). Furthermore, maneuvers take place on timescales much smaller than days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision), but it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods in regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in Low Earth Orbit (LEO) and propagates all objects with high precision and variable time-steps as small as one second. It allows the assessment of the (potential) impact of physical or orbital changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time-frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. To populate the initial simulation, we use the entire space
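
    Propagating every tracked object with high precision and variable time-steps as small as one second calls for an adaptive integrator. A toy two-body propagation with an adaptive-step solver (point-mass gravity only; the real tool includes perturbations, such as drag, that are omitted here, and the initial state is hypothetical):

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 3.986004418e14   # Earth's gravitational parameter [m^3/s^2]

def two_body(t, s):
    """Point-mass two-body dynamics; s = [x, y, z, vx, vy, vz]."""
    r = s[:3]
    a = -MU * r / np.linalg.norm(r) ** 3
    return np.concatenate([s[3:], a])

# Hypothetical circular LEO state at 700 km altitude
r0 = np.array([6371e3 + 700e3, 0.0, 0.0])
v0 = np.array([0.0, np.sqrt(MU / np.linalg.norm(r0)), 0.0])

# Adaptive high-order integration, with steps capped at one second
sol = solve_ivp(two_body, (0.0, 5400.0), np.concatenate([r0, v0]),
                method="DOP853", rtol=1e-10, atol=1e-6, max_step=1.0)
print("final radius [km]:", np.linalg.norm(sol.y[:3, -1]) / 1e3)
```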

  18. WENESSA, Wide Eye-Narrow Eye Space Simulation for Situational Awareness

    Science.gov (United States)

    Albarait, O.; Payne, D. M.; LeVan, P. D.; Luu, K. K.; Spillar, E.; Freiwald, W.; Hamada, K.; Houchard, J.

    In an effort to achieve timelier indications of anomalous object behaviors in geosynchronous earth orbit, a Planning Capability Concept (PCC) for a “Wide Eye-Narrow Eye” (WE-NE) telescope network has been established. The PCC addresses the problem of providing continuous and operationally robust, layered and cost-effective, Space Situational Awareness (SSA) that is focused on monitoring deep space for anomalous behaviors. It does this by first detecting the anomalies with wide field of regard systems, and then providing reliable handovers for detailed observational follow-up by another optical asset. WENESSA will explore the added value of such a system to the existing Space Surveillance Network (SSN). The study will assess and quantify the degree to which the PCC completely fulfills, or improves or augments, these deep space knowledge deficiencies relative to current operational systems. In order to improve organic simulation capabilities, we will explore options for the federation of diverse community simulation approaches, while evaluating the efficiencies offered by a network of small and larger aperture, ground-based telescopes. Existing Space Modeling and Simulation (M&S) tools designed for evaluating WENESSA-like problems will be taken into consideration as we proceed in defining and developing the tools needed to perform this study, leading to the creation of a unified Space M&S environment for the rapid assessment of new capabilities. The primary goal of this effort is to perform a utility assessment of the WE-NE concept. The assessment will explore the mission utility of various WE-NE concepts in discovering deep space anomalies in concert with the SSN. The secondary goal is to generate an enduring modeling and simulation environment to explore the utility of future proposed concepts and supporting technologies. Ultimately, our validated simulation framework would support the inclusion of other ground- and space-based SSA assets through integrated

  19. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  20. Manufacturing Process Simulation of Large-Scale Cryotanks

    Science.gov (United States)

    Babai, Majid; Phillips, Steven; Griffin, Brian

    2003-01-01

    NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting between the various manufacturing approaches. By creating 3-D manufacturing process simulations, the varying approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows for cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI. As part of the SLI, The Boeing Company was awarded a basic period contract to research and propose options for both a metallic and a composite cryotank. Boeing then entered into a task agreement with the Marshall Space Flight Center to provide manufacturing

  1. Large Eddy Simulations using oodlesDST

    Science.gov (United States)

    2016-01-01

    DST-Group-TR-3205 (abstract): The oodlesDST code is based on OpenFOAM software and performs Large Eddy Simulations of ... maritime platforms using a variety of simulation techniques. He is currently using OpenFOAM software to perform both Reynolds Averaged Navier-Stokes

  2. Potential Large Decadal Missions Enabled by NASA's Space Launch System

    Science.gov (United States)

    Stahl, H. Philip; Hopkins, Randall C.; Schnell, Andrew; Smith, David Alan; Jackman, Angela; Warfield, Keith R.

    2016-01-01

    Large space telescope missions have always been limited by their launch vehicle's mass and volume capacities. The Hubble Space Telescope (HST) was specifically designed to fit inside the Space Shuttle and the James Webb Space Telescope (JWST) is specifically designed to fit inside an Ariane 5. Astrophysicists desire even larger space telescopes. NASA's "Enduring Quests Daring Visions" report calls for an 8- to 16-m Large UV-Optical-IR (LUVOIR) Surveyor mission to enable ultra-high-contrast spectroscopy and coronagraphy. AURA's "From Cosmic Birth to Living Earth" report calls for a 12-m class High-Definition Space Telescope to pursue transformational scientific discoveries. NASA's "Planning for the 2020 Decadal Survey" calls for a Habitable Exoplanet Imaging (HabEx) and a LUVOIR as well as Far-IR and an X-Ray Surveyor missions. Packaging larger space telescopes into existing launch vehicles is a significant engineering complexity challenge that drives cost and risk. NASA's planned Space Launch System (SLS), with its 8 or 10-m diameter fairings and ability to deliver 35 to 45-mt of payload to Sun-Earth-Lagrange-2, mitigates this challenge by fundamentally changing the design paradigm for large space telescopes. This paper reviews the mass and volume capacities of the planned SLS, discusses potential implications of these capacities for designing large space telescope missions, and gives three specific mission concept implementation examples: a 4-m monolithic off-axis telescope, an 8-m monolithic on-axis telescope and a 12-m segmented on-axis telescope.

  3. Quality and Reliability of Large-Eddy Simulations

    CERN Document Server

    Meyers, Johan; Sagaut, Pierre

    2008-01-01

    Computational resources have developed to the level that, for the first time, it is becoming possible to apply large-eddy simulation (LES) to turbulent flow problems of realistic complexity. Many examples can be found in technology and in a variety of natural flows. This puts issues related to assessing, assuring, and predicting the quality of LES into the spotlight. Several LES studies have been published in the past, demonstrating a high level of accuracy with which turbulent flow predictions can be attained, without having to resort to the excessive requirements on computational resources imposed by direct numerical simulations. However, the setup and use of turbulent flow simulations requires a profound knowledge of fluid mechanics, numerical techniques, and the application under consideration. The susceptibility of large-eddy simulations to errors in modelling, in numerics, and in the treatment of boundary conditions, can be quite large due to nonlinear accumulation of different contributions over time, ...

  4. Biased Tracers in Redshift Space in the EFT of Large-Scale Structure

    Energy Technology Data Exchange (ETDEWEB)

    Perko, Ashley [Stanford U., Phys. Dept.; Senatore, Leonardo [KIPAC, Menlo Park; Jennings, Elise [Chicago U., KICP; Wechsler, Risa H. [Stanford U., Phys. Dept.

    2016-10-28

    The Effective Field Theory of Large-Scale Structure (EFTofLSS) provides a novel formalism that is able to accurately predict the clustering of large-scale structure (LSS) in the mildly non-linear regime. Here we provide the first computation of the power spectrum of biased tracers in redshift space at one-loop order, and we make the associated code publicly available. We compare the multipoles $\ell = 0, 2$ of the redshift-space halo power spectrum, together with the real-space matter and halo power spectra, with data from numerical simulations at $z = 0.67$. For the samples we compare to, which have number densities of $\bar n = 3.8 \cdot 10^{-2}\,(h\,\mathrm{Mpc}^{-1})^3$ and $\bar n = 3.9 \cdot 10^{-4}\,(h\,\mathrm{Mpc}^{-1})^3$, we find that the calculation at one-loop order matches numerical measurements to within a few percent up to $k \simeq 0.43\,h\,\mathrm{Mpc}^{-1}$, a significant improvement with respect to former techniques. By performing the so-called IR-resummation, we find that the Baryon Acoustic Oscillation peak is accurately reproduced. Based on the results presented here, long-wavelength statistics that are routinely observed in LSS surveys can finally be computed in the EFTofLSS. This formalism is thus ready to be compared directly to observational data.

  5. Benchmarking processes for managing large international space programs

    Science.gov (United States)

    Mandell, Humboldt C., Jr.; Duke, Michael B.

    1993-01-01

    The relationship between management style and program costs is analyzed to determine the feasibility of financing large international space missions. The incorporation of management systems is considered essential to realizing low-cost spacecraft and planetary surface systems. Several companies, ranging from the large Lockheed 'Skunk Works' to small companies including Space Industries, Inc., Rocket Research Corp., and Orbital Sciences Corp., were studied. It is concluded that to lower prices, the ways in which spacecraft and hardware are developed must be changed. Benchmarking of successful low-cost space programs has revealed a number of prescriptive rules for low-cost management, including major changes in the relationships between the public and private sectors.

  6. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    Science.gov (United States)

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Kunaseth, Manaschai; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Ohmura, Satoshi; Rajak, Pankaj; Shimamura, Kohei; Vashishta, Priya

    2014-05-01

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10⁶-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques

  7. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    International Nuclear Information System (INIS)

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Rajak, Pankaj; Vashishta, Priya; Kunaseth, Manaschai; Ohmura, Satoshi; Shimamura, Kohei

    2014-01-01

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10⁶-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of

  8. A Simulation and Modeling Framework for Space Situational Awareness

    International Nuclear Information System (INIS)

    Olivier, S.S.

    2008-01-01

    This paper describes the development and initial demonstration of a new, integrated modeling and simulation framework, encompassing the space situational awareness enterprise, for quantitatively assessing the benefit of specific sensor systems, technologies and data analysis techniques. The framework is based on a flexible, scalable architecture to enable efficient, physics-based simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel computer systems available, for example, at Lawrence Livermore National Laboratory. The details of the modeling and simulation framework are described, including hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, radar cross section calculations, optical brightness calculations, generic radar system models, generic optical system models, specific Space Surveillance Network models, object detection algorithms, orbit determination algorithms, and visualization tools. The use of this integrated simulation and modeling framework on a specific scenario involving space debris is demonstrated.

  9. End-to-end simulations and planning of a small space telescope: Galaxy Evolution Spectroscopic Explorer: a case study

    Science.gov (United States)

    Heap, Sara; Folta, David; Gong, Qian; Howard, Joseph; Hull, Tony; Purves, Lloyd

    2016-08-01

    Large astronomical missions are usually general-purpose telescopes with a suite of instruments optimized for different wavelength regions, spectral resolutions, etc. Their end-to-end (E2E) simulations are typically photons-in to flux-out calculations made to verify that each instrument meets its performance specifications. In contrast, smaller space missions are usually single-purpose telescopes, and their E2E simulations start with the scientific question to be answered and end with an assessment of the effectiveness of the mission in answering the scientific question. Thus, E2E simulations for small missions consist of a longer string of calculations than those for large missions, as they include not only the telescope and instrumentation, but also the spacecraft, orbit, and external factors such as coordination with other telescopes. Here, we illustrate the strategy and organization of small-mission E2E simulations using the Galaxy Evolution Spectroscopic Explorer (GESE) as a case study. GESE is an Explorer/Probe-class space mission concept with the primary aim of understanding galaxy evolution. Operation of a small survey telescope in space like GESE is usually simpler than operation of large telescopes driven by the varied scientific programs of the observers or by transient events. Nevertheless, both types of telescopes share two common challenges: maximizing the integration time on target, while minimizing operation costs including communication costs and staffing on the ground. We show in the case of GESE how these challenges can be met through a custom orbit and a system design emphasizing simplification and leveraging information from ground-based telescopes.

  10. GENASIS Mathematics : Object-oriented manifolds, operations, and solvers for large-scale physics simulations

    Science.gov (United States)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2018-01-01

    The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
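
    As a concrete illustration of the finite-volume idea described above (a generic sketch, not GENASIS code; the upwind flux and forward-Euler step are simplifying assumptions), one update for a 1D conservation law du/dt + df(u)/dx = 0 on a periodic mesh differences the fluxes through cell faces:

        import numpy as np

        def finite_volume_step(u, dt, dx, flux):
            """Forward-Euler finite-volume update: difference face fluxes,
            mimicking the divergence operator on a periodic 1D mesh."""
            f = flux(u)                     # upwind for a right-moving wave:
            div = (f - np.roll(f, 1)) / dx  # (F_{i+1/2} - F_{i-1/2}) / dx
            return u - dt * div

        # usage: linear advection f(u) = u on [0, 1)
        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u = np.exp(-((x - 0.5) ** 2) / 0.01)
        for _ in range(100):
            u = finite_volume_step(u, dt=0.001, dx=1.0 / 200, flux=lambda v: v)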

  11. WRF nested large-eddy simulations of deep convection during SEAC4RS

    Science.gov (United States)

    Heath, Nicholas K.; Fuelberg, Henry E.; Tanelli, Simone; Turk, F. Joseph; Lawson, R. Paul; Woods, Sarah; Freeman, Sean

    2017-04-01

    Large-eddy simulations (LES) and observations are often combined to increase our understanding and improve the simulation of deep convection. This study evaluates a nested LES method that uses the Weather Research and Forecasting (WRF) model and, specifically, tests whether the nested LES approach is useful for studying deep convection during a real-world case. The method was applied on 2 September 2013, a day of continental convection that occurred during the Studies of Emissions and Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys (SEAC4RS) campaign. Mesoscale WRF output (1.35 km grid length) was used to drive a nested LES with 450 m grid spacing, which then drove a 150 m domain. Results reveal that the 450 m nested LES reasonably simulates observed reflectivity distributions and aircraft-observed in-cloud vertical velocities during the study period. However, when examining convective updrafts, reducing the grid spacing to 150 m worsened results. We find that the simulated updrafts in the 150 m run become too diluted by entrainment, thereby generating updrafts that are weaker than observed. Lastly, the 450 m simulation is combined with observations to study the processes forcing strong midlevel cloud/updraft edge downdrafts that were observed on 2 September. Results suggest that these strong downdrafts are forced by evaporative cooling due to mixing and by perturbation pressure forces acting to restore mass continuity around neighboring updrafts. We conclude that the WRF nested LES approach, with further development and evaluation, could potentially provide an effective method for studying deep convection in real-world cases.

  12. Altitude simulation facility for testing large space motors

    Science.gov (United States)

    Katz, U.; Lustig, J.; Cohen, Y.; Malkin, I.

    1993-02-01

    This work describes the design of an altitude simulation facility for testing the AKM motor installed in the 'Ofeq' satellite launcher. The facility, which is controlled by a computer, consists of a diffuser and a single-stage ejector fed with preheated air. The calculations of performance and dimensions of the gas extraction system were conducted according to a one-dimensional analysis. Tests were carried out on a small-scale model of the facility in order to examine the design concept; the full-scale facility was then constructed and operated. There was good agreement among the results obtained from the small-scale facility, from the full-scale facility, and from calculations.

  13. Representative elements: A step to large-scale fracture system simulation

    International Nuclear Information System (INIS)

    Clemo, T.M.

    1987-01-01

    Large-scale simulation of flow and transport in fractured media requires the development of a technique to represent the effect of a large number of fractures. Representative elements are used as a tool to model a subset of a fracture system as a single distributed entity. Representative elements are part of a modeling concept called dual permeability. Dual permeability modeling combines discrete fracture simulation of the most important fractures with distributed modeling of the less important fractures of a fracture system. This study investigates the use of stochastic analysis to determine properties of representative elements. Given an assumption of fully developed laminar flow, the net fracture conductivities and hence flow velocities can be determined from descriptive statistics of fracture spacing, orientation, aperture, and extent. The distribution of physical characteristics about their mean leads to a distribution of the associated conductivities. The variance of hydraulic conductivity induces dispersion into the transport process. Simple fracture systems are treated to demonstrate the usefulness of stochastic analysis. Explicit equations for the conductivity of an element are developed and the dispersion characteristics are shown. Explicit formulation of the hydraulic conductivity and transport dispersion reveals the dependence of these important characteristics on the parameters used to describe the fracture system. Understanding these dependencies will help to focus efforts to identify the characteristics of fracture systems. Simulations of stochastically generated fracture sets do not provide this explicit functional dependence on the fracture system parameters. 12 refs., 6 figs
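
    A minimal numerical sketch of the stochastic reasoning above, assuming the parallel-plate "cubic law" for single-fracture transmissivity and lognormally distributed apertures (both assumptions made for illustration; the study derives explicit equations rather than sampling):

        import numpy as np

        rng = np.random.default_rng(0)
        aperture = rng.lognormal(mean=np.log(1e-4), sigma=0.5, size=10_000)  # apertures [m]
        spacing = 1.0            # mean fracture spacing [m]
        rho_g_over_mu = 9.81e6   # rho*g/mu for water [1/(m*s)]

        # Cubic law: equivalent conductivity of a parallel fracture set scales as a^3
        K = rho_g_over_mu / 12.0 * aperture**3 / spacing   # [m/s]
        print(f"mean K = {K.mean():.3e} m/s, coefficient of variation = {K.std()/K.mean():.2f}")

    The wide spread of K produced by a modest aperture spread illustrates how the variance of hydraulic conductivity induces dispersion in transport.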

  14. Macro Level Simulation Model Of Space Shuttle Processing

    Science.gov (United States)

    2000-01-01

    The contents include: 1) Space Shuttle Processing Simulation Model; 2) Knowledge Acquisition; 3) Simulation Input Analysis; 4) Model Applications in Current Shuttle Environment; and 5) Model Applications for Future Reusable Launch Vehicles (RLV's). This paper is presented in viewgraph form.

  15. Planetary and Space Simulation Facilities PSI at DLR for Astrobiology

    Science.gov (United States)

    Rabbow, E.; Rettberg, P.; Panitz, C.; Reitz, G.

    2008-09-01

    Ground based experiments, conducted in the controlled planetary and space environment simulation facilities PSI at DLR, are used to investigate astrobiological questions and to complement the corresponding experiments in LEO, for example on free-flying satellites or on space exposure platforms on the ISS. In-orbit exposure facilities can only accommodate a limited number of experiments for exposure to space parameters like high vacuum, intense radiation of galactic and solar origin, and microgravity, sometimes also technically adapted to simulate extraterrestrial planetary conditions like those on Mars. Ground based experiments in carefully equipped and monitored simulation facilities allow the investigation of the effects of simulated single environmental parameters and selected combinations on a much wider variety of samples. In PSI at DLR, international science consortia have performed astrobiological investigations and space experiment preparations, exposing organic compounds and a wide range of microorganisms, from bacterial spores to complex microbial communities, lichens, and even animals like tardigrades, to simulated planetary or space environment parameters in pursuit of exobiological questions on the resistance to extreme environments and the origin and distribution of life. The Planetary and Space Simulation Facilities PSI of the Institute of Aerospace Medicine at DLR in Köln, Germany, provide high vacuum of controlled residual composition, ionizing radiation from an X-ray tube, polychromatic UV radiation in the range of 170-400 nm, VIS and IR or individual monochromatic UV wavelengths, and temperature regulation from -20°C to +80°C at the sample site, individually or in selected combinations, in 9 modular facilities of varying sizes; selected experiments performed within these facilities are presented.

  16. Validation of Varian TrueBeam electron phase–spaces for Monte Carlo simulation of MLC-shaped fields

    International Nuclear Information System (INIS)

    Lloyd, Samantha A. M.; Gagne, Isabelle M.; Zavgorodni, Sergei; Bazalova-Carter, Magdalena

    2016-01-01

    Purpose: This work evaluates Varian’s electron phase–space sources for Monte Carlo simulation of the TrueBeam for modulated electron radiation therapy (MERT) and combined, modulated photon and electron radiation therapy (MPERT) where fields are shaped by the photon multileaf collimator (MLC) and delivered at 70 cm SSD. Methods: Monte Carlo simulations performed with EGSnrc-based BEAMnrc/DOSXYZnrc and PENELOPE-based PRIMO are compared against diode measurements for 5 × 5, 10 × 10, and 20 × 20 cm² MLC-shaped fields delivered with 6, 12, and 20 MeV electrons at 70 cm SSD (jaws set to 40 × 40 cm²). Depth dose curves and profiles are examined. In addition, EGSnrc-based simulations of relative output as a function of MLC-field size and jaw-position are compared against ion chamber measurements for MLC-shaped fields between 3 × 3 and 25 × 25 cm² and jaw positions that range from the MLC-field size to 40 × 40 cm². Results: Percent depth dose curves generated by BEAMnrc/DOSXYZnrc and PRIMO agree with measurement within 2%, 2 mm except for PRIMO’s 12 MeV, 20 × 20 cm² field where 90% of dose points agree within 2%, 2 mm. Without the distance to agreement, differences between measurement and simulation are as large as 7.3%. Characterization of simulated dose parameters such as FWHM, penumbra width and depths of 90%, 80%, 50%, and 20% dose agree within 2 mm of measurement for all fields except for the FWHM of the 6 MeV, 20 × 20 cm² field which falls within 2 mm distance to agreement. Differences between simulation and measurement exist in the profile shoulders and penumbra tails, in particular for 10 × 10 and 20 × 20 cm² fields of 20 MeV electrons, where both sets of simulated data fall short of measurement by as much as 3.5%. BEAMnrc/DOSXYZnrc simulated outputs agree with measurement within 2.3% except for 6 MeV MLC-shaped fields. Discrepancies here are as great as 5.5%. Conclusions: TrueBeam electron phase–spaces available from Varian have been

  17. Analyzing large data sets from XGC1 magnetic fusion simulations using apache spark

    Energy Technology Data Exchange (ETDEWEB)

    Churchill, R. Michael [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)

    2016-11-21

    Apache Spark is explored as a tool for analyzing large data sets from the magnetic fusion simulation code XGC1. Implementation details of Apache Spark on the NERSC Edison supercomputer are discussed, including binary file reading and parameter setup. Here, an unsupervised machine learning algorithm, k-means clustering, is applied to XGC1 particle distribution function data, showing that highly turbulent spatial regions do not have common coherent structures, but rather broad, ring-like structures in velocity space.
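
    A minimal sketch of the kind of workflow described, using the public pyspark.ml k-means API; the file name and column names below are hypothetical, since the record includes no code:

        from pyspark.sql import SparkSession
        from pyspark.ml.feature import VectorAssembler
        from pyspark.ml.clustering import KMeans

        spark = SparkSession.builder.appName("xgc1-kmeans").getOrCreate()
        df = spark.read.parquet("xgc1_dist_func.parquet")   # hypothetical converted data
        features = VectorAssembler(inputCols=["v_par", "v_perp"],
                                   outputCol="features").transform(df)
        model = KMeans(k=8, seed=1, featuresCol="features").fit(features)
        model.transform(features).groupBy("prediction").count().show()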

  18. A Data Management System for International Space Station Simulation Tools

    Science.gov (United States)

    Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.

  19. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability. An adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.

  20. Scalar energy fluctuations in Large-Eddy Simulation of turbulent flames: Statistical budgets and mesh quality criterion

    Energy Technology Data Exchange (ETDEWEB)

    Vervisch, Luc; Domingo, Pascale; Lodato, Guido [CORIA - CNRS and INSA de Rouen, Technopole du Madrillet, BP 8, 76801 Saint-Etienne-du-Rouvray (France); Veynante, Denis [EM2C - CNRS and Ecole Centrale Paris, Grande Voie des Vignes, 92295 Chatenay-Malabry (France)

    2010-04-15

    Large-Eddy Simulation (LES) provides space-filtered quantities to compare with measurements, which usually have been obtained using a different filtering operation; hence, numerical and experimental results can be examined side-by-side in a statistical sense only. Instantaneous, space-filtered and statistically time-averaged signals feature different characteristic length-scales, which can be combined in dimensionless ratios. From two canonical manufactured turbulent solutions, a turbulent flame and a passive scalar turbulent mixing layer, the critical values of these ratios under which measured and computed variances (resolved plus sub-grid scale) can be compared without resorting to additional residual terms are first determined. It is shown that actual Direct Numerical Simulation can hardly accommodate a sufficiently large range of length-scales to perform statistical studies of LES filtered reactive scalar-fields energy budget based on sub-grid scale variances; an estimation of the minimum Reynolds number allowing for such DNS studies is given. From these developments, a reliability mesh criterion emerges for scalar LES and scaling for scalar sub-grid scale energy is discussed. (author)

  1. James Webb Space Telescope Optical Simulation Testbed: Segmented Mirror Phase Retrieval Testing

    Science.gov (United States)

    Laginja, Iva; Egron, Sylvain; Brady, Greg; Soummer, Remi; Lajoie, Charles-Philippe; Bonnefois, Aurélie; Long, Joseph; Michau, Vincent; Choquet, Elodie; Ferrari, Marc; Leboulleux, Lucie; Mazoyer, Johan; N’Diaye, Mamadou; Perrin, Marshall; Petrone, Peter; Pueyo, Laurent; Sivaramakrishnan, Anand

    2018-01-01

    The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a hardware simulator designed to produce JWST-like images. A model of the JWST three-mirror anastigmat is realized with three lenses in the form of a Cooke triplet, which provides JWST-like optical quality over a field equivalent to a NIRCam module, and an Iris AO segmented mirror with hexagonal elements stands in for the JWST segmented primary. This setup successfully produces images extremely similar to NIRCam images from cryotesting in terms of the PSF morphology and sampling relative to the diffraction limit. The testbed is used for staff training of the wavefront sensing and control (WFS&C) team and for independent analysis of WFS&C scenarios for the JWST. Algorithms like geometric phase retrieval (GPR) that may be used in flight, as well as potential upgrades to JWST WFS&C, will be explored. We report on the current status of the testbed after alignment, implementation of the segmented mirror, and testing of phase retrieval techniques. This optical bench complements other work at the Makidon laboratory at the Space Telescope Science Institute, including the investigation of coronagraphy for segmented-aperture telescopes. Beyond JWST, we intend to use JOST for WFS&C studies for future large segmented space telescopes such as LUVOIR.

  2. Unified Approach to Modeling and Simulation of Space Communication Networks and Systems

    Science.gov (United States)

    Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth

    2010-01-01

    Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution: the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords: space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks.

  3. Shell model in large spaces and statistical spectroscopy

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1996-01-01

    For many nuclear structure problems of current interest it is essential to deal with the shell model in large spaces. Three different approaches are now in use, two of which are: (i) the conventional shell model diagonalization approach, taking into account new advances in computer technology; and (ii) the shell model Monte Carlo method. A brief overview of these two methods is given. Large space shell model studies raise fundamental questions regarding the information content of the shell model spectrum of complex nuclei. This led to the third approach: statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos, and they are described in some detail (substantiated by large-scale shell model calculations). (author)

  4. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Tianwu; Yu, Linglin; Zhang, Chong [Applied Physics Program and Department of Bioengineering, Rice University, Houston, Texas 77005 (United States); Ma, Jianpeng, E-mail: jpma@bcm.tmc.edu [Applied Physics Program and Department of Bioengineering, Rice University, Houston, Texas 77005 (United States); Verna and Marrs McLean Department of Biochemistry and Molecular Biology, Baylor College of Medicine, One Baylor Plaza, BCM-125, Houston, Texas 77030 (United States)

    2014-07-28

    In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studying large complex systems. It mainly inherits the continuous simulated tempering (CST) method from our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Differing from conventional PT methods, despite the large stride of the total temperature range, the PCST method requires very few copies of simulations, typically 2–3 copies, yet it is still capable of maintaining a high rate of exchange between neighboring copies. Furthermore, in the PCST method, the size of the system does not dramatically affect the number of copies needed because the exchange rate is independent of total potential energy, thus providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested in the two-dimensional Ising model, a Lennard-Jones liquid, and an all-atom folding simulation of a small globular protein, trp-cage, in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent.
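
    For orientation, conventional parallel tempering, which PCST generalizes with continuous temperature distributions, swaps configurations between neighboring replicas $i$ and $j$ with the Metropolis acceptance probability

        $P_{\rm acc} = \min\{1, \exp[(\beta_i - \beta_j)(E_i - E_j)]\}, \qquad \beta = 1/k_B T,$

    where $E$ is the potential energy. The point made in the abstract is that, unlike this criterion, the PCST exchange rate does not depend on the total potential energy, so very few copies suffice even for very large systems.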

  5. Enabling parallel simulation of large-scale HPC network systems

    International Nuclear Information System (INIS)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip

    2016-01-01

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.

  6. Monte Carlo simulations for the space radiation superconducting shield project (SR2S).

    Science.gov (United States)

    Vuolo, M; Giraudo, M; Musenich, R; Calvelli, V; Ambroglini, F; Burger, W J; Battiston, R

    2016-02-01

    Astronauts on deep-space long-duration missions will be exposed for a long time to galactic cosmic rays (GCR) and Solar Particle Events (SPE). The exposure to space radiation could lead to both acute and late effects in the crew members, and well-defined countermeasures do not yet exist. The simplest solution, optimized passive shielding, is not able to reduce the dose deposited by GCRs below the current dose limits; therefore other solutions, such as active shielding employing superconducting magnetic fields, are under study. In the framework of the EU FP7 SR2S Project - Space Radiation Superconducting Shield - a toroidal magnetic system based on MgB2 superconductors has been analyzed through detailed Monte Carlo simulations using the Geant4 interface GRAS. Spacecraft and magnets were modeled together with a simplified mechanical structure supporting the coils. Radiation transport through magnetic fields and materials was simulated for a deep-space mission scenario, considering for the first time the effect of secondary particles produced in the passage of space radiation through the active shielding and spacecraft structures. When the structures supporting the active shielding systems and the habitat are modeled, the radiation protection efficiency of the magnetic field decreases severely compared to that reported in previous studies, in which only the magnetic field around the crew was modeled. This is due to the large production of secondary radiation taking place in the material surrounding the habitat.

  7. Definition of technology development missions for early space stations: Large space structures

    Science.gov (United States)

    Gates, R. M.; Reid, G.

    1984-01-01

    The objective of this study is the definition of the testbed role of an early Space Station for the construction of large space structures. This is accomplished by defining the LSS technology development missions (TDMs) identified in phase 1. Design and operations trade studies are used to identify the best structural concepts and procedures for each TDM. Details of the TDM designs are then developed along with their operational requirements. Space Station resources required for each mission, both human and physical, are identified. The costs and development schedules for the TDMs provide an indication of the programs needed to develop these missions.

  8. Nesting Large-Eddy Simulations Within Mesoscale Simulations for Wind Energy Applications

    Science.gov (United States)

    Lundquist, J. K.; Mirocha, J. D.; Chow, F. K.; Kosovic, B.; Lundquist, K. A.

    2008-12-01

    With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or higher are required. These time-dependent large-eddy simulations (LES) account for complex terrain and resolve individual atmospheric eddies on length scales smaller than turbine blades. These small-domain, high-resolution simulations are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to "local" sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect the flow, suggesting that a mesoscale model should provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved the Weather Research and Forecasting (WRF) model's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosović (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions, and to allow adequate spin-up of turbulence in the LES domain. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  9. Large Eddy Simulation of Heat Entrainment Under Arctic Sea Ice

    Science.gov (United States)

    Ramudu, Eshwan; Gelderloos, Renske; Yang, Di; Meneveau, Charles; Gnanadesikan, Anand

    2018-01-01

    Arctic sea ice has declined rapidly in recent decades. The faster-than-projected retreat suggests that free-running large-scale climate models may not be accurately representing some key processes. The small-scale turbulent entrainment of heat from the mixed layer could be one such process. To better understand this mechanism, we model the Arctic Ocean's Canada Basin, which is characterized by a perennial, anomalously warm Pacific Summer Water (PSW) layer residing at the base of the mixed layer and a summertime Near-Surface Temperature Maximum (NSTM) within the mixed layer trapping heat from solar radiation. We use large eddy simulation (LES) to investigate heat entrainment for different ice-drift velocities and different initial temperature profiles. The value of LES is that the resolved turbulent fluxes are greater than the subgrid-scale fluxes for most of our parameter space. The results show that the presence of the NSTM enhances heat entrainment from the mixed layer. Additionally, no PSW heat is entrained for the parameter space considered. We propose a scaling law for the ocean-to-ice heat flux which depends on the initial temperature anomaly in the NSTM layer and the ice-drift velocity. A case study of "The Great Arctic Cyclone of 2012" gives a turbulent heat flux from the mixed layer that is approximately 70% of the total ocean-to-ice heat flux estimated from the PIOMAS model often used for short-term predictions. Present results highlight the need for large-scale climate models to account for the NSTM layer.
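
    One plausible form of the proposed scaling, consistent with the stated dependencies (the structure is the standard bulk ocean-to-ice flux law; the exact variables and coefficient are assumptions here, not the paper's fitted result):

        $F_h = \rho_w c_p \, c_h \, u_{\rm ice} \, \Delta T_{\rm NSTM},$

    where $\rho_w c_p$ is the volumetric heat capacity of seawater, $c_h$ a transfer coefficient, $u_{\rm ice}$ the ice-drift velocity, and $\Delta T_{\rm NSTM}$ the initial temperature anomaly of the NSTM layer.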

  10. Reducing cell-to-cell spacing for large-format lithium ion battery modules with aluminum or PCM heat sinks under failure conditions

    International Nuclear Information System (INIS)

    Coleman, Brittany; Ostanek, Jason; Heinzel, John

    2016-01-01

    Highlights:
    • Finite element analysis to evaluate heat sinks for large format li-ion batteries.
    • Solid metal heat sink and composite heat sink (metal filler and wax).
    • Transient simulations show response from rest to steady-state with normal load.
    • Transient simulations of two different failure modes were considered.
    • Significance of spacing, material properties, interface quality, and phase change.

    Abstract: Thermal management is critical for large-scale, shipboard energy storage systems utilizing lithium-ion batteries. In recent years, there has been growing research in thermal management of lithium-ion battery modules. However, there is little information available on the minimum cell-to-cell spacing limits for indirect, liquid cooled modules when considering heat release during a single cell failure. For this purpose, a generic four-cell module was modeled using finite element analysis to determine the sensitivity of module temperatures to cell spacing. Additionally, the effects of different heat sink materials and interface qualities were investigated. Two materials were considered, a solid aluminum block and a metal/wax composite block. Simulations were run for three different transient load profiles. The first profile simulates sustained high rate operation where the system begins at rest and generates heat continuously until it reaches steady state. And, two failure mode simulations were conducted to investigate block performance during a slow and a fast exothermic reaction, respectively. Results indicate that composite materials can perform well under normal operation and provide some protection against single cell failure; although, for very compact designs, the amount of wax available to absorb heat is reduced and the effectiveness of the phase change material is diminished. The aluminum block design performed well under all conditions, and showed that heat generated during a failure is quickly dissipated to the coolant, even under the

  11. Modeling extreme "Carrington-type" space weather events using three-dimensional global MHD simulations

    Science.gov (United States)

    Ngwira, Chigomezyo M.; Pulkkinen, Antti; Kuznetsova, Maria M.; Glocer, Alex

    2014-06-01

    There is a growing concern over possible severe societal consequences related to adverse space weather impacts on man-made technological infrastructure. In the last two decades, significant progress has been made toward the first-principles modeling of space weather events, and three-dimensional (3-D) global magnetohydrodynamics (MHD) models have been at the forefront of this transition, thereby playing a critical role in advancing our understanding of space weather. However, the modeling of extreme space weather events is still a major challenge even for modern global MHD models. In this study, we introduce a specially adapted University of Michigan 3-D global MHD model for simulating extreme space weather events with a Dst footprint comparable to the Carrington superstorm of September 1859, based on the estimate by Tsurutani et al. (2003). Results are presented for a simulation run with "very extreme" constructed/idealized solar wind boundary conditions driving the magnetosphere. In particular, we describe the reaction of the magnetosphere-ionosphere system and the associated induced geoelectric field on the ground to such extreme driving conditions. The model setup is further tested using input data for an observed space weather event, the Halloween storm of October 2003, to verify the MHD model's consistency and to draw additional guidance for future work. This extreme space weather MHD model setup is designed specifically for practical application to the modeling of extreme geomagnetically induced electric fields, which can drive large currents in ground-based conductor systems such as power transmission grids. Therefore, our ultimate goal is to explore the level of geoelectric fields that can be induced by an assumed storm of the reported magnitude, i.e., Dst ≈ −1600 nT.
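
    For reference, geomagnetically induced geoelectric fields of the kind discussed above are commonly estimated with the plane-wave method over a uniform ground, a standard relation quoted here for orientation rather than taken from this study:

        $E_x(\omega) = Z(\omega)\, H_y(\omega), \qquad Z(\omega) = \sqrt{\frac{i\,\omega\,\mu_0}{\sigma}},$

    where $\sigma$ is the ground conductivity; the more extreme the storm-time field variations, the larger the induced field $E$ that drives currents in power transmission grids.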

  12. Validation of Varian TrueBeam electron phase–spaces for Monte Carlo simulation of MLC-shaped fields

    Energy Technology Data Exchange (ETDEWEB)

    Lloyd, Samantha A. M. [Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8P 3P6 5C2 (Canada); Gagne, Isabelle M., E-mail: imgagne@bccancer.bc.ca; Zavgorodni, Sergei [Department of Medical Physics, BC Cancer Agency–Vancouver Island Centre, Victoria, British Columbia V8R 6V5, Canada and Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6 5C2 (Canada); Bazalova-Carter, Magdalena [Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6 5C2 (Canada)

    2016-06-15

    Purpose: This work evaluates Varian’s electron phase–space sources for Monte Carlo simulation of the TrueBeam for modulated electron radiation therapy (MERT) and combined, modulated photon and electron radiation therapy (MPERT) where fields are shaped by the photon multileaf collimator (MLC) and delivered at 70 cm SSD. Methods: Monte Carlo simulations performed with EGSnrc-based BEAMnrc/DOSXYZnrc and PENELOPE-based PRIMO are compared against diode measurements for 5 × 5, 10 × 10, and 20 × 20 cm² MLC-shaped fields delivered with 6, 12, and 20 MeV electrons at 70 cm SSD (jaws set to 40 × 40 cm²). Depth dose curves and profiles are examined. In addition, EGSnrc-based simulations of relative output as a function of MLC-field size and jaw-position are compared against ion chamber measurements for MLC-shaped fields between 3 × 3 and 25 × 25 cm² and jaw positions that range from the MLC-field size to 40 × 40 cm². Results: Percent depth dose curves generated by BEAMnrc/DOSXYZnrc and PRIMO agree with measurement within 2%, 2 mm except for PRIMO’s 12 MeV, 20 × 20 cm² field where 90% of dose points agree within 2%, 2 mm. Without the distance to agreement, differences between measurement and simulation are as large as 7.3%. Characterization of simulated dose parameters such as FWHM, penumbra width and depths of 90%, 80%, 50%, and 20% dose agree within 2 mm of measurement for all fields except for the FWHM of the 6 MeV, 20 × 20 cm² field which falls within 2 mm distance to agreement. Differences between simulation and measurement exist in the profile shoulders and penumbra tails, in particular for 10 × 10 and 20 × 20 cm² fields of 20 MeV electrons, where both sets of simulated data fall short of measurement by as much as 3.5%. BEAMnrc/DOSXYZnrc simulated outputs agree with measurement within 2.3% except for 6 MeV MLC-shaped fields. Discrepancies here are as great as 5.5%. Conclusions: TrueBeam electron phase–spaces

  13. Dynamic large eddy simulation: Stability via realizability

    Science.gov (United States)

    Mokhtarpoor, Reza; Heinz, Stefan

    2017-10-01

    The concept of dynamic large eddy simulation (LES) is highly attractive: such methods can dynamically adjust to changing flow conditions, which is known to be highly beneficial. For example, this avoids the use of empirical, case-dependent approximations (like damping functions). Ideally, dynamic LES should be local in physical space (without involving artificial clipping parameters), and it should be stable for a wide range of simulation time steps, Reynolds numbers, and numerical schemes. These properties are not trivial, and dynamic LES has suffered from such problems for decades. We address these questions by performing dynamic LES of periodic hill flow including separation at a high Reynolds number Re = 37 000. For the case considered, the main result of our studies is that it is possible to design LES that has the desired properties. It requires physical consistency: a PDF-realizable and stress-realizable LES model, which requires the inclusion of the turbulent kinetic energy in the LES calculation. LES models that do not honor such physical consistency can become unstable. We do not find support for the previous assumption that long-term correlations of negative dynamic model parameters are responsible for instability. Instead, we conclude that instability is caused by the stable spatial organization of significant unphysical states, which are represented by wall-type gradient streaks of the standard deviation of the dynamic model parameter. The applicability of our realizability stabilization to other dynamic models (including the dynamic Smagorinsky model) is discussed.
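
    The stress-realizability constraints invoked above take the standard form (as formulated, e.g., by Vreman and co-workers; quoted here for orientation rather than from the record):

        $\tau_{\alpha\alpha} \ge 0, \qquad |\tau_{\alpha\beta}| \le \sqrt{\tau_{\alpha\alpha}\,\tau_{\beta\beta}}, \qquad \det(\tau_{\alpha\beta}) \ge 0,$

    for the subgrid-scale stress tensor $\tau$, with no summation over repeated Greek indices.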

  14. Interplanetary Transit Simulations Using the International Space Station

    Science.gov (United States)

    Charles, J. B.; Arya, Maneesh

    2010-01-01

    It has been suggested that the International Space Station (ISS) be utilized to simulate the transit portion of long-duration missions to Mars and near-Earth asteroids (NEA). The ISS offers a unique environment for such simulations, providing researchers with a high-fidelity platform to study, enhance, and validate technologies and countermeasures for these long-duration missions. From a space life sciences perspective, two major categories of human research activities have been identified that will harness the various capabilities of the ISS during the proposed simulations. The first category includes studies that require the use of the ISS, typically because of the need for prolonged weightlessness. The ISS is currently the only available platform capable of providing researchers with access to a weightless environment over an extended duration. In addition, the ISS offers high fidelity for other fundamental space environmental factors, such as isolation, distance, and accessibility. The second category includes studies that do not require use of the ISS in the strictest sense, but can exploit its use to maximize their scientific return more efficiently and productively than in ground-based simulations. In addition to conducting Mars and NEA simulations on the ISS, increasing the current increment duration on the ISS from 6 months to a longer duration will provide opportunities for enhanced and focused research relevant to long-duration Mars and NEA missions. Although it is currently believed that increasing the ISS crew increment duration to 9 or even 12 months will pose little additional risk to crewmembers, additional medical monitoring capabilities may be required beyond those currently used for the ISS operations. The use of the ISS to simulate aspects of Mars and NEA missions seems practical, and it is recommended that planning begin soon, in close consultation with all international partners.

  15. Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Kishimoto, Yasuaki; Sugahara, Akihiro; Li, J.Q.

    2008-01-01

    Large scale simulation using super-computers, which generally requires long CPU time and produces large amounts of data, has been extensively studied as a third pillar in various advanced science fields, in parallel to theory and experiment. Such simulations are expected to lead to new scientific discoveries through the elucidation of complex phenomena that can hardly be identified by conventional theoretical and experimental approaches alone. In order to assist such large simulation studies, in which many collaborators working at geographically different places participate and contribute, we have developed a unique remote collaboration system, referred to as SIMON (simulation monitoring system), which is based on client-server control and introduces an idea of up-date processing, in contrast to the widely used post-processing. As a key ingredient, we have developed a trigger method, which transmits various requests for the up-date processing from the simulation (client) running on a super-computer to a workstation (server). Namely, the simulation running on a super-computer actively controls the timing of up-date processing. The server, having received requests from the ongoing simulation (data transfer, data analyses, visualizations, etc.), starts the corresponding operations during the simulation. The server makes the latest results available to web browsers, so that collaborators can monitor the results at any place and time in the world. By applying the system to a specific simulation project of laser-matter interaction, we have confirmed that the system works well and plays an important role as a collaboration platform on which many collaborators work with one another.

  16. Simulation of space-charge effects in an ungated GEM-based TPC

    Energy Technology Data Exchange (ETDEWEB)

    Böhmer, F.V., E-mail: felix.boehmer@tum.de; Ball, M.; Dørheim, S.; Höppner, C.; Ketzer, B.; Konorov, I.; Neubert, S.; Paul, S.; Rauch, J.; Vandenbroucke, M.

    2013-08-11

    A fundamental limit to the application of Time Projection Chambers (TPCs) in high-rate experiments is the accumulation of slowly drifting ions in the active gas volume, which compromises the homogeneity of the drift field and hence the detector resolution. Conventionally, this problem is overcome by the use of ion-gating structures. This method, however, introduces large dead times and restricts trigger rates to a few hundred per second. The ion gate can be eliminated from the setup by the use of Gas Electron Multiplier (GEM) foils for gas amplification, which intrinsically suppress the backflow of ions. This makes the continuous operation of a TPC at high rates feasible. In this work, Monte Carlo simulations of the buildup of ion space charge in a GEM-based TPC and the correction of the resulting drift distortions are discussed, based on realistic numbers for the ion backflow in a triple-GEM amplification stack. A TPC in the future PANDA experiment at FAIR serves as an example for the experimental environment. The simulations show that space charge densities up to 65 fC cm⁻³ are reached, leading to electron drift distortions of up to 10 mm. The application of a laser calibration system to correct these distortions is investigated. Based on full simulations of the detector physics and response, we show that it is possible to correct for the drift distortions and to maintain the good momentum resolution of the GEM-TPC.

  17. Simulation of space-charge effects in an ungated GEM-based TPC

    International Nuclear Information System (INIS)

    Böhmer, F.V.; Ball, M.; Dørheim, S.; Höppner, C.; Ketzer, B.; Konorov, I.; Neubert, S.; Paul, S.; Rauch, J.; Vandenbroucke, M.

    2013-01-01

    A fundamental limit to the application of Time Projection Chambers (TPCs) in high-rate experiments is the accumulation of slowly drifting ions in the active gas volume, which compromises the homogeneity of the drift field and hence the detector resolution. Conventionally, this problem is overcome by the use of ion-gating structures. This method, however, introduces large dead times and restricts trigger rates to a few hundred per second. The ion gate can be eliminated from the setup by the use of Gas Electron Multiplier (GEM) foils for gas amplification, which intrinsically suppress the backflow of ions. This makes the continuous operation of a TPC at high rates feasible. In this work, Monte Carlo simulations of the buildup of ion space charge in a GEM-based TPC and the correction of the resulting drift distortions are discussed, based on realistic numbers for the ion backflow in a triple-GEM amplification stack. A TPC in the future PANDA experiment at FAIR serves as an example for the experimental environment. The simulations show that space charge densities up to 65 fC cm⁻³ are reached, leading to electron drift distortions of up to 10 mm. The application of a laser calibration system to correct these distortions is investigated. Based on full simulations of the detector physics and response, we show that it is possible to correct for the drift distortions and to maintain the good momentum resolution of the GEM-TPC.

  18. Psychosocial value of space simulation for extended spaceflight

    Science.gov (United States)

    Kanas, N.

    1997-01-01

    There have been over 60 studies of Earth-bound activities that can be viewed as simulations of manned spaceflight. These analogs have involved Antarctic and Arctic expeditions, submarines and submersible simulators, land-based simulators, and hypodynamia environments. None of these analogs has accounted for all the variables related to extended spaceflight (e.g., microgravity, long duration, heterogeneous crews), and some of the simulation conditions have been found to be more representative of space conditions than others. A number of psychosocial factors have emerged from the simulation literature that correspond to important issues reported from space. Psychological factors include sleep disorders, alterations in time sense, transcendent experiences, demographic issues, career motivation, homesickness, and increased perceptual sensitivities. Psychiatric factors include anxiety, depression, psychosis, psychosomatic symptoms, emotional reactions related to mission stage, asthenia, and postflight personality and marital problems. Finally, interpersonal factors include tension resulting from crew heterogeneity, decreased cohesion over time, need for privacy, and issues involving leadership roles and lines of authority. Since future space missions will usually involve heterogeneous crews working on complicated objectives over long periods of time, these features require further study. Socio-cultural factors affecting confined crews (e.g., language and dialect, cultural differences, gender biases) should be explored in order to minimize tension and sustain performance. Career motivation also needs to be examined for the purpose of improving crew cohesion and preventing subgrouping, scapegoating, and territorial behavior. Periods of monotony and reduced activity should be addressed in order to maintain morale, provide meaningful use of leisure time, and prevent negative consequences of low stimulation, such as asthenia and crew member withdrawal.

  19. Structural-electromagnetic bidirectional coupling analysis of space large film reflector antennas

    Science.gov (United States)

    Zhang, Xinghua; Zhang, Shuxin; Cheng, ZhengAi; Duan, Baoyan; Yang, Chen; Li, Meng; Hou, Xinbin; Li, Xun

    2017-10-01

    When used for energy transmission, a space large film reflector antenna (SLFRA) is characterized by its large size and sustained high power density. The structural flexibility and the microwave radiation pressure (MRP) lead to the phenomenon of structural-electromagnetic bidirectional coupling (SEBC). In this paper, the SEBC model of the SLFRA is presented; then the deformation induced by the MRP and the corresponding far-field pattern deterioration are simulated. Results show that the direction of the MRP is identical to the normal of the reflector surface, and the magnitude is proportional to the power density and the square of the cosine of the incidence angle. For a typical cosine-distributed electric field, the MRP follows a cosine-squared distribution across the diameter. The maximum deflections of the SLFRA increase linearly with increasing microwave power density and the square of the reflector diameter, and vary inversely with the film thickness. When the reflector diameter reaches 100 m and the microwave power density exceeds 10² W/cm², the gain loss of the 6.3 μm-thick reflector goes beyond 0.75 dB. When the MRP-induced deflection degrades the reflector performance, the SEBC should be taken into account.
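
    The stated direction and magnitude are consistent with the textbook radiation-pressure relation for a perfectly reflecting surface, given here for orientation (the paper's MRP model may include a reflectivity factor):

        $\mathbf{p} = \frac{2S}{c} \cos^2\theta \; \hat{\mathbf{n}},$

    where $S$ is the incident microwave power density, $c$ the speed of light, $\theta$ the incidence angle, and $\hat{\mathbf{n}}$ the surface normal.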

  20. Large Eddy Simulation of turbulence

    International Nuclear Information System (INIS)

    Poullet, P.; Sancandi, M.

    1994-12-01

    Results of Large Eddy Simulation of 3D isotropic homogeneous turbulent flows are presented. A computer code developed on the Connection Machine (CM-5) has allowed a comparison of two turbulent viscosity models (Smagorinsky and structure function). The influence of the numerical scheme on the energy density spectrum is also studied.

  1. Remapping simulated halo catalogues in redshift space

    OpenAIRE

    Mead, Alexander; Peacock, John

    2014-01-01

    We discuss the extension to redshift space of a rescaling algorithm, designed to alter the effective cosmology of a pre-existing simulated particle distribution or catalogue of dark matter haloes. The rescaling approach was initially developed by Angulo & White and was adapted and applied to halo catalogues in real space in our previous work. This algorithm requires no information other than the initial and target cosmological parameters, and it contains no tuned parameters. It is shown here ...

  2. Deep Space Navigation and Timing Architecture and Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Microcosm will develop a deep space navigation and timing architecture and associated simulation, incorporating state-of-the-art radiometric, x-ray pulsar, and laser...

  3. Issues in visual support to real-time space system simulation solved in the Systems Engineering Simulator

    Science.gov (United States)

    Yuen, Vincent K.

    1989-01-01

    The Systems Engineering Simulator has addressed the major issues in providing visual data to its real-time man-in-the-loop simulations. Out-the-window views and CCTV views are provided by three scene systems to give the astronauts their real-world views. To expand the window coverage for the Space Station Freedom workstation, a rotating optics system is used to provide the widest field of view possible. To provide video signals to as many viewpoints as possible (windows and CCTVs) with a limited amount of hardware, a video distribution system has been developed to time-share the video channels among viewpoints at the selection of the simulation users. These solutions have provided the visual simulation facility for real-time man-in-the-loop simulations for the NASA space program.

  4. Analysis of Waves in Space Plasma (WISP) near field simulation and experiment

    Science.gov (United States)

    Richie, James E.

    1992-01-01

    The WISP payload, scheduled for a 1995 space transportation system (shuttle) flight, will include a large power transmitter operating over a wide range of frequencies. The levels of electromagnetic interference/electromagnetic compatibility (EMI/EMC) must be addressed to ensure the safety of the shuttle crew. This report is concerned with the simulation and experimental verification of EMI/EMC for the WISP payload in the shuttle cargo bay. The simulations have been carried out using the method of moments for both thin wires and patches to simulate closed solids. Data obtained from simulation are compared with experimental results. An investigation of the accuracy of the modeling approach is also included. The report begins with a description of the WISP experiment. A description of the model used to simulate the cargo bay follows. The results of the simulation are compared to experimental data on the input impedance of the WISP antenna with the cargo bay present. A discussion of the methods used to verify the accuracy of the model is presented to illustrate appropriate methods for obtaining this information. Finally, suggestions for future work are provided.

  5. Simulation analysis of photometric data for attitude estimation of unresolved space objects

    Science.gov (United States)

    Du, Xiaoping; Gou, Ruixin; Liu, Hao; Hu, Heng; Wang, Yang

    2017-10-01

    Acquiring attitude information for unresolved space objects, such as micro/nano satellites and GEO objects observed with ground-based optical telescopes, is a challenge for space surveillance. In this paper, a method is proposed to estimate the space object (SO) attitude state from a simulation analysis of photometric data in different attitude states. The object shape model is established and the parameters of the BRDF model are determined, from which the space object photometric model is built. The photometric data of space objects in different states are then analyzed by simulation and the regular characteristics of the photometric curves are summarized. The simulation results show that the photometric characteristics are useful for attitude inversion in a unique way, providing a new approach to space object identification.
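
    To make the pipeline concrete, here is a hedged Python sketch of how facet geometry, attitude, and illumination combine into one photometric sample; a simple Lambertian facet model stands in for the paper's BRDF, and the cube shape, albedo and viewing geometry are invented for the example.

        import numpy as np

        def light_curve_sample(normals, areas, sun_dir, view_dir, albedo=0.3):
            # Sum Lambertian facet contributions that face both the Sun and
            # the observer (a toy stand-in for the paper's BRDF model).
            mu_s = normals @ sun_dir
            mu_v = normals @ view_dir
            lit = (mu_s > 0) & (mu_v > 0)
            return albedo / np.pi * np.sum(areas[lit] * mu_s[lit] * mu_v[lit])

        # Cube-shaped object; rotating it about z mimics attitude motion and
        # traces out a light curve whose shape encodes the attitude state.
        normals = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                            [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
        areas = np.ones(6)
        sun = np.array([1.0, 0.0, 0.0])
        view = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
        for angle in np.linspace(0.0, np.pi / 2, 7):
            c, s = np.cos(angle), np.sin(angle)
            Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
            print(round(light_curve_sample(normals @ Rz.T, areas, sun, view), 4))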

  6. Research and development at the Marshall Space Flight Center Neutral Buoyancy Simulator

    Science.gov (United States)

    Kulpa, Vygantas P.

    1987-01-01

    The Neutral Buoyancy Simulator (NBS), a facility designed to imitate zero-gravity conditions, was used to test the Experimental Assembly of Structures in Extravehicular Activity (EASE) and the Assembly Concept for Construction of Erectable Space Structures (ACCESS). Neutral Buoyancy Simulator applications and operations; early space structure research; development of the EASE/ACCESS experiments; and improvement of NBS simulation are summarized.

  7. Space Power Facility (SPF)

    Data.gov (United States)

    Federal Laboratory Consortium — The Space Power Facility (SPF) houses the world's largest space environment simulation chamber, measuring 100 ft. in diameter by 122 ft. high. In this chamber, large...

  8. Simulating cosmic microwave background maps in multiconnected spaces

    International Nuclear Information System (INIS)

    Riazuelo, Alain; Uzan, Jean-Philippe; Lehoucq, Roland; Weeks, Jeffrey

    2004-01-01

    This paper describes the computation of cosmic microwave background (CMB) anisotropies in a universe with multiconnected spatial sections and focuses on the implementation of the topology in standard CMB computer codes. The key ingredient is the computation of the eigenmodes of the Laplacian with boundary conditions compatible with multiconnected space topology. The correlators of the coefficients of the decomposition of the temperature fluctuation in spherical harmonics are computed and examples are given for spatially flat spaces and one family of spherical spaces, namely, the lens spaces. Under the hypothesis of Gaussian initial conditions, these correlators encode all the topological information of the CMB and suffice to simulate CMB maps
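
    In equations (a hedged reconstruction in standard notation, not copied from the paper), the temperature fluctuation is expanded in spherical harmonics, and the topology enters through a covariance of the expansion coefficients that is no longer diagonal in a multiconnected space:

        \[
        \frac{\delta T}{T}(\hat{n}) \;=\; \sum_{\ell m} a_{\ell m}\, Y_{\ell m}(\hat{n}),
        \qquad
        C_{\ell m\, \ell' m'} \;=\; \bigl\langle a_{\ell m}\, a^{*}_{\ell' m'} \bigr\rangle .
        \]

    For Gaussian initial conditions, simulated maps follow by drawing the a_lm from this covariance matrix.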

  9. Commercial applications of large-scale Research and Development computer simulation technologies

    International Nuclear Information System (INIS)

    Kuok Mee Ling; Pascal Chen; Wen Ho Lee

    1998-01-01

    The potential commercial applications of two large-scale R and D computer simulation technologies are presented. One such technology is based on the numerical solution of the hydrodynamics equations, and is embodied in the two-dimensional Eulerian code EULE2D, which solves the hydrodynamic equations with various models for the equation of state (EOS), constitutive relations and fracture mechanics. EULE2D is an R and D code originally developed to design and analyze conventional munitions for anti-armor penetrations such as shaped charges, explosive formed projectiles, and kinetic energy rods. Simulated results agree very well with actual experiments. A commercial application presented here is the design and simulation of shaped charges for oil and gas well bore perforation. The other R and D simulation technology is based on the numerical solution of Maxwell's partial differential equations of electromagnetics in space and time, and is implemented in the three-dimensional code FDTD-SPICE, which solves Maxwell's equations in the time domain with finite-differences in the three spatial dimensions and calls SPICE for information when nonlinear active devices are involved. The FDTD method has been used in the radar cross-section modeling of military aircrafts and many other electromagnetic phenomena. The coupling of FDTD method with SPICE, a popular circuit and device simulation program, provides a powerful tool for the simulation and design of microwave and millimeter-wave circuits containing nonlinear active semiconductor devices. A commercial application of FDTD-SPICE presented here is the simulation of a two-element active antenna system. The simulation results and the experimental measurements are in excellent agreement. (Author)

  10. Vertical integration from the large Hilbert space

    Science.gov (United States)

    Erler, Theodore; Konopka, Sebastian

    2017-12-01

    We develop an alternative description of the procedure of vertical integration based on the observation that amplitudes can be written in BRST exact form in the large Hilbert space. We relate this approach to the description of vertical integration given by Sen and Witten.

  11. Large eddy simulation of turbulent mixing in a T-junction

    International Nuclear Information System (INIS)

    Kim, Jung Woo

    2010-12-01

    In this report, large eddy simulation was performed in order to further improve our understanding of the physics of turbulent mixing in a T-junction, which has recently been regarded as one of the most important problems in nuclear thermal-hydraulic safety. The large eddy simulation technique and the other numerical methods used in this study are presented in Sec. 2, the numerical results obtained from large eddy simulation are described in Sec. 3, and a summary is given in Sec. 4

  12. Next Generation Simulation Framework for Robotic and Human Space Missions

    Science.gov (United States)

    Cameron, Jonathan M.; Balaram, J.; Jain, Abhinandan; Kuo, Calvin; Lim, Christopher; Myint, Steven

    2012-01-01

    The Dartslab team at NASA's Jet Propulsion Laboratory (JPL) has a long history of developing physics-based simulations based on the Darts/Dshell simulation framework that have been used to simulate many planetary robotic missions, such as the Cassini spacecraft and the rovers that are currently driving on Mars. Recent collaboration efforts between the Dartslab team at JPL and the Mission Operations Directorate (MOD) at NASA Johnson Space Center (JSC) have led to significant enhancements to the Dartslab DSENDS (Dynamics Simulator for Entry, Descent and Surface landing) software framework. The new version of DSENDS is now being used for new planetary mission simulations at JPL. JSC is using DSENDS as the foundation for a suite of software known as COMPASS (Core Operations, Mission Planning, and Analysis Spacecraft Simulation) that is the basis for their new human space mission simulations and analysis. In this paper, we will describe the collaborative process with the JPL Dartslab and the JSC MOD team that resulted in the redesign and enhancement of the DSENDS software. We will outline the improvements in DSENDS that simplify creation of new high-fidelity robotic/spacecraft simulations. We will illustrate how DSENDS simulations are assembled and show results from several mission simulations.

  13. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    OpenAIRE

    Qiang Liu; Yi Qin; Guodong Li

    2018-01-01

    Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...
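
    As a pointer to what a Godunov-type finite-volume update looks like, here is a hedged, minimal Python sketch for the 1D shallow water equations with a Rusanov flux; the paper's scheme is 2D, unstructured and GPU-parallel, so this structured CPU toy only illustrates the discretization idea.

        import numpy as np

        G = 9.81  # gravitational acceleration, m/s^2

        def swe_flux(h, hu):
            u = hu / np.maximum(h, 1e-12)
            return np.array([hu, hu * u + 0.5 * G * h * h])

        def rusanov_step(h, hu, dx, dt):
            # One finite-volume step: Rusanov (local Lax-Friedrichs) face fluxes,
            # then a conservative update of the interior cells.
            u = hu / np.maximum(h, 1e-12)
            c = np.abs(u) + np.sqrt(G * h)
            a = np.maximum(c[:-1], c[1:])            # wave-speed bound per face
            fl, fr = swe_flux(h[:-1], hu[:-1]), swe_flux(h[1:], hu[1:])
            jump = np.array([h[1:] - h[:-1], hu[1:] - hu[:-1]])
            flux = 0.5 * (fl + fr) - 0.5 * a * jump
            h[1:-1] -= dt / dx * (flux[0, 1:] - flux[0, :-1])
            hu[1:-1] -= dt / dx * (flux[1, 1:] - flux[1, :-1])
            return h, hu

        # Dam-break test: deep water on the left, shallow on the right.
        h = np.where(np.linspace(0.0, 1.0, 200) < 0.5, 2.0, 1.0)
        hu = np.zeros_like(h)
        for _ in range(100):
            h, hu = rusanov_step(h, hu, dx=1.0 / 200, dt=0.5 * (1.0 / 200) / 6.0)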

  14. Nuclear spectroscopy in large shell model spaces: recent advances

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1995-01-01

    Three different approaches are now available for carrying out nuclear spectroscopy studies in large shell model spaces and they are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the recently introduced Monte Carlo method for the shell model; (iii) the spectral averaging theory, based on central limit theorems, in indefinitely large shell model spaces. The various principles, recent applications and possibilities of these three methods are described and the similarity between the Monte Carlo method and the spectral averaging theory is emphasized. (author). 28 refs., 1 fig., 5 tabs

  15. Large Eddy Simulation of Supersonic Boundary Layer Transition over a Flat-Plate Based on the Spatial Mode

    Directory of Open Access Journals (Sweden)

    Suozhu Wang

    2014-02-01

    The large eddy simulation (LES) of a spatially evolving supersonic boundary layer transition over a flat plate with freestream Mach number 4.5 is performed in the present work. The Favre-filtered Navier-Stokes equations are used to simulate the large scales, while a dynamic mixed subgrid-scale (SGS) model is used for the subgrid stress. The convective terms are discretized with a fifth-order upwind compact difference scheme, while a sixth-order symmetric compact difference scheme is employed for the diffusive terms. The basic mean flow is obtained from the similarity solution of the compressible laminar boundary layer. In order to ensure the transition from the initial laminar flow to fully developed turbulence, a pair of oblique first-mode perturbations is imposed on the inflow boundary. The whole process of the spatial transition is obtained from the simulation. Typical statistical quantities are analyzed through space-time averaging. It is found that the distributions of turbulent Mach number, root-mean-square (rms) fluctuation quantities, and Reynolds stresses along the wall-normal direction at different streamwise locations exhibit self-similarity in the fully developed turbulent region. Finally, the onset and development of large-scale coherent structures through the transition process are depicted.
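
    For reference, the density-weighted (Favre) filtering underlying the filtered equations reads, in standard textbook notation (an overbar denotes the spatial filter; this is not copied from the paper):

        \[
        \tilde{f} = \frac{\overline{\rho f}}{\overline{\rho}},
        \qquad
        \tau_{ij} = \overline{\rho}\left(\widetilde{u_i u_j} - \tilde{u}_i \tilde{u}_j\right),
        \]

    where \tau_{ij} is the subgrid-scale stress that the dynamic mixed SGS model closes.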

  16. Numerical simulation of large deformation polycrystalline plasticity

    International Nuclear Information System (INIS)

    Inal, K.; Neale, K.W.; Wu, P.D.; MacEwen, S.R.

    2000-01-01

    A finite element model based on crystal plasticity has been developed to simulate the stress-strain response of sheet metal specimens in uniaxial tension. Each material point in the sheet is considered to be a polycrystalline aggregate of FCC grains. The Taylor theory of crystal plasticity is assumed. The numerical analysis incorporates parallel computing features enabling simulations of realistic models with large number of grains. Simulations have been carried out for the AA3004-H19 aluminium alloy and the results are compared with experimental data. (author)

  17. Extremophiles survival to simulated space conditions: an astrobiology model study.

    Science.gov (United States)

    Mastascusa, V; Romano, I; Di Donato, P; Poli, A; Della Corte, V; Rotundi, A; Bussoletti, E; Quarto, M; Pugliese, M; Nicolaus, B

    2014-09-01

    In this work we investigated the ability of four extremophilic strains from the Archaea and Bacteria domains to withstand the space environment, by exposing them to extreme conditions of temperature, UV radiation, and desiccation coupled with the low pressure generated in a Mars-conditions simulator. All the investigated extremophilic strains (namely Sulfolobus solfataricus, Haloterrigena hispanica, Thermotoga neapolitana and Geobacillus thermantarcticus) showed good resistance to the simulated temperature variations of space; on the other hand, irradiation with UV at 254 nm only slightly affected the growth of H. hispanica, G. thermantarcticus and S. solfataricus; finally, exposure to simulated Mars conditions showed that H. hispanica and G. thermantarcticus were resistant to desiccation and low pressure.

  18. Nonterrestrial material processing and manufacturing of large space systems

    Science.gov (United States)

    Von Tiesenhausen, G.

    1979-01-01

    Nonterrestrial processing of materials and manufacturing of large space system components from preprocessed lunar materials at a manufacturing site in space is described. Lunar materials mined and preprocessed at the lunar resource complex will be flown to the space manufacturing facility (SMF), where together with supplementary terrestrial materials, they will be final processed and fabricated into space communication systems, solar cell blankets, radio frequency generators, and electrical equipment. Satellite Power System (SPS) material requirements and lunar material availability and utilization are detailed, and the SMF processing, refining, fabricating facilities, material flow and manpower requirements are described.

  19. A Simulation Base Investigation of High Latency Space Systems Operations

    Science.gov (United States)

    Li, Zu Qun; Crues, Edwin Z.; Bielski, Paul; Moore, Michael

    2017-01-01

    NASA's human space program has developed considerable experience with near-Earth space operations. Although NASA has experience with deep space robotic missions, NASA has little substantive experience with human deep space operations. Even in the Apollo program, the missions lasted only a few weeks and the communication latencies were on the order of seconds. Human missions beyond the relatively close confines of the Earth-Moon system will involve durations measured in months and communications latencies measured in minutes. To minimize crew risk and to maximize mission success, NASA needs to develop a better understanding of the implications of these types of mission durations and communication latencies on vehicle design, mission design and flight controller interaction with the crew. To begin to address these needs, NASA performed a study using a physics-based subsystem simulation to investigate the interactions between spacecraft crew and a ground-based mission control center for vehicle subsystem operations across long communication delays. The simulation, built with a subsystem modeling tool developed at NASA's Johnson Space Center, models the life support system of a Mars transit vehicle. The simulation contains models of the cabin atmosphere and pressure control system, electrical power system, drinking and waste water systems, internal and external thermal control systems, and crew metabolic functions. The simulation has three interfaces: 1) a real-time crew interface that can be used to monitor and control the vehicle subsystems; 2) a mission control center interface with data transport delays up to 15 minutes each way; 3) a real-time simulation test conductor interface that can be used to insert subsystem malfunctions and observe the interactions between the crew, ground, and simulated vehicle. The study was conducted during the 21st NASA Extreme Environment Mission Operations (NEEMO) mission between July 18 and August 3, 2016. The NEEMO

  20. Lagrangian space consistency relation for large scale structure

    International Nuclear Information System (INIS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-01-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space

  1. Simulation of the space debris environment in LEO using a simplified approach

    Science.gov (United States)

    Kebschull, Christopher; Scheidemann, Philipp; Hesselbach, Sebastian; Radtke, Jonas; Braun, Vitali; Krag, H.; Stoll, Enrico

    2017-01-01

    Several numerical approaches exist to simulate the evolution of the space debris environment. These simulations usually rely on the propagation of a large population of objects in order to determine the collision probability for each object. Explosion and collision events are triggered randomly using a Monte-Carlo (MC) approach, so in different scenarios different objects are fragmented, each contributing a different realization of the space debris environment. The results of the individual Monte-Carlo runs therefore represent the whole spectrum of possible evolutions of the space debris environment. For the comparison of different scenarios, in general the average of all MC runs together with its standard deviation is used. This method is computationally very expensive due to the propagation of thousands of objects over long timeframes and the application of the MC method. At the Institute of Space Systems (IRAS) a model capable of describing the evolution of the space debris environment has been developed and implemented. The model is based on source and sink mechanisms, where yearly launches as well as collisions and explosions are considered as sources. The natural decay and post-mission disposal measures are the only sink mechanisms. This method reduces the computational costs tremendously. In order to achieve this benefit a few simplifications have been applied. The approach of the model partitions the Low Earth Orbit (LEO) region into altitude shells. Only two kinds of objects are considered, intact bodies and fragments, which are also divided into diameter bins. As an extension to a previously presented model, the eccentricity has additionally been taken into account with 67 eccentricity bins. While a set of differential equations has been implemented in a generic manner, the Euler method was chosen to integrate the equations for a given time span. For this paper parameters have been derived so that the model is able to reflect the results of the numerical MC
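
    The source-sink idea reduces, in its simplest form, to a small system of ordinary differential equations per shell. The following Python sketch integrates one altitude shell with the forward Euler method; all rates and the fragments-per-collision count are illustrative assumptions, not the IRAS model's calibrated values.

        import numpy as np

        def debris_euler(n0, launches, decay_rate, coll_rate, years, dt=0.1):
            # One altitude shell: launches add intact bodies, drag removes
            # objects, and collisions convert intact bodies into fragments.
            intact, frags = float(n0), 0.0
            for _ in np.arange(0.0, years, dt):
                collisions = coll_rate * intact * (intact + frags)
                d_intact = launches - decay_rate * intact - collisions
                d_frags = 100.0 * collisions - decay_rate * frags  # ~100 fragments/event
                intact += dt * d_intact            # forward Euler step
                frags += dt * d_frags
            return intact, frags

        print(debris_euler(n0=500, launches=60, decay_rate=0.02,
                           coll_rate=1e-7, years=50))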

  2. A Coordinated Initialization Process for the Distributed Space Exploration Simulation (DSES)

    Science.gov (United States)

    Phillips, Robert; Dexter, Dan; Hasan, David; Crues, Edwin Z.

    2007-01-01

    This document describes the federate initialization process that was developed at the NASA Johnson Space Center with the H-II Transfer Vehicle Flight Controller Trainer (HTV FCT) simulations and refined in the Distributed Space Exploration Simulation (DSES). These simulations use the High Level Architecture (HLA) IEEE 1516 to provide the communication and coordination between the distributed parts of the simulation. The purpose of the paper is to describe a generic initialization sequence that can be used to create a federate that can: 1) properly initialize all HLA objects, object instances, interactions, and time management; 2) check for the presence of all federates; 3) coordinate startup with other federates; and 4) robustly initialize and share initial object instance data with other federates. A sketch of this sequence appears below.
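
    The four steps can be pictured as the following hedged Python pseudocode; every method on the rti object is a hypothetical placeholder standing in for HLA service calls, not the IEEE 1516 API itself.

        # A sketch of the four-step startup described above; every method on
        # `rti` is a hypothetical wrapper, not a real IEEE 1516 call.
        def initialize_federate(rti, expected_federates):
            rti.declare_objects_and_interactions()   # step 1: publish/subscribe
            rti.enable_time_management()             # step 1: regulating/constrained

            while set(rti.discovered_federates()) != set(expected_federates):
                rti.tick()                           # step 2: wait for everyone

            rti.achieve_sync_point("startup")        # step 3: coordinated barrier
            rti.wait_for_sync_point("startup")

            rti.publish_initial_instance_data()      # step 4: share initial data
            rti.wait_for_initial_data(expected_federates)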

  3. Efficient numerical simulation of non-integer-order space-fractional reaction-diffusion equation via the Riemann-Liouville operator

    Science.gov (United States)

    Owolabi, Kolade M.

    2018-03-01

    In this work, we are concerned with the solution of non-integer space-fractional reaction-diffusion equations with the Riemann-Liouville space-fractional derivative in high dimensions. We approximate the Riemann-Liouville derivative with the Fourier transform method and advance the resulting system in time with any time-stepping solver. In the numerical experiments, we expect the travelling wave to arise from the given initial condition on the computational domain (-∞, ∞), which we truncate in the numerical experiments at a large but finite value L. It is necessary to choose L large enough to give the waves sufficient room to spread. Experimental results in high dimensions on space-fractional reaction-diffusion models with applications to biological models (Fisher and Allen-Cahn equations) are considered. Simulation results reveal that fractional reaction-diffusion equations can give rise to a richer range of physical phenomena than the corresponding integer-order cases. As a result, many meaningful and practical situations are found to be modelled naturally with the concept of fractional calculus.
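
    As an illustration of the Fourier-space treatment, the sketch below advances a 1D space-fractional Fisher equation, u_t = d D^alpha u + u(1 - u), on a truncated domain; the fractional derivative is applied as multiplication by -|k|^alpha in Fourier space (a Riesz-type symbol standing in for the paper's Riemann-Liouville operator), with a simple splitting in time. Parameters are illustrative.

        import numpy as np

        N, L, alpha, d, dt = 1024, 200.0, 1.7, 1.0, 0.05
        x = np.linspace(-L, L, N, endpoint=False)
        k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * L / N)
        symbol = -d * np.abs(k) ** alpha      # fractional diffusion symbol
        E = np.exp(dt * symbol)               # exact factor for the linear part

        u = 0.5 * (1.0 - np.tanh(x / 4.0))    # front-like initial condition
        for _ in range(400):
            u_hat = np.fft.fft(u)             # diffuse spectrally, then react
            u = np.real(np.fft.ifft(E * u_hat)) + dt * u * (1.0 - u)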

  4. Space Situational Awareness of Large Numbers of Payloads From a Single Deployment

    Science.gov (United States)

    Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.

    2014-09-01

    The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer-term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid, precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short-duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large-scale deployments of small spacecraft.

  5. Parallel Finite Element Particle-In-Cell Code for Simulations of Space-charge Dominated Beam-Cavity Interactions

    International Nuclear Information System (INIS)

    Candel, A.; Kabel, A.; Ko, K.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.

    2007-01-01

    Over the past years, SLAC's Advanced Computations Department (ACD) has developed the parallel finite element (FE) particle-in-cell codes Pic3P and Pic2P for simulations of beam-cavity interactions dominated by space-charge effects. As opposed to standard space-charge dominated beam transport codes, which are based on the electrostatic approximation, Pic3P and Pic2P include space-charge, retardation and boundary effects, as they self-consistently solve the complete set of Maxwell-Lorentz equations using higher-order FE methods on conformal meshes. Use of efficient, large-scale parallel processing allows for the modeling of photoinjectors with unprecedented accuracy, aiding the design and operation of the next generation of accelerator facilities. Applications to the Linac Coherent Light Source (LCLS) RF gun are presented
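
    For context, the particle-push kernel at the heart of any electromagnetic PIC loop can be written in a few lines; the standard Boris rotation below is a hedged illustration only, and says nothing about Pic3P's finite-element field solve or its parallelization.

        import numpy as np

        def boris_push(v, E, B, q_over_m, dt):
            # Half electric kick, magnetic rotation, second half electric kick.
            v_minus = v + 0.5 * q_over_m * dt * E
            t = 0.5 * q_over_m * dt * B
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_prime = v_minus + np.cross(v_minus, t)
            v_plus = v_minus + np.cross(v_prime, s)
            return v_plus + 0.5 * q_over_m * dt * E

        v = np.array([1.0e6, 0.0, 0.0])        # electron velocity, m/s
        for _ in range(10):                    # gyration in a uniform B field
            v = boris_push(v, E=np.zeros(3), B=np.array([0.0, 0.0, 0.01]),
                           q_over_m=-1.759e11, dt=1e-12)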

  6. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  7. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering

    KAUST Repository

    Hadwiger, Markus; Al-Awami, Ali K.; Beyer, Johanna; Agus, Marco; Pfister, Hanspeter

    2017-01-01

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
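
    The object-order stage can be pictured as building, per ray, a sorted list of disjoint non-empty intervals; the hedged Python sketch below does this merge on the CPU, whereas SparseLeap produces the same lists by GPU rasterization of bounding boxes.

        def merge_ray_segments(spans):
            # Merge the [t0, t1] extents of active objects' bounding boxes
            # along one ray into disjoint non-empty segments.
            merged = []
            for t0, t1 in sorted(spans):
                if merged and t0 <= merged[-1][1]:
                    merged[-1][1] = max(merged[-1][1], t1)  # overlap: extend
                else:
                    merged.append([t0, t1])                 # gap: skippable space
            return merged

        # Ray-casting then samples only inside the merged segments; the leap
        # over the empty interval (0.4, 0.7) costs no hierarchy traversal.
        print(merge_ray_segments([(0.1, 0.3), (0.25, 0.4), (0.7, 0.9)]))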

  9. Desdemona and a ticket to space; training for space flight in a 3g motion simulator

    NARCIS (Netherlands)

    Wouters, M.

    2014-01-01

    On October 5, 2013, Marijn Wouters and two other contestants of a nation-wide competition ‘Nederland Innoveert’ underwent a space training exercise. One by one, the trainees were pushed to their limits in the Desdemona motion simulator, an experience that mimicked the Space Expedition Corporation

  10. Large-eddy simulation of contrails

    Energy Technology Data Exchange (ETDEWEB)

    Chlond, A [Max-Planck-Inst. fuer Meteorologie, Hamburg (Germany)

    1998-12-31

    A large eddy simulation (LES) model has been used to investigate the role of various external parameters and physical processes in the life-cycle of contrails. The model is applied to conditions that are typical for those under which contrails could be observed, i.e. in an atmosphere which is supersaturated with respect to ice and at a temperature of approximately 230 K or colder. The sensitivity runs indicate that the contrail evolution is controlled primarily by humidity, temperature and static stability of the ambient air and secondarily by the baroclinicity of the atmosphere. Moreover, it turns out that the initial ice particle concentration and radiative processes are of minor importance in the evolution of contrails at least during the 30 minutes simulation period. (author) 9 refs.

  12. On asymptotically efficient simulation of large deviation probabilities.

    NARCIS (Netherlands)

    Dieker, A.B.; Mandjes, M.R.H.

    2005-01-01

    Consider a family of probabilities for which the decay is governed by a large deviation principle. To find an estimate for a fixed member of this family, one is often forced to use simulation techniques. Direct Monte Carlo simulation, however, is often impractical, particularly if the

  13. Large Scale Simulation Platform for NODES Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Sotorrio, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Qin, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Min, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-27

    This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator, and includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate scalability under a scenario of 33% RPS in California with an additional 17% of DERs coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating more than 10,000 individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/VAR control.

  14. Large Eddy Simulation for Compressible Flows

    CERN Document Server

    Garnier, E; Sagaut, P

    2009-01-01

    Large Eddy Simulation (LES) of compressible flows is still a widely unexplored area of research. The authors, whose books are considered the most relevant monographs in this field, provide the reader with a comprehensive state-of-the-art presentation of the available LES theory and application. This book is a sequel to "Large Eddy Simulation for Incompressible Flows", as most of the research on LES for compressible flows is based on variable density extensions of models, methods and paradigms that were developed within the incompressible flow framework. The book addresses both the fundamentals and the practical industrial applications of LES in order to point out gaps in the theoretical framework as well as to bridge the gap between LES research and the growing need to use it in engineering modeling. After introducing the fundamentals on compressible turbulence and the LES governing equations, the mathematical framework for the filtering paradigm of LES for compressible flow equations is established. Instead ...

  15. Characteristics and prediction of sound level in extra-large spaces

    OpenAIRE

    Wang, C.; Ma, H.; Wu, Y.; Kang, J.

    2018-01-01

    This paper aims to examine sound fields in extra-large spaces, which are defined in this paper as spaces used by people, with a volume approximately larger than 125,000 m3 and an absorption coefficient less than 0.7. In such spaces, inhomogeneous reverberant energy caused by uneven early reflections has an increasingly significant effect on the sound field as the volume grows. Measurements were conducted in four spaces to examine the attenuation of the total and reverberant energy with increasing source-receiv...

  16. De-individualized psychophysiological strain assessment during a flight simulation test—Validation of a space methodology

    Science.gov (United States)

    Johannes, Bernd; Salnitski, Vyacheslav; Soll, Henning; Rauch, Melina; Hoermann, Hans-Juergen

    For the evaluation of an operator's skill reliability, indicators of work quality as well as of psychophysiological states during work have to be considered. The herein presented methodology and measurement equipment were developed and tested in numerous terrestrial and space experiments using a simulation of a spacecraft docking on a space station. However, in this study the method was applied to a comparable terrestrial task—the flight simulator test (FST) used in the DLR selection procedure for ab initio pilot applicants for passenger airlines. This provided a large amount of data for a statistical verification of the space methodology. For the evaluation of the strain level of applicants during the FST, psychophysiological measurements were used to construct a "psychophysiological arousal vector" (PAV) which is sensitive to various individual reaction patterns of the autonomic nervous system to mental load. Its changes and increases are interpreted as "strain". In the first evaluation study, 614 subjects were analyzed. The subjects first underwent a calibration procedure for the assessment of their autonomic outlet type (AOT) and on the following day they performed the FST, which included three tasks and was evaluated by instructors applying well-established and standardized rating scales. This new method will possibly promote a wide range of other future applications in aviation and space psychology.

  17. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    Science.gov (United States)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

    The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing three simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will enormously advance our understanding of the physics of neutral and charged gases. Besides making major advances in basic plasma physics and neutral gas problems, this project will address three Grand Challenge space physics problems that reflect our research interests: 1) to develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, and interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle; 2) to develop a coronal

  18. Large-Eddy Simulations of Flows in Complex Terrain

    Science.gov (United States)

    Kosovic, B.; Lundquist, K. A.

    2011-12-01

    Large-eddy simulation as a methodology for numerical simulation of turbulent flows was first developed to study turbulent flows in the atmosphere by Lilly (1967). The first LES were carried out by Deardorff (1970), who used these simulations to study atmospheric boundary layers. Ever since, LES has been extensively used to study canonical atmospheric boundary layers, in most cases flat-plate boundary layers under the assumption of horizontal homogeneity. Carefully designed LES of canonical convective, neutrally stratified and, more recently, stably stratified atmospheric boundary layers have contributed significantly to a better understanding of these flows and their parameterizations in large-scale models. These simulations were often carried out using codes specifically designed and developed for large-eddy simulations of horizontally homogeneous flows with periodic lateral boundary conditions. Recent developments in multi-scale numerical simulations of atmospheric flows enable numerical weather prediction (NWP) codes such as ARPS (Chow and Street, 2009), COAMPS (Golaz et al., 2009) and the Weather Research and Forecasting (WRF) model to be used nearly seamlessly across a wide range of atmospheric scales, from synoptic down to turbulent scales in atmospheric boundary layers. Before we can carry out multi-scale simulations of atmospheric flows with confidence, NWP codes must be validated for accurate performance in simulating flows over complex or inhomogeneous terrain. We therefore carry out validation of WRF-LES for simulations of flows over complex terrain using data from the Askervein Hill (Taylor and Teunissen, 1985, 1987) and METCRAX (Whiteman et al., 2008) field experiments. WRF's nesting capability is employed with a one-way nested inner domain that includes complex terrain representation, while the coarser outer nest is used to spin up fully developed atmospheric boundary layer turbulence and thus represent accurately the inflow to the inner domain. LES of a

  19. An alternative phase-space distribution to sample initial conditions for classical dynamics simulations

    International Nuclear Information System (INIS)

    Garcia-Vela, A.

    2002-01-01

    A new quantum-type phase-space distribution is proposed in order to sample initial conditions for classical trajectory simulations. The phase-space distribution is obtained as the modulus of a quantum phase-space state of the system, defined as the direct product of the coordinate and momentum representations of the quantum initial state. The distribution is tested by sampling initial conditions which reproduce the initial state of the Ar-HCl cluster prepared by ultraviolet excitation, and by simulating the photodissociation dynamics by classical trajectories. The results are compared with those of a wave packet calculation, and with a classical simulation using an initial phase-space distribution recently suggested. A better agreement is found between the classical and the quantum predictions with the present phase-space distribution, as compared with the previous one. This improvement is attributed to the fact that the phase-space distribution propagated classically in this work resembles more closely the shape of the wave packet propagated quantum mechanically
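
    For the simplest concrete case, a 1D Gaussian initial state, the proposed sampling factorizes: positions are drawn from the modulus of the coordinate-space wavefunction and momenta from the modulus of its momentum-space counterpart. The Python sketch below uses harmonic-oscillator ground-state moduli with hbar = m = omega = 1 as an assumed example, not the Ar-HCl state of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # |psi(x)| ~ exp(-x^2/2) and |phi(p)| ~ exp(-p^2/2): as (unsquared)
        # sampling densities these are unit-variance Gaussians.
        n_traj = 10_000
        x0 = rng.normal(0.0, 1.0, n_traj)   # initial positions
        p0 = rng.normal(0.0, 1.0, n_traj)   # initial momenta
        # Each (x0, p0) pair seeds one classical trajectory of the ensemble.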

  20. Large interface simulation in an averaged two-fluid code

    International Nuclear Information System (INIS)

    Henriques, A.

    2006-01-01

    Different ranges of interface and eddy sizes are involved in multiphase flow phenomena. Classical formalisms focus on a specific range of size. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to distinguish Large Interface (LI) simulation from Small Interface (SI) modelling. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms is done by calling on classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop a LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer term modelling and the LI recognition are validated on analytical and experimental tests. A square-base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact exhibited regime transitions between two different scales of interface, from LI to SI and from PI to LI. (author) [fr]

  1. Interfacing Space Communications and Navigation Network Simulation with Distributed System Integration Laboratories (DSIL)

    Science.gov (United States)

    Jennings, Esther H.; Nguyen, Sam P.; Wang, Shin-Ywan; Woo, Simon S.

    2008-01-01

    NASA's planned lunar missions will involve multiple NASA centers, each with a specific role and specialization. In this vision, the Constellation program (CxP)'s Distributed System Integration Laboratories (DSIL) architecture consists of multiple System Integration Labs (SILs), with simulators, emulators, test labs and control centers interacting with each other over a broadband network to perform test and verification for mission scenarios. To support the end-to-end simulation and emulation effort of NASA's exploration initiatives, different NASA centers are interconnected to participate in distributed simulations. Currently, DSIL has interconnections among the following NASA centers: Johnson Space Center (JSC), Kennedy Space Center (KSC), Marshall Space Flight Center (MSFC) and the Jet Propulsion Laboratory (JPL). Through interconnections and interactions among different NASA centers, critical resources and data can be shared, while independent simulations can be performed simultaneously at different NASA locations, to effectively utilize the simulation and emulation capabilities at each center. Furthermore, the development of DSIL can maximally leverage existing project simulation and testing plans. In this work, we describe the specific role and development activities at JPL for the Space Communications and Navigation Network (SCaN) simulator, using the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) tool to simulate communications effects among mission assets. Using MACHETE, different space network configurations among spacecraft and ground systems with various parameter sets can be simulated. Data necessary for tracking, navigation, and guidance of spacecraft such as the Crew Exploration Vehicle (CEV), Crew Launch Vehicle (CLV), and Lunar Relay Satellite (LRS), along with orbit calculation data, are disseminated to different NASA centers and updated periodically using the High Level Architecture (HLA). In

  2. Large space antenna concepts for ESGP

    Science.gov (United States)

    Love, Allan W.

    1989-01-01

    It is appropriate to note that 1988 marks the 100th anniversary of the birth of the reflector antenna. It was in 1888 that Heinrich Hertz constructed the first one, a parabolic cylinder made of sheet zinc bent to shape and supported by a wooden frame. Hertz demonstrated the existence of the electromagnetic waves that had been predicted theoretically by James Clerk Maxwell some 22 years earlier. In the 100 years since Hertz's pioneering work the field of electromagnetics has grown explosively; one of its technologies is the remote sensing of planet Earth by means of electromagnetic waves, using both passive and active sensors located on an Earth Science Geostationary Platform (ESGP). For these purposes some exquisitely sensitive instruments were developed, capable of reaching to the fringes of the known universe, and relying on large reflector antennas to collect the minute signals and direct them to appropriate receiving devices. These antennas are electrically large, with diameters of 3000 to 10,000 wavelengths and with gains approaching 80 to 90 dB. Some of the reflector antennas proposed for ESGP are also electrically large. For example, at 220 GHz a 4-meter reflector is nearly 3000 wavelengths in diameter, and is electrically quite comparable with a number of the millimeter-wave radiotelescopes that are being built around the world. Its surface must meet stringent requirements on rms smoothness and the ability to resist deformation. Here, however, the environmental forces at work are different. There are no varying forces due to wind and gravity, but inertial forces due to mechanical scanning must be reckoned with. With this form of beam scanning, minimizing momentum transfer to the space platform is a problem that demands an answer. Finally, reflector surface distortion due to thermal gradients caused by the solar flux probably represents the most challenging problem to be solved if these Large Space Antennas are to achieve the gain and resolution required of

  3. Modeling, Analysis, and Optimization Issues for Large Space Structures

    Science.gov (United States)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  4. Large Deployable Reflector (LDR) Requirements for Space Station Accommodations

    Science.gov (United States)

    Crowe, D. A.; Clayton, M. J.; Runge, F. C.

    1985-01-01

    Top level requirements for assembly and integration of the Large Deployable Reflector (LDR) Observatory at the Space Station are examined. Concepts are currently under study for LDR which will provide a sequel to the Infrared Astronomy Satellite and the Space Infrared Telescope Facility. LDR will provide a spectacular capability over a very broad spectral range. The Space Station will provide an essential facility for the initial assembly and check out of LDR, as well as a necessary base for refurbishment, repair and modification. By providing a manned platform, the Space Station will remove the time constraint on assembly associated with use of the Shuttle alone. Personnel safety during necessary EVA is enhanced by the presence of the manned facility.

  6. Large eddy simulation of premixed and non-premixed combustion

    OpenAIRE

    Malalasekera, W; Ibrahim, SS; Masri, AR; Sadasivuni, SK; Gubba, SR

    2010-01-01

    This paper summarises the authors' experience in using the Large Eddy Simulation (LES) technique for the modelling of premixed and non-premixed combustion. The paper describes the application of the LES-based combustion modelling technique to two well-defined experimental configurations where high-quality data are available for validation. The large eddy simulation technique for the modelling of flow and turbulence is based on the solution of governing equations for continuity and momentum in a struct...

  7. Saving time in a space-efficient simulation algorithm

    NARCIS (Netherlands)

    Markovski, J.

    2011-01-01

    We present an efficient algorithm for computing the simulation preorder and equivalence for labeled transition systems. The algorithm improves the time complexity of an existing space-efficient algorithm by employing a variant of the stability condition and exploiting properties of the
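
    To fix the object being computed, here is a hedged Python sketch of the naive greatest-fixpoint computation of the simulation preorder; the paper's algorithm is far more space- and time-efficient, so this only pins down the definition.

        def simulation_preorder(states, labels, succ):
            # succ[(s, a)] is the set of a-successors of state s.
            sim = {(s, t) for s in states for t in states}  # start from full relation
            changed = True
            while changed:
                changed = False
                for s in states:
                    for t in states:
                        if (s, t) not in sim:
                            continue
                        # t simulates s iff every move of s is matched by t.
                        ok = all(any((s2, t2) in sim for t2 in succ.get((t, a), ()))
                                 for a in labels for s2 in succ.get((s, a), ()))
                        if not ok:
                            sim.discard((s, t))
                            changed = True
            return sim

        states, labels = {"p", "q"}, {"a"}
        succ = {("p", "a"): {"p"}, ("q", "a"): set()}
        print(simulation_preorder(states, labels, succ))  # p simulates q, not conversely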

  8. Research of Impact Load in Large Electrohydraulic Load Simulator

    Directory of Open Access Journals (Sweden)

    Yongguang Liu

    2014-01-01

    A strong impact load appears in the initial phase when a large electric cylinder is tested in hardware-in-the-loop simulation. In this paper, a mathematical model is built based on AMESim, and the cause of the impact load is investigated by analyzing the trends of the parameters in the simulation results. Methods for suppressing the impact load are presented according to the structural invariance principle and applied to the actual system. The final experimental result indicates that the impact load is suppressed, which provides a good experimental condition for the electric cylinder and advances the study of large load simulators.

  9. Large data management and systematization of simulation

    International Nuclear Information System (INIS)

    Ueshima, Yutaka; Saitho, Kanji; Koga, James; Isogai, Kentaro

    2004-01-01

    In advanced photon research, large-scale simulations are powerful tools. In numerical experiments, real-time visualization and steering systems are considered promising methods of data analysis. This approach is valid for routine analyses or short-cycle simulations. In research on an unknown problem, however, the output data must be amenable to repeated analysis, because a profitable analysis is rarely achieved on the first attempt. Consequently, output data should be filed so that they can be referred to and analyzed at any time. To support such research, the following automated functions are needed: transporting data files from the data generator to data storage, analyzing data, tracking the history of data handling, and so on. The Large Data Management system will be a functional, distributed Problem Solving Environment system. (author)

  10. Direct and large-eddy simulation IX

    CERN Document Server

    Kuerten, Hans; Geurts, Bernard; Armenio, Vincenzo

    2015-01-01

    This volume reflects the state of the art of numerical simulation of transitional and turbulent flows and provides an active forum for discussion of recent developments in simulation techniques and understanding of flow physics. Following the tradition of earlier DLES workshops, these papers address numerous theoretical and physical aspects of transitional and turbulent flows. At an applied level it contributes to the solution of problems related to energy production, transportation, magneto-hydrodynamics and the environment. A special session is devoted to quality issues of LES. The ninth Workshop on 'Direct and Large-Eddy Simulation' (DLES-9) was held in Dresden, April 3-5, 2013, organized by the Institute of Fluid Mechanics at Technische Universität Dresden. This book is of interest to scientists and engineers, both at an early level in their career and at more senior levels.

  11. Large Scale Simulation of Hydrogen Dispersion by a Stabilized Balancing Domain Decomposition Method

    Directory of Open Access Journals (Sweden)

    Qing-He Yao

    2014-01-01

    The dispersion behaviour of leaking hydrogen in a partially open space is simulated in this work by a balancing domain decomposition method. An analogy of the Boussinesq approximation is employed to describe the coupling between the flow field and the concentration field. The linear systems arising from the Navier-Stokes equations and the convection-diffusion equation are symmetrized by a pressure-stabilized Lagrange-Galerkin method, which enables a balancing domain decomposition method to solve the interface problem of the domain decomposition system. Numerical results are validated against experimental data and previously published numerical results. The dilution effect of ventilation is investigated, especially at the doors, where the flow pattern is complicated and where oscillations appeared in earlier studies by other researchers. The transient behaviour of hydrogen and the process of accumulation in the partially open space are discussed, and large-scale computation reveals additional detail.

  12. Simulated Space Environment Effects on a Candidate Solar Sail Material

    Science.gov (United States)

    Kang, Jin Ho; Bryant, Robert G.; Wilkie, W. Keats; Wadsworth, Heather M.; Craven, Paul D.; Nehls, Mary K.; Vaughn, Jason A.

    2017-01-01

    For long-duration missions of solar sails, the sail material needs to survive harsh space environments, and its degradation controls the operational lifetime. Understanding the effects of the space environment on the sail membrane is therefore essential for mission success. In this study, we investigated the effects of simulated space environment exposures (ionizing radiation, thermal aging, and simulated potential damage) on the mechanical, thermal and optical properties of a commercial off-the-shelf (COTS) polyester solar sail membrane, to assess the degradation mechanisms of a feasible solar sail. The solar sail membrane was exposed to high-energy electrons (about 70 keV and 10 nA/cm2), and its physical properties were characterized. After a dose of about 8.3 Grad, the tensile modulus, tensile strength and failure strain of the sail membrane decreased by about 20-95%. The aluminum reflective layer was damaged and partially delaminated, but did not show any significant change in solar absorptance or thermal emittance. The effect on the mechanical properties of a pre-cracked sample, simulating potential impact damage to the sail membrane, as well as thermal aging effects on metallized PEN (polyethylene naphthalate) film, will be discussed.

  13. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    Science.gov (United States)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, LIS is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA, LIS will enable meaningful simulations containing a large multi-model ensemble and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant to water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install because of its many dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple runtime environments across the LIS community has placed a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all of its dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and

  14. Simulation of space charge effects and transition crossing in the Fermilab Booster

    International Nuclear Information System (INIS)

    Lucas, P.; MacLachlan, J.

    1987-03-01

    The longitudinal phase space program ESME, modified to include space charge and wall impedance effects, has been used to simulate transition crossing in the Fermilab Booster. The simulations yield results in reasonable quantitative agreement with measured parameters. They further indicate that a transition jump scheme currently under construction will significantly reduce emittance growth, while attempts to alter the machine impedance are less obviously beneficial. In addition to presenting results, this paper points out a serious difficulty in the space charge calculation related to statistical fluctuations: false indications of emittance growth can appear if care is not taken to minimize this problem.
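
    As an illustration of the kind of longitudinal tracking such codes perform, and of the fluctuation problem noted above, the following toy sketch applies an RF kick-drift map plus a crude, binned space-charge kick to a bunch of macro-particles. All parameters and the kick model are invented for illustration; this is not ESME's actual algorithm or the Booster's lattice.

        import numpy as np

        # Toy kick-drift longitudinal tracking with a crude, binned space-charge
        # kick. All parameters are invented; this is not ESME or the Booster.
        h, eta, beta2E = 84, 0.02, 8.0e9    # harmonic no., slip factor, beta^2*E [eV]
        eV_rf, phi_s = 5.5e5, 0.0           # RF voltage * e [eV], synchronous phase
        n_macro, n_turns, n_bins = 20000, 500, 128

        rng = np.random.default_rng(1)
        phi = rng.normal(phi_s, 0.3, n_macro)    # phase [rad]
        dE = rng.normal(0.0, 1.0e6, n_macro)     # energy offset [eV]

        for turn in range(n_turns):
            hist, edges = np.histogram(phi, bins=n_bins)
            grad = np.gradient(hist.astype(float), edges[1] - edges[0])
            centers = 0.5 * (edges[:-1] + edges[1:])
            sc_kick = -50.0 * np.interp(phi, centers, grad)  # arbitrary strength
            dE += eV_rf * (np.sin(phi) - np.sin(phi_s)) + sc_kick
            phi += 2 * np.pi * h * eta * dE / beta2E

        # With few macro-particles per bin, the binned density gradient is noisy
        # and can masquerade as emittance growth -- the fluctuation problem the
        # abstract warns about; smoothing or more macro-particles mitigates it.
        print("rms energy spread after tracking: %.3e eV" % dE.std())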

  15. Simulations of Large-Area Electron Beam Diodes

    Science.gov (United States)

    Swanekamp, S. B.; Friedman, M.; Ludeking, L.; Smithe, D.; Obenschain, S. P.

    1999-11-01

    Large-area electron beam diodes are typically used to pump the amplifiers of KrF lasers. Simulations of large-area electron beam diodes using the particle-in-cell code MAGIC3D have shown the electron flow in the diode to be unstable. Since this instability can potentially produce a non-uniform current and energy distribution in the hibachi structure and lasing medium, it can be detrimental to laser efficiency. These results are similar to simulations performed using the ISIS code (M.E. Jones and V.A. Thomas, Proceedings of the 8th International Conference on High-Power Particle Beams, 665 (1990)). We have identified the instability as the so-called "transit-time" instability (C.K. Birdsall and W.B. Bridges, Electrodynamics of Diode Regions, Academic Press, New York, 1966; T.M. Antonsen, W.H. Miner, E. Ott, and A.T. Drobot, Phys. Fluids 27, 1257 (1984)) and have investigated the role of the applied magnetic field and diode geometry. Experiments are underway to characterize the instability on the Nike KrF laser system and will be compared to simulation. Some possible ways to mitigate the instability will also be presented.

  16. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah; Carns, Philip; Ross, Robert; Li, Jianping Kelvin; Ma, Kwan-Liu

    2016-11-13

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements; a rollback mechanism is provided for when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, simulations must be tuned to reduce the number of rollbacks and improve runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information to tune the parameters of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to expose the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to collect accurate data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a
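
    The rollback mechanism at the heart of optimistic synchronization can be sketched in a few lines. The following minimal Python model of a single logical process saves state, detects a straggler event, and counts rollbacks; the names and structure are illustrative and do not reflect ROSS's actual C API.

        # Minimal sketch of optimistic (Time Warp style) event handling for one
        # logical process; illustrative only, not ROSS's API. A real engine
        # would also re-process rolled-back events and send anti-messages.
        class LogicalProcess:
            def __init__(self):
                self.lvt = 0.0            # local virtual time
                self.states = [(0.0, 0)]  # saved (time, state) pairs
                self.rollbacks = 0        # the metric the tool visualizes

            def process(self, t, delta):
                if t < self.lvt:          # straggler: roll back to a saved state
                    while self.states and self.states[-1][0] > t:
                        self.states.pop()
                    self.lvt = self.states[-1][0]
                    self.rollbacks += 1
                state = self.states[-1][1] + delta
                self.lvt = t
                self.states.append((t, state))

        lp = LogicalProcess()
        # Events arrive slightly out of timestamp order, as under optimistic sync.
        for t, d in [(1.0, 1), (2.0, 1), (1.5, 1), (3.0, 1)]:
            lp.process(t, d)
        print("rollbacks:", lp.rollbacks)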

  17. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulation applications. The intention is to identify new research directions in this field and

  18. Methods for Prediction of Steel Temperature Curve in the Whole Process of a Localized Fire in Large Spaces

    Directory of Open Access Journals (Sweden)

    Zhang Guowei

    2014-01-01

    Based on a full-scale bookcase fire experiment, a fire development model is proposed for the whole process of localized fires in large-space buildings. We found that for localized fires in large-space buildings full of wooden combustible materials, the fire growth phase can be simplified as a t-squared fire with a growth coefficient of 0.0346 kW/s2. Based on the proposed fire development model, FDS is applied to study the smoke temperature curve for a 2 MW to 25 MW fire occurring within a large space with a height of 6 m to 12 m and a floor area of 1,500 m2 to 10,000 m2. Through analysis of the smoke temperature in various fire scenarios, a new approach is proposed to predict the smoke temperature curve, and a modified model of steel temperature development in a localized fire is built. In the modified model, the localized fire source is treated as a point source to evaluate the net heat flux from the flame to the steel. With these findings, the steel temperature curve over the whole process of a localized fire can be predicted accurately. These conclusions provide a valuable reference for fire simulation, hazard assessment, and fire protection design.
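
    The t-squared growth model with the reported coefficient is straightforward to compute; the sketch below uses an assumed 20 MW design fire for the cap, one value inside the 2-25 MW range studied.

        # Heat release rate for the t-squared growth model used in the paper,
        # capped at an assumed 20 MW design fire (the studied range is 2-25 MW).
        alpha = 0.0346              # kW/s^2, growth coefficient from the study
        Q_peak = 20000.0            # kW, assumed peak heat release rate

        def hrr(t_seconds):
            """Q(t) = alpha * t^2, capped at the peak heat release rate."""
            return min(alpha * t_seconds ** 2, Q_peak)

        # The assumed 20 MW peak is reached at t = sqrt(Q_peak/alpha) ~ 760 s.
        for t in (60, 300, 600, 900):
            print("t = %4d s: Q = %8.1f kW" % (t, hrr(t)))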

  19. Practical recipes for the model order reduction, dynamical simulation and compressive sampling of large-scale open quantum systems

    Energy Technology Data Exchange (ETDEWEB)

    Sidles, John A; Jacky, Jonathan P [Department of Orthopaedics and Sports Medicine, Box 356500, School of Medicine, University of Washington, Seattle, WA, 98195 (United States); Garbini, Joseph L; Malcomb, Joseph R; Williamson, Austin M [Department of Mechanical Engineering, University of Washington, Seattle, WA 98195 (United States); Harrell, Lee E [Department of Physics, US Military Academy, West Point, NY 10996 (United States); Hero, Alfred O [Department of Electrical Engineering, University of Michigan, MI 49931 (United States); Norman, Anthony G [Department of Bioengineering, University of Washington, Seattle, WA 98195 (United States)], E-mail: sidles@u.washington.edu

    2009-06-15

    Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kaehler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kaehlerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kaehler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candes-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.

  20. Practical recipes for the model order reduction, dynamical simulation and compressive sampling of large-scale open quantum systems

    International Nuclear Information System (INIS)

    Sidles, John A; Jacky, Jonathan P; Garbini, Joseph L; Malcomb, Joseph R; Williamson, Austin M; Harrell, Lee E; Hero, Alfred O; Norman, Anthony G

    2009-01-01

    Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kaehler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kaehlerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kaehler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candes-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.

  1. Practical recipes for the model order reduction, dynamical simulation and compressive sampling of large-scale open quantum systems

    Science.gov (United States)

    Sidles, John A.; Garbini, Joseph L.; Harrell, Lee E.; Hero, Alfred O.; Jacky, Jonathan P.; Malcomb, Joseph R.; Norman, Anthony G.; Williamson, Austin M.

    2009-06-01

    Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kähler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kählerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kähler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candès-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.

  2. The politics of space mining - An account of a simulation game

    Science.gov (United States)

    Paikowsky, Deganit; Tzezana, Roey

    2018-01-01

    Celestial bodies like the Moon and asteroids contain materials and precious metals, which are valuable for human activity on Earth and beyond. Space mining has mainly been relegated to the realm of science fiction and has not been treated seriously by the international community. Private industry is beginning to organize around space mining, and success on this front would have a major impact on all nations. In this paper we review current space mining ventures and the international legislation that could stand in their way - or aid them in their mission. Following that, we present the results of a role-playing simulation in which the roles of several important nations were played by students of international relations. The results of the simulation are used as a basis for forecasting the potential initial responses of the nations of the world to a successful space mining operation in the future.

  3. Simulation requirements for the Large Deployable Reflector (LDR)

    Science.gov (United States)

    Soosaar, K.

    1984-01-01

    Simulation tools for the large deployable reflector (LDR) are discussed. These tools are often of the transfer-function variety. Transfer functions, however, are inadequate to represent time-varying systems with multiple control loops of overlapping bandwidths and multi-input, multi-output behavior. Frequency-domain approaches are useful design tools, but a full-up simulation is needed. Because the high-frequency, multi-degree-of-freedom components encountered would demand a dedicated computer, non-real-time simulation is preferred. Large numerical analysis software programs are useful only to receive inputs and provide outputs to the next block, and should be kept out of the direct simulation loop. The following blocks make up the simulation. The thermal model block is a classical, non-steady-state heat transfer program. The quasistatic block deals with problems associated with rigid-body control of reflector segments. The steady-state block assembles data into equations of motion and dynamics. A differential ray trace is obtained to establish the change in wave aberrations. The observation scene is described. The focal plane module converts the photon intensity impinging on it into electron streams or into permanent film records.

  4. Pressure fluctuation prediction in pump mode using large eddy simulation and unsteady Reynolds-averaged Navier–Stokes in a pump–turbine

    Directory of Open Access Journals (Sweden)

    De-You Li

    2016-06-01

    For pump-turbines, most instabilities couple with high-level pressure fluctuations, which are harmful to pump-turbines and even to the whole unit. In order to understand the causes of pressure fluctuations and reduce their amplitudes, proper numerical methods should be chosen to obtain accurate results. Large eddy simulation with the wall-adapting local eddy-viscosity model was chosen to predict pressure fluctuations in the pump mode of a pump-turbine, compared with unsteady Reynolds-averaged Navier-Stokes using the two-equation shear stress transport k-ω turbulence model. A partial-load operating point (0.91 QBEP) at 15-mm guide vane opening was selected for a comparison of performance and frequency characteristics between large eddy simulation and unsteady Reynolds-averaged Navier-Stokes, based on experimental validation. Good agreement indicates that large eddy simulation can be applied to the simulation of pump-turbines. A detailed comparison of the variation of peak-to-peak values throughout the whole passage is then presented. Both methods show that the highest-level pressure fluctuations occur in the vaneless space. In addition, the propagation of the amplitudes of the blade pass frequency and of its second and third harmonics in the circumferential and flow directions was investigated. Although differences exist between large eddy simulation and unsteady Reynolds-averaged Navier-Stokes, the trend of variation in the different parts is almost the same. On the same mesh (8 million elements), large eddy simulation underestimates the pressure characteristics and agrees better with the experiments, while unsteady Reynolds-averaged Navier-Stokes overestimates them.

  5. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting to summarize the collaboration activities for FY2003 on Large Scale Computer Simulation Research was held on January 15-16, 2004 at the Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  6. Field simulations for large dipole magnets

    International Nuclear Information System (INIS)

    Lazzaro, A.; Cappuzzello, F.; Cunsolo, A.; Cavallaro, M.; Foti, A.; Khouaja, A.; Orrigo, S.E.A.; Winfield, J.S.

    2007-01-01

    The problem of describing the magnetic field of large bending magnets is addressed in relation to the requirements of modern trajectory-reconstruction techniques. The crucial question of interpolating and extrapolating fields known at a discrete number of points is analysed. For this purpose, a realistic field model of the large dipole of the MAGNEX spectrometer, obtained from three-dimensional finite-element simulations, is used. The influence of uncertainties in the measured field on the quality of the trajectory reconstruction is treated in detail. General constraints for field measurements, in terms of required resolutions, step sizes and precisions, are thus extracted.
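
    As a sketch of the interpolation step for a field known at discrete grid points, the following trilinear interpolation routine is a common baseline choice; the grid, spacing, and field values here are synthetic placeholders, not the MAGNEX field map.

        import numpy as np

        # Trilinear interpolation of a field measured on a regular grid -- a
        # minimal sketch of the interpolation step discussed above.
        def trilinear(field, origin, h, p):
            """field: (nx, ny, nz, 3) array of B on a grid with spacing h and
            corner 'origin'; p: query point. Returns the interpolated B vector."""
            u = (np.asarray(p) - origin) / h
            i = np.floor(u).astype(int)
            f = u - i
            ix, iy, iz = i
            B = np.zeros(3)
            for dx in (0, 1):
                for dy in (0, 1):
                    for dz in (0, 1):
                        w = ((1 - f[0]) if dx == 0 else f[0]) * \
                            ((1 - f[1]) if dy == 0 else f[1]) * \
                            ((1 - f[2]) if dz == 0 else f[2])
                        B += w * field[ix + dx, iy + dy, iz + dz]
            return B

        grid = np.random.rand(10, 10, 10, 3)            # placeholder field map
        print(trilinear(grid, origin=np.zeros(3), h=1.0, p=(2.3, 4.7, 5.1)))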

  7. Large-eddy simulation of sand dune morphodynamics

    Science.gov (United States)

    Khosronejad, Ali; Sotiropoulos, Fotis; St. Anthony Falls Laboratory, University of Minnesota Team

    2015-11-01

    Sand dunes are natural features that form under complex interaction between turbulent flow and bed morphodynamics. We employ a fully coupled 3D numerical model (Khosronejad and Sotiropoulos, 2014, Journal of Fluid Mechanics, 753:150-216) to perform high-resolution large-eddy simulations of turbulence and bed morphodynamics in a laboratory-scale mobile-bed channel to investigate the initiation, evolution and quasi-equilibrium of sand dunes (Venditti and Church, 2005, J. Geophysical Research, 110:F01009). We employ a curvilinear immersed boundary method along with convection-diffusion and bed-morphodynamics modules to simulate suspended-sediment and bed-load transport, respectively. The coupled simulations were carried out on a grid with more than 100 million nodes and covered about 3 hours of physical time of dune evolution. The simulations provide the first complete description of sand dune formation and long-term evolution. The geometric characteristics of the simulated dunes are shown to be in excellent agreement with observed data obtained across a broad range of scales. This work was supported by NSF Grant EAR-0120914 (as part of the National Center for Earth-Surface Dynamics). Computational resources were provided by the University of Minnesota Supercomputing Institute.

  8. Weightless environment simulation test; Mujuryo simulation shiken

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, K.; Yamamoto, T.; Kato, F. [Kawasaki Heavy Industries, Ltd., Kobe (Japan)

    1997-07-20

    Kawasaki Heavy Industries, Ltd., delivered a Weightless Environment Test System (WETS) to the National Space Development Agency of Japan in 1994. This system creates a weightless environment similar to that in space by balancing gravity and buoyancy in the water, and consists of a large water tank, facilities to supply air and cooling water to space suits worn in the water, etc. In this report, a weightless environment simulation test and the facilities to supply air and cooling water are described. In the weightless environment simulation test, the astronaut under test or in training wears a space suit quite similar to the one worn on orbit and performs EVA/IVA (extravehicular/intravehicular activities) around a JEM (Japanese Experiment Module) mockup installed in the water, verifying JEM design specifications, preparing manuals for on-orbit operations, or receiving basic space-related training. EVA weightless environment simulation test No. 3 was completed successfully in January 1997, when breathing air and cooling water were supplied to the space suit safely and reliably. 2 refs., 8 figs., 2 tabs.

  9. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    A new package for accelerating large-scale phase-field simulations was developed using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using a dedicated algorithm written in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. To test the performance of the package, the Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interactions were each solved with the GPU-resident algorithm. Comparison of the results between the solver executed on a single CPU and the one on the GPU showed that the GPU version runs up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
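
    The semi-implicit Fourier update underlying such packages can be sketched for the Allen-Cahn equation in plain NumPy (CPU) form: the stiff linear term is treated implicitly in Fourier space while the nonlinear term stays explicit. Parameters are illustrative, and this is not the GPU package itself.

        import numpy as np

        # One semi-implicit Fourier step per iteration for the Allen-Cahn
        # equation d(phi)/dt = -M * (f'(phi) - kappa * laplacian(phi)),
        # with double-well derivative f'(phi) = phi^3 - phi.
        N, dx, dt, M, kappa = 256, 1.0, 0.1, 1.0, 1.0
        k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
        kx, ky = np.meshgrid(k, k, indexing="ij")
        k2 = kx ** 2 + ky ** 2

        phi = 0.01 * np.random.randn(N, N)        # small random initial condition
        for step in range(100):
            fprime = phi ** 3 - phi               # explicit nonlinear term
            phi_hat = (np.fft.fft2(phi) - dt * M * np.fft.fft2(fprime)) \
                      / (1.0 + dt * M * kappa * k2)   # implicit linear term
            phi = np.real(np.fft.ifft2(phi_hat))

        print("phi range:", phi.min(), phi.max())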

  10. World, We Have Problems: Simulation for Large Complex, Risky Projects, and Events

    Science.gov (United States)

    Elfrey, Priscilla

    2010-01-01

    Prior to a spacewalk during the NASA STS-129 mission in November 2009, Columbia Broadcasting System (CBS) correspondent William Harwood reported that astronauts "were awakened again," as they had been the day before. Fearing that something not properly connected was causing a leak, the crew, both on the ground and in space, stopped and checked everything. The alarm proved false. The crew completed its work ahead of schedule, but the incident reminds us that correctly connecting hundreds and thousands of entities, subsystems and systems, finding leaks, loosening stuck valves, and adding replacements to very large complex systems over time does not happen magically. Major projects everywhere present similar pressures. Lives are at risk. Responsibility is heavy. Large natural and human-made disasters introduce parallel difficulties as people work across the boundaries of their countries, disciplines, languages, and cultures, facing known immediate dangers as well as the unexpected. NASA has long accepted that when humans have to go where humans cannot go, simulation is the sole solution. The Agency uses simulation to achieve consensus, reduce ambiguity and uncertainty, understand problems, make decisions, support design, do planning and troubleshooting, as well as for operations, training, testing, and evaluation. Simulation is at the heart of all such complex systems, products, projects, programs, and events. Difficult, hazardous short- and, especially, long-term activities have a persistent need for simulation, from the first insight into a possibly workable idea or answer until the final report, perhaps beyond our lifetime, is put in the archive. With simulation we create a common mental model, try out breakdowns of machinery or teamwork, and find opportunities for improvement. Lifecycle simulation proves increasingly important as risks and consequences intensify. Across the world, disasters are increasing. We anticipate more of them, as the results of global warming

  11. Assessment of Retrofitting Measures for a Large Historic Research Facility Using a Building Energy Simulation Model

    Directory of Open Access Journals (Sweden)

    Young Tae Chae

    2016-06-01

    A calibrated building simulation model was developed to assess the energy performance of a large historic research building. The complexity of space functions and operational conditions, combined with the limited availability of energy meters, makes it hard to understand end-use energy consumption in detail and to identify appropriate retrofitting options for reducing energy consumption and greenhouse gas (GHG) emissions. An energy simulation model was developed to study energy usage patterns not only at the building level but also for the internal thermal zones and system operations. The model was validated using site measurements of energy usage and a detailed audit of the internal load conditions, system operation, and space programs, to minimize the discrepancy between documented and actual operational conditions. Based on the results of the calibrated model and the end-use energy consumption, the study proposed potential energy conservation measures (ECMs) for the building envelope, HVAC system operation, and system replacement. It also evaluated each ECM in terms of both energy and utility cost saving potential to support retrofit decision-making. The study shows that the building's energy consumption was strongly dominated by the thermal requirements of the laboratory spaces. Among the ECMs, the demand management option of overriding setpoint temperatures is the most cost-effective measure.

  12. Large eddy simulation in a rotary blood pump: Viscous shear stress computation and comparison with unsteady Reynolds-averaged Navier-Stokes simulation.

    Science.gov (United States)

    Torner, Benjamin; Konnigk, Lucas; Hallier, Sebastian; Kumar, Jitendra; Witte, Matthias; Wurm, Frank-Hendrik

    2018-06-01

    Numerical flow analysis (computational fluid dynamics) in combination with blood damage prediction is an important procedure for investigating the hemocompatibility of a blood pump, since blood trauma due to shear stresses remains a problem in these devices. Today, numerical damage prediction is conducted using unsteady Reynolds-averaged Navier-Stokes simulations; investigations with large eddy simulations are rarely performed for blood pumps. Hence, the aim of this study is to examine the viscous shear stresses of a large eddy simulation in a blood pump and compare the results with an unsteady Reynolds-averaged Navier-Stokes simulation. The simulations were carried out at two operating points of a blood pump. The flow was simulated on a 100M-element mesh for the large eddy simulation and a 20M-element mesh for the unsteady Reynolds-averaged Navier-Stokes simulation. As a first step, the large eddy simulation was verified by analyzing the internal dissipative losses within the pump. Then, the pump characteristics and the mean and turbulent viscous shear stresses were compared between the two simulation methods. The verification showed that the large eddy simulation reproduces the significant portion of the dissipative losses, a global indication that the equivalent viscous shear stresses are adequately resolved. The comparison with the unsteady Reynolds-averaged Navier-Stokes simulation revealed that the hydraulic parameters were in agreement, but differences were found in the shear stresses. The results show the potential of large eddy simulation as a high-quality reference case for checking the suitability of a chosen Reynolds-averaged Navier-Stokes setup and turbulence model. Furthermore, the results suggest that large eddy simulations are superior to unsteady Reynolds-averaged Navier-Stokes simulations when instantaneous stresses are applied in blood damage prediction.

  13. Impact of large-scale tides on cosmological distortions via redshift-space power spectrum

    Science.gov (United States)

    Akitsu, Kazuyuki; Takada, Masahiro

    2018-03-01

    Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of the clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, relative to the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies, in a way that depends on the alignment between the tide, the wave vector of the small-scale modes, and the line-of-sight direction. Using the perturbation theory of structure formation, we derive the response function of the redshift-space power spectrum to a large-scale tide. We then investigate the impact of a large-scale tide on the estimation of cosmological distances and the redshift-space distortion parameter from the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than as an additional source of statistical error, and show that the degradation in the parameters is restored if we can employ the prior on the rms amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained to an accuracy better than the CDM prediction if effects up to larger wave numbers in the nonlinear regime can be included.

  14. Large eddy simulation of hydrodynamic cavitation

    Science.gov (United States)

    Bhatt, Mrugank; Mahesh, Krishnan

    2017-11-01

    Large eddy simulation is used to study sheet-to-cloud cavitation over a wedge. The mixture of water and water vapor is represented using a homogeneous mixture model. Compressible Navier-Stokes equations for the mixture quantities, along with a transport equation for the vapor mass fraction employing finite-rate mass transfer between the two phases, are solved using the numerical method of Gnanaskandan and Mahesh. The method is implemented on unstructured grids with parallel MPI capabilities. Flow over a wedge is simulated at Re = 200,000 and the performance of the homogeneous mixture model is analyzed in predicting the different regimes of sheet-to-cloud cavitation, namely incipient, transitory and periodic, as observed in the experimental investigation of Harish et al. This work is supported by the Office of Naval Research.

  15. Large Eddy Simulation of Film-Cooling Jets

    Science.gov (United States)

    Iourokina, Ioulia

    2005-11-01

    Large eddy simulation of inclined jets issuing into a turbulent boundary layer crossflow has been performed. The simulation models the film-cooling experiments of Pietrzyk et al. (J. of Turb., 1989), consisting of a large plenum feeding an array of jets inclined at 35° to the flat surface with a pitch of 3D and L/D = 3.5. The blowing ratio is 0.5 with unity density ratio. The numerical method is a hybrid combining an external compressible solver with a low-Mach-number code for the plenum and film holes. The vorticity dynamics pertinent to jet-in-crossflow interactions is analyzed and three-dimensional vortical structures are revealed. Turbulence statistics are compared to the experimental data. The turbulence production due to shearing in the crossflow is compared to that within the jet hole. The influence of three-dimensional coherent structures on wall heat transfer is investigated and strategies to increase film-cooling performance are discussed.

  16. Magnetic Testing, and Modeling, Simulation and Analysis for Space Applications

    Science.gov (United States)

    Boghosian, Mary; Narvaez, Pablo; Herman, Ray

    2012-01-01

    The Aerospace Corporation (Aerospace) and Lockheed Martin Space Systems (LMSS) participated with the Jet Propulsion Laboratory (JPL) in implementing the magnetic cleanliness program of the NASA/JPL JUNO mission. The magnetic cleanliness program was applied from early flight system development through system-level environmental testing. It required setting up a specialized magnetic test facility at LMSS for testing the flight system, and a testing program with a facility at JPL for testing system parts and subsystems. The magnetic modeling, simulation and analysis capability was set up and operated by Aerospace to provide qualitative and quantitative magnetic assessments of magnetic parts, components, and subsystems prior to, or in lieu of, magnetic tests. Because of the sensitive nature of the fields-and-particles science measurements conducted by the JUNO mission to Jupiter, stringent magnetic control specifications required a magnetic control program to ensure that the spacecraft's science magnetometers and plasma wave search coil were not contaminated by flight system magnetic interference. With Aerospace's magnetic modeling, simulation and analysis, JPL's system modeling and testing approach, and LMSS's test support, the project achieved a cost-effective path to a magnetically clean spacecraft. This paper presents lessons learned from the JUNO magnetic testing approach and from Aerospace's modeling, simulation and analysis activities, used to solve problems such as remnant magnetization and the performance of hard and soft magnetic materials within the targeted space system in applied external magnetic fields.

  17. Development of automation and robotics for space via computer graphic simulation methods

    Science.gov (United States)

    Fernandez, Ken

    1988-01-01

    A robot simulation system has been developed to perform automation and robotics system design studies. The system uses a procedure-oriented solid modeling language to produce a model of the robotic mechanism. The simulator generates the kinematics, inverse kinematics, dynamics, control, and real-time graphic simulations needed to evaluate the performance of the model. Simulation examples are presented, including a simulation of the Space Station and the design of telerobotics for the Orbital Maneuvering Vehicle.
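
    As a toy example of the kinematics such a simulator generates, the following sketch gives forward and inverse kinematics for a two-link planar arm; the link lengths and the elbow-down convention are arbitrary choices, unrelated to the actual system described.

        import numpy as np

        # Forward and inverse kinematics for a two-link planar arm; a minimal
        # sketch, with arbitrary link lengths.
        L1, L2 = 1.0, 0.8

        def forward(t1, t2):
            x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
            y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
            return x, y

        def inverse(x, y):
            c2 = (x ** 2 + y ** 2 - L1 ** 2 - L2 ** 2) / (2 * L1 * L2)
            t2 = np.arccos(np.clip(c2, -1.0, 1.0))      # elbow-down solution
            t1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(t2),
                                               L1 + L2 * np.cos(t2))
            return t1, t2

        print(forward(*inverse(1.2, 0.6)))   # recovers (1.2, 0.6)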

  18. Simulated Space Environmental Effects on Thin Film Solar Array Components

    Science.gov (United States)

    Finckenor, Miria; Carr, John; SanSoucie, Michael; Boyd, Darren; Phillips, Brandon

    2017-01-01

    The Lightweight Integrated Solar Array and Transceiver (LISA-T) experiment consists of thin-film, low-mass, low-volume solar panels. Given the variety of thin solar cells and cover materials, and the lack of environmental protection typically afforded by thick coverglasses, a series of tests was conducted in Marshall Space Flight Center's Space Environmental Effects Facility to evaluate the performance of these materials. Candidate thin polymeric films and the nitinol wires used for deployment were also exposed. The simulated space environment exposures were selected based on SSP 30425 rev. B, "Space Station Program Natural Environment Definition for Design," or AIAA Standard S-111A-2014, "Qualification and Quality Requirements for Space Solar Cells." One set of candidate materials was exposed to 5 eV atomic oxygen with concurrent vacuum ultraviolet (VUV) radiation for low Earth orbit simulation. A second set was exposed to 1 MeV electrons, a third to 50, 100, 500, and 700 keV protons, and a fourth to more than 2,000 hours of near-ultraviolet (NUV) radiation. A final set was rapidly thermally cycled between -55 °C and +125 °C. This test series provides data on enhanced power generation, particularly for small satellites with limited mass and volume resources. Performance versus mass and cost per watt is discussed.

  19. Space headache on Earth: head-down-tilted bed rest studies simulating outer-space microgravity.

    Science.gov (United States)

    van Oosterhout, W P J; Terwindt, G M; Vein, A A; Ferrari, M D

    2015-04-01

    Headache is a common symptom during space travel, both in isolation and as part of space motion syndrome. Head-down-tilted bed rest (HDTBR) studies are used to simulate outer-space microgravity on Earth and allow countermeasure interventions, such as artificial gravity and training protocols, aimed at reversing microgravity-induced physiological changes. The objectives of this article are to assess headache incidence and characteristics during HDTBR and to evaluate the effects of countermeasures. In a randomized cross-over design by the European Space Agency (ESA), 22 healthy male subjects without a history of primary headache underwent three periods of -6-degree HDTBR. In two of these periods, countermeasure protocols were added, with either centrifugation or aerobic exercise training. Headache occurrence and characteristics were assessed daily using a specially designed questionnaire. In total, 14/22 (63.6%) subjects reported a headache during at least one of the three HDTBR periods; in 12/14 (85.7%) the headache was non-specific, and in 2/14 (14.3%) migraine. Headache occurrence did not differ between HDTBR with and without countermeasures: 12/22 (54.5%) vs. 8/22 (36.4%) subjects, p = 0.20; 13/109 (11.9%) vs. 36/213 (16.9%) headache days, p = 0.24. During countermeasures, however, headaches were more often mild (p = 0.03) and had fewer associated symptoms (p = 0.008). Simulated microgravity during HDTBR induces headache episodes, mostly on the first day. Countermeasures are useful in reducing headache severity and associated symptoms. Reversible, microgravity-induced cephalic fluid shift may cause headache, also on Earth. HDTBR can be used to study space headache on Earth.

  20. Large anterior temporal Virchow-Robin spaces: unique MR imaging features

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Anthony T. [Monash University, Neuroradiology Service, Monash Imaging, Monash Health, Melbourne, Victoria (Australia); Chandra, Ronil V. [Monash University, Neuroradiology Service, Monash Imaging, Monash Health, Melbourne, Victoria (Australia); Monash University, Department of Surgery, Faculty of Medicine, Nursing and Health Sciences, Melbourne (Australia); Trost, Nicholas M. [St Vincent's Hospital, Neuroradiology Service, Melbourne (Australia); McKelvie, Penelope A. [St Vincent's Hospital, Anatomical Pathology, Melbourne (Australia); Stuckey, Stephen L. [Monash University, Neuroradiology Service, Monash Imaging, Monash Health, Melbourne, Victoria (Australia); Monash University, Southern Clinical School, Faculty of Medicine, Nursing and Health Sciences, Melbourne (Australia)

    2015-05-01

    Large Virchow-Robin (VR) spaces may mimic cystic tumor. The anterior temporal subcortical white matter is a recently described preferential location, with only 18 reported cases. Our aim was to identify unique MR features that could increase prospective diagnostic confidence. Thirty-nine cases were identified between November 2003 and February 2014. Demographic and clinical data and the initial radiological report were retrospectively reviewed. Two neuroradiologists reviewed all MR imaging; a neuropathologist reviewed the histological data. The median age was 58 years (range 24-86 years); the majority (69%) were female. There were no clinical symptoms directly referable to the lesion. Two thirds were considered to be VR spaces in the initial radiological report. Mean maximal size was 9 mm (range 5-17 mm); the majority (79%) had perilesional T2 or fluid-attenuated inversion recovery (FLAIR) hyperintensity. The following were identified as potentially unique MR features: focal cortical distortion by an adjacent branch of the middle cerebral artery (92%), smaller adjacent VR spaces (26%), and a contiguous cerebrospinal fluid (CSF) intensity tract (21%). Surgery was performed in three asymptomatic patients; histopathology confirmed VR spaces. The unique MR features were retrospectively identified in all three patients. Large anterior temporal lobe VR spaces commonly demonstrate perilesional T2 or FLAIR signal and can be misdiagnosed as cystic tumor. Potentially unique MR features that could increase prospective diagnostic confidence include focal cortical distortion by an adjacent branch of the middle cerebral artery, smaller adjacent VR spaces, and a contiguous CSF intensity tract. (orig.)

  1. Optimizing grade-control drillhole spacing with conditional simulations

    Directory of Open Access Journals (Sweden)

    Adrian Martínez-Vargas

    2017-01-01

    This paper summarizes a method to determine the optimum spacing of grade-control drillholes drilled with reverse circulation. The optimum drillhole spacing was defined as the spacing whose drilling cost equals the cost of misclassifying ore and waste in selective mining units (SMUs). The misclassification cost for a given drillhole spacing is the cost of processing waste misclassified as ore (Type I error) plus the value of ore misclassified as waste (Type II error). Type I and Type II errors were deduced by comparing true and estimated grades at the SMUs against a cutoff grade value, assuming free ore selection. True grades at the SMUs and grades at drillhole samples were generated with conditional simulations, and a set of estimated grades at the SMUs, one per drillhole spacing, was generated with ordinary kriging. This method was used to determine the optimum drillhole spacing in a gold deposit. The results showed that the misclassification cost is sensitive to extreme block values, which tend to be overrepresented. Capping lost SMU values and implementing diggability constraints were recommended to improve calculation of the total misclassification cost.
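
    The misclassification-cost calculation can be sketched as follows, with synthetic stand-ins for the conditionally simulated true grades and the kriged estimates at one candidate spacing; all grades, costs, and the cutoff are invented for illustration.

        import numpy as np

        # Type I / Type II misclassification cost for one candidate spacing,
        # with synthetic grades standing in for the simulated/kriged values.
        rng = np.random.default_rng(0)
        true_grade = rng.lognormal(0.0, 0.8, 10000)            # g/t, synthetic truth
        estimate = true_grade * rng.lognormal(0.0, 0.3, 10000) # noisy kriging proxy

        cutoff = 1.0            # g/t cutoff grade (assumed)
        cost_processing = 20.0  # $ per SMU of waste sent to the mill (Type I)
        value_per_grade = 30.0  # $ per g/t of ore lost to the dump (Type II)

        type1 = (estimate >= cutoff) & (true_grade < cutoff)   # waste called ore
        type2 = (estimate < cutoff) & (true_grade >= cutoff)   # ore called waste
        cost = (cost_processing * type1.sum()
                + value_per_grade * true_grade[type2].sum())
        print("misclassification cost for this spacing: $%.0f" % cost)
        # Repeating this per candidate spacing, the optimum lies where this
        # cost curve meets the drilling cost curve.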

  2. Lattice models for large-scale simulations of coherent wave scattering

    Science.gov (United States)

    Wang, Shumin; Teixeira, Fernando L.

    2004-01-01

    Lattice approximations for partial differential equations describing physical phenomena are commonly used for the numerical simulation of many problems otherwise intractable by pure analytical approaches. The discretization inevitably leads to many of the original symmetries being broken or modified. In the case of Maxwell's equations, for example, the invariance and isotropy of the speed of light in vacuum are invariably lost because of the so-called grid dispersion. Since it is a cumulative effect, grid dispersion is particularly harmful to the accuracy of large-scale simulations of scattering problems. Grid dispersion is usually combated by either increasing the lattice resolution or employing higher-order schemes with larger stencils for the space and time derivatives. Both alternatives increase the computational cost of simulating a problem of a given physical size. Here, we introduce a general approach to developing lattice approximations with reduced grid-dispersion error for a given stencil (and hence at no additional computational cost). The present approach is based on first obtaining stencil coefficients in the Fourier domain that minimize the maximum grid-dispersion error for wave propagation in all directions (in the minimax sense). The resulting coefficients are then expanded into a Taylor series in the frequency variable and incorporated into time-domain (update) equations after an inverse Fourier transformation. Maximally flat (Butterworth) or Chebyshev filters are subsequently used to minimize wave-speed variations over a given frequency range of interest. The use of such filters also allows the grid-dispersion characteristics to be adjusted so as to minimize not only the local dispersion error but also the accumulated phase error over a frequency range of interest.
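
    For the one-dimensional second-order (Yee-type) stencil, the grid-dispersion error described above follows directly from its dispersion relation sin(w*dt/2) = S*sin(k*dx/2), with Courant number S = c*dt/dx; the short script below evaluates the phase-velocity error at several grid resolutions. It illustrates the standard scheme's error, not the optimized minimax stencils proposed in the paper.

        import numpy as np

        # Numerical phase-velocity error of the standard second-order stencil
        # in 1D, from its dispersion relation sin(w*dt/2) = S*sin(k*dx/2).
        c, S = 1.0, 0.5                     # wave speed, Courant number S = c*dt/dx
        ppw = np.array([10, 20, 40, 80])    # grid points per wavelength
        k_dx = 2 * np.pi / ppw              # k*dx at each resolution
        w_dt = 2 * np.arcsin(S * np.sin(k_dx / 2))
        v_num = c * w_dt / (S * k_dx)       # numerical phase velocity
        for n, err in zip(ppw, 1 - v_num / c):
            print("%3d points/wavelength: relative velocity error %.2e" % (n, err))
        # The error falls roughly as 1/ppw^2 and accumulates with distance,
        # which is why electrically large simulations suffer most.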

  3. High Level Architecture Distributed Space System Simulation for Simulation Interoperability Standards Organization Simulation Smackdown

    Science.gov (United States)

    Li, Zuqun

    2011-01-01

    Modeling and simulation plays a very important role in mission design. It not only reduces design cost, but also prepares astronauts for their mission tasks. The SISO Smackdown is a simulation event that promotes modeling and simulation in academia. The scenario of this year's Smackdown was a lunar base supply mission. The mission objective was to transfer Earth supply cargo to a lunar base supply depot and retrieve He-3 to take back to Earth. Federates for this scenario included the environment federate, an Earth-Moon transfer vehicle, a lunar shuttle, a lunar rover, a supply depot, a mobile ISRU plant, an exploratory hopper, and a communication satellite. These federates were built by teams from around the world, including teams from MIT, JSC, the University of Alabama in Huntsville, the University of Bordeaux in France, and the University of Genoa in Italy. This paper focuses on the lunar shuttle federate, which was programmed by the USRP intern team at NASA JSC. The shuttle was responsible for providing transportation between lunar orbit and the lunar surface. The lunar shuttle federate was built using the NASA standard simulation package called Trick, and it was extended with HLA functions using TrickHLA. The HLA functions of the lunar shuttle federate include sending and receiving interactions, publishing and subscribing attributes, and packing and unpacking fixed record data. The dynamics model of the lunar shuttle has three degrees of freedom, and its state propagation obeyed two-body dynamics. The descending trajectory of the lunar shuttle was designed by first defining a unique descending orbit in 2D space, and then defining a unique orbit in 3D space under the assumption of a non-rotating Moon. Finally, this assumption was removed to define the initial position of the lunar shuttle, so that it would begin descending one second after joining the execution. VPN software from SonicWall was used to connect federates with the RTI during testing
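
    A minimal sketch of the two-body state propagation that such a federate performs is given below, using an RK4 integrator and an assumed 100 km circular lunar orbit; the actual federate was built in Trick, and its integrator and initial conditions are not documented here.

        import numpy as np

        # Two-body propagation with RK4; the orbit and integrator are assumed
        # for illustration, not taken from the federate's Trick implementation.
        MU_MOON = 4.9048695e12            # m^3/s^2, lunar gravitational parameter

        def deriv(s):
            r, v = s[:3], s[3:]
            a = -MU_MOON * r / np.linalg.norm(r) ** 3   # two-body acceleration
            return np.concatenate([v, a])

        def rk4_step(s, dt):
            k1 = deriv(s)
            k2 = deriv(s + 0.5 * dt * k1)
            k3 = deriv(s + 0.5 * dt * k2)
            k4 = deriv(s + dt * k3)
            return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

        r0 = 1.8374e6                     # ~100 km circular lunar orbit [m]
        v0 = np.sqrt(MU_MOON / r0)
        state = np.array([r0, 0.0, 0.0, 0.0, v0, 0.0])
        steps = int(2 * np.pi * np.sqrt(r0 ** 3 / MU_MOON))  # one period, dt = 1 s
        for _ in range(steps):
            state = rk4_step(state, 1.0)
        print("radius after one orbit: %.1f km" % (np.linalg.norm(state[:3]) / 1e3))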

  4. Space-charge-dominated beam dynamics simulations using the massively parallel processors (MPPs) of the Cray T3D

    International Nuclear Information System (INIS)

    Liu, H.

    1996-01-01

    Computer simulations using the multi-particle code PARMELA with a three-dimensional, point-by-point space charge algorithm have turned out to be very helpful in supporting injector commissioning and operations at Thomas Jefferson National Accelerator Facility (Jefferson Lab, formerly called CEBAF). However, this algorithm, whose CPU time scales as N², is very time-consuming when N, the number of macro-particles, is large. It is therefore attractive to use massively parallel processors (MPPs) to speed up the simulations. Motivated by this, the authors modified the space charge subroutine to use the MPPs of the Cray T3D. The techniques used to parallelize and optimize the code on the T3D are discussed in this paper. The performance of the code on the T3D is examined in comparison with a Cray C90 parallel vector processing supercomputer and an HP 735/15 high-end workstation.
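
    The N² scaling arises because every macro-particle receives a kick from every other one. The vectorized sketch below (unit constants, synthetic bunch) shows the pairwise sum; splitting the rows of the N x N interaction matrix across processors is one natural parallelization strategy, broadly in the spirit of what the paper describes.

        import numpy as np

        # Why point-by-point space charge is O(N^2): each macro-particle gets a
        # kick from every other one. Unit constants; positions are synthetic.
        def pairwise_kicks(pos, q=1.0):
            """pos: (N, 3) positions; returns (N, 3) Coulomb-like kicks."""
            d = pos[:, None, :] - pos[None, :, :]   # (N, N, 3) separations
            r2 = (d ** 2).sum(-1) + 1e-12           # softened squared distance
            np.fill_diagonal(r2, np.inf)            # no self-interaction
            return q * (d / r2[..., None] ** 1.5).sum(axis=1)

        pos = np.random.rand(1000, 3)
        kicks = pairwise_kicks(pos)                 # 10^6 pair interactions
        print(kicks.shape)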

  5. Monte Carlo simulation of a medical linear accelerator for generation of phase spaces

    International Nuclear Information System (INIS)

    Oliveira, Alex C.H.; Santana, Marcelo G.; Lima, Fernando R.A.; Vieira, Jose W.

    2013-01-01

    Radiotherapy uses various techniques and equipment for the local treatment of cancer. The equipment most often used in radiotherapy for patient irradiation is the linear accelerator (Linac), which produces X-ray beams in the range of 5-30 MeV. Among the many algorithms developed over recent years for the evaluation of dose distributions in radiotherapy planning, those based on Monte Carlo (MC) methods have proven very promising in terms of accuracy, providing more realistic results. MC methods allow simulating the transport of ionizing radiation in complex configurations, such as detectors, Linacs, phantoms, etc. MC simulations for applications in radiotherapy are divided into two parts. In the first, the production of the radiation beam by the Linac is simulated and a phase space is generated. The phase space contains information such as the energy, position, and direction of millions of particles (photons, electrons, positrons). In the second part, the transport of particles sampled from the phase space through a given irradiation field configuration is simulated to assess the dose distribution in the patient (or phantom). The objective of this work is to create a computational model of a 6 MeV Linac using the MC code Geant4 for the generation of phase spaces. From the phase space, information was obtained to assess beam quality (photon and electron spectra and the two-dimensional energy distribution) and to analyze the physical processes involved in producing the beam. (author)
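
    A phase-space file is essentially a table with one row per particle crossing a scoring plane. The sketch below defines a generic record layout and samples from it; the field names and types are assumptions for illustration, not the exact layout of Geant4 output or of any standard (e.g. IAEA) phase-space format.

        import numpy as np

        # Generic phase-space record layout: one row per particle crossing the
        # scoring plane. Field names/types are illustrative assumptions.
        phase_space_dtype = np.dtype([
            ("kind", "i1"),                         # 0 photon, 1 electron, 2 positron
            ("energy", "f4"),                       # MeV
            ("x", "f4"), ("y", "f4"),               # position on the plane [cm]
            ("u", "f4"), ("v", "f4"), ("w", "f4"),  # direction cosines
            ("weight", "f4"),                       # statistical weight
        ])

        particles = np.zeros(1_000_000, dtype=phase_space_dtype)  # placeholder store
        # Second stage: transport a random sample drawn from the stored phase space.
        rng = np.random.default_rng(0)
        sample = particles[rng.integers(0, len(particles), size=10_000)]
        print(sample["energy"].mean())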

  6. The General-Use Nodal Network Solver (GUNNS) Modeling Package for Space Vehicle Flow System Simulation

    Science.gov (United States)

    Harvey, Jason; Moore, Michael

    2013-01-01

    The General-Use Nodal Network Solver (GUNNS) is a modeling software package that combines nodal analysis and the hydraulic-electric analogy to simulate fluid, electrical, and thermal flow systems. GUNNS is developed by L-3 Communications under the TS21 (Training Systems for the 21st Century) project for NASA Johnson Space Center (JSC), primarily for use in space vehicle training simulators at JSC. It has sufficient compactness and fidelity to model the fluid, electrical, and thermal aspects of space vehicles in real-time simulations running on commodity workstations, for vehicle crew and flight controller training. It has a reusable and flexible component and system design, and a Graphical User Interface (GUI), providing capability for rapid GUI-based simulator development, ease of maintenance, and associated cost savings. GUNNS is optimized for NASA's Trick simulation environment, but can be run independently of Trick.
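
    The nodal-analysis core of such a solver reduces to assembling a conductance (admittance) matrix from the network links and solving for node potentials. The sketch below solves a hypothetical three-node network; it illustrates the hydraulic-electric analogy only and is not GUNNS code.

        import numpy as np

        # Nodal analysis: links of conductance G connect nodes, and node
        # potentials (pressure, voltage, or temperature) solve [G]{p} = {w}.
        links = [(0, 1, 2.0), (1, 2, 1.0)]    # (node_a, node_b, conductance)
        sources = {0: 5.0}                    # fixed flow injected at node 0
        ground = 2                            # node held at potential 0

        n = 3
        G = np.zeros((n, n))
        w = np.zeros(n)
        for a, b, g in links:                 # assemble the admittance matrix
            G[a, a] += g; G[b, b] += g
            G[a, b] -= g; G[b, a] -= g
        for node, flow in sources.items():
            w[node] += flow
        keep = [i for i in range(n) if i != ground]   # eliminate the ground node
        p = np.zeros(n)
        p[keep] = np.linalg.solve(G[np.ix_(keep, keep)], w[keep])
        print("node potentials:", p)          # link flow follows g * (p_a - p_b)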

  7. Believability in simplifications of large scale physically based simulation

    KAUST Repository

    Han, Donghui; Hsu, Shu-wei; McNamara, Ann; Keyser, John

    2013-01-01

    We verify two hypotheses which are assumed to be true only intuitively in many rigid body simulations. I: In large-scale rigid body simulation, viewers may not be able to perceive distortion incurred by an approximated simulation method. II: Fixing objects under a pile of objects does not affect the visual plausibility. The visual plausibility of scenarios simulated with these hypotheses assumed true is measured using subjective ratings from viewers. As expected, analysis of the results supports the truthfulness of the hypotheses under certain simulation environments. However, our analysis identified four factors which may affect the validity of these hypotheses: the number of collisions simulated simultaneously, the homogeneity of colliding object pairs, the distance from the simulated scene to the camera position, and the simulation method used. We also tried to find an objective metric of visual plausibility in eye-tracking data collected from viewers. Analysis of these results indicates that eye-tracking does not provide a suitable proxy for measuring plausibility or distinguishing between types of simulations. © 2013 ACM.

  8. Large-scale simulations of plastic neural networks on neuromorphic hardware

    Directory of Open Access Journals (Sweden)

    James Courtney Knight

    2016-04-01

    Full Text Available SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning, since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses considerably more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
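
    As a loose illustration of why BCPNN synapses are expensive, the clock-driven sketch below updates the multiple per-synapse state variables (fast traces and slow probability traces); the weight is the log of the joint trace over the product of the marginals. Time constants and the Euler scheme are assumptions, and the SpiNNaker implementation described above is event-driven rather than clock-driven.

        import numpy as np

        def bcpnn_step(si, sj, state, dt=1.0, tau_z=10.0, tau_p=1000.0):
            """One Euler step of a simplified, clock-driven BCPNN synapse.

            si, sj: pre/post spikes (0 or 1). state holds fast traces zi, zj
            and slow probability traces pi, pj, pij; the weight is
            w = log(pij / (pi * pj)).
            """
            zi, zj, pi, pj, pij = state
            zi += dt * (si - zi) / tau_z
            zj += dt * (sj - zj) / tau_z
            pi += dt * (zi - pi) / tau_p
            pj += dt * (zj - pj) / tau_p
            pij += dt * (zi * zj - pij) / tau_p
            w = np.log((pij + 1e-12) / (pi * pj + 1e-12))
            return (zi, zj, pi, pj, pij), w

        state = (0.0, 0.0, 0.01, 0.01, 0.0001)
        for t in range(1000):                 # perfectly correlated toy input
            s = float(np.random.rand() < 0.05)
            state, w = bcpnn_step(s, s, state)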

  9. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI - PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON

    International Nuclear Information System (INIS)

    BEEBE - WANG, J.; LUCCIO, A.U.; D IMPERIO, N.; MACHIDA, S.

    2002-01-01

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effects is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated into various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.

  11. Simulating and assessing boson sampling experiments with phase-space representations

    Science.gov (United States)

    Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.

    2018-04-01

    The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and the potential for obtaining a quantum computer that solves problems believed to be classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply it to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for the assessment of boson sampling devices.

  12. Large-eddy simulation of the temporal mixing layer using the Clark model

    NARCIS (Netherlands)

    Vreman, A.W.; Geurts, B.J.; Kuerten, J.G.M.

    1996-01-01

    The Clark model for the turbulent stress tensor in large-eddy simulation is investigated from a theoretical and computational point of view. In order to be applicable to compressible turbulent flows, the Clark model has been reformulated. Actual large-eddy simulations of a weakly compressible, temporal mixing layer are then performed with the reformulated model.

  13. Modeling and Simulation for Multi-Missions Space Exploration Vehicle

    Science.gov (United States)

    Chang, Max

    2011-01-01

    Asteroids and Near-Earth Objects [NEOs] are of great interest for future space missions. The Multi-Mission Space Exploration Vehicle [MMSEV] is being considered for future Near-Earth Object missions and requires detailed planning and study of its Guidance, Navigation, and Control [GNC]. A possible mission of the MMSEV to a NEO would be to navigate the spacecraft to a stationary orbit with respect to the rotating asteroid and then anchor into the surface of the asteroid with robotic arms. The Dynamics and Real-Time Simulation [DARTS] laboratory develops reusable models and simulations for the design and analysis of missions. In this paper, the development of guidance and anchoring models is presented together with their role in achieving mission objectives and their relationships to other parts of the simulation. One important aspect of guidance is developing methods to represent the evolution of kinematic frames related to the tasks to be achieved by the spacecraft and its robot arms. In this paper, we compare various types of mathematical interpolation methods for position and quaternion frames. Subsequent work will analyze the spacecraft guidance system with different movements of the arms. With the analyzed data, the guidance system can be adjusted to minimize the errors in performing precision maneuvers.
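
    Spherical linear interpolation (slerp) is one of the standard quaternion interpolation methods of the kind such comparisons cover; a minimal sketch follows (the near-parallel fallback threshold is an assumption).

        import numpy as np

        def slerp(q0, q1, t):
            """Spherical linear interpolation between unit quaternions q0, q1.

            Inputs are length-4 numpy arrays; t runs from 0 (q0) to 1 (q1).
            Linear interpolation of positions is the analogous choice for
            the translational part of a frame.
            """
            q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
            dot = np.dot(q0, q1)
            if dot < 0.0:                 # take the shorter arc
                q1, dot = -q1, -dot
            if dot > 0.9995:              # nearly parallel: fall back to lerp
                q = q0 + t * (q1 - q0)
                return q / np.linalg.norm(q)
            theta = np.arccos(dot)
            return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)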

  14. Effect of empty buckets on coupled bunch instability in RHIC Booster: Longitudinal phase-space simulation

    International Nuclear Information System (INIS)

    Bogacz, S.A.; Griffin, J.E.; Khiari, F.Z.

    1988-05-01

    Excitation of large-amplitude coherent dipole bunch oscillations by beam-induced voltages in spurious narrow resonances is simulated using a longitudinal phase-space tracking code (ESME). Simulation of the developing instability in a high-intensity proton beam driven by a spurious parasitic resonance of the rf cavities allows one to estimate the final longitudinal emittance of the beam at the end of the cycle, which puts serious limitations on machine performance. The growth of the coupled bunch modes is significantly enhanced if a gap of missing bunches is present, which is an inherent feature of high-intensity proton machines. A strong transient excitation of the parasitic resonance by the Fourier components of the beam spectrum resulting from the presence of the gap is suggested as a possible mechanism of this enhancement. 10 refs., 4 figs., 1 tab

  15. An Improved Treatment of AC Space Charge Fields in Large Signal Simulation Codes

    National Research Council Canada - National Science Library

    Dialetis, D; Chernin, D; Antonsen, Jr., T. M; Levush, B

    2006-01-01

    An accurate representation of the AC space charge electric field is required in order to be able to predict the performance of linear beam tubes, including TWT's and klystrons, using a steady state...

  16. Low-Power Large-Area Radiation Detector for Space Science Measurements

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this task is to develop low-power, large-area detectors from SiC, taking advantage of very low thermal noise characteristics and high radiation...

  17. Frequency Domain Modeling and Simulation of DC Power Electronic Systems Using Harmonic State Space Method

    DEFF Research Database (Denmark)

    Kwon, Jun Bum; Wang, Xiongfei; Blaabjerg, Frede

    2017-01-01

    For efficiency and simplicity, dc power electronic systems are widely used in a variety of applications such as electric vehicles, ships, aircraft and homes. In these systems there can be a number of dynamic interactions and frequency couplings between networks with different switching frequencies, as well as harmonics from ac-dc converters, which makes harmonics and frequency coupling both a problem for the ac system and a challenge for the dc system. This paper presents a modeling and simulation method for a large dc power electronic system by using Harmonic State Space (HSS) modeling.

  18. Prime focus architectures for large space telescopes: reduce surfaces to save cost

    Science.gov (United States)

    Breckinridge, J. B.; Lillie, C. F.

    2016-07-01

    Conceptual architectures are now being developed to identify future directions for post-JWST large space telescope systems operating in the UV, optical and near-IR regions of the spectrum. Here we show that the cost of optical surfaces within large-aperture telescope/instrument systems can exceed $100M per reflection when expressed in terms of the aperture increase needed to overcome internal absorption loss. We recommend a program in innovative optical design to minimize the number of surfaces by considering multiple functions for mirrors. An example is given using Rowland circle imaging spectrometer systems for UV space science. With few exceptions, current space telescope architectures are based on systems optimized for ground-based astronomy. Both HST and JWST are classical "Cassegrain" telescopes derived from the ground-based tradition of co-locating the massive primary mirror and the instruments at the same end of the metrology structure. This requirement derives from the dual need to minimize observatory dome size and cost in the presence of the Earth's 1-g gravitational field. Space telescopes, however, function in the zero gravity of space, and the 1-g constraint is relieved to the advantage of astronomers. Here we suggest that a prime focus large-aperture telescope system in space may potentially have higher transmittance, better pointing, improved thermal and structural control, less internal polarization and broader wavelength coverage than Cassegrain telescopes. An example is given showing how UV astronomy telescopes use single optical elements for multiple functions and therefore have a minimum number of reflections.

  19. Functional requirements for design of the Space Ultrareliable Modular Computer (SUMC) system simulator

    Science.gov (United States)

    Curran, R. T.; Hornfeck, W. A.

    1972-01-01

    The functional requirements for the design of an interpretive simulator for the space ultrareliable modular computer (SUMC) are presented. A review of applicable existing computer simulations is included along with constraints on the SUMC simulator functional design. Input requirements, output requirements, and language requirements for the simulator are discussed in terms of a SUMC configuration which may vary according to the application.

  20. Large breast compressions: Observations and evaluation of simulations

    Energy Technology Data Exchange (ETDEWEB)

    Tanner, Christine; White, Mark; Guarino, Salvatore; Hall-Craggs, Margaret A.; Douek, Michael; Hawkes, David J. [Centre of Medical Image Computing, UCL, London WC1E 6BT, United Kingdom; Computer Vision Laboratory, ETH Zuerich, 8092 Zuerich, Switzerland; Department of Surgery, UCL, London W1P 7LD, United Kingdom; Department of Imaging, UCL Hospital, London NW1 2BU, United Kingdom]

    2011-02-15

    Purpose: Several methods have been proposed to simulate large breast compressions such as those occurring during x-ray mammography. However, the evaluation of these methods against real data is rare. The aim of this study is to learn more about the deformation behavior of breasts and to assess a simulation method. Methods: Magnetic resonance (MR) images of 11 breasts before and after applying a relatively large in vivo compression in the medial direction were acquired. Nonrigid registration was employed to study the deformation behavior. Optimal material properties for finite element modeling were determined and their prediction performance was assessed. The realism of simulated compressions was evaluated by comparing the breast shapes on simulated and real mammograms. Results: Following image registration, 19 breast compressions from 8 women were studied. An anisotropic deformation behavior, with a reduced elongation in the anterior-posterior direction and an increased stretch in the inferior-superior direction, was observed. Using finite element simulations, the performance of isotropic and transverse isotropic material models to predict the displacement of internal landmarks was compared. Isotropic materials reduced the mean displacement error of the landmarks from 23.3 to 4.7 mm, on average, after optimizing material properties with respect to breast surface alignment and image similarity. Statistically significantly smaller errors were achieved with transverse isotropic materials (4.1 mm, P=0.0045). Homogeneous material models performed substantially worse (transverse isotropic: 5.5 mm; isotropic: 6.7 mm). Of the parameters varied, the amount of anisotropy had the greatest influence on the results. Optimal material properties varied less when grouped by patient rather than by compression magnitude (mean: 0.72 vs 1.44). Employing these optimal materials for simulating mammograms from ten MR breast images of a different cohort resulted in more realistic breast shapes.

  3. The daylighting dashboard - A simulation-based design analysis for daylit spaces

    Energy Technology Data Exchange (ETDEWEB)

    Reinhart, Christoph F. [Harvard University, Graduate School of Design, 48 Quincy Street, Cambridge, MA 02138 (United States); Wienold, Jan [Fraunhofer Institute for Solar Energy Systems, Heidenhofstrasse 2, 79110 Freiburg (Germany)

    2011-02-15

    This paper presents a vision of how state-of-the-art computer-based analysis techniques can be effectively used during the design of daylit spaces. Following a review of recent advances in dynamic daylight computation capabilities, climate-based daylighting metrics, occupant behavior and glare analysis, a fully integrated design analysis method is introduced that simultaneously considers annual daylight availability, visual comfort and energy use: Annual daylight glare probability profiles are combined with an occupant behavior model in order to determine annual shading profiles and visual comfort conditions throughout a space. The shading profiles are then used to calculate daylight autonomy plots, energy loads, operational energy costs and greenhouse gas emissions. The paper then shows how simulation results for a sidelit space can be visually presented to simulation non-experts using the concept of a daylighting dashboard. The paper ends with a discussion of how the daylighting dashboard could be practically implemented using technologies that are available today. (author)
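
    For illustration, daylight autonomy at a sensor point reduces to the fraction of occupied hours in which the daylight illuminance meets a target. The sketch below assumes a common 300 lx threshold, a hypothetical occupancy schedule and synthetic illuminance data, not values from the paper.

        import numpy as np

        def daylight_autonomy(illuminance, occupied, threshold=300.0):
            """Fraction of occupied hours meeting the illuminance target.

            illuminance: annual hourly lux values at one sensor point (8760,);
            occupied: boolean occupancy mask of the same length.
            """
            ill = np.asarray(illuminance)[np.asarray(occupied)]
            return float(np.mean(ill >= threshold))

        hours = np.arange(8760) % 24
        occ = (hours >= 8) & (hours < 18)      # hypothetical 8:00-18:00 schedule
        lux = np.clip(np.random.default_rng(0).normal(500.0, 300.0, 8760), 0.0, None)
        print(daylight_autonomy(lux, occ))     # ~0.75 for this synthetic year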

  4. An IBM PC-based math model for space station solar array simulation

    Science.gov (United States)

    Emanuel, E. M.

    1986-01-01

    This report discusses and documents the design, development, and verification of a microcomputer-based solar cell math model for simulating the Space Station's solar array Initial Operational Capability (IOC) reference configuration. The array model is developed utilizing a linear solar cell dc math model requiring only five input parameters: short circuit current, open circuit voltage, maximum power voltage, maximum power current, and orbit inclination. The accuracy of this model is investigated using actual on-orbit solar array electrical data derived from the Solar Array Flight Experiment/Dynamic Augmentation Experiment (SAFE/DAE) conducted during the STS-41D mission. This simulator provides real-time simulated performance data during the steady-state portion of the Space Station orbit (i.e., the array fully exposed to sunlight). Eclipse-to-sunlight transients and shadowing effects are not included in the analysis, but are discussed briefly. Integrating the Solar Array Simulator (SAS) into the Power Management and Distribution (PMAD) subsystem is also discussed.
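
    One plausible reading of a "linear solar cell dc math model" built from the electrical input parameters named above is a piecewise-linear I-V curve through (0, Isc), (Vmp, Imp) and (Voc, 0), as sketched below; the parameter values are placeholders, not Space Station IOC cell data.

        import numpy as np

        def cell_current(v, i_sc=2.66, v_oc=0.55, i_mp=2.46, v_mp=0.45):
            """Piecewise-linear I-V curve through (0, Isc), (Vmp, Imp), (Voc, 0).

            v is the cell voltage (scalar or array); returns current in the
            same shape, clipped at zero beyond open circuit.
            """
            v = np.asarray(v, dtype=float)
            low = i_sc + (i_mp - i_sc) * v / v_mp             # 0 <= v <= Vmp
            high = i_mp * (v_oc - v) / (v_oc - v_mp)          # Vmp < v <= Voc
            return np.where(v <= v_mp, low, np.clip(high, 0.0, None))

        v = np.linspace(0.0, 0.55, 12)
        p = v * cell_current(v)    # power curve; the peak sits near Vmp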

  5. Power conditioning for large dc motors for space flight applications

    Science.gov (United States)

    Veatch, Martin S.; Anderson, Paul M.; Eason, Douglas J.; Landis, David M.

    1988-01-01

    The design and performance of a prototype power-conditioning system for use with large brushless dc motors on NASA space missions are discussed in detail and illustrated with extensive diagrams, drawings, and graphs. The 5-kW 8-phase parallel module evaluated here would be suitable for use in the Space Shuttle Orbiter cargo bay. A current-balancing magnetic assembly with low distributed inductance permits high-speed current switching from a low-voltage bus as well as current balancing between parallel MOSFETs.

  6. Enhanced 2D-DOA Estimation for Large Spacing Three-Parallel Uniform Linear Arrays

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2018-01-01

    Full Text Available An enhanced two-dimensional direction of arrival (2D-DOA) estimation algorithm for large-spacing three-parallel uniform linear arrays (ULAs) is proposed in this paper. Firstly, we use the propagator method (PM) to get a highly accurate but ambiguous estimate of the directional cosine. Then, we use the relationship between the directional cosines to eliminate the ambiguity. This algorithm not only makes use of the elements of the three-parallel ULAs but also utilizes the connection between the directional cosines to improve the estimation accuracy. Besides, it has satisfactory estimation performance when the elevation angle is between 70° and 90°, and it can automatically pair the estimated azimuth and elevation angles. Furthermore, it has low complexity, without applying any eigenvalue decomposition (EVD) or singular value decomposition (SVD) to the covariance matrix. Simulation results demonstrate the effectiveness of our proposed algorithm.

  7. Research on Francis Turbine Modeling for Large Disturbance Hydropower Station Transient Process Simulation

    Directory of Open Access Journals (Sweden)

    Guangtao Zhang

    2015-01-01

    Full Text Available In the field of hydropower station transient process simulation (HSTPS), the characteristic graph-based iterative hydroturbine model (CGIHM) has been widely used when large-disturbance hydroturbine modeling is involved. However, with this model, iteration must be used to calculate speed and pressure, and slow convergence or non-convergence may be encountered for reasons such as a special characteristic graph profile, an inappropriate iterative algorithm, or an inappropriate interpolation algorithm. Other conventional large-disturbance hydroturbine models also have disadvantages and are difficult to use widely in HSTPS. Therefore, to obtain an accurate simulation result, a simple method for hydroturbine modeling is proposed. With this method, both the initial operating point and the transfer coefficients of a linear hydroturbine model keep changing during simulation. Hence, it can reflect the nonlinearity of the hydroturbine and be used for Francis turbine simulation under large-disturbance conditions. To validate the proposed method, both large-disturbance and small-disturbance simulations of a single hydro-unit supplying a resistive, isolated load were conducted. It was shown that the simulation results are consistent with those of field tests. Consequently, the proposed method is an attractive option for HSTPS involving Francis turbine modeling under large-disturbance conditions.

  8. Aero-Acoustic Modelling using Large Eddy Simulation

    International Nuclear Information System (INIS)

    Shen, W Z; Soerensen, J N

    2007-01-01

    The splitting technique for aero-acoustic computations is extended to simulate three-dimensional flow and acoustic waves from airfoils. The aero-acoustic model is coupled to a sub-grid-scale turbulence model for Large-Eddy Simulations. In the first test case, the model is applied to compute laminar flow past a NACA 0015 airfoil at a Reynolds number of 800, a Mach number of 0.2 and an angle of attack of 20 deg. The model is then applied to compute turbulent flow past a NACA 0015 airfoil at a Reynolds number of 100 000, a Mach number of 0.2 and an angle of attack of 20 deg. The predicted noise spectrum is compared to experimental data

  9. ML-Space: Hybrid Spatial Gillespie and Particle Simulation of Multi-Level Rule-Based Models in Cell Biology.

    Science.gov (United States)

    Bittig, Arne T; Uhrmacher, Adelinde M

    2017-01-01

    Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations via the spatial Stochastic Simulation Algorithm to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically and hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and allows the spatial resolution of models to be adapted easily.

  10. Large eddy simulation of breaking waves

    DEFF Research Database (Denmark)

    Christensen, Erik Damgaard; Deigaard, Rolf

    2001-01-01

    A numerical model is used to simulate wave breaking, the large-scale water motions and the turbulence induced by the breaking process. The model consists of a free-surface model using the surface markers method combined with a three-dimensional model that solves the flow equations. The incoming waves are specified by a flux boundary condition. The waves approach in the shore-normal direction and break on a plane, constant-slope beach. The first few wave periods are simulated by a two-dimensional model in the vertical plane normal to the beach line. The model describes the steepening and the overturning of the wave. At a given instant, the model domain is extended to three dimensions, and the two-dimensional flow field spontaneously develops three-dimensional flow features with turbulent eddies. After a few wave periods, stationary (periodic) conditions are achieved.

  11. Space charge and magnet error simulations for the SNS accumulator ring

    International Nuclear Information System (INIS)

    Beebe-Wang, J.; Fedotov, A.V.; Wei, J.; Machida, S.

    2000-01-01

    The effects of space charge forces and magnet errors on the beam of the Spallation Neutron Source (SNS) accumulator ring are investigated. In this paper, the focus is on emittance growth and halo/tail formation in the beam due to space charge, with and without magnet errors. The beam properties of different particle distributions resulting from various injection painting schemes are investigated. Different working points in the design of the SNS accumulator ring lattice are compared. Simulations under close-to-resonance conditions in the presence of space charge and magnet errors are presented. (author)

  12. Design and simulation of betavoltaic battery using large-grain polysilicon

    International Nuclear Information System (INIS)

    Yao, Shulin; Song, Zijun; Wang, Xiang; San, Haisheng; Yu, Yuxi

    2012-01-01

    In this paper, we present the design and simulation of a p–n junction betavoltaic battery based on large-grain polysilicon. By Monte Carlo simulation, the average penetration depth was obtained, according to which the optimal depletion region width was designed. A carrier transport model for large-grain polysilicon is used to determine the diffusion length of minority carriers. By optimizing the doping concentration, a maximum power conversion efficiency of 0.90% can be achieved with a 10 mCi/cm² Ni-63 source. - Highlights: ► Ni-63 is employed as the pure beta radioisotope source. ► The planar p–n junction betavoltaic battery is based on large-grain polysilicon. ► A carrier transport model for large-grain polysilicon is used to determine the diffusion length of minority carriers. ► The average penetration depth was obtained by using the Monte Carlo method.

  13. SIMON: Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Sugawara, Akihiro; Kishimoto, Yasuaki

    2003-01-01

    The development of the SIMON (SImulation MONitoring) system is described. SIMON aims to investigate many physical phenomena of tokamak-type nuclear fusion plasmas by simulation, and to exchange information and carry out joint research with scientists around the world over the internet. The main features of SIMON are the following: 1) reduction of the simulation load by a trigger-sending method; 2) visualization of simulation results and a hierarchical structure of analysis; 3) a reduced number of licenses by using the command line when software is used; 4) improved network access to simulation output data through the use of HTML (Hyper Text Markup Language); 5) avoidance of complex installation work on the client side; and 6) small-sized and portable software. The visualization method for large-scale simulation, the remote collaboration system based on HTML, the trigger-sending method, the hierarchical analysis method, the introduction into a three-dimensional electromagnetic transport code, and the technologies of the SIMON system are explained. (S.Y.)

  14. Hot air impingement on a flat plate using Large Eddy Simulation (LES) technique

    Science.gov (United States)

    Plengsa-ard, C.; Kaewbumrung, M.

    2018-01-01

    Hot gas jets impinging on a flat plate generate very high heat transfer coefficients in the impingement zone. The magnitude of the heat transfer prediction near the stagnation point is important, and accurate heat flux distributions are needed. This research studies the heat transfer and flow field resulting from a single hot air jet impinging on a wall. The simulation is carried out using the computational fluid dynamics (CFD) commercial code FLUENT. A Large Eddy Simulation (LES) approach with the Smagorinsky-Lilly subgrid-scale model is presented. The classical Werner-Wengle wall model is used to compute the predicted velocity and temperature near the walls. The Smagorinsky constant in the turbulence model is set to 0.1 and is kept constant throughout the investigation. The hot gas jet impinging on a flat plate with a constant surface temperature is chosen to validate the predicted heat flux results against experimental data. The jet Reynolds number is 20,000, with a fixed jet-to-plate spacing of H/D = 2.0. The Nusselt number on the impingement surface is calculated. As predicted by the wall model, the instantaneous computed Nusselt numbers agree fairly well with experimental data. The largest values of the calculated Nusselt number occur near the stagnation point and decrease monotonically in the wall-jet region. Contour plots of instantaneous wall heat flux on the flat plate are also captured by the LES simulation.
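
    The surface heat transfer comparison reduces to a Nusselt number computed from the wall heat flux; a minimal sketch, with illustrative property values rather than values from the study, follows.

        def nusselt(q_wall, t_jet, t_wall, d_jet, k_air=0.026):
            """Local Nusselt number from wall heat flux.

            Nu = h * D / k with h = q'' / (T_jet - T_wall); k_air is a
            placeholder thermal conductivity in W/(m K).
            """
            h = q_wall / (t_jet - t_wall)    # convective coefficient, W/(m^2 K)
            return h * d_jet / k_air

        print(nusselt(q_wall=5.0e3, t_jet=350.0, t_wall=300.0, d_jet=0.02))  # ~77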

  15. PATH: a lumped-element beam-transport simulation program with space charge

    International Nuclear Information System (INIS)

    Farrell, J.A.

    1983-01-01

    PATH is a group of computer programs for simulating charged-particle beam-transport systems. It was developed for evaluating the effects of some aberrations without a time-consuming integration of trajectories through the system. The beam-transport portion of PATH is derived from the well-known program DECAY TURTLE. PATH contains all features available in DECAY TURTLE (including the input format) plus additional features such as a more flexible random-ray generator, longitudinal phase space, some additional beamline elements, and space-charge routines. One of the programs also provides a simulation of an Alvarez linear accelerator. The programs, originally written for a CDC 7600 computer system, are also available on a VAX-VMS system. All of the programs are interactive, with input prompting for ease of use.
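
    The lumped-element picture underlying codes of this kind maps each ray's transverse phase-space coordinates through per-element transfer matrices. The sketch below tracks a random-ray bundle through a drift-quad-drift line; it omits the space charge and aberration effects that PATH layers on top, and all names and values are illustrative.

        import numpy as np

        def drift(L):
            """Transfer matrix of a field-free drift of length L (x, x' plane)."""
            return np.array([[1.0, L], [0.0, 1.0]])

        def thin_quad(f):
            """Thin-lens quadrupole of focal length f (focusing for f > 0)."""
            return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

        # rudimentary lumped-element line: drift - quad - drift
        line = drift(0.5) @ thin_quad(0.8) @ drift(0.5)

        rays = np.random.default_rng(1).normal(scale=[1e-3, 1e-3], size=(1000, 2))
        rays_out = rays @ line.T    # track the random-ray bundle through the line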

  16. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  17. Coupling of Large Eddy Simulations with Meteorological Models to simulate Methane Leaks from Natural Gas Storage Facilities

    Science.gov (United States)

    Prasad, K.

    2017-12-01

    Atmospheric transport is usually performed with weather models, e.g., the Weather Research and Forecasting (WRF) model, which employs a parameterized turbulence model and does not resolve the fine-scale dynamics generated by the flow around the buildings and features comprising a large city. The NIST Fire Dynamics Simulator (FDS) is a computational fluid dynamics model that utilizes large eddy simulation methods to model flow around buildings at length scales much smaller than is practical with models like WRF. FDS has the potential to evaluate the impact of complex topography on near-field dispersion and mixing that is difficult to simulate with a mesoscale atmospheric model. A methodology has been developed to couple the FDS model with WRF mesoscale transport models. The coupling is based on nudging the FDS flow field towards that computed by WRF, and is currently limited to one-way coupling performed in an off-line mode. This approach allows the FDS model to operate as a sub-grid scale model within a WRF simulation. To test and validate the coupled FDS-WRF model, the methane leak from the Aliso Canyon underground storage facility was simulated. Large eddy simulations were performed over the complex topography of various natural gas storage facilities, including Aliso Canyon, Honor Rancho and MacDonald Island, at 10 m horizontal and vertical resolution. The goals of these simulations included improving and validating transport models as well as testing leak hypotheses. Forward simulation results were compared with aircraft and tower based in-situ measurements as well as methane plumes observed using the NASA Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) and the next generation instrument AVIRIS-NG. Comparison of simulation results with measurement data demonstrates the capability of the coupled FDS-WRF models to accurately simulate the transport and dispersion of methane plumes over urban domains. Simulated integrated methane enhancements will be presented.
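
    One plausible form of such nudging is a relaxation of the LES field toward the mesoscale field at each time step, as sketched below; the relaxation time scale and grid are illustrative assumptions, not the actual FDS-WRF implementation.

        import numpy as np

        def nudge(u_les, u_meso, tau, dt):
            """Relax an LES velocity field toward the mesoscale (WRF) field.

            Implements du/dt += (u_meso - u_les) / tau, applied every LES
            step; tau is the relaxation time scale.
            """
            return u_les + dt * (u_meso - u_les) / tau

        u_les = np.zeros((16, 16, 8))          # toy LES velocity component
        u_meso = np.full_like(u_les, 3.0)      # WRF value interpolated to the LES grid
        for _ in range(100):
            u_les = nudge(u_les, u_meso, tau=60.0, dt=1.0)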

  18. Prediction of Thermal Environment in a Large Space Using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Hyun-Jung Yoon

    2018-02-01

    Full Text Available Since the thermal environment of large-space buildings such as stadiums can vary depending on the location of the stands, it is important to divide them into different zones and evaluate their thermal environments separately. The thermal environment can be evaluated using physical values measured with sensors, but the occupant density of the stadium stands is high, which limits the locations available for installing sensors. As a method to resolve the limitations of installing sensors, we propose a method to predict the thermal environment of each zone in a large space. We set six key thermal factors affecting the thermal environment in a large space, split into the predicted factors (indoor air temperature, mean radiant temperature, and clothing) and the fixed factors (air velocity, metabolic rate, and relative humidity). Using artificial neural network (ANN) models, with the outdoor air temperature and the surface temperatures of the interior walls around the stands as input data, we developed a method to predict the three thermal factors. Learning and verification datasets were established using STAR-CCM+ (2016.10, Siemens PLM Software, Plano, TX, USA). An analysis of each model's prediction results showed that the prediction accuracy increased with the number of learning data points. The thermal environment evaluation process developed in this study can be used to control heating, ventilation, and air conditioning (HVAC) facilities in each zone of a large-space building, given sufficient learning by the ANN models at the building testing or evaluation stage.
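
    A minimal sketch of the prediction step, using scikit-learn and synthetic stand-in data (the real learning datasets came from STAR-CCM+): two inputs, outdoor air temperature and an interior wall surface temperature, mapped to the three predicted factors. Network size and the synthetic target relations are assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform([-5.0, 15.0], [35.0, 40.0], size=(500, 2))  # [T_out, T_wall]
        y = np.column_stack([                                       # synthetic targets
            0.6 * X[:, 0] + 0.3 * X[:, 1],                          # indoor air temperature
            0.2 * X[:, 0] + 0.7 * X[:, 1],                          # mean radiant temperature
            np.clip(1.2 - 0.02 * X[:, 0], 0.3, None),               # clothing level (clo)
        ])
        model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
        model.fit(X, y)
        print(model.predict([[28.0, 31.0]]))   # predicted [Ta, Tr, clo] for one zone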

  19. A Science Cloud: OneSpaceNet

    Science.gov (United States)

    Morikawa, Y.; Murata, K. T.; Watari, S.; Kato, H.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Shimojo, S.

    2010-12-01

    The main methodologies of Solar-Terrestrial Physics (STP) so far have been the theoretical, the experimental and observational, and the computer simulation approaches. Recently, "informatics" has been anticipated as a new (fourth) approach to STP studies. Informatics is a methodology for analyzing large-scale data (observational and computer simulation data) to obtain new findings using a variety of data processing techniques. At NICT (National Institute of Information and Communications Technology, Japan) we are now developing a new research environment named "OneSpaceNet". OneSpaceNet is a cloud-computing environment specialized for scientific work, which connects many researchers through a high-speed network (JGN: Japan Gigabit Network). JGN is a wide-area backbone network operated by NICT; it provides a 10-Gbps network and many access points (APs) over Japan. OneSpaceNet also provides rich computer resources for research, such as supercomputers, large-scale data storage, licensed applications, visualization devices (like a tiled display wall: TDW), databases/DBMS, cluster computers (4-8 nodes) for data processing, and communication devices. What is remarkable about the science cloud is that a user needs only to prepare a terminal (a low-cost PC). Once the PC is connected to JGN2plus, the user can make full use of the rich resources of the science cloud. Using communication devices such as video-conference systems, streaming and reflector servers, and media players, users on OneSpaceNet can carry out research communications as if they belonged to the same (one) laboratory: they are members of a virtual laboratory. The specification of the computer resources on OneSpaceNet is as follows: the data storage developed so far is almost 1 PB in size, and the number of data files managed on the cloud storage is growing and now exceeds 40,000,000. Notably, the disks forming the large-scale storage are distributed over 5 data centers across Japan.

  20. Theory and Simulation of the Physics of Space Charge Dominated Beams

    International Nuclear Information System (INIS)

    Haber, Irving

    2002-01-01

    This report describes modeling of intense electron and ion beams in the space charge dominated regime. Space charge collective modes play an important role in the transport of intense beams over long distances. These modes were first observed in particle-in-cell simulations. The work presented here is closely tied to the University of Maryland Electron Ring (UMER) experiment and has application to accelerators for heavy ion beam fusion

  1. University of Central Florida / Deep Space Industries Asteroid Regolith Simulants

    Science.gov (United States)

    Britt, Daniel; Covey, Steven D.; Schultz, Cody

    2017-10-01

    Introduction: The University of Central Florida (UCF), in partnership with Deep Space Industries (DSI), is working under a NASA Phase 2 SBIR contract to develop and produce a family of asteroid regolith simulants for use in research, engineering, and mission operations testing. We base simulant formulas on the mineralogy, particle size, and physical characteristics of CI, CR, CM, C2, CV, and L-chondrite meteorites. The advantage of simulating meteorites is that the vast majority of meteoritic materials are common rock-forming minerals that are available in commercial quantities. While the formulas are guided by the meteorites, our approach is one of constrained maximization under the limitations of safety, cost, source materials, and ease of handling. In all cases our goal is to deliver a safe, high-fidelity analog at moderate cost. Source Materials, Safety, and Biohazards: A critical factor in any useful simulant is to minimize handling risks from biohazards or toxicity. All the terrestrial materials proposed for these simulants were reviewed for potential toxicity. Of particular interest is the organic component of volatile-rich carbonaceous chondrites, which contains polycyclic aromatic hydrocarbons (PAHs), some of which are known carcinogens and mutagens. Our research suggests that we can maintain rough chemical fidelity by substituting much safer sub-bituminous coal as our organic analog. A second safety consideration is the choice of serpentine-group materials. While most serpentine polymorphs are quite safe, we avoid fibrous chrysotile because of its asbestos content. The terrestrial materials identified as inputs for our simulants are common rock-forming minerals that are available in commercial quantities. These include olivine, pyroxene, plagioclase feldspar, smectite, serpentine, saponite, pyrite, and magnetite in amounts that are appropriate for each type. For CIs and CRs, the olivines tend to be Fo100, which is rare on Earth; we have substituted Fo90 olivine.

  2. Space construction base control system

    Science.gov (United States)

    1978-01-01

    Aspects of an attitude control system were studied and developed for a large space base that is structurally flexible and whose mass properties change rather dramatically during its orbital lifetime. Topics of discussion include the following: (1) space base orbital pointing and maneuvering; (2) angular momentum sizing of actuators; (3) momentum desaturation selection and sizing; (4) multilevel control technique applied to configuration one; (5) one-dimensional model simulation; (6) N-body discrete coordinate simulation; (7) structural analysis math model formulation; and (8) discussion of control problems and control methods.

  3. Visualising very large phylogenetic trees in three dimensional hyperbolic space

    Directory of Open Access Journals (Sweden)

    Liberles David A

    2004-04-01

    Full Text Available Background: Common existing phylogenetic tree visualisation tools are not able to display readable trees with more than a few thousand nodes. These existing methodologies are based in two-dimensional space. Results: We introduce the idea of visualising phylogenetic trees in three-dimensional hyperbolic space with the Walrus graph visualisation tool, and have developed a conversion tool that enables the conversion of standard phylogenetic tree formats to Walrus' format. With Walrus, it becomes possible to visualise and navigate phylogenetic trees with more than 100,000 nodes. Conclusion: Walrus enables desktop visualisation of very large phylogenetic trees in three-dimensional hyperbolic space. This application is potentially useful for visualisation of the tree of life and for functional genomics derivatives, like The Adaptive Evolution Database (TAED).

  4. Automatic Measurement in Large-Scale Space with the Laser Theodolite and Vision Guiding Technology

    Directory of Open Access Journals (Sweden)

    Bin Wu

    2013-01-01

    Full Text Available Multi-theodolite intersection measurement is a traditional approach to coordinate measurement in large-scale space. However, the procedure of manual labeling and aiming results in a low automation level and low measuring efficiency, and the measurement accuracy is easily affected by manual aiming error. Based on traditional theodolite measuring methods, this paper introduces the mechanism of the vision measurement principle and presents a novel automatic measurement method for large-scale space and large workpieces (equipment), combining laser theodolite measuring and vision guiding technologies. The measuring mark is established on the surface of the measured workpiece by the collimating laser, which is coaxial with the sight axis of the theodolite, so cooperation targets or manual marks are no longer needed. With the theoretical model data and multiresolution visual imaging and tracking technology, the method can realize the automatic, quick, and accurate measurement of large workpieces in large-scale space. Meanwhile, the impact of artificial error is reduced and the measuring efficiency is improved. Therefore, this method has significant ramifications for the measurement of large workpieces, such as measuring the geometric appearance characteristics of ships, large aircraft, and spacecraft, and deformation monitoring of large buildings and dams.

  5. Large signal simulation of photonic crystal Fano laser

    DEFF Research Database (Denmark)

    Zali, Aref Rasoulzadeh; Yu, Yi; Moravvej-Farshi, Mohammad Kazem

    2017-01-01

    It is shown that the laser can be modulated at frequencies exceeding 1 THz, which is much higher than its corresponding relaxation oscillation frequency. Large-signal simulation of the Fano laser is also investigated, based on a pseudorandom bit sequence at 0.5 Tbit/s. It shows that the eye patterns remain open at such a high modulation frequency, verifying the potential of the laser for ultrafast modulation.

  6. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2018-05-01

    Full Text Available Computing speed is a significant issue for large-scale flood simulations requiring real-time response for disaster prevention and mitigation. Even today, most large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations involved. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme is proposed for flood simulation. To realize fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based high-performance computing method using OpenACC was adopted to parallelize the shallow water model. An unstructured data management method is presented to control data transfer between the GPU and the CPU (Central Processing Unit) with minimal overhead, and both computation and data were offloaded from the CPU to the GPU, exploiting the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas and thus has bright application prospects for dynamic inundation risk identification and disaster assessment.

  7. arXiv Stochastic locality and master-field simulations of very large lattices

    CERN Document Server

    Lüscher, Martin

    2018-01-01

    In lattice QCD and other field theories with a mass gap, the field variables in distant regions of a physically large lattice are only weakly correlated. Accurate stochastic estimates of the expectation values of local observables may therefore be obtained from a single representative field. Such master-field simulations potentially allow very large lattices to be simulated, but require various conceptual and technical issues to be addressed. In this talk, an introduction to the subject is provided and some encouraging results of master-field simulations of the SU(3) gauge theory are reported.

  8. The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU

    Science.gov (United States)

    Lara, A.; Niembro, T.

    2017-12-01

    We introduce the seesaw space, an orthonormal space formed by the local and the global fluctuations of any of the four basic solar wind parameters: velocity, density, magnetic field and temperature, at any heliospheric distance. The fluctuations compare the standard deviation over a three-hour moving window against the running average of the parameter over a month (considered the local fluctuations) and over a year (the global fluctuations). We created this new vector space to identify the arrival of transients at any spacecraft without the need of an observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms, fitted piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other is due to the ambient solar wind. The norm values at which the piecewise functions change vary in terms of the solar cycle. We compared the seesaw norms of each of the basic parameters due to the arrival of coronal mass ejections, co-rotating interaction regions and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We present as well general comparisons of the norms during the two maxima and the minimum solar cycle periods, and the differences of the norms due to large-scale structures depending on each period.
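
    A hedged sketch of the fluctuation construction for one parameter: a three-hour moving standard deviation compared against monthly and yearly running averages, giving the two axes of the seesaw space. Window handling and normalization are assumptions here; the paper's exact definitions may differ.

        import numpy as np
        import pandas as pd

        idx = pd.date_range('2010-01-01', periods=60 * 24 * 400, freq='min')
        v = pd.Series(400 + 50 * np.random.randn(len(idx)), index=idx)  # toy speed data

        local_std = v.rolling('3h').std()
        monthly = v.rolling('30D').mean()
        yearly = v.rolling('365D').mean()
        local_fluct = local_std / monthly            # "local" axis of the seesaw space
        global_fluct = local_std / yearly            # "global" axis
        norm = np.hypot(local_fluct, global_fluct)   # seesaw norm used to flag transients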

  9. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), which simulates a generic quantum computer at gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than corrected. Fault-tolerant methods can overcome this problem, provided that the single-qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10^-6. For Gaussian-distributed operational over-rotations, the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology.

  10. Characteristics of Tornado-Like Vortices Simulated in a Large-Scale Ward-Type Simulator

    Science.gov (United States)

    Tang, Zhuo; Feng, Changda; Wu, Liang; Zuo, Delong; James, Darryl L.

    2018-02-01

    Tornado-like vortices are simulated in a large-scale Ward-type simulator to further advance the understanding of such flows, and to facilitate future studies of tornado wind loading on structures. Measurements of the velocity fields near the simulator floor and the resulting floor surface pressures are interpreted to reveal the mean and fluctuating characteristics of the flow as well as the characteristics of the static-pressure deficit. We focus on the manner in which the swirl ratio and the radial Reynolds number affect these characteristics. The transition of the tornado-like flow from a single-celled vortex to a dual-celled vortex with increasing swirl ratio and the impact of this transition on the flow field and the surface-pressure deficit are closely examined. The mean characteristics of the surface-pressure deficit caused by tornado-like vortices simulated at a number of swirl ratios compare well with the corresponding characteristics recorded during full-scale tornadoes.

  11. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    Science.gov (United States)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit

  12. Space Geodetic Technique Co-location in Space: Simulation Results for the GRASP Mission

    Science.gov (United States)

    Kuzmicz-Cieslak, M.; Pavlis, E. C.

    2011-12-01

    The Global Geodetic Observing System (GGOS) places very stringent requirements on the accuracy and stability of future realizations of the International Terrestrial Reference Frame (ITRF): an origin definition at 1 mm or better at epoch and a temporal stability on the order of 0.1 mm/y, with similar numbers for the scale (0.1 ppb) and orientation components. These goals were derived from the requirements of Earth science problems that are currently the international community's highest priority. None of the geodetic positioning techniques can achieve this goal alone. This is due in part to the non-observability of certain attributes from a single technique. Another limitation is imposed by the extent and uniformity of the tracking network and the schedule of observational availability and number of suitable targets. The final limitation derives from the difficulty of "tying" the reference points of each technique at the same site to an accuracy that will support the GGOS goals. The future GGOS network will address decisively the ground segment and, to a certain extent, the space segment requirements. The JPL-proposed multi-technique mission GRASP (Geodetic Reference Antenna in Space) attempts to resolve the accurate tie between techniques using their co-location in space, onboard a well-designed spacecraft equipped with GNSS receivers, an SLR retroreflector array, a VLBI beacon and a DORIS system. Using the anticipated system performance for all four techniques at the time the GGOS network is completed (ca. 2020), we generated a number of simulated data sets for the development of a TRF. Our simulation studies examine the degree to which GRASP can improve the inter-technique "tie" issue compared to the classical approach, and the likely modus operandi for such a mission. The success of the examined scenarios is judged by the quality of the origin and scale definition of the resulting TRF.

  13. Extraterrestrial processing and manufacturing of large space systems. Volume 3: Executive summary

    Science.gov (United States)

    Miller, R. H.; Smith, D. B. S.

    1979-01-01

    Facilities and equipment are defined for refining to commercial grade the lunar material that is delivered to a 'space manufacturing facility' in beneficiated, primary-processed quality. The manufacturing facilities and equipment for producing elements of large space systems from these materials are also defined, and programmatic assessments of the concepts are provided. In-space production processes for solar cells (by vapor deposition) and arrays, structures and joints, conduits, waveguides, RF equipment, radiators, wire cables, converters, and others are described.

  14. “Space, the Final Frontier”: How Good are Agent-Based Models at Simulating Individuals and Space in Cities?

    Directory of Open Access Journals (Sweden)

    Alison Heppenstall

    2016-01-01

    Full Text Available Cities are complex systems, comprising many interacting parts. How we simulate and understand causality in urban systems is continually evolving. Over the last decade the agent-based modeling (ABM) paradigm has provided a new lens for understanding the effects of interactions of individuals and how through such interactions macro structures emerge, both in the social and physical environment of cities. However, such a paradigm has been hindered by limited computational power and a lack of large fine-scale datasets. Within the last few years we have witnessed a massive increase in computational processing power and storage, combined with the onset of Big Data. Today geographers find themselves in a data-rich era. We now have access to a variety of data sources (e.g., social media, mobile phone data) that tell us how, and when, individuals are using urban spaces. These data raise several questions: can we effectively use them to understand and model cities as complex entities? How well have ABM approaches lent themselves to simulating the dynamics of urban processes? What has been, or will be, the influence of Big Data on increasing our ability to understand and simulate cities? What is the appropriate level of spatial analysis and time frame to model urban phenomena? Within this paper we discuss these questions using several examples of ABM applied to urban geography to begin a dialogue about the utility of ABM for urban modeling. The arguments that the paper raises are applicable across the wider research environment where researchers are considering using this approach.

  15. Just-in-time connectivity for large spiking networks.

    Science.gov (United States)

    Lytton, William W; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L

    2008-11-01

    The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
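
    The callback scheme lends itself to a compact sketch. Below is a minimal, illustrative Python analogue of the JitCon/JitEvent ideas (not NEURON code): connectivity is regenerated on demand from a per-cell seeded RNG instead of stored tables, and a spike posts a single callback at the shortest synaptic delay, with each callback delivering to one target and re-posting itself for the next delay. All names and the delay model are assumptions for illustration.

```python
import heapq
import random

N_TARGETS = 1000                       # fan-out per cell (illustrative)
event_queue = []                       # (time, seq, callback)
_seq = 0

def push(t, cb):
    global _seq
    heapq.heappush(event_queue, (t, _seq, cb))
    _seq += 1

def connections(pre_id):
    """JitCon idea: regenerate targets, weights and delays on demand
    from a deterministic per-cell RNG instead of storing them."""
    rng = random.Random(pre_id)
    conns = [(rng.uniform(1.0, 5.0),   # delay (ms)
              rng.randrange(100_000),  # postsynaptic id
              rng.gauss(0.5, 0.1))     # weight
             for _ in range(N_TARGETS)]
    conns.sort()                       # deliver in order of delay
    return conns

def spike(pre_id, t_spike):
    """JitEvent idea: post ONE callback at the shortest delay; each
    callback delivers to one target, then re-posts for the next delay
    instead of enqueueing all N_TARGETS events at spike time."""
    conns = connections(pre_id)

    def deliver(k):
        delay, post, w = conns[k]
        # ... deposit weight w on cell `post` at time t_spike + delay ...
        if k + 1 < len(conns):
            push(t_spike + conns[k + 1][0], lambda: deliver(k + 1))

    push(t_spike + conns[0][0], lambda: deliver(0))

# Event loop: pop and run callbacks in time order.
spike(42, t_spike=0.0)
while event_queue:
    t, _, cb = heapq.heappop(event_queue)
    cb()
```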

  16. Comparison of reynolds averaged navier stokes based simulation and large eddy simulation for one isothermal swirling flow

    DEFF Research Database (Denmark)

    Yang, Yang; Kær, Søren Knudsen

    2012-01-01

    The flow structure of one isothermal swirling case in the Sydney swirl flame database was studied using two numerical methods. Results from the Reynolds-averaged Navier-Stokes (RANS) approach and large eddy simulation (LES) were compared with experimental measurements. The simulations were applied...

  17. Capabilities of a Laser Guide Star for a Large Segmented Space Telescope

    Science.gov (United States)

    Clark, James R.; Carlton, Ashley; Douglas, Ewan S.; Males, Jared R.; Lumbres, Jennifer; Feinberg, Lee; Guyon, Olivier; Marlow, Weston; Cahoy, Kerri L.

    2018-01-01

    Large segmented mirror telescopes are planned for future space telescope missions such as LUVOIR (Large UV Optical Infrared Surveyor) to enable the improvement in resolution and contrast necessary to directly image Earth-like exoplanets, in addition to making contributions to general astrophysics. The precision surface control of these complex, large optical systems, which may have over a hundred meter-sized segments, is a challenge. Our initial simulations show that imaging a star of 2nd magnitude or brighter with a Zernike wavefront sensor should relax the segment stability requirements by factors between 10 and 50 depending on the wavefront control strategy. Fewer than fifty stars brighter than magnitude 2 can be found in the sky. A laser guide star (LGS) on a companion spacecraft will allow the telescope to target a dimmer science star and achieve wavefront control to the required stability without requiring slew or repointing maneuvers. We present initial results for one possible mission architecture, with an LGS flying at 100,000 km range from the large telescope in an L2 halo orbit, using a laser transmit power of 8 days) for an expenditure of system, it can be accommodated in a 6U CubeSat bus, but may require an extended period of time to transition between targets and match velocities with the telescope (e.g. 6 days to transit 10 degrees). If the LGS uses monopropellant propulsion, it must use at least a 27U bus to achieve the same delta-V capability, but can transition between targets much more rapidly (flight are being refined. A low-cost prototype mission (e.g. between a small satellite in LEO and an LGS in GEO) to validate the feasibility is in development.

  18. Large Eddy Simulation of Turbulent Flows in Wind Energy

    DEFF Research Database (Denmark)

    Chivaee, Hamid Sarlak

    This research is devoted to Large Eddy Simulation (LES), and to a lesser extent, wind tunnel measurements of turbulent flows in wind energy. It starts with an introduction to the LES technique associated with the solution of the incompressible Navier-Stokes equations, discretized using a finite......, should the mesh resolution, numerical discretization scheme, time averaging period, and domain size be chosen wisely. A thorough investigation of the wind turbine wake interactions is also conducted and the simulations are validated against available experimental data from external sources. The effect...... Reynolds numbers, and thereafter, the fully developed infinite wind farm boundary layer simulations are performed. Sources of inaccuracy in the simulations are investigated and it is found that high Reynolds number flows are more sensitive to the choice of the SGS model than their low Reynolds number...

  19. Efficient Neural Network Modeling for Flight and Space Dynamics Simulation

    Directory of Open Access Journals (Sweden)

    Ayman Hamdy Kassem

    2011-01-01

    Full Text Available This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique will free the neural network designer from guessing the size and structure of the required neural network model and will help to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations without the need for training. Nonlinear flight dynamic systems can be easily modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses the linear system knowledge to speed up the training process. The technique is tested on different flight/space dynamic models and showed promising results.
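
    For the linear case, the paper's central claim, that weights follow directly from a system of linear equations with no training loop, can be illustrated in a few lines. The sketch below fits a single linear layer to samples of an assumed linear dynamics model by least squares; the example system and names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative linear flight-dynamics model x_dot = A x + B u (assumed).
A = np.array([[-0.5, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])

# Generate state/input samples and their exact derivatives.
X = rng.standard_normal((500, 2))
U = rng.standard_normal((500, 1))
Y = X @ A.T + U @ B.T

# One linear "layer": Y = [X U 1] @ W.  Solving the linear system
# (here via lstsq) recovers the weights directly -- no training loop.
Phi = np.hstack([X, U, np.ones((500, 1))])
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

print(np.allclose(W[:2].T, A), np.allclose(W[2:3].T, B))  # True True
```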

  20. Analysis of large optical ground stations for deep-space optical communications

    Science.gov (United States)

    Garcia-Talavera, M. Reyes; Rivera, C.; Murga, G.; Montilla, I.; Alonso, A.

    2017-11-01

    Inter-satellite and ground-to-satellite optical communications have been successfully demonstrated over more than a decade with several experiments, the most recent being NASA's lunar mission Lunar Atmospheric Dust Environment Explorer (LADEE). The technology is at a mature stage that allows optical communications to be considered as a high-capacity solution for future deep-space communications [1][2], where there is an increasing demand for downlink data rate to improve science return. To serve these deep-space missions, suitable optical ground stations (OGS) have to be developed, providing large collecting areas. The design of such OGSs must face both technical and cost constraints in order to achieve an optimum implementation. To that end, different approaches have already been proposed and analyzed, namely, a large telescope based on a segmented primary mirror, telescope arrays, and even the combination of RF and optical receivers in modified versions of existing Deep-Space Network (DSN) antennas [3][4][5]. Array architectures have been proposed to relax some requirements, acting as one of the key drivers of the present study. The advantages offered by the array approach are attained at the expense of adding subsystems. Critical issues identified for each implementation include their inherent efficiency and losses, as well as performance under high-background conditions, and the acquisition, pointing, tracking, and synchronization capabilities. It is worth noting that, due to the photon-counting nature of detection, the system performance is not solely given by the signal-to-noise ratio parameter. To start the analysis, the main implications of the deep-space scenarios are first summarized, since they are the driving requirements for establishing the technical specifications of the large OGS. Next, both the main characteristics of the OGS and the potential configuration approaches are presented, going deeper into key subsystems with strong impact in the

  1. Utilization of Large Cohesive Interface Elements for Delamination Simulation

    DEFF Research Database (Denmark)

    Bak, Brian Lau Verndal; Lund, Erik

    2012-01-01

    This paper describes the difficulties of utilizing large interface elements in delamination simulation. Solutions to increase the size of applicable interface elements are described and cover numerical integration of the element and modifications of the cohesive law....

  2. Coupled radiative gasdynamic interaction and non-equilibrium dissociation for large-scale returned space vehicles

    International Nuclear Information System (INIS)

    Surzhikov, S.

    2012-01-01

    Graphical abstract: It has been shown that different coupled vibrational dissociation models, applied to solving coupled radiative gasdynamic problems for large space vehicles, exert a noticeable effect on the radiative heating of the vehicle surface at orbital entry at high altitudes (h ⩾ 70 km). This influence decreases with decreasing space vehicle size. The figure shows translational (solid lines) and vibrational (dashed lines) temperatures in the shock layer with (circle markers) and without (triangle markers) radiative-gasdynamic interaction for one trajectory point of an entering space vehicle. Highlights: ► Nonequilibrium dissociation processes exert an effect on the radiative heating of space vehicles (SV). ► The radiation gas dynamic interaction enhances this influence. ► This influence increases with increasing SV size. - Abstract: Radiative aerothermodynamics of large-scale space vehicles is considered for Earth orbital entry at zero angle of attack. A brief description is given of the radiative gasdynamic model used: a physically and chemically nonequilibrium, viscous, heat-conducting and radiating gas of complex chemical composition. Radiation gasdynamic (RadGD) interaction in the high-temperature shock layer is studied by means of numerical experiment. It is shown that radiation-gasdynamic coupling for orbital space vehicles of large size is important for the high-altitude part of the entry trajectory. It is demonstrated that the use of different models of coupled vibrational dissociation (CVD) in conditions of RadGD interaction gives rise to temperature variation in the shock layer and, as a result, leads to significant variation of the radiative heating of the space vehicle.

  3. Coupled large-eddy simulation of thermal mixing in a T-junction

    International Nuclear Information System (INIS)

    Kloeren, D.; Laurien, E.

    2011-01-01

    Analyzing thermal fatigue due to thermal mixing in T-junctions is part of the safety assessment of nuclear power plants. Results of two large-eddy simulations of mixing flow in a T-junction, with coupled and adiabatic boundary conditions, are presented and compared. The temperature difference is set to 100 K, which leads to strong stratification of the flow. The main and branch pipes intersect horizontally in this simulation. The flow is characterized by a steady wavy pattern of stratification and temperature distribution. The coupled solution approach shows strongly reduced temperature fluctuations in the near-wall region due to the thermal inertia of the wall. A conjugate heat transfer approach is necessary in order to simulate unsteady heat transfer accurately for large inlet temperature differences. (author)

  4. Extraterrestrial processing and manufacturing of large space systems, volume 1, chapters 1-6

    Science.gov (United States)

    Miller, R. H.; Smith, D. B. S.

    1979-01-01

    Space program scenarios for the production of large space structures from lunar materials are defined. The concept of the space manufacturing facility (SMF) is presented. The manufacturing processes and equipment for the SMF are defined and conceptual layouts are described for the production of solar cells and arrays, structures and joints, conduits, waveguides, RF equipment, radiators, wire cables, and converters. A 'reference' SMF was designed and its operational requirements are described.

  5. General-relativistic Large-eddy Simulations of Binary Neutron Star Mergers

    Energy Technology Data Exchange (ETDEWEB)

    Radice, David, E-mail: dradice@astro.princeton.edu [Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540 (United States)

    2017-03-20

    The flow inside remnants of binary neutron star (NS) mergers is expected to be turbulent because of magnetohydrodynamic instabilities activated at scales too small to be resolved in simulations. To study the large-scale impact of these instabilities, we develop a new formalism, based on the large-eddy simulation technique, for the modeling of subgrid-scale turbulent transport in general relativity. We apply it, for the first time, to the simulation of the late inspiral and merger of two NSs. We find that turbulence can significantly affect the structure and survival time of the merger remnant, as well as its gravitational-wave (GW) and neutrino emissions. The former will be relevant for GW observation of merging NSs. The latter will affect the composition of the outflow driven by the merger and might influence its nucleosynthetic yields. The accretion rate after black hole formation is also affected. Nevertheless, we find that, for the most likely values of the turbulence mixing efficiency, these effects are relatively small and the GW signal will be affected only weakly by the turbulence. Thus, our simulations provide a first validation of all existing post-merger GW models.

  6. The Roles of Sparse Direct Methods in Large-scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-06-27

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) project is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence of iterative methods. We deployed our direct methods techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research.
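
    As a concrete illustration of where a sparse direct solver slots into such simulations, the sketch below factorizes a stand-in sparse system with SciPy's splu (which wraps SuperLU); the Poisson-type matrix is illustrative, not one of the SciDAC applications.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

# Stand-in sparse system: 1-D Poisson matrix (tridiagonal).
n = 10_000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
b = np.ones(n)

# Sparse LU factorization (SuperLU under the hood in SciPy).  The
# factorization can be reused for repeated solves -- the usual payoff
# of direct methods inside large simulations.
lu = splu(A)
x = lu.solve(b)
print(np.linalg.norm(A @ x - b))  # residual near machine precision
```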

  7. The Roles of Sparse Direct Methods in Large-scale Simulations

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-01-01

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) project is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence of iterative methods. We deployed our direct methods techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research

  8. Heavy-Ion Collimation at the Large Hadron Collider: Simulations and Measurements

    CERN Document Server

    AUTHOR|(CDS)2083002; Wessels, Johannes Peter; Bruce, Roderik

    The CERN Large Hadron Collider (LHC) stores and collides proton and $^{208}$Pb$^{82+}$ beams of unprecedented energy and intensity. Thousands of superconducting magnets, operated at 1.9 K, guide the very intense and energetic particle beams, which have a large potential for destruction. This implies the demand for a multi-stage collimation system to provide protection from beam-induced quenches or even hardware damage. In heavy-ion operation, ion fragments with significant rigidity offsets can still scatter out of the collimation system. When they irradiate the superconducting LHC magnets, the latter risk quenching (losing their superconducting property). These secondary collimation losses can potentially impose a limitation on the stored heavy-ion beam energy. Therefore, their distribution in the LHC needs to be understood through sophisticated simulations. Such simulation tools must accurately simulate the particle motion of many different nuclides in the magnetic LHC lattice and simulate their interaction with t...

  9. Large-scale numerical simulations of star formation put to the test

    DEFF Research Database (Denmark)

    Frimann, Søren; Jørgensen, Jes Kristian; Haugbølle, Troels

    2016-01-01

    (SEDs), calculated from large-scale numerical simulations, to observational studies, thereby aiding in both the interpretation of the observations and in testing the fidelity of the simulations. Methods: The adaptive mesh refinement code, RAMSES, is used to simulate the evolution of a 5 pc × 5 pc × 5 pc... to calculate evolutionary tracers Tbol and Lsmm/Lbol. It is shown that, while the observed distributions of the tracers are well matched by the simulation, they generally do a poor job of tracking the protostellar ages. Disks form early in the simulation, with 40% of the Class 0 protostars being encircled by one...

  10. Cosmological special relativity: the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    1997-01-01

    This book deals with special relativity theory and its application to cosmology. It presents Einstein's theory of space and time in detail, and describes the large scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The book will be of interest to cosmologists, astrophysicists, theoretical

  11. Cosmological special relativity: the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    2002-01-01

    This book presents Einstein's theory of space and time in detail, and describes the large-scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The relationship between cosmic velocity, acceleration and distances is given. In the appendices gravitation is added in the form of a cosmological g

  12. Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations

    Science.gov (United States)

    Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara

    2018-05-01

    Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  13. Large-scale tropospheric transport in the Chemistry–Climate Model Initiative (CCMI) simulations

    Directory of Open Access Journals (Sweden)

    C. Orbe

    2018-05-01

    Full Text Available Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry–Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  14. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.

  15. Large-scale simulations of error-prone quantum computation devices

    Energy Technology Data Exchange (ETDEWEB)

    Trieu, Doan Binh

    2009-07-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), which simulates a generic quantum computer at the gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than are corrected. Fault-tolerant methods can overcome this problem, provided that the single-qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10^-6. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced

  16. Influence of grid aspect ratio on planetary boundary layer turbulence in large-eddy simulations

    Directory of Open Access Journals (Sweden)

    S. Nishizawa

    2015-10-01

    Full Text Available We examine the influence of the grid aspect ratio of horizontal to vertical grid spacing on turbulence in the planetary boundary layer (PBL) in a large-eddy simulation (LES). In order to clarify these influences and distinguish them from other artificial effects caused by numerical schemes, we used a fully compressible meteorological LES model with a fully explicit scheme of temporal integration. The influences are investigated with a series of sensitivity tests with parameter sweeps of spatial resolution and grid aspect ratio. We confirmed that the mixing length of the eddy viscosity and diffusion due to sub-grid-scale turbulence plays an essential role in reproducing the theoretical −5/3 slope of the energy spectrum. If we define the filter length in LES modeling based on consideration of the numerical scheme, and introduce a corrective factor for the grid aspect ratio into the mixing length, the theoretical slope of the energy spectrum can be obtained; otherwise, spurious energy pile-up appears at high wave numbers. We also found that the grid aspect ratio influences the turbulent statistics, especially the skewness of the vertical velocity near the top of the PBL, which becomes spuriously large at large aspect ratio, even if a reasonable spectrum is obtained.
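
    The kind of aspect-ratio correction referred to above can be illustrated with one classical proposal, the Scotti-Meneveau-Lilly (1993) factor for anisotropic grids; whether the authors' model uses exactly this form is not stated here, so treat the sketch as representative rather than theirs.

```python
import numpy as np

def filter_width(dx: float, dy: float, dz: float) -> float:
    """Mixing length for anisotropic LES grids: the Deardorff length
    (dx*dy*dz)**(1/3) times the Scotti-Meneveau-Lilly (1993) aspect-
    ratio correction.  (Illustrative; the paper derives its own
    corrective factor, which may differ from this classical one.)"""
    d = sorted((dx, dy, dz))
    a1, a2 = d[0] / d[2], d[1] / d[2]          # aspect ratios <= 1
    f = np.cosh(np.sqrt(4.0 / 27.0 *
                        (np.log(a1) ** 2
                         - np.log(a1) * np.log(a2)
                         + np.log(a2) ** 2)))
    return f * (dx * dy * dz) ** (1.0 / 3.0)

# Isotropic grid: correction factor is exactly 1.
print(filter_width(10.0, 10.0, 10.0))          # 10.0
# Typical PBL grid, 100 m horizontal vs 10 m vertical spacing:
print(filter_width(100.0, 100.0, 10.0))        # larger than (1e5)**(1/3)
```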

  17. Keeping it real: revisiting a real-space approach to running ensembles of cosmological N-body simulations

    International Nuclear Information System (INIS)

    Orban, Chris

    2013-01-01

    In setting up initial conditions for ensembles of cosmological N-body simulations there are, fundamentally, two choices: either maximizing the correspondence of the initial density field to the assumed Fourier-space clustering or, instead, matching to real-space statistics and allowing the DC mode (i.e. overdensity) to vary from box to box as it would in the real universe. As a stringent test of both approaches, I perform ensembles of simulations using a power-law model and a 'power law times a bump' model inspired by baryon acoustic oscillations (BAO), exploiting the self-similarity of these initial conditions to quantify the accuracy of the matter-matter two-point correlation results. The real-space method, which was originally proposed by Pen 1997 [1] and implemented by Sirko 2005 [2], performed well in producing the expected self-similar behavior and corroborated the non-linear evolution of the BAO feature observed in conventional simulations, even in the strongly-clustered regime (σ_8 ≳ 1). In revisiting the real-space method championed by [2], it was also noticed that this earlier study overlooked an important integral constraint correction to the correlation function in results from the conventional approach that can be important in ΛCDM simulations with L_box ∼ h^-1 Gpc and on scales r ≳ L_box/10. Rectifying this issue shows that the Fourier-space and real-space methods are about equally accurate and efficient for modeling the evolution and growth of the correlation function, contrary to previous claims. An appendix provides a useful independent-of-epoch analytic formula for estimating the importance of the integral constraint bias on correlation function measurements in ΛCDM simulations
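
    The integral-constraint bias mentioned at the end can be estimated with a short numeric sketch: in a finite periodic box the measured correlation function is biased low by roughly the volume average of xi over the box. The sketch below evaluates that average for a pure power-law xi(r) = (r/r0)^-gamma; the equal-volume-sphere shortcut and the numbers are illustrative assumptions, not the appendix's formula.

```python
import numpy as np
from scipy.integrate import quad

def ic_bias(r0: float, gamma: float, L: float) -> float:
    """Approximate integral-constraint bias on xi(r) measured in a
    periodic box of side L: the volume average of xi over a sphere of
    equal volume (a back-of-the-envelope substitute for the exact
    cubic-box integral)."""
    R = (3.0 / (4.0 * np.pi)) ** (1.0 / 3.0) * L   # equal-volume radius
    integrand = lambda r: (r / r0) ** (-gamma) * 4.0 * np.pi * r**2
    vol = 4.0 / 3.0 * np.pi * R**3
    return quad(integrand, 0.0, R)[0] / vol

# xi with r0 = 5 Mpc/h, gamma = 1.8 in a 1000 Mpc/h box: the bias is
# small in absolute terms, but comparable to xi itself on scales
# approaching L/10, which is where the correction matters.
print(ic_bias(5.0, 1.8, 1000.0))
```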

  18. Towards Large Eddy Simulation of gas turbine compressors

    Science.gov (United States)

    McMullan, W. A.; Page, G. J.

    2012-07-01

    With increasing computing power, Large Eddy Simulation could be a useful simulation tool for gas turbine axial compressor design. This paper outlines a series of simulations performed on compressor geometries, ranging from a Controlled Diffusion Cascade stator blade to the periodic sector of a stage in a 3.5 stage axial compressor. The simulation results show that LES may offer advantages over traditional RANS methods when off-design conditions are considered - flow regimes where RANS models often fail to converge. The time-dependent nature of LES permits the resolution of transient flow structures, and can elucidate new mechanisms of vorticity generation on blade surfaces. It is shown that accurate LES is heavily reliant on both the near-wall mesh fidelity and the ability of the imposed inflow condition to recreate the conditions found in the reference experiment. For components embedded in a compressor this requires the generation of turbulence fluctuations at the inlet plane. A recycling method is developed that improves the quality of the flow in a single stage calculation of an axial compressor, and indicates that future developments in both the recycling technique and computing power will bring simulations of axial compressors within reach of industry in the coming years.

  19. Exploration of DGVM Parameter Solution Space Using Simulated Annealing: Implications for Forecast Uncertainties

    Science.gov (United States)

    Wells, J. R.; Kim, J. B.

    2011-12-01

    Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not do a full exploration of the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and the system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from published literature, and where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that
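
    A generic simulated-annealing loop of the kind described is sketched below; the cooling schedule, proposal distribution and toy cost function are illustrative assumptions, standing in for the BIOMAP calibration against a reference vegetation map.

```python
import math
import random

def simulated_annealing(cost, x0, bounds, n_iter=5000, t0=1.0):
    """Generic simulated-annealing search over a box-bounded parameter
    space.  `cost` maps a parameter vector to a scalar error, e.g. the
    disagreement between a simulated and a reference vegetation map."""
    x, e = list(x0), cost(x0)
    best_x, best_e = x, e
    for k in range(n_iter):
        t = t0 * (1.0 - k / n_iter)            # linear cooling schedule
        cand = [min(hi, max(lo, xi + random.gauss(0, 0.1 * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]
        ec = cost(cand)
        # Accept downhill moves always; uphill moves with a probability
        # that shrinks as the temperature falls.
        if ec < e or random.random() < math.exp(-(ec - e) / max(t, 1e-9)):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Toy usage: recover a known optimum in 3 dimensions.
target = [0.3, -1.2, 2.0]
cost = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
print(simulated_annealing(cost, [0.0, 0.0, 0.0], [(-5, 5)] * 3))
```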

  20. Validated simulator for space debris removal with nets and other flexible tethers applications

    Science.gov (United States)

    Gołębiowski, Wojciech; Michalczyk, Rafał; Dyrek, Michał; Battista, Umberto; Wormnes, Kjetil

    2016-12-01

    In the context of active debris removal technologies and preparation activities for the e.Deorbit mission, a simulator for the dynamics of net-shaped elastic bodies and their interactions with rigid bodies has been developed. Its main application is to aid net design and to test scenarios for space debris deorbitation. The simulator can model all the phases of the debris capturing process: net launch, flight and wrapping around the target. It handles coupled simulation of rigid and flexible body dynamics. Flexible bodies were implemented using the Cosserat rod model. It allows the simulation of flexible threads or wires with elasticity and damping for stretching, bending and torsion. Threads may be combined into structures of any topology, so the software is able to simulate nets, pure tethers, tether bundles, cages, trusses, etc. Full contact dynamics was implemented. Programmatic interaction with the simulation is possible, e.g. for control implementation. The underlying model has been experimentally validated and, due to the significant influence of gravity, the experiment had to be performed in microgravity conditions. The validation experiment, performed in parabolic flight, was a downscaled version of the Envisat capture process. The prepacked net was launched towards the satellite model; it expanded, hit the model and wrapped around it. The whole process was recorded with two fast stereographic camera sets for full 3D trajectory reconstruction. The trajectories were used to compare net dynamics to the respective simulations and then to validate the simulation tool. The experiments were performed on board a Falcon-20 aircraft, operated by the National Research Council in Ottawa, Canada. Validation results show that the model reflects the physics of the phenomenon accurately enough that it may be used for scenario evaluation and mission design purposes. The functionalities of the simulator are described in detail in the paper, as well as its underlying model, sample cases and the methodology behind the validation. Results are presented and

  1. Radiation risk in space exploration

    International Nuclear Information System (INIS)

    Schimmerling, W.; Wilson, J.W.; Cucinotta, F.; Kim, M.H.Y.

    1997-01-01

    Humans living and working in space are exposed to energetic charged-particle radiation from galactic cosmic rays and solar particle emissions. In order to keep the risk due to radiation exposure of astronauts below acceptable levels, the physical interaction of these particles with space structures and the biological consequences for crew members need to be understood. Such knowledge is, to a large extent, very sparse when it is available at all. Radiation limits established for space radiation protection purposes are based on extrapolation of risk from Japanese survivor data, and have been found to have large uncertainties. In space, attempting to account for large uncertainties by worst-case design results in excessive costs, so accurate risk prediction is essential. It is best developed at ground-based laboratories, using particle accelerator beams to simulate individual components of space radiation. Development of mechanistic models of the action of space radiation is expected to lead to the required improvements in the accuracy of predictions, to the optimization of space structures for radiation protection and, eventually, to the development of biological methods of prevention of and intervention against radiation injury. (author)

  2. Large-eddy simulation of atmospheric flow over complex terrain

    Energy Technology Data Exchange (ETDEWEB)

    Bechmann, A.

    2006-11-15

    The present report describes the development and validation of a turbulence model designed for atmospheric flows based on the concept of Large-Eddy Simulation (LES). The background for the work is the high-Reynolds-number k - epsilon model, which has been implemented in a finite-volume code for the incompressible Reynolds-averaged Navier-Stokes equations (RANS). The k - epsilon model is traditionally used for RANS computations, but is here developed to also enable LES. LES is able to provide detailed descriptions of a wide range of engineering flows at low Reynolds numbers. For atmospheric flows, however, the high Reynolds numbers and the rough surface of the earth provide difficulties normally not compatible with LES. Since these issues are most severe near the surface, they are addressed by handling the near-surface region with RANS and only using LES above this region. Using this method, the developed turbulence model is able to handle both engineering and atmospheric flows and can be run in either RANS or LES mode. For LES simulations a time-dependent wind field that accurately represents the turbulent structures of a wind environment must be prescribed at the computational inlet. A method is implemented where the turbulent wind field from a separate LES simulation can be used as inflow. To avoid numerical dissipation of turbulence, special care is paid to the numerical method; e.g., the turbulence model is calibrated with the specific numerical scheme used. This is done by simulating decaying isotropic and homogeneous turbulence. Three atmospheric test cases are investigated in order to validate the behavior of the presented turbulence model. Simulation of the neutral atmospheric boundary layer illustrates the turbulence model's ability to generate and maintain the turbulent structures responsible for boundary-layer transport processes. Velocity and turbulence profiles are in good agreement with measurements. Simulation of the flow over the Askervein hill is also

  3. A Coordinated Initialization Process for the Distributed Space Exploration Simulation

    Science.gov (United States)

    Crues, Edwin Z.; Phillips, Robert G.; Dexter, Dan; Hasan, David

    2007-01-01

    A viewgraph presentation on the federate initialization process for the Distributed Space Exploration Simulation (DSES) is described. The topics include: 1) Background: DSES; 2) Simulation requirements; 3) Nine Step Initialization; 4) Step 1: Create the Federation; 5) Step 2: Publish and Subscribe; 6) Step 3: Create Object Instances; 7) Step 4: Confirm All Federates Have Joined; 8) Step 5: Achieve initialize Synchronization Point; 9) Step 6: Update Object Instances With Initial Data; 10) Step 7: Wait for Object Reflections; 11) Step 8: Set Up Time Management; 12) Step 9: Achieve startup Synchronization Point; and 13) Conclusions

  4. Space Debris Attitude Simulation - IOTA (In-Orbit Tumbling Analysis)

    Science.gov (United States)

    Kanzler, R.; Schildknecht, T.; Lips, T.; Fritsche, B.; Silha, J.; Krag, H.

    Today, there is little knowledge on the attitude state of decommissioned intact objects in Earth orbit. Observational means have advanced in the past years, but are still limited with respect to an accurate estimate of motion vector orientations and magnitude. Especially for the preparation of Active Debris Removal (ADR) missions as planned by ESA's Clean Space initiative or contingency scenarios for ESA spacecraft like ENVISAT, such knowledge is needed. The In-Orbit Tumbling Analysis tool (IOTA) is a prototype software, currently in development within the framework of ESA's “Debris Attitude Motion Measurements and Modelling” project (ESA Contract No. 40000112447), which is led by the Astronomical Institute of the University of Bern (AIUB). The project goal is to achieve a good understanding of the attitude evolution and the considerable internal and external effects which occur. To characterize the attitude state of selected targets in LEO and GTO, multiple observation methods are combined. Optical observations are carried out by AIUB, Satellite Laser Ranging (SLR) is performed by the Space Research Institute of the Austrian Academy of Sciences (IWF) and radar measurements and signal level determination are provided by the Fraunhofer Institute for High Frequency Physics and Radar Techniques (FHR). Developed by Hyperschall Technologie Göttingen GmbH (HTG), IOTA will be a highly modular software tool to perform short- (days), medium- (months) and long-term (years) propagation of the orbit and attitude motion (six degrees-of-freedom) of spacecraft in Earth orbit. The simulation takes into account all relevant acting forces and torques, including aerodynamic drag, solar radiation pressure, gravitational influences of Earth, Sun and Moon, eddy current damping, impulse and momentum transfer from space debris or micro meteoroid impact, as well as the optional definition of particular spacecraft specific influences like tank sloshing, reaction wheel behaviour

  5. Large-eddy simulation of atmospheric flow over complex terrain

    DEFF Research Database (Denmark)

    Bechmann, Andreas

    2007-01-01

    The present report describes the development and validation of a turbulence model designed for atmospheric flows based on the concept of Large-Eddy Simulation (LES). The background for the work is the high Reynolds number k - epsilon model, which has been implemented on a finite-volume code...... turbulence model is able to handle both engineering and atmospheric flows and can be run in both RANS or LES mode. For LES simulations a time-dependent wind field that accurately represents the turbulent structures of a wind environment must be prescribed at the computational inlet. A method is implemented...... where the turbulent wind field from a separate LES simulation can be used as inflow. To avoid numerical dissipation of turbulence special care is paid to the numerical method, e.g. the turbulence model is calibrated with the specific numerical scheme used. This is done by simulating decaying isotropic...

  6. Simulation of transients with space-dependent feedback by coarse mesh flux expansion method

    International Nuclear Information System (INIS)

    Langenbuch, S.; Maurer, W.; Werner, W.

    1975-01-01

    For the simulation of the time-dependent behaviour of large LWR cores, even the most efficient Finite-Difference (FD) methods require a prohibitive amount of computing time in order to achieve results of acceptable accuracy. Static CM solutions computed with a mesh size corresponding to the fuel element structure (about 20 cm) are at least as accurate as FD solutions computed with about 5 cm mesh size. For 3D calculations this results in a reduction of storage requirements by a factor of 60 and of computing costs by a factor of 40, relative to FD methods. These results have been obtained for pure neutronic calculations, where feedback is not taken into account. In this paper it is demonstrated that the method retains its accuracy also in kinetic calculations, even in the presence of strong space-dependent feedback. (orig./RW)

  7. Large Eddy Simulation (LES) for IC Engine Flows

    Directory of Open Access Journals (Sweden)

    Kuo Tang-Wei

    2013-10-01

    Full Text Available Numerical computations are carried out using an engineering-level Large Eddy Simulation (LES) model provided by the commercial CFD code CONVERGE. The analytical framework and experimental setup consist of a single-cylinder engine with a Transparent Combustion Chamber (TCC) under motored conditions. A rigorous working procedure for comparing and analyzing the results from simulation and high-speed Particle Image Velocimetry (PIV) experiments is documented in this work. The following aspects of LES are analyzed using this procedure: the number of cycles required for convergence with adequate accuracy; the effect of mesh size, time step, sub-grid-scale (SGS) turbulence models and boundary condition treatments; and the application of the proper orthogonal decomposition (POD) technique.
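
    Of the analysis steps listed, POD is the most self-contained to sketch. The snippet below is the standard snapshot-POD-via-SVD recipe applied to a stack of flattened velocity fields (e.g., PIV frames or saved LES cycles); the array sizes and names are illustrative, not from the paper.

```python
import numpy as np

def pod(snapshots: np.ndarray):
    """Snapshot POD via the SVD.  `snapshots` has shape (n_points,
    n_snapshots): each column is one flattened velocity field.
    Returns spatial modes, singular values and temporal coefficients."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean                  # POD acts on fluctuations
    modes, s, vt = np.linalg.svd(fluct, full_matrices=False)
    coeffs = np.diag(s) @ vt                  # temporal coefficients
    return modes, s, coeffs

# Toy usage: 2000-point fields, 50 snapshots.
rng = np.random.default_rng(0)
U = rng.standard_normal((2000, 50))
modes, s, a = pod(U)
energy = s**2 / np.sum(s**2)                  # modal energy fractions
print("energy captured by first 5 modes:", energy[:5].sum())
```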

  8. A large-eddy simulation study of wake propagation and power production in an array of tidal-current turbines.

    Science.gov (United States)

    Churchfield, Matthew J; Li, Ye; Moriarty, Patrick J

    2013-02-28

    This paper presents our initial work in performing large-eddy simulations of tidal turbine array flows. First, a horizontally periodic precursor simulation is performed to create turbulent flow data. Then those data are used as inflow into a tidal turbine array two rows deep and infinitely wide. The turbines are modelled using rotating actuator lines, and the finite-volume method is used to solve the governing equations. In studying the wakes created by the turbines, we observed that the vertical shear of the inflow combined with wake rotation causes lateral wake asymmetry. Also, various turbine configurations are simulated, and the total power production relative to isolated turbines is examined. We found that staggering consecutive rows of turbines in the simulated configurations allows the greatest efficiency using the least downstream row spacing. Counter-rotating consecutive downstream turbines in a non-staggered array shows a small benefit. This work has identified areas for improvement. For example, using a larger precursor domain would better capture elongated turbulent structures, and including salinity and temperature equations would account for density stratification and its effect on turbulence. Additionally, the wall shear stress modelling could be improved, and more array configurations could be examined.

  9. Dynamic modelling and simulation of complex drive systems of large belt conveyors; Dynamische Modellierung und Simulation komplexer Antriebssysteme von Grossbandanlagen

    Energy Technology Data Exchange (ETDEWEB)

    Burgwinkel, Paul; Vreydal, Daniel; Eltaliawi, Gamil; Vijayakumar, Nandhakumar [RWTH Aachen (DE). Inst. fuer Maschinentechnik der Rohstoffindustrie (IMR)

    2010-09-15

    For the first time, the co-simulation method was successfully used at the Institute for Mechanical Engineering in the Raw Materials Industry at Rhineland-Westphalia Technological University in Aachen to represent a complete large belt conveyor for an open-cast mine in a single simulation model. The aim of this project was the development of an electro-mechanical simulation model which represents all components of a large belt conveyor, from the drive motor to the conveyor belt, in one simulation model and thus makes the interactions between the individual assemblies verifiable by calculation. With the aid of the developed model it was possible to determine critical operating speeds of the represented large belt conveyor and to derive suitable measures against undesirable resonance states in the drive assembly. Furthermore, it was possible to clarify the advantage of the full numerical representation of an electromechanical drive system. (orig.)

  10. Large-eddy simulation with accurate implicit subgrid-scale diffusion

    NARCIS (Netherlands)

    B. Koren (Barry); C. Beets

    1996-01-01

    A method for large-eddy simulation is presented that does not use an explicit subgrid-scale diffusion term. Subgrid-scale effects are modelled implicitly through an appropriate monotone (in the sense of Spekreijse 1987) discretization method for the advective terms. Special attention is

  11. Large Eddy Simulations of Severe Convection Induced Turbulence

    Science.gov (United States)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    Convective storms can pose a serious risk to aviation operations since they are often accompanied by turbulence, heavy rain, hail, icing, lightning, strong winds, and poor visibility. They can cause major delays in air traffic by forcing the re-routing of flights and by disrupting operations at airports in the vicinity of the storm system. In this study, the Terminal Area Simulation System is used to simulate five different convective events ranging from a mesoscale convective complex to isolated storms. The occurrence of convection-induced turbulence is analyzed from these simulations. The validation of model results against radar data and other observations is reported, and an aircraft-centric turbulence hazard metric calculated for each case is discussed. The turbulence analysis showed that large pockets of significant turbulence hazard can be found in regions of low radar reflectivity. Moderate and severe turbulence was often found in building cumulus turrets and overshooting tops.

  12. Initial condition effects on large scale structure in numerical simulations of plane mixing layers

    Science.gov (United States)

    McMullan, W. A.; Garrett, S. J.

    2016-01-01

    In this paper, Large Eddy Simulations are performed on the spatially developing plane turbulent mixing layer. The simulated mixing layers originate from initially laminar conditions. The focus of this research is on the effect of the nature of the imposed fluctuations on the large-scale spanwise and streamwise structures in the flow. Two simulations are performed; one with low-level three-dimensional inflow fluctuations obtained from pseudo-random numbers, the other with physically correlated fluctuations of the same magnitude obtained from an inflow generation technique. Where white-noise fluctuations provide the inflow disturbances, no spatially stationary streamwise vortex structure is observed, and the large-scale spanwise turbulent vortical structures grow continuously and linearly. These structures are observed to have a three-dimensional internal geometry with branches and dislocations. Where physically correlated fluctuations provide the inflow disturbances, a "streaky" streamwise structure that is spatially stationary is observed, with the large-scale turbulent vortical structures growing with the square root of time. These large-scale structures are quasi-two-dimensional, on top of which the secondary structure rides. The simulation results are discussed in the context of the varying interpretations of mixing layer growth that have been postulated. Recommendations are made concerning the data required from experiments in order to produce accurate numerical simulation recreations of real flows.
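
    The difference between the two inflow treatments can be illustrated with a one-dimensional stand-in: uncorrelated pseudo-random samples versus the same noise passed through a Gaussian digital filter to impose a correlation length (one common inflow generation technique; the paper's exact method may differ):

        import numpy as np

        rng = np.random.default_rng(0)

        def white_noise_inflow(n, amplitude):
            """Uncorrelated fluctuations, as in the first simulation."""
            return amplitude*rng.standard_normal(n)

        def correlated_inflow(n, amplitude, length_scale):
            """Spatially correlated fluctuations of the same magnitude."""
            noise = rng.standard_normal(n)
            x = np.arange(-3*length_scale, 3*length_scale + 1, dtype=float)
            kernel = np.exp(-np.pi*x**2/(2.0*length_scale**2))
            kernel /= np.sqrt(np.sum(kernel**2))   # preserve the rms level
            return amplitude*np.convolve(noise, kernel, mode='same')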

  13. Assembly considerations for large reflectors

    Science.gov (United States)

    Bush, H.

    1988-01-01

    The technologies developed at LaRC in the area of erectable structures are discussed. The information is of direct value to the Large Deployable Reflector (LDR) because an option for the LDR backup structure is to assemble it in space. The efforts in this area, which include development of joints, underwater assembly simulation tests, flight assembly/disassembly tests, and fabrication of 5-meter trusses, led to the use of the LaRC concept as the baseline configuration for the Space Station structure. The Space Station joint is linear in the load and displacement range of interest to the Space Station; the ability to manually assemble and disassemble a 45-foot truss structure was demonstrated by astronauts in space as part of the ACCESS Shuttle Flight Experiment. The structure was built in 26 minutes 46 seconds and involved a total of 500 manipulations of untethered hardware. Also, the correlation of the space experience with the neutral buoyancy simulation was very good. Sections of the proposed 5-meter bay Space Station truss have been built on the ground. Activities at LaRC have included the development of mobile remote manipulator systems (which can traverse the Space Station 5-meter structure), preliminary LDR sun shield concepts, LDR construction scenarios, and activities in robotic assembly of truss-type structures.

  14. Large-scale ground motion simulation using GPGPU

    Science.gov (United States)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumed source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation traditionally carried out on the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation) and postprocessor tools (filtering, visualization, and so on). The computational model is decomposed in the two horizontal directions and each decomposed sub-model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME 2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First we performed a strong-scaling test using a model with about 22 million grid points and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs, respectively. Next, we examined a weak-scaling test in which the model size (number of grid points) is increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grid points using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number …
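
    The quoted strong-scaling speed-ups correspond to parallel efficiencies that are easy to back out, assuming the baseline is a single GPU; a small bookkeeping check of the numbers in the abstract:

        def strong_scaling_efficiency(speedup, n_gpus):
            """Fraction of ideal linear speed-up actually achieved."""
            return speedup / n_gpus

        print(strong_scaling_efficiency(3.2, 4))    # 0.80
        print(strong_scaling_efficiency(7.3, 16))   # ~0.46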

  15. Simulation analysis of impulse characteristics of space debris irradiated by multi-pulse laser

    Science.gov (United States)

    Lin, Zhengguo; Jin, Xing; Chang, Hao; You, Xiangyu

    2018-02-01

    Cleaning space debris with lasers is a topical subject in the field of space security research, and impulse characteristics are the basis of laser debris removal. In order to study the impulse characteristics of rotating, irregular space debris irradiated by a multi-pulse laser, an impulse calculation method for rotating space debris under multi-pulse irradiation is established based on the area matrix method. The calculation of impulse and impulsive moment under multi-pulse irradiation is given, and the calculation procedure for the total impulse is analyzed. Taking a typical non-planar debris shape (a cube) as an example, the impulse characteristics of space debris irradiated by a multi-pulse laser are simulated and analyzed. The effects of initial angular velocity, spot size and pulse frequency on the impulse characteristics are investigated.
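
    A minimal sketch of accumulating impulse over a pulse train on one rotating facet (the area matrix method generalizes this to all facets of an irregular body; the coupling coefficient and geometry below are illustrative assumptions, not values from the paper):

        import numpy as np

        Cm = 5.0e-5        # momentum coupling coefficient, N s per J (assumed)
        E_pulse = 100.0    # pulse energy, J (assumed)
        f_rep = 10.0       # pulse repetition frequency, Hz
        omega = 2.0        # debris angular velocity, rad/s
        n_pulses = 50

        total_impulse = 0.0
        for k in range(n_pulses):
            t = k/f_rep
            # projection of the facet normal on the beam direction; a facet
            # turned away from the laser receives no impulse
            proj = max(np.cos(omega*t), 0.0)
            total_impulse += Cm*E_pulse*proj
        print(total_impulse, "N s")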

  16. The large dimension limit of a small black hole instability in anti-de Sitter space

    Science.gov (United States)

    Herzog, Christopher P.; Kim, Youngshin

    2018-02-01

    We study the dynamics of a black hole in an asymptotically AdS_d × S^d space-time in the limit of a large number of dimensions, d → ∞. Such a black hole is known to become dynamically unstable below a critical radius. We derive the dispersion relation for the quasinormal mode that governs this instability in an expansion in 1/d. We also provide a full nonlinear analysis of the instability at leading order in 1/d. We find solutions that resemble the lumpy black spots and black belts previously constructed numerically for small d, breaking the SO(d+1) rotational symmetry of the sphere down to SO(d). We are also able to follow the time evolution of the instability. Due possibly to limitations in our analysis, our time-dependent simulations do not settle down to stationary solutions. This work has relevance for strongly interacting gauge theories; through the AdS/CFT correspondence, the special case d = 5 corresponds to maximally supersymmetric Yang-Mills theory on a spatial S^3 in the microcanonical ensemble, in a strong coupling and large number of colors limit.

  17. Topology of Large-Scale Structure by Galaxy Type: Hydrodynamic Simulations

    Science.gov (United States)

    Gott, J. Richard, III; Cen, Renyue; Ostriker, Jeremiah P.

    1996-07-01

    The topology of large-scale structure is studied as a function of galaxy type using the genus statistic. In hydrodynamical cosmological cold dark matter simulations, galaxies form on caustic surfaces (Zeldovich pancakes) and then slowly drain onto filaments and clusters. The earliest forming galaxies in the simulations (defined as "ellipticals") are thus seen at the present epoch preferentially in clusters (tending toward a meatball topology), while the latest forming galaxies (defined as "spirals") are seen currently in a spongelike topology. The topology is measured by the genus (number of "doughnut" holes minus number of isolated regions) of the smoothed density-contour surfaces. The measured genus curve for all galaxies as a function of density obeys approximately the theoretical curve expected for random-phase initial conditions, but the early-forming elliptical galaxies show a shift toward a meatball topology relative to the late-forming spirals. Simulations using standard biasing schemes fail to show such an effect. Large observational samples separated by galaxy type could be used to test for this effect.
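
    For a Gaussian (random-phase) field the genus curve has the closed form g(ν) ∝ (1 − ν²) exp(−ν²/2), which is the theoretical benchmark the measured curves are compared against; a sketch of the reference curve (overall amplitude depends on the smoothing length and is left as a free parameter here):

        import numpy as np

        def genus_random_phase(nu, amplitude=1.0):
            """Theoretical genus at threshold nu (in units of the standard
            deviation) for a random-phase Gaussian field."""
            return amplitude*(1.0 - nu**2)*np.exp(-nu**2/2.0)

        nu = np.linspace(-3.0, 3.0, 61)
        g_reference = genus_random_phase(nu)
        # a meatball-like population shows up as a systematic shift of the
        # measured curve relative to this symmetric reference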

  18. How to simulate global cosmic strings with large string tension

    Energy Technology Data Exchange (ETDEWEB)

    Klaer, Vincent B.; Moore, Guy D., E-mail: vklaer@theorie.ikp.physik.tu-darmstadt.de, E-mail: guy.moore@physik.tu-darmstadt.de [Institut für Kernphysik, Technische Universität Darmstadt, Schlossgartenstraße 2, Darmstadt, D-64289 Germany (Germany)

    2017-10-01

    Global string networks may be relevant to axion production in the early Universe, as well as other cosmological scenarios. Such networks contain a large hierarchy of scales between the string core scale and the Hubble scale, ln(f_a/H) ∼ 70, which influences the network dynamics by giving the strings large tensions T ≅ π f_a² ln(f_a/H). We present a new numerical approach to simulate such global string networks, capturing the tension without an exponentially large lattice.

  19. Large Eddy Simulation of the spray formation in confinements

    International Nuclear Information System (INIS)

    Lampa, A.; Fritsching, U.

    2013-01-01

    Highlights: • Process stability of confined spray processes is affected by the geometric design of the spray confinement. • LES simulations of confined spray flow have been performed successfully. • Clustering of droplets is predicted in simulations and validated with experiments. • Criteria for specific coherent gas flow patterns and droplet clustering behaviour are found. -- Abstract: The particle and powder properties produced within spray drying processes are influenced by various unsteady transport phenomena in the dispersed multiphase spray flow in a confined spray chamber. In this context, differently scaled spray structures in a confined spray environment have been analyzed in experiments and numerical simulations. The experimental investigations were carried out with Particle Image Velocimetry to determine the velocities of the gas and the discrete phase. Large-Eddy Simulations were set up to predict the transient behaviour of the spray process and have given more insight into the sensitivity of the spray flow structures to the spray chamber design.

  20. Large-scale derived flood frequency analysis based on continuous simulation

    Science.gov (United States)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km²), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into catchment models. A long-term simulation of this combined system enables very long discharge series to be derived at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only the whole of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km². 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Danube and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes the several …
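
    The heart of such a multisite generator is drawing daily fields that respect an observed spatial correlation structure; a minimal stand-in using a Cholesky factor of the inter-station correlation matrix (three fictitious stations, standard-normal anomalies rather than real precipitation statistics):

        import numpy as np

        rng = np.random.default_rng(42)

        def synthetic_daily_anomalies(n_days, corr):
            """Daily multisite anomalies with a target spatial correlation."""
            L = np.linalg.cholesky(corr)
            z = rng.standard_normal((n_days, corr.shape[0]))
            return z @ L.T    # each row: one day over all stations

        corr = np.array([[1.0, 0.8, 0.5],
                         [0.8, 1.0, 0.8],
                         [0.5, 0.8, 1.0]])
        fields = synthetic_daily_anomalies(3650, corr)  # ten years shown;
        # the study generated 10,000 years at 528 stations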

  1. Large-eddy simulations for turbulent flows

    International Nuclear Information System (INIS)

    Husson, S.

    2007-07-01

    The aim of this work is to study the impact of thermal gradients on a turbulent channel flow with imposed wall temperatures and friction Reynolds numbers of 180 and 395. In this configuration, temperature variations can be strong and induce significant variations of the fluid properties. We consider the low Mach number equations and carry out large eddy simulations. We first validate our simulations through comparisons of some of our LES results with DNS data. Then, we investigate the influence of the variations of the conductivity and the viscosity and show that these properties can be assumed constant only for weak temperature gradients. We also study the thermal sub-grid-scale modelling and find no difference whether the sub-grid-scale Prandtl number is taken constant or dynamically calculated. The analysis of the effects of strongly increasing the temperature ratio mainly shows a dissymmetry of the profiles. The physical mechanism responsible for these modifications is explained. Finally, we use semi-local scaling and the Van Driest transformation and we show that they lead to a better correspondence between the low and high temperature ratio profiles. (author)
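
    The Van Driest transformation used in the final step integrates the density-weighted velocity increment, u_vd = ∫ sqrt(ρ/ρ_w) du⁺; a direct discrete version, assuming wall-scaled velocity and density profiles on the same grid:

        import numpy as np

        def van_driest(u_plus, rho, rho_wall):
            """Trapezoidal form of u_vd = integral of sqrt(rho/rho_w) du+."""
            w = np.sqrt(rho/rho_wall)
            du = np.diff(u_plus)
            return np.concatenate(([0.0],
                                   np.cumsum(0.5*(w[1:] + w[:-1])*du)))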

  2. Large eddy simulations of an airfoil in turbulent inflow

    DEFF Research Database (Denmark)

    Gilling, Lasse; Sørensen, Niels N.

    2008-01-01

    Wind turbines operate in the turbulent boundary layer of the atmosphere and due to the rotational sampling effect the blades experience a high level of turbulence [1]. In this project the effect of turbulence is investigated by large eddy simulations of the turbulent flow past a NACA 0015 airfoil...

  3. Simulated effects of host fish distribution on juvenile unionid mussel dispersal in a large river

    Science.gov (United States)

    Daraio, J.A.; Weber, L.J.; Zigler, S.J.; Newton, T.J.; Nestler, J.M.

    2012-01-01

    Larval mussels (family Unionidae) are obligate parasites on fish, and after excystment from their host they are transported with the flow as juveniles. We know relatively little about the mechanisms that affect dispersal and subsequent settlement of juvenile mussels in large rivers. We used a three-dimensional hydrodynamic model of a reach of the Upper Mississippi River with stochastic Lagrangian particle tracking to simulate juvenile dispersal. Sensitivity analyses were used to determine the importance of excystment location in two-dimensional space (lateral and longitudinal) and to assess the effects of vertical location (depth in the water column) on dispersal distances and juvenile settling distributions. In our simulations, greater than 50% of juvenile mussels settled on the river bottom within 500 m of their point of excystment, regardless of the vertical location of the fish in the water column. Dispersal distances were most variable in environments with higher velocity and high gradients in velocity, such as along channel margins, near the channel bed, or where effects of river bed morphology caused large changes in hydraulics. Dispersal distance and its variance were greater when juvenile excystment occurred in areas where the vertical velocity (w) was positive (indicating an upward velocity) than when w was negative. Juvenile dispersal distance is likely to be more variable for mussel species whose hosts inhabit areas with steeper velocity gradients (e.g. channel margins) than for species whose hosts generally inhabit low-flow environments (e.g. impounded areas).
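
    Stochastic Lagrangian tracking of this kind amounts to advecting each juvenile with the local velocity plus a random-walk mixing term and a settling velocity until it reaches the bed; a simplified uniform-flow sketch (all rates are illustrative, not the study's hydrodynamic fields):

        import numpy as np

        rng = np.random.default_rng(1)

        def settle_distances(n, u, w, depth, w_settle, diff, dt,
                             max_steps=200000):
            """Downstream distance at which each particle hits the bed."""
            x = np.zeros(n)
            z = np.full(n, depth)
            dist = np.full(n, np.nan)
            alive = np.ones(n, dtype=bool)
            sigma = np.sqrt(2.0*diff*dt)     # random-walk step size
            for _ in range(max_steps):
                if not alive.any():
                    break
                m = alive.sum()
                x[alive] += u*dt + sigma*rng.standard_normal(m)
                z[alive] += (w - w_settle)*dt + sigma*rng.standard_normal(m)
                hit = alive & (z <= 0.0)
                dist[hit] = x[hit]
                alive &= ~hit
            return dist

        d = settle_distances(1000, u=0.5, w=0.0, depth=2.0,
                             w_settle=0.003, diff=0.01, dt=1.0)
        print(np.nanmedian(d))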

  4. Very large eddy simulation of the Red Sea overflow

    Science.gov (United States)

    Ilıcak, Mehmet; Özgökmen, Tamay M.; Peters, Hartmut; Baumert, Helmut Z.; Iskandarani, Mohamed

    Mixing between overflows and ambient water masses is a critical problem of deep-water mass formation in the downwelling branch of the meridional overturning circulation of the ocean. Modeling approaches that have been tested so far rely either on algebraic parameterizations in hydrostatic ocean circulation models, or on large eddy simulations that resolve most of the mixing using nonhydrostatic models. In this study, we examine the performance of a set of turbulence closures that have not previously been tested against observational data for overflows. We employ the so-called very large eddy simulation (VLES) technique, which allows the use of k-ɛ models in nonhydrostatic models. This is done by applying a dynamic spatial filtering to the k-ɛ equations. To our knowledge, this is the first time the VLES approach has been adopted for an ocean modeling problem. The performance of the k-ɛ and VLES models is evaluated by conducting numerical simulations of the Red Sea overflow and comparing them to observations from the Red Sea Outflow Experiment (REDSOX). The computations are constrained to one of the main channels transporting the overflow, which is narrow enough to permit the use of a two-dimensional (and nonhydrostatic) model. A large set of experiments is conducted using different closure models, Reynolds numbers and spatial resolutions. It is found that, when no turbulence closure is used, the basic structure of the overflow, consisting of a well-mixed bottom layer (BL) and an entraining interfacial layer (IL), cannot be reproduced. The k-ɛ model leads to unrealistic thicknesses for both the BL and IL, while VLES results in the most realistic reproduction of the REDSOX observations.

  5. Realizability conditions for the turbulent stress tensor in large-eddy simulation

    NARCIS (Netherlands)

    Vreman, A.W.; Geurts, Bernardus J.; Kuerten, Johannes G.M.

    1994-01-01

    The turbulent stress tensor in large-eddy simulation is examined from a theoretical point of view. Realizability conditions for the components of this tensor are derived, which hold if and only if the filter function is positive. The spectral cut-off, one of the filters frequently used in large-eddy

  6. Three-dimensional two-fluid Braginskii simulations of the large plasma device

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, Dustin M., E-mail: dustin.m.fisher.gr@dartmouth.edu; Rogers, Barrett N., E-mail: barrett.rogers@dartmouth.edu [Department of Physics and Astronomy, Dartmouth College, Hanover, New Hampshire 03755 (United States)]; Rossi, Giovanni D.; Guice, Daniel S.; Carter, Troy A. [Department of Physics and Astronomy, University of California, Los Angeles, California 90095-1547 (United States)]

    2015-09-15

    The Large Plasma Device (LAPD) is modeled using the 3D Global Braginskii Solver code. Comparisons to experimental measurements are made in the low-bias regime in which there is an intrinsic E × B rotation of the plasma. In the simulations, this rotation is caused primarily by sheath effects and may be a likely mechanism for the intrinsic rotation seen in LAPD. Simulations show strong qualitative agreement with the data, particularly the radial dependence of the density fluctuations, cross-correlation lengths, radial flux dependence outside of the cathode edge, and camera imagery. Kelvin Helmholtz (KH) turbulence at relatively large scales is the dominant driver of cross-field transport in these simulations with smaller-scale drift waves and sheath modes playing a secondary role. Plasma holes and blobs arising from KH vortices in the simulations are consistent with the scale sizes and overall appearance of those in LAPD camera images. The addition of ion-neutral collisions in the simulations at previously theorized values reduces the radial particle flux by about a factor of two, from values that are somewhat larger than the experimentally measured flux to values that are somewhat lower than the measurements. This reduction is due to a modest stabilizing contribution of the collisions on the KH-modes driving the turbulent transport.

  7. A logistics model for large space power systems

    Science.gov (United States)

    Koelle, H. H.

    Space Power Systems (SPS) have to overcome two hurdles: (1) finding an attractive design, manufacturing and assembly concept, and (2) having available a space transportation system that can provide economical logistic support during the construction and operational phases. An initial system feasibility study, some five years ago, was based on a reference system that used terrestrial resources only and was based partially on electric propulsion systems. The conclusion was: it is feasible but not yet economically competitive with other options. This study is based on terrestrial and extraterrestrial resources and on chemical (LH2/LOX) propulsion systems. These engines are available from the Space Shuttle production line and require only small changes. Other so-called advanced propulsion systems investigated did not prove economically superior if lunar LOX is available. We assume that a Shuttle-derived Heavy Lift Launch Vehicle (HLLV) will become available around the turn of the century and that this will be used to establish a research base on the lunar surface. This lunar base has the potential to grow into a lunar factory producing LOX and construction materials, supporting, among other projects, the construction of space power systems in geostationary orbit. A model was developed to simulate the logistics support of such an operation over a 50-year life cycle. After 50 years, 111 SPS units of 5 GW each with an availability of 90% will produce about 100 × 5 = 500 GW. The model comprises 60 equations and requires 29 assumptions for the parameters involved. The 60 state variables calculated with these equations are given on an annual basis and as averages over the 50-year life cycle. Recycling of defective parts in geostationary orbit is one of the features of the model. The state of the art with respect to SPS technology enters as a variable: Mg of mass per MW of electric power delivered. If the space manufacturing facility, a maintenance and repair facility …
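
    The headline power figure is a one-line consistency check: 111 units at 90% availability amount to about 100 effective units of 5 GW each:

        units, unit_gw, availability = 111, 5.0, 0.90
        effective = units*availability      # ~100 effective units
        print(effective*unit_gw, "GW")      # ~500 GW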

  8. Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation

    International Nuclear Information System (INIS)

    Hoshi, T; Fujiwara, T

    2009-01-01

    An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.

  9. Multiscale Data Assimilation for Large-Eddy Simulations

    Science.gov (United States)

    Li, Z.; Cheng, X.; Gustafson, W. I., Jr.; Xiao, H.; Vogelmann, A. M.; Endo, S.; Toto, T.

    2017-12-01

    Large-eddy simulation (LES) is a powerful tool for understanding atmospheric turbulence, boundary layer physics and cloud development, and there is a great need for developing data assimilation methodologies that can constrain LES models. The U.S. Department of Energy Atmospheric Radiation Measurement (ARM) User Facility has been developing the capability to routinely generate ensembles of LES. The LES ARM Symbiotic Simulation and Observation (LASSO) project (https://www.arm.gov/capabilities/modeling/lasso) is generating simulations for shallow convection days at the ARM Southern Great Plains site in Oklahoma. One of the major objectives of LASSO is to develop the capability to observationally constrain LES using a hierarchy of ARM observations. We have implemented a multiscale data assimilation (MSDA) scheme, which allows data assimilation to be applied separately at distinct spatial scales, so that localized observations can be effectively assimilated to constrain the mesoscale fields in the LES domain of about 15 km in width. The MSDA analysis is used to produce forcing data that drive the LES. With this workflow we have examined 13 days with shallow convection selected from the period May-August 2016. We will describe the implementation of MSDA, present LES results, and address challenges and opportunities for applying data assimilation to LES studies.

  10. Large-size deployable construction heated by solar irradiation in free space

    Science.gov (United States)

    Pestrenina, Irena; Kondyurin, Alexey; Pestrenin, Valery; Kashin, Nickolay; Naymushin, Alexey

    Large-size deployable construction in free space with subsequent direct curing was invented more than fifteen years ago (Briskman et al., 1997; Kondyurin, 1998). It raised a number of scientific problems, one of which is the possibility of using solar energy to initiate the curing reaction. This paper investigates the curing process under solar irradiation during a space flight in Earth orbits. A rotation of the construction is considered; this motion can provide the optimal temperature distribution in the construction required for the polymerization reaction. A cylindrical construction of 80 m length with two hemispherical ends of 10 m radius is considered. The 10 mm thick wall of the construction, a carbon fiber/epoxy matrix composite, is irradiated by the heat flux from the sun and radiates heat from the external surface according to the Stefan-Boltzmann law. The stage of the polymerization reaction is calculated as a function of temperature and time, based on laboratory experiments with certified composite materials for space exploitation. The curing kinetics of the composite is calculated for Low Earth Orbits of different inclinations (300 km altitude) and for Geostationary Earth Orbit (40000 km altitude). The results show that • the curing process depends strongly on the Earth orbit and the rotation of the construction; • an optimal flight orbit and rotation can be found that provide a thermal regime sufficient for the complete curing of the considered construction. The study is supported by RFBR grant No. 12-08-00970-a. 1. Briskman V., A. Kondyurin, K. Kostarev, V. Leontyev, M. Levkovich, A. Mashinsky, G. Nechitailo, T. Yudina, Polymerization in microgravity as a new process in space technology, Paper No IAA-97-IAA.12.1.07, 48th International Astronautical Congress, October 6-10, 1997, Turin, Italy. 2. Kondyurin A.V., Building the shells of large space stations by the polymerisation of epoxy composites in open space, Int. Polymer Sci. and Technol., v.25, N4
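
    The orbital thermal regime driving the cure can be sketched with a lumped heat balance on a rotating wall element: absorbed solar flux (zero when shadowed or facing away) against Stefan-Boltzmann re-radiation. The property values below are generic assumptions, not the paper's certified composite data:

        import numpy as np
        from scipy.integrate import solve_ivp

        SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
        Q_SUN = 1361.0          # solar constant, W m^-2
        ALPHA, EPS = 0.9, 0.85  # absorptivity / emissivity (assumed)
        M_CP = 1.6e4            # J K^-1 per m^2 of 10 mm composite wall (assumed)
        OMEGA = 2.0*np.pi/600   # one revolution per 10 minutes (assumed)

        def dT_dt(t, T):
            illum = max(np.cos(OMEGA*t), 0.0)   # sun-facing projection
            return [(ALPHA*Q_SUN*illum - EPS*SIGMA*T[0]**4)/M_CP]

        sol = solve_ivp(dT_dt, (0.0, 6*3600.0), [300.0], max_step=10.0)
        print(sol.y[0].min(), sol.y[0].max())   # temperature swing per spin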

  11. Large-eddy simulation of highly underexpanded transient gas jets

    NARCIS (Netherlands)

    Vuorinen, V.; Yu, J.; Tirunagari, S.; Kaario, O.; Larmi, M.; Duwig, C.; Boersma, B.J.

    2013-01-01

    Large-eddy simulations (LES) based on scale-selective implicit filtering are carried out in order to study the effect of nozzle pressure ratios on the characteristics of highly underexpanded jets. Pressure ratios ranging from 4.5 to 8.5 with Reynolds numbers of the order 75?000–140?000 are

  12. Can crawl space temperature and moisture conditions be calculated with a whole-building hygrothermal simulation tool?

    DEFF Research Database (Denmark)

    Vanhoutteghem, Lies; Morelli, Martin; Sørensen, Lars Schiøtt

    2017-01-01

    The hygrothermal behaviour of an outdoor-ventilated crawl space with two different designs of the floor structure was investigated. The first design had 250 mm insulation and visible wooden beams towards the crawl space. The second design had 300 mm insulation and no visible wooden beams. One year of measurements was compared with simulations of the temperature and moisture conditions in the floor structure and crawl space. The measurements showed that the extra 50 mm insulation placed below the beams kept the moisture content in the beams below 20 weight% all year. A reasonable agreement between the measurements and simulations was found; however, the evaporation from the soil was a dominant parameter affecting the hygrothermal response in the crawl space and floor structure.

  13. Some thoughts on the management of large, complex international space ventures

    Science.gov (United States)

    Lee, T. J.; Kutzer, Ants; Schneider, W. C.

    1992-01-01

    Management issues relevant to the development and deployment of large international space ventures are discussed, with particular attention given to previous experience. Management approaches utilized in the past are characterized as either simple or complex, and signs of efficient management are examined. Simple approaches include those in which experiments and subsystems are developed for integration into spacecraft; the Apollo-Soyuz Test Project is given as an example of a simple multinational approach. Complex approaches include those for ESA's Spacelab Project and the Space Station Freedom, in which functional interfaces cross agency and political boundaries. It is concluded that individual elements of space programs should be managed by the individual participating agencies, with overall configuration control coordinated at each level and a program director acting to manage overall objectives and project interfaces.

  14. Simulating the Effect of Space Vehicle Environments on Directional Solidification of a Binary Alloy

    Science.gov (United States)

    Westra, D. G.; Heinrich, J. C.; Poirier, D. R.

    2003-01-01

    Space microgravity missions are designed to provide a microgravity environment for scientific experiments, but these missions cannot provide a perfect environment, due to vibrations caused by crew activity, on-board experiments, support systems (pumps, fans, etc.), periodic orbital maneuvers, and water dumps. Therefore, it is necessary to predict the impact of these vibrations on space experiments prior to performing them. Simulations were conducted to study the effect of the vibrations on the directional solidification of a dendritic alloy. Finite element calculations were done with a simulator based on a continuum model of dendritic solidification, using the Fractional Step Method (FSM). The FSM splits the solution of the momentum equation into two steps: the viscous intermediate step, which does not enforce continuity; and the inviscid projection step, which calculates the pressure and enforces continuity. The FSM provides significant computational benefits for predicting flows in a directionally solidified alloy, compared to other methods presently employed, because of the efficiency gains in the uncoupled solution of velocity and pressure. A difficulty, shared with finite differences, arises when the interdendritic liquid reaches the eutectic temperature and concentration. When a node reaches the eutectic temperature, it is assumed that the solidification of the eutectic liquid continues at constant temperature until all the eutectic is solidified. With this approach, solidification is not achieved continuously across an element; rather, the element is not considered solidified until the eutectic isotherm overtakes the top nodes. For microgravity simulations, where the convection is driven by shrinkage, this introduces large variations in the fluid velocity. When the eutectic isotherm reaches a node, all the eutectic must be solidified in a short period, causing an abrupt increase in velocity. To overcome this difficulty, we employed a scheme to numerically predict a more accurate value …
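
    The two-step structure described here is the classic projection idea; a compact doubly periodic sketch in which Jacobi iterations stand in for a real pressure solver (a schematic of the method, not the paper's solidification code):

        import numpy as np

        def fractional_step(u, v, p, dt, dx, nu, n_jacobi=50):
            """One viscous-predictor / pressure-projection step on a
            doubly periodic grid (axis 0 = x, axis 1 = y)."""
            def lap(f):
                return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                        np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0*f)/dx**2
            def ddx(f, ax):
                return (np.roll(f, -1, ax) - np.roll(f, 1, ax))/(2.0*dx)

            # step 1: viscous intermediate velocity, continuity not enforced
            us = u + dt*(nu*lap(u) - u*ddx(u, 0) - v*ddx(u, 1))
            vs = v + dt*(nu*lap(v) - u*ddx(v, 0) - v*ddx(v, 1))

            # step 2: pressure from div(u*), then project to divergence-free
            rhs = (ddx(us, 0) + ddx(vs, 1))/dt
            for _ in range(n_jacobi):
                p = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                     np.roll(p, 1, 1) + np.roll(p, -1, 1) - dx**2*rhs)/4.0
            return us - dt*ddx(p, 0), vs - dt*ddx(p, 1), p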

  15. Evaluation of SPACE code for simulation of inadvertent opening of spray valve in Shin Kori unit 1

    International Nuclear Information System (INIS)

    Kim, Seyun; Youn, Bumsoo

    2013-01-01

    SPACE code is expected to be applied to the safety analysis of LOCA (Loss of Coolant Accident) and non-LOCA scenarios. The SPACE code solves two-fluid, three-field governing equations and is programmed in the C++ computer language using object-oriented concepts. To evaluate its capability to analyze transient phenomena in an actual nuclear power plant, an inadvertent opening of a spray valve during the startup test phase of Shin Kori unit 1 was simulated with the SPACE code.

  16. Design space development for the extraction process of Danhong injection using a Monte Carlo simulation method.

    Directory of Open Access Journals (Sweden)

    Xingchu Gong

    A design space approach was applied to optimize the extraction process of Danhong injection. Dry matter yield and the yields of five active ingredients were selected as critical quality attributes (CQAs). Extraction number, extraction time, and the mass ratio of water and material (W/M ratio) were selected as critical process parameters (CPPs). Quadratic models between CPPs and CQAs were developed, with determination coefficients higher than 0.94. Active ingredient yields and dry matter yield increased as the extraction number increased. Monte Carlo simulation with models established using a stepwise regression method was applied to calculate the probability-based design space. Step length showed little effect on the calculation results. A higher simulation number led to results with lower dispersion. Data generated in a Monte Carlo simulation following a normal distribution led to a design space with a smaller size. An optimized calculation condition was obtained with 10,000 simulation runs, a calculation step length of 0.01, a significance level of 0.35 for adding or removing terms in the stepwise regression, and a normal distribution for data generation. The design space with a probability higher than 0.95 of attaining the CQA criteria was calculated and verified successfully. Normal operating ranges of 8.2-10 g/g W/M ratio, 1.25-1.63 h extraction time, and two extractions were recommended. The optimized calculation conditions can conveniently be used in design space development for other pharmaceutical processes.
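
    The probability-based design space calculation can be sketched as: evaluate the fitted CQA model at each candidate operating point, propagate residual uncertainty by Monte Carlo, and keep points whose probability of meeting the criterion exceeds 0.95. The quadratic coefficients, criterion and residual spread below are placeholders, not the paper's fitted values:

        import numpy as np

        rng = np.random.default_rng(7)

        def cqa_yield(wm_ratio, time_h, n_extract):
            """Placeholder quadratic CQA model (stepwise-regression stand-in)."""
            return (20.0 + 3.0*wm_ratio + 8.0*time_h + 15.0*n_extract
                    - 0.12*wm_ratio**2)

        def pass_probability(wm_ratio, time_h, n_extract,
                             criterion=75.0, n_sim=10000, resid_sd=4.0):
            """Monte Carlo probability of meeting one CQA criterion."""
            pred = cqa_yield(wm_ratio, time_h, n_extract)
            samples = pred + resid_sd*rng.standard_normal(n_sim)
            return float(np.mean(samples >= criterion))

        # scan part of the CPP grid; keep points with probability > 0.95
        for wm in (8.2, 9.0, 10.0):
            print(wm, pass_probability(wm, time_h=1.5, n_extract=2))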

  17. Large Eddy Simulation of High-Speed, Premixed Ethylene Combustion

    Science.gov (United States)

    Ramesh, Kiran; Edwards, Jack R.; Chelliah, Harsha; Goyne, Christopher; McDaniel, James; Rockwell, Robert; Kirik, Justin; Cutler, Andrew; Danehy, Paul

    2015-01-01

    A large-eddy simulation / Reynolds-averaged Navier-Stokes (LES/RANS) methodology is used to simulate premixed ethylene-air combustion in a model scramjet designed for dual mode operation and equipped with a cavity for flameholding. A 22-species reduced mechanism for ethylene-air combustion is employed, and the calculations are performed on a mesh containing 93 million cells. Fuel plumes injected at the isolator entrance are processed by the isolator shock train, yielding a premixed fuel-air mixture at an equivalence ratio of 0.42 at the cavity entrance plane. A premixed flame is anchored within the cavity and propagates toward the opposite wall. Near complete combustion of ethylene is obtained. The combustor is highly dynamic, exhibiting a large-scale oscillation in global heat release and mass flow rate with a period of about 2.8 ms. Maximum heat release occurs when the flame front reaches its most downstream extent, as the flame surface area is larger. Minimum heat release is associated with flame propagation toward the cavity and occurs through a reduction in core flow velocity that is correlated with an upstream movement of the shock train. Reasonable agreement between simulation results and available wall pressure, particle image velocimetry, and OH-PLIF data is obtained, but it is not yet clear whether the system-level oscillations seen in the calculations are actually present in the experiment.

  18. Numerical simulation of electromagnetic waves in Schwarzschild space-time by finite difference time domain method and Green function method

    Science.gov (United States)

    Jia, Shouqing; La, Dongsheng; Ma, Xuelian

    2018-04-01

    The finite difference time domain (FDTD) algorithm and Green function algorithm are implemented into the numerical simulation of electromagnetic waves in Schwarzschild space-time. FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and Green function code. The methods developed in this paper offer a tool to solve electromagnetic scattering problems.
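
    The equivalent-medium idea — running ordinary flat-space FDTD updates with a spatially varying effective permittivity — is easy to sketch in one dimension; the index profile here is a made-up stand-in for the Schwarzschild effective medium:

        import numpy as np

        nx, nt = 400, 800
        dx = 1.0
        dt = 0.5*dx                       # c = 1, CFL-stable time step
        x = np.arange(nx)
        n_eff = 1.0 + 1.5*np.exp(-((x - 200.0)/40.0)**2)  # stand-in profile
        eps = n_eff**2                    # equivalent permittivity (mu = 1)

        E = np.zeros(nx)
        H = np.zeros(nx - 1)
        for step in range(nt):
            H += (dt/dx)*np.diff(E)                    # update H from curl E
            E[1:-1] += (dt/dx)*np.diff(H)/eps[1:-1]    # update E from curl H
            E[50] += np.exp(-((step - 60.0)/15.0)**2)  # soft Gaussian source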

  19. LIFE experiment: isolation of cryptoendolithic organisms from Antarctic colonized sandstone exposed to space and simulated Mars conditions on the international space station.

    Science.gov (United States)

    Scalzi, Giuliano; Selbmann, Laura; Zucconi, Laura; Rabbow, Elke; Horneck, Gerda; Albertano, Patrizia; Onofri, Silvano

    2012-06-01

    Desiccated Antarctic rocks colonized by cryptoendolithic communities were exposed on the International Space Station (ISS) to space and simulated Mars conditions (LiFE-Lichens and Fungi Experiment). After 1.5 years in space samples were retrieved, rehydrated and spread on different culture media. Colonies of a green alga and a pink-coloured fungus developed on Malt-Agar medium; they were isolated from a sample exposed to simulated Mars conditions beneath a 0.1 % T Suprasil neutral density filter and from a sample exposed to space vacuum without solar radiation exposure, respectively. None of the other flight samples showed any growth after incubation. The two organisms able to grow were identified at genus level by Small SubUnit (SSU) and Internal Transcribed Spacer (ITS) rDNA sequencing as Stichococcus sp. (green alga) and Acarospora sp. (lichenized fungal genus) respectively. The data in the present study provide experimental information on the possibility of eukaryotic life transfer from one planet to another by means of rocks and of survival in Mars environment.

  20. Large eddy simulation of a wing-body junction flow

    Science.gov (United States)

    Ryu, Sungmin; Emory, Michael; Campos, Alejandro; Duraisamy, Karthik; Iaccarino, Gianluca

    2014-11-01

    We present numerical simulations of the wing-body junction flow experimentally investigated by Devenport & Simpson (1990). Wall-junction flows are common in engineering applications, but the relevant flow physics close to the corner region is not well understood. Moreover, the performance of turbulence models for the body-junction case is not well characterized. Motivated by these insufficient investigations, we have numerically investigated the case with Reynolds-averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES) approaches. The Vreman model applied for the LES and the SST k-ω model for the RANS simulation are validated, focusing on the ability to predict turbulence statistics near the junction region. Moreover, a sensitivity study of the form of the Vreman model will also be presented. This work is funded under NASA Cooperative Agreement NNX11AI41A (Technical Monitor Dr. Stephen Woodruff).
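
    For reference, the Vreman eddy viscosity is an algebraic function of the resolved velocity gradient tensor; a direct cell-local evaluation following the published form (model constant of roughly 2.5 times the squared Smagorinsky constant):

        import numpy as np

        def vreman_nu_t(g, delta, c=0.07):
            """Vreman (2004) SGS viscosity; g[i, j] = du_j/dx_i at one grid
            point with filter width delta."""
            aa = np.sum(g*g)                 # alpha_ij alpha_ij
            if aa < 1e-14:
                return 0.0
            beta = delta**2*(g.T @ g)        # beta_ij = delta^2 g_mi g_mj
            bb = (beta[0, 0]*beta[1, 1] - beta[0, 1]**2 +
                  beta[0, 0]*beta[2, 2] - beta[0, 2]**2 +
                  beta[1, 1]*beta[2, 2] - beta[1, 2]**2)
            return c*np.sqrt(max(bb, 0.0)/aa)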

  1. Simulations of the observation of clouds and aerosols with the Experimental Lidar in Space Equipment system.

    Science.gov (United States)

    Liu, Z; Voelger, P; Sugimoto, N

    2000-06-20

    We carried out a simulation study for the observation of clouds and aerosols with the Japanese Experimental Lidar in Space Equipment (ELISE), which is a two-wavelength backscatter lidar with three detection channels. The National Space Development Agency of Japan plans to launch the ELISE on the Mission Demonstrate Satellite 2 (MDS-2). In the simulations, the lidar return signals for the ELISE are calculated for an artificial, two-dimensional atmospheric model including different types of clouds and aerosols. The signal detection processes are simulated realistically by inclusion of various sources of noise. The lidar signals that are generated are then used as input for simulations of data analysis with inversion algorithms to investigate retrieval of the optical properties of clouds and aerosols. The results demonstrate that the ELISE can provide global data on the structures and optical properties of clouds and aerosols. We also conducted an analysis of the effects of cloud inhomogeneity on retrievals from averaged lidar profiles. We show that the effects are significant for space lidar observations of optically thick broken clouds.
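
    Forward-modelling lidar returns of this kind rests on the single-scattering elastic lidar equation, P(z) = C β(z) exp(−2 ∫₀ᶻ σ dz′)/z²; a minimal range-grid version (the ELISE instrument model adds detector and background noise on top of this):

        import numpy as np

        def lidar_return(z, beta, sigma, C=1.0):
            """Attenuated backscatter on a range grid z (m): backscatter
            beta, extinction sigma, two-way transmission, 1/z^2 geometry."""
            tau = np.concatenate(([0.0],
                np.cumsum(0.5*(sigma[1:] + sigma[:-1])*np.diff(z))))
            return C*beta*np.exp(-2.0*tau)/np.maximum(z, 1.0)**2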

  2. Large Eddy Simulation of the Diurnal Cycle in Southeast Pacific Stratocumulus

    Energy Technology Data Exchange (ETDEWEB)

    Caldwell, P; Bretherton, C

    2008-03-03

    This paper describes a series of 6-day large eddy simulations of a deep, sometimes drizzling stratocumulus-topped boundary layer based on forcings from the East Pacific Investigation of Climate (EPIC) 2001 field campaign. The base simulation was found to reproduce the observed mean boundary layer properties quite well. The diurnal cycle of liquid water path was also well captured, although the good agreement appears to result partially from compensating errors in the diurnal cycles of cloud base and cloud top due to overentrainment around midday. At other times of the day, entrainment is found to be proportional to the vertically integrated buoyancy flux. Model stratification matches observations well; turbulence profiles suggest that the boundary layer is always at least somewhat decoupled. Model drizzle appears to be too sensitive to liquid water path, and subcloud evaporation appears to be too weak. Removing the diurnal cycle of subsidence had little effect on simulated cloud albedo. Simulations with changed droplet concentration and drizzle susceptibility showed large liquid water path differences at night, but the differences were quite small at midday. Droplet concentration also had a significant impact on entrainment, primarily through the droplet sedimentation feedback rather than through drizzle processes.

  3. Large Eddy Simulation of Cryogenic Injection Processes at Supercritical Pressure

    Science.gov (United States)

    Oefelein, Joseph C.

    2002-01-01

    This paper highlights results from the first of a series of hierarchical simulations aimed at assessing the modeling requirements for application of the large eddy simulation technique to cryogenic injection and combustion processes in liquid rocket engines. The focus is on liquid-oxygen-hydrogen coaxial injectors at a condition where the liquid-oxygen is injected at a subcritical temperature into a supercritical environment. For this situation a diffusion dominated mode of combustion occurs in the presence of exceedingly large thermophysical property gradients. Though continuous, these gradients approach the behavior of a contact discontinuity. Significant real gas effects and transport anomalies coexist locally in colder regions of the flow, with ideal gas and transport characteristics occurring within the flame zone. The current focal point is on the interfacial region between the liquid-oxygen core and the coaxial hydrogen jet where the flame anchors itself.

  4. Average accelerator simulation Truebeam using phase space in IAEA format

    International Nuclear Information System (INIS)

    Santana, Emico Ferreira; Milian, Felix Mas; Paixao, Paulo Oliveira; Costa, Raranna Alves da; Velasco, Fermin Garcia

    2015-01-01

    In this paper, a radiation transport simulation code based on the Monte Carlo technique is used to model a linear accelerator for radiotherapy treatment. This work is the initial step of future proposals that aim to study several radiotherapy treatments of patients, employing computational modeling in cooperation with the institutions UESC, IPEN, UFRJ and COI. The chosen simulation code is GATE/Geant4. The modelled accelerator is the TrueBeam from Varian. The geometric modeling was based on technical manuals, and the radiation sources on the photon phase space provided by the manufacturer in the IAEA (International Atomic Energy Agency) format. The simulations were carried out under conditions equal to those of the experimental measurements. Photon beams of 6 MV with a 10 by 10 cm field incident on a water phantom were studied. For validation, depth-dose curves and lateral profiles at different depths from the simulated results were compared with experimental data. The final model of this accelerator will be used in future work involving treatments and real patients. (author)

  5. Space engineering

    Science.gov (United States)

    Alexander, Harold L.

    1991-01-01

    Human productivity was studied for extravehicular tasks performed in microgravity, particularly including in-space assembly of truss structures and other large objects. Human factors research probed the anthropometric constraints imposed on microgravity task performance and the associated workstation design requirements. Anthropometric experiments included reach envelope tests conducted using the 3-D Acoustic Positioning System (3DAPS), which permitted measuring the range of reach possible for persons using foot restraints in neutral buoyancy, both with and without space suits. Much neutral buoyancy research was conducted using the support of water to simulate the weightlessness environment of space. It became clear over time that the anticipated EVA requirement associated with the Space Station and with in-space construction of interplanetary probes would heavily burden astronauts, and remotely operated robots (teleoperators) were increasingly considered to absorb the workload. Experience in human EVA productivity led naturally to teleoperation research into the remote performance of tasks through human controlled robots.

  6. Camera memory study for large space telescope. [charge coupled devices

    Science.gov (United States)

    Hoffman, C. P.; Brewer, J. E.; Brager, E. A.; Farnsworth, D. L.

    1975-01-01

    Specifications were developed for a memory system to be used as the storage medium for camera detectors on the Large Space Telescope (LST) satellite. Detectors with limited internal storage time, such as intensified charge-coupled devices and silicon intensified targets, are implied. The general characteristics of different approaches to the memory system are reported, with comparisons made within the guidelines set forth for the LST application. Priority ordering of the comparisons is on the basis of cost, reliability, power, and physical characteristics. Specific rationales are provided for the rejection of unsuitable memory technologies. A recommended technology was selected and used to establish specifications for a breadboard memory. Procurement scheduling is provided for delivery of system breadboards in 1976, prototypes in 1978, and space-qualified units in 1980.

  7. Navigating the Problem Space: The Medium of Simulation Games in the Teaching of History

    Science.gov (United States)

    McCall, Jeremiah

    2012-01-01

    Simulation games can play a critical role in enabling students to navigate the problem spaces of the past while simultaneously critiquing the models designers offer to represent those problem spaces. There is much to be gained through their use. This includes rich opportunities for students to engage the past as independent historians; to consider…

  8. Overview of Small and Large-Scale Space Solar Power Concepts

    Science.gov (United States)

    Potter, Seth; Henley, Mark; Howell, Joe; Carrington, Connie; Fikes, John

    2006-01-01

    An overview of space solar power studies performed at the Boeing Company under contract with NASA will be presented. The major concepts to be presented are: 1. Power Plug in Orbit: this is a spacecraft that collects solar energy and distributes it to users in space using directed radio frequency or optical energy. Our concept uses solar arrays having the same dimensions as ISS arrays, but are assumed to be more efficient. If radiofrequency wavelengths are used, it will necessitate that the receiving satellite be equipped with a rectifying antenna (rectenna). For optical wavelengths, the solar arrays on the receiving satellite will collect the power. 2. Mars Clipper I Power Explorer: this is a solar electric Mars transfer vehicle to support human missions. A near-term precursor could be a high-power radar mapping spacecraft with self-transport capability. Advanced solar electric power systems and electric propulsion technology constitute viable elements for conducting human Mars missions that are roughly comparable in performance to similar missions utilizing alternative high thrust systems, with the one exception being their inability to achieve short Earth-Mars trip times. 3. Alternative Architectures: this task involves investigating alternatives to the traditional solar power satellite (SPS) to supply commercial power from space for use on Earth. Four concepts were studied: two using photovoltaic power generation, and two using solar dynamic power generation, with microwave and laser power transmission alternatives considered for each. All four architectures use geostationary orbit. 4. Cryogenic Propellant Depot in Earth Orbit: this concept uses large solar arrays (producing perhaps 600 kW) to electrolyze water launched from Earth, liquefy the resulting hydrogen and oxygen gases, and store them until needed by spacecraft. 5. Beam-Powered Lunar Polar Rover: a lunar rover powered by a microwave or laser beam can explore permanently shadowed craters near the lunar

  9. CONFRONTING THREE-DIMENSIONAL TIME-DEPENDENT JET SIMULATIONS WITH HUBBLE SPACE TELESCOPE OBSERVATIONS

    International Nuclear Information System (INIS)

    Staff, Jan E.; Niebergal, Brian P.; Ouyed, Rachid; Pudritz, Ralph E.; Cai, Kai

    2010-01-01

    We perform state-of-the-art, three-dimensional, time-dependent simulations of magnetized disk winds, carried out to simulation scales of 60 AU, in order to confront optical Hubble Space Telescope observations of protostellar jets. We 'observe' the optical forbidden line emission produced by shocks within our simulated jets and compare these with actual observations. Our simulations reproduce the rich structure of time-varying jets, including jet rotation far from the source, an inner (up to 400 km s^-1) and outer (less than 100 km s^-1) component of the jet, and jet widths of up to 20 AU, in agreement with observed jets. These simulations, when compared with the data, are able to constrain disk wind models. In particular, models featuring a disk magnetic field with a modest radial spatial variation across the disk are favored.

  10. Large eddy simulation and combustion instabilities; Simulation des grandes echelles et instabilites de combustion

    Energy Technology Data Exchange (ETDEWEB)

    Lartigue, G.

    2004-11-15

    The new European laws on pollutant emissions impose more and more constraints on engine manufacturers. This is particularly true for gas turbine manufacturers, who must design engines operating with very fuel-lean mixtures. In doing so, pollutant formation is significantly reduced, but the problem of combustion stability arises: combustion regimes with a large excess of air are naturally more sensitive to combustion instabilities. Numerical prediction of these instabilities is thus a key issue for many industries involved in energy production. This thesis work aims to show that recent numerical tools are now able to predict these combustion instabilities. In particular, the Large Eddy Simulation method, when implemented in a compressible CFD code, is able to take into account the main processes involved in combustion instabilities, such as acoustics and flame/vortex interaction. This work describes a new formulation of a Large Eddy Simulation numerical code that makes it possible to account very precisely for thermodynamics and chemistry, which are essential in combustion phenomena. A validation of this work is presented in a complex geometry (the PRECCINSTA burner). Our numerical results are successfully compared with experimental data gathered at DLR Stuttgart (Germany). Moreover, a detailed analysis of the acoustics in this configuration is presented, as well as its interaction with the combustion. For this acoustic analysis, another CERFACS code has been extensively used, the Helmholtz solver AVSP. (author)

  11. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in an L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and the correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
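
    The filtering step in such a method is a single multiplication in Fourier space: transform the large-scale overdensity field, apply a scale-dependent bias b(k), and transform back to obtain the reionization-redshift field. The b(k) form and parameter values below are placeholders for the paper's fitted parametric bias:

        import numpy as np

        def zre_field(delta, box_size, z_mean, b0=0.6, k0=0.2, alpha=1.0):
            """Map an overdensity cube to a reionization-redshift cube."""
            n = delta.shape[0]
            k1d = 2.0*np.pi*np.fft.fftfreq(n, d=box_size/n)
            kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
            kk = np.sqrt(kx**2 + ky**2 + kz**2)
            bias = b0/(1.0 + kk/k0)**alpha     # placeholder parametric form
            dz = np.fft.ifftn(bias*np.fft.fftn(delta)).real
            return z_mean + (1.0 + z_mean)*dz

        cube = np.random.default_rng(3).standard_normal((64, 64, 64))
        z_re = zre_field(cube, box_size=100.0, z_mean=8.0)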

  12. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    Science.gov (United States)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to not only assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  13. Planetary Structures And Simulations Of Large-scale Impacts On Mars

    Science.gov (United States)

    Swift, Damian; El-Dasher, B.

    2009-09-01

    The impact of large meteoroids is a possible cause for isolated orogeny on bodies devoid of tectonic activity. On Mars, there is a significant, but not perfect, correlation between large, isolated volcanoes and antipodal impact craters. On Mercury and the Moon, brecciated terrain and other unusual surface features can be found at the antipodes of large impact sites. On Earth, there is a moderate correlation between long-lived mantle hotspots at opposite sides of the planet, with meteoroid impact suggested as a possible cause. If induced by impacts, the mechanisms of orogeny and volcanism thus appear to vary between these bodies, presumably because of differences in internal structure. Continuum mechanics (hydrocode) simulations have been used to investigate the response of planetary bodies to impacts, requiring assumptions about the structure of the body: its composition and temperature profile, and the constitutive properties (equation of state, strength, viscosity) of the components. We are able to predict theoretically and test experimentally the constitutive properties of matter under planetary conditions, with reasonable accuracy. To provide a reference series of simulations, we have constructed self-consistent planetary structures using simplified compositions (Fe core and basalt-like mantle), which turn out to agree surprisingly well with the moments of inertia. We have performed simulations of large-scale impacts, studying the transmission of energy to the antipodes. For Mars, significant antipodal heating to depths of a few tens of kilometers was predicted from compression waves transmitted through the mantle. Such heating is a mechanism for volcanism on Mars, possibly in conjunction with crustal cracking induced by surface waves. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  14. Large-scale Intelligent Transportation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
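    A minimal sketch of this agent-based pattern follows; the toy road network, link times and re-planning rule are invented for illustration and are not part of the prototype. Each vehicle independently re-runs a shortest-path query against the link times advertised by the TMC:

```python
import heapq

# Toy road network: adjacency map with free-flow link travel times (assumed)
GRAPH = {"A": {"B": 4.0, "C": 1.0}, "B": {"D": 1.0}, "C": {"D": 2.0}, "D": {}}

def shortest_path(src, dst, link_times):
    """Dijkstra over current link travel times (TMC-provided overrides)."""
    queue, seen = [(0.0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, base in GRAPH[node].items():
            t = link_times.get((node, nxt), base)
            heapq.heappush(queue, (cost + t, nxt, path + [nxt]))
    return []

class Vehicle:
    """Each vehicle re-plans independently, mimicking the autonomous
    per-vehicle processes of the prototype."""
    def __init__(self, vid, origin, dest):
        self.vid, self.origin, self.dest = vid, origin, dest
    def replan(self, tmc_link_times):
        return shortest_path(self.origin, self.dest, tmc_link_times)

# TMC advisory: congestion on link (A, C) makes vehicles divert through B
advisory = {("A", "C"): 10.0}
for v in [Vehicle(i, "A", "D") for i in range(3)]:
    print(v.vid, v.replan(advisory))
```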

  15. Interactive computer graphics and its role in control system design of large space structures

    Science.gov (United States)

    Reddy, A. S. S. R.

    1985-01-01

    This paper attempts to show the relevance of interactive computer graphics in the design of control systems that maintain the attitude and shape of large space structures so as to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model - modeling the dynamics, modal analysis, and control system design methodology - are reviewed, and the need for interactive computer graphics is demonstrated. Typical constituent parts of large space structures such as free-free beams and free-free plates are used to demonstrate the complexity of the control system design and the effectiveness of interactive computer graphics.

  16. Heavy-ion collimation at the Large Hadron Collider. Simulations and measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hermes, Pascal Dominik

    2016-12-19

    The CERN Large Hadron Collider (LHC) stores and collides proton and {sup 208}Pb{sup 82+} beams of unprecedented energy and intensity. Thousands of superconducting magnets, operated at 1.9 K, guide the very intense and energetic particle beams, which have a large potential for destruction. This implies the demand for a multi-stage collimation system to provide protection from beam-induced quenches or even hardware damage. In heavy-ion operation, ion fragments with significant rigidity offsets can still scatter out of the collimation system. When they irradiate the superconducting LHC magnets, the latter risk to quench (lose their superconducting property). These secondary collimation losses can potentially impose a limitation for the stored heavy-ion beam energy. Therefore, their distribution in the LHC needs to be understood by sophisticated simulations. Such simulation tools must accurately simulate the particle motion of many different nuclides in the magnetic LHC lattice and simulate their interaction with the collimators. Previous simulation tools used simplified models for the simulation of particle-matter interaction and showed discrepancies compared to the measured loss patterns. This thesis describes the development and application of improved heavy-ion collimation simulation tools. Two different approaches are presented to provide these functionalities. In the first presented tool, called STIER, fragmentation at the primary collimator is simulated with the Monte-Carlo event generator FLUKA. The ion fragments scattered out of the primary collimator are subsequently tracked as protons with ion-equivalent rigidities in the existing proton tracking tool SixTrack. This approach was used to prepare the collimator settings for the 2015 LHC heavy-ion run and its predictions allowed reducing undesired losses. More accurate simulation results are obtained with the second presented simulation tool, in which SixTrack is extended to track arbitrary heavy ions. This new

  17. Heavy-ion collimation at the Large Hadron Collider. Simulations and measurements

    International Nuclear Information System (INIS)

    Hermes, Pascal Dominik

    2016-01-01

    The CERN Large Hadron Collider (LHC) stores and collides proton and ²⁰⁸Pb⁸²⁺ beams of unprecedented energy and intensity. Thousands of superconducting magnets, operated at 1.9 K, guide the very intense and energetic particle beams, which have a large potential for destruction. This implies the demand for a multi-stage collimation system to provide protection from beam-induced quenches or even hardware damage. In heavy-ion operation, ion fragments with significant rigidity offsets can still scatter out of the collimation system. When they irradiate the superconducting LHC magnets, the latter risk to quench (lose their superconducting property). These secondary collimation losses can potentially impose a limitation for the stored heavy-ion beam energy. Therefore, their distribution in the LHC needs to be understood by sophisticated simulations. Such simulation tools must accurately simulate the particle motion of many different nuclides in the magnetic LHC lattice and simulate their interaction with the collimators. Previous simulation tools used simplified models for the simulation of particle-matter interaction and showed discrepancies compared to the measured loss patterns. This thesis describes the development and application of improved heavy-ion collimation simulation tools. Two different approaches are presented to provide these functionalities. In the first presented tool, called STIER, fragmentation at the primary collimator is simulated with the Monte-Carlo event generator FLUKA. The ion fragments scattered out of the primary collimator are subsequently tracked as protons with ion-equivalent rigidities in the existing proton tracking tool SixTrack. This approach was used to prepare the collimator settings for the 2015 LHC heavy-ion run and its predictions allowed reducing undesired losses. More accurate simulation results are obtained with the second presented simulation tool, in which SixTrack is extended to track arbitrary heavy ions. This new tracking

  18. Private ground infrastructures for space exploration missions simulations

    Science.gov (United States)

    Souchier, Alain

    2010-06-01

    The Mars Society, a private non-profit organisation devoted to promoting the exploration of the red planet, decided to implement simulated Mars habitats in two locations on Earth: in northern Canada on the rim of a meteoritic crater (2000), and in a Utah (US) desert, the location of a past Jurassic sea (2001). These habitats have been built to closely resemble the habitats actually planned for the first Mars exploration missions. Participation is open to everybody, either proposing experiments or wishing only to take part as a crew member. Participants come from different organizations: the Mars Society, universities, and experimenters working with NASA or ESA. The general philosophy of the work conducted is not to do innovative scientific field work but to learn how scientific work is affected or modified by the simulation conditions. Outside activities are conducted with simulated spacesuits limiting the experimenter's abilities. Technology and procedure experiments are also conducted, as well as experiments on crew psychology and behaviour.

  19. Coupled large-eddy simulation and morphodynamics of a large-scale river under extreme flood conditions

    Science.gov (United States)

    Khosronejad, Ali; Sotiropoulos, Fotis; Stony Brook University Team

    2016-11-01

    We present coupled flow and morphodynamic simulations of extreme flooding in a 3 km long and 300 m wide reach of the Mississippi River in Minnesota, which includes three islands and hydraulic structures. We employ the large-eddy simulation (LES) and bed-morphodynamic modules of the VFS-Geophysics model to investigate the flow and bed evolution of the river during a 500-year flood. The coupling of the two modules is carried out via a fluid-structure interaction approach, with a nested domain to enhance the resolution of bridge scour predictions. The geometrical data of the river, islands and structures are obtained from LiDAR, sub-aqueous sonar and in-situ surveying to construct a digital map of the river bathymetry. Our simulation results for the bed evolution of the river reveal complex sediment dynamics near the hydraulic structures. The numerically captured scour depth near some of the structures reaches a maximum of about 10 m. The data-driven simulation strategy we present in this work exemplifies a practical simulation-based engineering approach to investigating the resilience of infrastructures to extreme flood events in intricate field-scale riverine systems. This work was funded by a Grant from Minnesota Dept. of Transportation.

  20. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred; Douglas, Craig C.; Haase, Gundolf; Horváth, Zoltán

    2010-01-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one

  1. Large eddy simulation of particulate flow inside a differentially heated cavity

    Energy Technology Data Exchange (ETDEWEB)

    Bosshard, Christoph, E-mail: christoph.bosshard@a3.epfl.ch [Paul Scherrer Institut, Laboratory for Thermalhydraulics (LTH), 5232 Villigen PSI (Switzerland); Dehbi, Abdelouahab, E-mail: abdel.dehbi@psi.ch [Paul Scherrer Institut, Laboratory for Thermalhydraulics (LTH), 5232 Villigen PSI (Switzerland); Deville, Michel, E-mail: michel.deville@epfl.ch [École Polytechnique Fédérale de Lausanne, STI-DO, Station 12, 1015 Lausanne (Switzerland); Leriche, Emmanuel, E-mail: emmanuel.leriche@univ-lille1.fr [Université de Lille I, Laboratoire de Mécanique de Lille, Avenue Paul Langevin, Cité Scientifique, F-59655 Villeneuve d’Ascq Cédex (France); Soldati, Alfredo, E-mail: soldati@uniud.it [Dipartimento di Energetica e Macchine and Centro Interdipartimentale di Fluidodinamica e Idraulica, Università degli Studi di Udine, Udine (Italy)

    2014-02-15

    Highlights: • Nuclear accidents lead to airborne radioactive particles in the containment atmosphere. • Large eddy simulation with particles in a differentially heated cavity is carried out. • LES results show negligible differences from direct numerical simulation. • Four different particle sets with diameters from 10 μm to 35 μm are tracked. • Particle removal is dominated by gravity settling; turbophoresis is negligible. - Abstract: In nuclear safety, some severe accident scenarios lead to the presence of fission products in aerosol form in the closed containment atmosphere. It is important to understand the particle depletion process in order to estimate the risk of a release of radioactivity to the environment should a containment break occur. As a model for the containment, we use the three-dimensional differentially heated cavity problem. The differentially heated cavity is a cubical box with a hot wall and a cold wall on opposite vertical sides; adiabatic boundary conditions are imposed on the other walls of the cube, and the no-slip boundary condition is applied for the velocity field. The flow of air in the cavity is described by the Boussinesq equations. The turbulent flow is simulated with large eddy simulation (LES), in which the dynamics of the large eddies is resolved by the computational grid and the small eddies are modelled by introducing subgrid-scale quantities through a filter function. Particle trajectories are computed using the Lagrangian particle tracking method, including the relevant forces (drag, gravity, thermophoresis). Four sets, each containing one million particles, with diameters of 10 μm, 15 μm, 25 μm and 35 μm, are simulated. Simulation results for the flow field and for particle sizes from 15 μm to 35 μm are compared with previous results from direct numerical simulation (DNS). The integration time of the LES is three times longer, and the smallest particles have been simulated only in the LES. Particle

  2. Sensitivity technologies for large scale simulation

    International Nuclear Information System (INIS)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
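    The direct-versus-adjoint trade-off described above can be made concrete on a toy steady-state model A(p)u = b(p) with objective J = cᵀu: the direct method needs one extra solve per parameter for du/dp, while the adjoint method needs a single solve with Aᵀ regardless of the number of parameters. A minimal sketch, in which the 2×2 system is an illustrative stand-in for a discretized PDE:

```python
import numpy as np

def assemble(p):
    """Toy parameter-dependent system A(p) u = b(p)."""
    A = np.array([[2.0 + p, -1.0], [-1.0, 2.0]])
    b = np.array([1.0, p])
    return A, b

c = np.array([1.0, 1.0])          # objective J(u) = c . u
p = 0.5
A, b = assemble(p)
u = np.linalg.solve(A, b)

# Direct method: one extra solve per parameter for du/dp
dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])
db_dp = np.array([0.0, 1.0])
du_dp = np.linalg.solve(A, db_dp - dA_dp @ u)
dJ_direct = c @ du_dp

# Adjoint method: one solve with A^T, independent of the number of parameters
lam = np.linalg.solve(A.T, c)
dJ_adjoint = lam @ (db_dp - dA_dp @ u)

print(dJ_direct, dJ_adjoint)      # identical up to round-off
```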

  3. Two-dimensional simulation of the gravitational system dynamics and formation of the large-scale structure of the universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.; Kotok, E.V.; Novikov, I.D.; Polyudov, A.N.; Shandarin, S.F.; Sigov, Y.S.

    1980-01-01

    The results of a numerical experiment are given that describe the non-linear stages of the development of density perturbations in gravitating matter in the expanding Universe. This process simulates the formation of the large-scale structure of the Universe from an initially almost homogeneous medium. In the one- and two-dimensional cases of this numerical experiment, the evolution of a system of 4096 point masses interacting only gravitationally was studied with periodic boundary conditions (simulating infinite space). The initial conditions were chosen to follow from the theory of the evolution of small perturbations in the expanding Universe. The results of the numerical experiments are systematically compared with the approximate analytic theory. The calculations show that for collisionless particles, as in the gas-dynamic case, a cellular structure appears at the non-linear stage for adiabatic perturbations. The greater part of the matter lies in thin layers that separate vast regions of low density. In a Robertson-Walker universe the cellular structure exists for a finite time and then fragments into a few compact objects. In the open Universe the cellular structure also exists if the amplitude of the initial perturbations is large enough, but the subsequent disruption of the cellular structure is more difficult because of the too-rapid expansion of the Universe; the large-scale structure is frozen. (author)

  4. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both the computational algorithms and the computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from the intensive computational requirements of detailed modeling investigations of real-world reservoirs. This paper presents the application of a massively parallel computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating the flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance

  5. Autonomic, Agent-Based Simulation Management (A2SM) Framework, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Large scale numerical simulations, as typified by climate models, space weather models, and the like, typically involve non-linear governing equations in discretized...

  6. Photoluminescence in large fluence radiation irradiated space silicon solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Hisamatsu, Tadashi; Kawasaki, Osamu; Matsuda, Sumio [National Space Development Agency of Japan, Tsukuba, Ibaraki (Japan). Tsukuba Space Center; Tsukamoto, Kazuyoshi

    1997-03-01

    Photoluminescence spectroscopy measurements were carried out for silicon 50{mu}m BSFR space solar cells irradiated with 1MeV electrons to a fluence exceeding 1 x 10{sup 16} e/cm{sup 2} and with 10MeV protons to a fluence exceeding 1 x 10{sup 13} p/cm{sup 2}. The results were compared with previous results obtained in a relatively low fluence region, and the radiation-induced defects which cause anomalous degradation of cell performance in such large fluence regions are discussed. As far as we know, this is the first report presenting PL measurement results at 4.2K for silicon solar cells irradiated at such large fluences. (author)

  7. Evaluation of Presumed Probability-Density-Function Models in Non-Premixed Flames by using Large Eddy Simulation

    International Nuclear Information System (INIS)

    Cao Hong-Jun; Zhang Hui-Qiang; Lin Wen-Yi

    2012-01-01

    Four kinds of presumed probability-density-function (PDF) models for non-premixed turbulent combustion are evaluated in flames with various stoichiometric mixture fractions by using large eddy simulation (LES). The LES code is validated against the experimental data of a classical turbulent jet flame (Sandia flame D). The mean and rms temperatures obtained with the presumed PDF models are compared with the LES results. The β-function model achieves good predictions for the different flames. The rms temperature predicted by the double-δ function model is very small and unphysical in the vicinity of the maximum mean temperature. The clip-Gaussian model and the multi-δ function model give worse predictions on the extremely fuel-rich or fuel-lean sides due to clipping at the boundaries of the mixture fraction space. The results also show that the overall prediction performance of the presumed PDF models is better at intermediate stoichiometric mixture fractions than at very small or very large ones. (fundamental areas of phenomenology (including applications))
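    As a sketch of how a presumed β-function PDF closure is evaluated in practice, the mean of a quantity T(Z) is obtained by integrating T against a β PDF whose parameters are matched to the resolved mean and variance of the mixture fraction Z. The piecewise-linear, flamelet-like profile T(Z) below is an illustrative assumption:

```python
import numpy as np
from scipy.stats import beta

def beta_pdf_mean(t_of_z, z_mean, z_var, n=2001):
    """Mean of T(Z) under a beta PDF matched to (z_mean, z_var).
    Requires z_var < z_mean * (1 - z_mean)."""
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0
    a, b = z_mean * gamma, (1.0 - z_mean) * gamma
    z = np.linspace(1e-6, 1.0 - 1e-6, n)
    return np.trapz(t_of_z(z) * beta.pdf(z, a, b), z)

# Burke-Schumann-like profile peaking at an assumed stoichiometric Z of 0.35
z_st = 0.35
t_of_z = lambda z: np.where(z < z_st, 300 + 1700 * z / z_st,
                            2000 - 1700 * (z - z_st) / (1 - z_st))

print(beta_pdf_mean(t_of_z, z_mean=0.3, z_var=0.02))   # filtered mean temperature
```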

  8. Experimental simulation of microinteractions in large scale explosions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety

    1998-01-01

    This paper presents data and analysis of recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large scale explosions. Specifically, the fragmentation behavior of a high-temperature molten steel drop under high-pressure (beyond critical) conditions is investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea, and the ESPROSE.m prediction of fragmentation rate. (author)

  9. Large-scale micromagnetics simulations with dipolar interaction using all-to-all communications

    Directory of Open Access Journals (Sweden)

    Hiroshi Tsukahara

    2016-05-01

    Full Text Available We implement in our micromagnetics simulator low-complexity parallel fast-Fourier-transform algorithms, which reduce the number of all-to-all communications from six to two. Almost all the computation time of a micromagnetics simulation is taken up by the calculation of the magnetostatic field, which can be calculated using the fast Fourier transform method. The results show that the simulation time decreases with good scalability, even when the micromagnetics simulation is performed on 8192 physical cores. This high parallelization efficiency enables large-scale micromagnetics simulations using over one billion cells to be performed. Because massively parallel computing is needed to simulate the magnetization dynamics of real permanent magnets composed of many micron-sized grains, it is expected that our simulator will reveal how magnetization dynamics influences the coercivity of the permanent magnet.
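    The magnetostatic (dipolar) field is the global convolution that makes the FFT, and hence the all-to-all communication, the dominant cost. A serial sketch of the idea, assuming periodic boundaries, unit prefactors, and a magnetic scalar potential in place of the full demagnetizing tensor used by production codes:

```python
import numpy as np

def demag_field(m, dx=1.0):
    """H = -grad(phi) with lap(phi) = div(M), solved in Fourier space.
    m has shape (3, n, n, n): the magnetization components on a periodic grid."""
    n = m.shape[1]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k = np.stack([kx, ky, kz])
    k2 = (k**2).sum(axis=0)
    k2[0, 0, 0] = 1.0                       # avoid division by zero at k = 0
    m_k = np.fft.fftn(m, axes=(1, 2, 3))
    div_m_k = 1j * (k * m_k).sum(axis=0)    # Fourier transform of div(M)
    phi_k = -div_m_k / k2
    phi_k[0, 0, 0] = 0.0
    h_k = -1j * k * phi_k                   # Fourier transform of -grad(phi)
    return np.real(np.fft.ifftn(h_k, axes=(1, 2, 3)))

rng = np.random.default_rng(1)
m = rng.normal(size=(3, 32, 32, 32))        # stand-in magnetization field
print(demag_field(m).shape)
```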

  10. Simulation of space protons influence on silicon semiconductor devices using gamma-neutron irradiation

    International Nuclear Information System (INIS)

    Zhukov, Y.N.; Zinchenko, V.F.; Ulimov, V.N.

    1999-01-01

    In this study the authors focus on the problem of simulating space proton energy spectra in laboratory gamma-neutron radiation tests of semiconductor devices (SD). A correct simulation of radiation effects requires taking into account and evaluating the substantial differences in the processes of formation of primary defects in SD in the space environment and under laboratory testing. These differences concern: 1) displacement defects, 2) ionization defects and 3) the intensity of radiation. The study shows that: - the energy dependence of non-ionizing energy loss (NIEL) is sufficiently universal to predict the degradation of SD parameters associated with displacement defects, and - MOS devices, which are sensitive to ionization defects, showed the same parameter variation when the ionization density generated by protons and by gamma radiation was equal. (A.C.)
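    A sketch of the NIEL-scaling bookkeeping mentioned above: displacement damage from a proton spectrum is folded into a damage-equivalent fluence at a single reference energy. The NIEL table and spectrum values below are illustrative assumptions, not evaluated data for silicon:

```python
import numpy as np

# Assumed NIEL(E) samples for protons in silicon, MeV cm^2/g (illustrative)
energies_mev = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
niel = np.array([2.2e-2, 1.0e-2, 4.5e-3, 2.5e-3, 1.5e-3])

def equivalent_fluence(spectrum_fluence, spectrum_energy, ref_energy=10.0):
    """Damage-equivalent fluence at ref_energy for a binned proton spectrum."""
    niel_e = np.interp(spectrum_energy, energies_mev, niel)
    niel_ref = np.interp(ref_energy, energies_mev, niel)
    return np.sum(spectrum_fluence * niel_e) / niel_ref

# Example: three spectral bins of a trapped-proton environment (assumed values)
phi = np.array([1e11, 5e10, 1e10])        # protons/cm^2 per bin
e_bins = np.array([3.0, 10.0, 30.0])      # MeV
print(f"10 MeV damage-equivalent fluence: {equivalent_fluence(phi, e_bins):.2e}")
```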

  11. Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow

    Directory of Open Access Journals (Sweden)

    Sam Ali Al

    2015-01-01

    Full Text Available The performance of sub-grid scale models is studied by simulating a separated flow over a wavy channel. The first- and second-order statistical moments of the resolved velocities obtained by large-eddy simulations at different mesh resolutions are compared with direct numerical simulation data. The effectiveness of modeling the wall stresses using a local log law is then tested on a relatively coarse grid. The results exhibit good agreement between highly resolved large-eddy simulations and direct numerical simulation data regardless of the sub-grid scale model. However, the agreement is less satisfactory on the relatively coarse grid without any wall model, and the differences between sub-grid scale models become distinguishable. Using the local wall model returned the basic flow topology and significantly reduced the differences between the coarse-mesh large-eddy simulations and the direct numerical simulation data. The results show that the ability of the local wall model to predict the separation zone depends strongly on how it is implemented.
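    A minimal sketch of such a local log-law wall model: given the resolved velocity U at the first off-wall point y, solve U/u_τ = (1/κ) ln(y u_τ/ν) + B for the friction velocity by Newton iteration and return the modeled wall shear stress. The constants are the standard κ = 0.41 and B = 5.2; the initial guess and the example values are illustrative:

```python
import math

def wall_stress_loglaw(U, y, nu, rho=1.0, kappa=0.41, B=5.2, iters=20):
    """Newton solve of the log law for u_tau; returns tau_wall = rho * u_tau^2."""
    u_tau = max(math.sqrt(nu * U / y), 1e-12)     # viscous estimate as initial guess
    for _ in range(iters):
        yplus = y * u_tau / nu
        f = U / u_tau - (math.log(yplus) / kappa + B)
        df = -U / u_tau**2 - 1.0 / (kappa * u_tau)   # d f / d u_tau
        u_tau -= f / df                              # Newton update
    return rho * u_tau**2

# Example: first grid point at y = 0.05 in a channel, U = 20 (arbitrary units)
print(wall_stress_loglaw(U=20.0, y=0.05, nu=1e-4))
```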

  12. Use of Parallel Micro-Platform for the Simulation the Space Exploration

    Science.gov (United States)

    Velasco Herrera, Victor Manuel; Velasco Herrera, Graciela; Rosano, Felipe Lara; Rodriguez Lozano, Salvador; Lucero Roldan Serrato, Karen

    The purpose of this work is to create a parallel micro-platform that simulates the virtual movements of space exploration in 3D. One of the innovations presented in this design is the application of a lever mechanism for the transmission of movement. The development of such a robot is a challenging task, very different from that of industrial manipulators due to a totally different set of target requirements. This work presents the computer-aided study and simulation of the movement of this parallel manipulator. The model has been developed using the computer-aided design platform Unigraphics, in which the geometric modelling of each component and of the final assembly (CAD) was carried out, the files for computer-aided manufacture (CAM) of each piece were generated, and the kinematics of the system was simulated under different driving schemes. We used the MATLAB aerospace toolbox and created an adaptive control module to simulate the system.

  13. Hybrid Electrostatic/Flextensional Deformable Membrane Mirror for Lightweight, Large Aperture and Cryogenic Space Telescopes, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — TRS Technologies proposes innovative hybrid electrostatic/flextensional membrane deformable mirror capable of large amplitude aberration correction for large...

  14. A laser particulate spectrometer for a space simulation facility

    Science.gov (United States)

    Schmitt, R. J.; Boyd, B. A.; Linford, R. M. F.; Richmond, R. G.

    1975-01-01

    A laser particulate spectrometer (LPS) system was developed to measure the size and speed distributions of particulate contaminants. Detection of the particulates is achieved by means of light scattering and extinction effects, using a single laser beam to cover a size range of 0.8 to 275 microns in diameter and a speed range of 0.2 to 20 meters/second. The LPS system was designed to operate in the high-vacuum environment of a space simulation chamber with cold shroud temperatures ranging from 77 to 300 K.

  15. Very large virtual compound spaces: construction, storage and utility in drug discovery.

    Science.gov (United States)

    Peng, Zhengwei

    2013-09-01

    Recent activities in the construction, storage and exploration of very large virtual compound spaces are reviewed in this report. As expected, the systematic exploration of compound spaces at the highest resolution (individual atoms and bonds) is intrinsically intractable. By contrast, by staying within a finite number of reactions and a finite number of reactants or fragments, several virtual compound spaces have been constructed in a combinatorial fashion, with sizes ranging from 10¹¹ to 10²⁰ compounds. Multiple search methods have been developed to perform searches (e.g. similarity, exact and substructure) in those compound spaces without the need for full enumeration. The up-front investment in synthetic feasibility made during the construction of some of these virtual compound spaces enables their wider adoption by medicinal chemists to design and synthesize important compounds for drug discovery. Recent activity in exploring virtual compound spaces via evolutionary approaches based on genetic algorithms also suggests a positive shift of focus from method development to workflow, integration and ease of use, all of which are required for this approach to be widely adopted by medicinal chemists.

  16. Software for Engineering Simulations of a Spacecraft

    Science.gov (United States)

    Shireman, Kirk; McSwain, Gene; McCormick, Bernell; Fardelos, Panayiotis

    2005-01-01

    Spacecraft Engineering Simulation II (SES II) is a C-language computer program for simulating diverse aspects of operation of a spacecraft characterized by either three or six degrees of freedom. A functional model in SES can include a trajectory flight plan; a submodel of a flight computer running navigational and flight-control software; and submodels of the environment, the dynamics of the spacecraft, and sensor inputs and outputs. SES II features a modular, object-oriented programming style. SES II supports event-based simulations, which, in turn, create an easily adaptable simulation environment in which many different types of trajectories can be simulated by use of the same software. The simulation output consists largely of flight data. SES II can be used to perform optimization and Monte Carlo dispersion simulations. It can also be used to perform simulations for multiple spacecraft. In addition to its generic simulation capabilities, SES offers special capabilities for space-shuttle simulations: for this purpose, it incorporates submodels of the space-shuttle dynamics and a C-language version of the guidance, navigation, and control components of the space-shuttle flight software.

  17. Numerical techniques for large cosmological N-body simulations

    International Nuclear Information System (INIS)

    Efstathiou, G.; Davis, M.; Frenk, C.S.; White, S.D.M.

    1985-01-01

    We describe and compare techniques for carrying out large N-body simulations of the gravitational evolution of clustering in the fundamental cube of an infinite periodic universe. In particular, we consider both particle-mesh (PM) codes and P³M codes in which a higher-resolution force is obtained by direct summation of contributions from neighboring particles. We discuss the mesh-induced anisotropies in the forces calculated by these schemes, and the extent to which they can model the desired 1/r² particle-particle interaction. We also consider how transformation of the time variable can improve the efficiency with which the equations of motion are integrated. We present tests of the accuracy with which the resulting schemes conserve energy and are able to follow individual particle trajectories. We have implemented an algorithm which allows initial conditions to be set up to model any desired spectrum of linear growing-mode density fluctuations. A number of tests demonstrate the power of this algorithm and delineate the conditions under which it is effective. We carry out several test simulations using a variety of techniques in order to show how the results are affected by dynamic-range limitations in the force calculations, by boundary effects, by residual artificialities in the initial conditions, and by the number of particles employed. For most purposes cosmological simulations are limited by the resolution of their force calculation rather than by the number of particles they can employ. For this reason, while PM codes are quite adequate to study the evolution of structure on large scales, P³M methods are to be preferred, in spite of their greater cost and complexity, whenever the evolution of small-scale structure is important
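    A minimal sketch of one PM force evaluation of the kind compared above: deposit particles on a periodic mesh, solve Poisson's equation with FFTs, difference the potential, and interpolate accelerations back. Nearest-grid-point assignment is used here for brevity; production codes use cloud-in-cell and, for P³M, add a short-range particle-particle correction:

```python
import numpy as np

def pm_accelerations(pos, n=32, box=1.0, G=1.0):
    """pos: (N, 3) positions in [0, box). Returns (N, 3) mesh accelerations."""
    cell = box / n
    idx = np.floor(pos / cell).astype(int) % n
    rho = np.zeros((n, n, n))
    np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)   # NGP deposit
    rho = rho / cell**3 - len(pos) / box**3                  # density contrast

    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=cell)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0
    phi_k = -4.0 * np.pi * G * np.fft.fftn(rho) / k2         # Poisson solve
    phi_k[0, 0, 0] = 0.0
    phi = np.real(np.fft.ifftn(phi_k))

    acc_mesh = [-np.gradient(phi, cell, axis=a) for a in range(3)]  # a = -grad(phi)
    return np.stack([g[idx[:, 0], idx[:, 1], idx[:, 2]] for g in acc_mesh], axis=1)

rng = np.random.default_rng(2)
print(pm_accelerations(rng.random((1000, 3))).shape)
```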

  18. Large Scale Monte Carlo Simulation of Neutrino Interactions Using the Open Science Grid and Commercial Clouds

    International Nuclear Information System (INIS)

    Norman, A.; Boyd, J.; Davies, G.; Flumerfelt, E.; Herner, K.; Mayer, N.; Mhashilhar, P.; Tamsett, M.; Timm, S.

    2015-01-01

    Modern long-baseline neutrino experiments like the NOvA experiment at Fermilab require large-scale, compute-intensive simulations of their neutrino beam fluxes and of the backgrounds induced by cosmic rays. The amount of simulation required to keep the systematic uncertainties in the simulation from dominating the final physics results is often 10x to 100x that of the actual detector exposure. For the first physics results from NOvA this has meant the simulation of more than 2 billion cosmic ray events in the far detector and more than 200 million NuMI beam spill simulations. Performing simulation at these high statistics levels has been made possible for NOvA through the use of the Open Science Grid and through large-scale runs on commercial clouds like Amazon EC2. We detail the challenges in performing large-scale simulation in these environments and how the computing infrastructure for the NOvA experiment has been adapted to seamlessly support the running of different simulation and data processing tasks on these resources. (paper)

  19. Large Eddy Simulation of Sydney Swirl Non-Reaction Jets

    DEFF Research Database (Denmark)

    Yang, Yang; Kær, Søren Knudsen; Yin, Chungen

    The Sydney swirl burner non-reacting case was studied using large eddy simulation. The two-point correlation method was introduced and used to estimate grid resolution. Energy spectra and instantaneous pressure and velocity plots were used to identify features in the flow field. Using these methods, vortex breakdown and the precessing vortex core are identified, and different flow zones are shown.

  20. Energy content of stormtime ring current from phase space mapping simulations

    International Nuclear Information System (INIS)

    Chen, M.W.; Schulz, M.; Lyons, L.R.

    1993-01-01

    The authors perform a model study to account for the increase in energy content of the trapped-particle population which occurs during the main phase of major geomagnetic storms. They consider stormtime particle transport in the equatorial region of the magnetosphere. They start with a phase space distribution of the ring current before the storm, created by a steady-state transport model. They then use a previously developed guiding-center particle simulation to map the stormtime ring current phase space, following Liouville's theorem. This model is able to account for the ten- to twentyfold increase in energy content of magnetospheric ions during the storm
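    The Liouville mapping at the heart of this approach can be sketched very simply: the stormtime phase space density at a point equals the pre-storm density at the point from which the guiding center was transported. The inward radial transport rule and the quiet-time distribution below are illustrative assumptions, not the model of the paper:

```python
import numpy as np

def f_prestorm(L, mu):
    """Assumed quiet-time phase-space density vs. L-shell and magnetic moment."""
    return np.exp(-(L - 6.0) ** 2) * mu ** -3.0

def map_storm(L_final, mu_final, dL=1.5):
    # Backward mapping: a particle now at L came from L0 = L + dL (assumed
    # inward transport conserving mu); Liouville: f_new(L, mu) = f_old(L0, mu).
    L0 = L_final + dL
    return f_prestorm(L0, mu_final)

L = np.linspace(2.0, 6.0, 5)
mu = 10.0
print("pre-storm :", f_prestorm(L, mu))
print("main phase:", map_storm(L, mu))   # enhanced PSD at low L -> ring current
```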

  1. Airline Operations Center Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The NASA Airspace Systems Program (ASP) uses a large suite of models, simulations, and laboratories to develop and assess new ATM concepts and technologies. Most of...

  2. Homogeneous SPC/E water nucleation in large molecular dynamics simulations.

    Science.gov (United States)

    Angélil, Raymond; Diemand, Jürg; Tanaka, Kyoko K; Tanaka, Hidekazu

    2015-08-14

    We perform direct large molecular dynamics simulations of homogeneous SPC/E water nucleation, using up to ∼4·10⁶ molecules. Our large system sizes allow us to measure extremely low and accurate nucleation rates, down to ∼10¹⁹ cm⁻³ s⁻¹, helping close the gap to experimentally measured rates of ∼10¹⁷ cm⁻³ s⁻¹. We are also able to precisely measure size distributions, sticking efficiencies, cluster temperatures, and cluster internal densities. We introduce a new functional form to implement the Yasuoka-Matsumoto nucleation rate measurement technique (threshold method). Comparison to nucleation models shows that classical nucleation theory over-estimates nucleation rates by a few orders of magnitude. The semi-phenomenological nucleation model does better, under-predicting rates by at worst a factor of 24. Unlike what has been observed in Lennard-Jones simulations, post-critical clusters have temperatures consistent with the run-average temperature. Also, we observe that post-critical clusters have densities very slightly higher, ∼5%, than bulk liquid. We re-calibrate a Hale-type J vs. S scaling relation using both experimental and simulation data, finding remarkable consistency over more than 30 orders of magnitude in nucleation rate and 180 K in temperature.
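    A sketch of the threshold method named above: count the clusters larger than a threshold size as a function of time; the slope of that count in its linear regime, divided by the system volume, estimates the nucleation rate J. The synthetic time series below stands in for simulation output, and the volume and units are illustrative assumptions:

```python
import numpy as np

def nucleation_rate(time, n_above_threshold, volume, fit_window):
    """Least-squares slope of cluster counts over fit_window, per unit volume."""
    lo, hi = fit_window
    mask = (time >= lo) & (time <= hi)
    slope, _ = np.polyfit(time[mask], n_above_threshold[mask], 1)
    return slope / volume

# Synthetic data: counts rise linearly once nucleation reaches steady state
t = np.linspace(0.0, 100.0, 200)                  # ns
counts = np.clip(0.8 * (t - 20.0), 0.0, None)     # clusters above threshold size
rate = nucleation_rate(t, counts, volume=1e-18, fit_window=(30.0, 90.0))
print(f"J ~ {rate:.2e} clusters / (cm^3 ns)")     # volume assumed in cm^3
```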

  3. Parallel simulation of tsunami inundation on a large-scale supercomputer

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation, and the computational power of recent massively parallel supercomputers is helpful for enabling faster-than-real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and it is expected that very fast parallel computers will become more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which uses a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only in the coastal regions. To balance the computational load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using the CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
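    The two-level load balancing described above can be sketched as follows; the layer sizes and processor count are illustrative assumptions:

```python
def allocate_cpus(grid_points, total_cpus):
    """Largest-remainder apportionment of CPUs across nested grid layers."""
    total = sum(grid_points)
    shares = [g * total_cpus / total for g in grid_points]
    alloc = [max(1, int(s)) for s in shares]
    order = sorted(range(len(shares)), key=lambda i: shares[i] - int(shares[i]),
                   reverse=True)
    i = 0
    while sum(alloc) < total_cpus:            # hand out the leftover CPUs
        alloc[order[i % len(alloc)]] += 1
        i += 1
    return alloc

def decompose_1d(ny, ncpu):
    """Row ranges of a 1-D decomposition of ny grid rows over ncpu processes."""
    base, extra = divmod(ny, ncpu)
    bounds, start = [], 0
    for r in range(ncpu):
        stop = start + base + (1 if r < extra else 0)
        bounds.append((start, stop))
        start = stop
    return bounds

layers = [1200 * 900, 2400 * 1800, 4800 * 3600]   # grid points per nested layer
cpus = allocate_cpus(layers, total_cpus=64)
print(cpus, decompose_1d(1800, cpus[1]))
```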

  4. Role of jet spacing and strut geometry on the formation of large scale structures and mixing characteristics

    Science.gov (United States)

    Soni, Rahul Kumar; De, Ashoke

    2018-05-01

    The present study primarily focuses on the effect of jet spacing and strut geometry on the evolution and structure of the large-scale vortices which play a key role in the mixing characteristics of turbulent supersonic flows. Numerically simulated results for varying strut geometry and jet spacing (Xn = nDj, with n = 2, 3, and 5) for a square jet of height Dj = 0.6 mm are presented; the work also finds local quasi-two-dimensionality for the X2 (2Dj) jet spacing, which does not hold at larger jet spacings. Further, the tapered strut (TS) section is modified into a straight strut (SS), and a remarkable difference in flow physics is revealed between the two configurations at the same jet spacing (X2: 2Dj). The instantaneous density and vorticity contours reveal structures of varying scales undergoing different evolutions in the different configurations. The effect of local spanwise rollers is clearly manifested in the mixing efficiency and the jet spreading rate. The SS configuration exhibits the best near-field mixing behavior amongst all the arrangements. However, among the TS cases, only the X2 (2Dj) configuration performs better, due to the presence of local spanwise rollers. The qualitative and quantitative analysis reveals that near-field mixing is strongly affected by the two-dimensional rollers, while the early onset of the wake mode is another crucial parameter for improved mixing. Modal decomposition performed for the SS arrangement sheds light on the spatial and temporal coherence of the structures, where the most dominant structures are found to be the von Kármán street vortices in the wake region.

  5. Extended phase-space methods for enhanced sampling in molecular simulations: a review

    Directory of Open Access Journals (Sweden)

    Hiroshi Fujisaki

    2015-09-01

    Full Text Available Molecular Dynamics simulations are a powerful approach to study biomolecular conformational changes or protein-ligand, protein-protein and protein-DNA/RNA interactions. Straightforward applications, however, are often hampered by incomplete sampling, since in a typical simulated trajectory the system will spend most of its time trapped by high energy barriers in restricted regions of the configuration space. Over the years, several techniques have been designed to overcome this problem and enhance space sampling. Here, we review a class of methods that rely on the idea of extending the set of dynamical variables of the system by adding extra ones associated with functions describing the process under study. In particular, we illustrate the Temperature Accelerated Molecular Dynamics (TAMD), Logarithmic Mean Force Dynamics (LogMFD), and Multiscale Enhanced Sampling (MSES) algorithms. We also discuss combinations with techniques for searching reaction paths. We show the advantages presented by this approach and how it allows important regions of the free energy landscape to be sampled quickly via automatic exploration.
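    The extended-variable idea behind TAMD can be sketched with a one-dimensional toy: an auxiliary variable s is harmonically coupled to the collective variable θ(x) = x and evolved by overdamped Langevin dynamics at a higher temperature T_s, dragging the physical system across barriers. The double-well potential and all parameters are illustrative assumptions:

```python
import math
import random

def force_physical(x, s, kappa=50.0):
    # V(x) = (x^2 - 1)^2 double well plus coupling (kappa/2) * (x - s)^2
    return -4.0 * x * (x * x - 1.0) - kappa * (x - s)

def tamd_trajectory(steps=20000, dt=1e-3, kappa=50.0, kT=0.2, kT_s=2.0,
                    gamma=1.0, gamma_s=10.0, seed=3):
    rng = random.Random(seed)
    x, s = -1.0, -1.0
    for _ in range(steps):
        # Overdamped Langevin updates: x at temperature kT, s at kT_s > kT
        fx = force_physical(x, s, kappa)
        fs = kappa * (x - s)                 # force on the auxiliary variable
        x += dt * fx / gamma + math.sqrt(2.0 * kT * dt / gamma) * rng.gauss(0, 1)
        s += dt * fs / gamma_s + math.sqrt(2.0 * kT_s * dt / gamma_s) * rng.gauss(0, 1)
    return x, s

print(tamd_trajectory())   # with kT_s >> kT the system can cross the barrier
```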

  6. Simulations of muon-induced neutron flux at large depths underground

    International Nuclear Information System (INIS)

    Kudryavtsev, V.A.; Spooner, N.J.C.; McMillan, J.E.

    2003-01-01

    The production of neutrons by cosmic-ray muons at large depths underground is discussed. The most recent versions of the muon propagation code MUSIC, and particle transport code FLUKA are used to evaluate muon and neutron fluxes. The results of simulations are compared with experimental data

  7. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    Science.gov (United States)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

    The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable to wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero-pressure-gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall-model methodology and to experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  8. Tetrahedral-Mesh Simulation of Turbulent Flows with the Space-Time Conservative Schemes

    Science.gov (United States)

    Chang, Chau-Lyan; Venkatachari, Balaji; Cheng, Gary C.

    2015-01-01

    Direct numerical simulations of turbulent flows are predominantly carried out using structured, hexahedral meshes despite decades of development in unstructured mesh methods. Tetrahedral meshes offer ease of mesh generation around complex geometries and the potential of an orientation-free grid that would provide unbiased small-scale dissipation and more accurate intermediate-scale solutions. However, due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for triangular and tetrahedral meshes at the cell interfaces, numerical issues exist when flow discontinuities or stagnation regions are present. The space-time conservative conservation element solution element (CESE) method - due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to more accurately simulate turbulent flows using unstructured tetrahedral meshes. To pave the way towards accurate simulation of shock/turbulent boundary-layer interaction, a series of wave and shock interaction benchmark problems that increase in complexity are computed in this paper with triangular/tetrahedral meshes. Preliminary computations for the normal shock/turbulence interactions are carried out with a relatively coarse mesh, by direct numerical simulation standards, in order to assess other effects such as boundary conditions and the necessity of a buffer domain. The results indicate that qualitative agreement with previous studies can be obtained for flows where strong shocks co-exist with unsteady waves that display a broad range of scales, using a relatively compact computational domain and less stringent requirements for grid clustering near the shock. With the space-time conservation properties, stable solutions without any spurious wave reflections can be obtained without a need for buffer domains near the outflow/farfield boundaries. Computational results for the

  9. An Engineering Design Reference Mission for a Future Large-Aperture UVOIR Space Observatory

    Science.gov (United States)

    Thronson, Harley A.; Bolcar, Matthew R.; Clampin, Mark; Crooke, Julie A.; Redding, David; Rioux, Norman; Stahl, H. Philip

    2016-01-01

    From the 2010 NRC Decadal Survey and the NASA Thirty-Year Roadmap, Enduring Quests, Daring Visions, to the recent AURA report, From Cosmic Birth to Living Earths, multiple community assessments have recommended development of a large-aperture UVOIR space observatory capable of achieving a broad range of compelling scientific goals. Of these priority science goals, the most technically challenging is the search for spectroscopic biomarkers in the atmospheres of exoplanets in the solar neighborhood. Here we present an engineering design reference mission (EDRM) for the Advanced Technology Large-Aperture Space Telescope (ATLAST), which was conceived from the start as capable of breakthrough science paired with an emphasis on cost control and cost effectiveness. An EDRM allows the engineering design trade space to be explored in depth to determine what are the most demanding requirements and where there are opportunities for margin against requirements. Our joint NASA GSFC/JPL/MSFC/STScI study team has used community-provided science goals to derive mission needs, requirements, and candidate mission architectures for a future large-aperture, non-cryogenic UVOIR space observatory. The ATLAST observatory is designed to operate at a Sun-Earth L2 orbit, which provides a stable thermal environment and excellent field of regard. Our reference designs have emphasized a serviceable 36-segment 9.2 m aperture telescope that stows within a five-meter diameter launch vehicle fairing. As part of our cost-management effort, this particular reference mission builds upon the engineering design for JWST. Moreover, it is scalable to a variety of launch vehicle fairings. Performance needs developed under the study are traceable to a variety of additional reference designs, including options for a monolithic primary mirror.

  10. Preliminary results on the dynamics of large and flexible space structures in Halo orbits

    Science.gov (United States)

    Colagrossi, Andrea; Lavagna, Michèle

    2017-05-01

    The global exploration roadmap suggests, among other ambitious future space programmes, a possible manned outpost in the lunar vicinity, to support surface operations and further astronaut training for longer and deeper space missions and transfers. In particular, a Lagrangian point orbit location in the Earth-Moon system is suggested for a manned cis-lunar infrastructure; a proposal which opens an interesting field of study from the astrodynamics perspective. The literature offers a wide body of research on orbital dynamics under the Three-Body Problem modelling approach, while less of it includes attitude dynamics modelling as well. However, whenever a large space structure (ISS-like) is considered, not only should the coupled orbit-attitude dynamics be modelled to run more accurate analyses, but the structural flexibility should be included too. The paper, starting from the well-known Circular Restricted Three-Body Problem formulation, presents some preliminary results obtained by adding a coupled orbit-attitude dynamical model and the effects due to the flexibility of the large structure. In addition, the most relevant perturbing phenomena, such as the Solar Radiation Pressure (SRP) and the fourth-body (Sun) gravity, are included in the model as well. A multi-body approach has been preferred to represent possible configurations of the large cis-lunar infrastructure: interconnected simple structural elements - such as beams, rods or lumped masses linked by springs - build up the space segment. To better investigate the relevance of the flexibility effects, the lumped-parameters approach is compared with a distributed-parameters semi-analytical technique. A sensitivity analysis of the system dynamics, with respect to different configurations and mechanical properties of the extended structure, is also presented, in order to highlight drivers for the lunar outpost design. Furthermore, a case study for a large and flexible space structure in Halo orbits around
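    For reference, a minimal sketch of the Circular Restricted Three-Body Problem dynamics underlying the study, in the rotating Earth-Moon frame with nondimensional units; the initial state below is an arbitrary illustrative value, not a Halo orbit:

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.01215  # Earth-Moon mass parameter

def crtbp(t, y, mu=MU):
    """CR3BP equations of motion in the rotating frame (nondimensional)."""
    x, yy, z, vx, vy, vz = y
    r1 = np.sqrt((x + mu) ** 2 + yy**2 + z**2)        # distance to Earth
    r2 = np.sqrt((x - 1 + mu) ** 2 + yy**2 + z**2)    # distance to Moon
    ax = x + 2 * vy - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = yy - 2 * vx - (1 - mu) * yy / r1**3 - mu * yy / r2**3
    az = -(1 - mu) * z / r1**3 - mu * z / r2**3
    return [vx, vy, vz, ax, ay, az]

y0 = [0.85, 0.0, 0.02, 0.0, 0.25, 0.0]    # illustrative initial state
sol = solve_ivp(crtbp, (0.0, 10.0), y0, rtol=1e-10, atol=1e-10)
print(sol.y[:3, -1])                       # final position in the rotating frame
```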

  11. Simulation Evaluation of Controller-Managed Spacing Tools under Realistic Operational Conditions

    Science.gov (United States)

    Callantine, Todd J.; Hunt, Sarah M.; Prevot, Thomas

    2014-01-01

    Controller-Managed Spacing (CMS) tools have been developed to aid air traffic controllers in managing high volumes of arriving aircraft according to a schedule while enabling them to fly efficient descent profiles. The CMS tools are undergoing refinement in preparation for field demonstration as part of NASA's Air Traffic Management (ATM) Technology Demonstration-1 (ATD-1). System-level ATD-1 simulations have been conducted to quantify expected efficiency and capacity gains under realistic operational conditions. This paper presents simulation results with a focus on CMS-tool human factors. The results suggest experienced controllers new to the tools find them acceptable and can use them effectively in ATD-1 operations.

  12. Thermal large Eddy simulations and experiments in the framework of non-isothermal blowing

    International Nuclear Information System (INIS)

    Brillant, G.

    2004-06-01

    The aim of this work is to study thermal large-eddy simulations and to determine the impact of non-isothermal blowing on a turbulent boundary layer. An experimental study is also carried out in order to complete and validate the simulation results. First, we developed a turbulent inlet condition for the velocity and the temperature, which is necessary for the blowing simulations. We studied the asymptotic behavior of the velocity, the temperature and the turbulent heat fluxes from a large-eddy simulation point of view. We then considered dynamic models for the eddy diffusivity and simulated a turbulent channel flow with imposed temperature, imposed flux and adiabatic walls. The numerical and experimental study of blowing allowed us to characterize the modifications of a thermal turbulent boundary layer as a function of the blowing rate. We observed the consequences of blowing on the mean and rms profiles of velocity and temperature, but also on the velocity-velocity and velocity-temperature correlations. Moreover, we noticed an increase of the turbulent structures in the boundary layer with blowing. (author)

  13. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools have been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.

  14. Plasmonic resonances of nanoparticles from large-scale quantum mechanical simulations

    Science.gov (United States)

    Zhang, Xu; Xiang, Hongping; Zhang, Mingliang; Lu, Gang

    2017-09-01

    Plasmonic resonance of metallic nanoparticles results from the coherent motion of their conduction electrons, driven by incident light. For nanoparticles less than 10 nm in diameter, localized surface plasmonic resonances become sensitive to the quantum nature of the conduction electrons. Unfortunately, quantum mechanical simulations based on time-dependent Kohn-Sham density functional theory are computationally too expensive to tackle metal particles larger than 2 nm. Herein, we introduce the recently developed time-dependent orbital-free density functional theory (TD-OFDFT) approach, which enables large-scale quantum mechanical simulations of the plasmonic responses of metallic nanostructures. Using TD-OFDFT, we have performed quantum mechanical simulations to understand the size-dependent plasmonic response of Na nanoparticles and the plasmonic responses of Na nanoparticle dimers and trimers. An outlook on future development of the TD-OFDFT method is also presented.

  15. Lightweight computational steering of very large scale molecular dynamics simulations

    International Nuclear Information System (INIS)

    Beazley, D.M.

    1996-01-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy-to-use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.
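
    The steering pattern described, a scripting interpreter bound to a running simulation, can be illustrated with a minimal Python sketch; the toy step() and its state dictionary are hypothetical stand-ins for the C simulation kernels the paper wraps:

```python
import code

# Toy "simulation" state standing in for a large MD code's arrays.
state = {"step": 0, "energy": 100.0, "atoms": 104_000_000}

def step(n=1):
    """Advance the toy simulation n steps (stand-in for the C kernel)."""
    for _ in range(n):
        state["step"] += 1
        state["energy"] *= 0.999  # pretend relaxation

def report():
    """Analysis hook a user might call mid-run."""
    print(f"step={state['step']} energy={state['energy']:.3f}")

# Drop into an interactive console sharing the simulation namespace:
# the user can call step(), report(), or define new analysis functions
# on the fly, which is the essence of lightweight computational steering.
code.interact(banner="steering console (try: step(10); report())",
              local={**globals()})
```

    In the paper this role is played by full scripting languages (Tcl/Tk, Perl, Python) bound to compiled simulation code; the sketch only conveys the shared-namespace idea.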

  16. Research on the method of measuring space information network capacity in communication service

    Directory of Open Access Journals (Sweden)

    Zhu Shichao

    2017-02-01

    Because of the large spatial and temporal scale of space information networks and their increasing complexity, existing methods of measuring information transmission capacity are unable to measure existing and future space information networks effectively. In this study, we first established a complex model of a space information network, and measured the capacity of the whole network by analyzing both the data access capability into the network and the data transmission capability within it. Finally, we verified the rationality of the proposed measuring method through collaborative simulation using the STK and Matlab software packages.

  17. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    Energy Technology Data Exchange (ETDEWEB)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento, Trento (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento, Trento (Italy)

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and form trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure, when needed, is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by virtue of the rejection-based mechanism. We test our improvements on real biological systems with a wide range of reaction networks to demonstrate their applicability and efficiency.
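
    A minimal sketch of the rejection mechanism at the heart of RSSA, here for a toy birth-death network; the fluctuation-interval bookkeeping is simplified relative to the paper's algorithm, and the rate constants are illustrative only:

```python
import math, random

# Toy network: X --(k1)--> X+1 (birth), X --(k2*X)--> X-1 (death).
k1, k2 = 5.0, 0.1
x, t, t_end = 10, 0.0, 50.0
delta = 0.1  # fractional half-width of the fluctuation interval

def bounds(x):
    """Propensity upper bounds, valid while x stays within [lo, hi]."""
    lo, hi = math.floor(x * (1 - delta)), math.ceil(x * (1 + delta))
    return (lo, hi), [k1, k2 * hi]          # a1 constant, a2 increasing in x

(lo, hi), a_up = bounds(x)
while t < t_end:
    a0_up = sum(a_up)
    t += random.expovariate(a0_up)          # thinning: every trial advances time
    # pick a candidate reaction from the upper-bound distribution
    j = 0 if random.random() * a0_up < a_up[0] else 1
    a_j = k1 if j == 0 else k2 * x          # exact propensity, evaluated lazily
    if random.random() * a_up[j] < a_j:     # accept with probability a_j / a_up[j]
        x += 1 if j == 0 else -1
        if not (lo <= x <= hi):             # state left the interval: refresh bounds
            (lo, hi), a_up = bounds(x)
print(f"t={t:.2f}, x={x}")
```

    The exactness follows from Poisson thinning: candidates arrive at the upper-bound rate and are accepted with the ratio of true to bounded propensity, so propensities only need re-evaluation at acceptance, and bounds only when the state exits its interval.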

  18. Large-scale simulations with distributed computing: Asymptotic scaling of ballistic deposition

    International Nuclear Information System (INIS)

    Farnudi, Bahman; Vvedensky, Dimitri D

    2011-01-01

    Extensive kinetic Monte Carlo simulations are reported for ballistic deposition (BD) in (1 + 1) dimensions. The large system sizes L observed for the onset of asymptotic scaling (L ≅ 2^12) explain the widespread discrepancies in previous reports for exponents of BD in one and likely in higher dimensions. The exponents obtained directly from our simulations, α = 0.499 ± 0.004 and β = 0.336 ± 0.004, capture the exact values α = 1/2 and β = 1/3 for the one-dimensional Kardar-Parisi-Zhang equation. An analysis of our simulations suggests a criterion for identifying the onset of true asymptotic scaling, which enables a more informed evaluation of exponents for BD in higher dimensions. These simulations were made possible by the Simulation through Social Networking project at the Institute for Advanced Studies in Basic Sciences in 2007, which was re-launched in November 2010.
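
    A minimal sketch of (1+1)-dimensional ballistic deposition of the kind reported, with a crude estimate of the growth exponent β from the interface width; the small L used here is deliberately far below the L ≈ 2^12 the record identifies as needed for asymptotic scaling:

```python
import random
import numpy as np

L, deposits = 256, 200_000
h = np.zeros(L, dtype=int)
widths, times = [], []

for n in range(1, deposits + 1):
    i = random.randrange(L)
    # ballistic deposition sticking rule with periodic boundaries
    h[i] = max(h[(i - 1) % L], h[i] + 1, h[(i + 1) % L])
    if n % 2000 == 0:
        times.append(n / L)                  # time in deposited monolayers
        widths.append(h.std())               # interface width W(L, t)

# growth exponent beta from the early-time slope of log W vs log t
logt, logw = np.log(times[:40]), np.log(widths[:40])
beta = np.polyfit(logt, logw, 1)[0]
print(f"estimated beta ~ {beta:.2f} (KPZ value: 1/3)")
```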

  19. Large-eddy simulation of mesoscale dynamics and entrainment around a pocket of open cells observed in VOCALS-REx RF06

    Directory of Open Access Journals (Sweden)

    A. H. Berner

    2011-10-01

    Large-eddy simulations of a pocket of open cells (POC) based on VOCALS Regional Experiment (REx) NSF C-130 Research Flight 06 are analyzed and compared with aircraft observations. A doubly-periodic domain 192 km × 24 km with 125 m horizontal and 5 m vertical grid spacing near the capping inversion is used. The POC is realized in the model as a fixed 96 km wide region of reduced cloud droplet number concentration (Nc) based on observed values; initialization and forcing are otherwise uniform across the domain. The model reproduces aircraft-observed differences in boundary-layer structure and precipitation organization between a well-mixed overcast region and a decoupled POC with open-cell precipitating cumuli, although the simulated cloud cover is too large in the POC. A sensitivity study in which Nc is allowed to advect following the turbulent flow gives nearly identical results over the 16 h length of the simulation (which starts at night and continues into the next afternoon).

    The simulated entrainment rate is nearly a factor of two smaller in the less turbulent POC than in the more turbulent overcast region. However, the inversion rises at a nearly uniform rate across the domain because powerful buoyancy restoring forces counteract horizontal inversion height gradients. A secondary circulation develops in the model that diverts subsiding free-tropospheric air away from the POC into the surrounding overcast region, counterbalancing the weaker entrainment in the POC with locally weaker subsidence.

  20. Large eddy simulation of a fuel rod subchannel

    International Nuclear Information System (INIS)

    Mayer, Gusztav

    2007-01-01

    In a VVER-440 reactor the measured outlet temperature is related to fuel limit parameters, and the power upgrading plans for VVER-440 reactors motivated us to obtain more information on the mixing process in the fuel assemblies. In a VVER-440 rod bundle the fuel rods are arranged in a triangular array. Measurements show (Krauss and Meyer, 1998) that the classical engineering approach, which tries to trace the characterization of such systems back to equivalent (hydraulic diameter) pipe flows, does not give reasonable results. Due to the different turbulence characteristics, the mixing is more intensive in rod bundles than would be expected based on equivalent pipe flow correlations. As a possible explanation of the high mixing, secondary flow was deduced from measurements by several experimentalists (Trupp and Azad, 1975). Another candidate to explain the high mixing is the so-called flow pulsation phenomenon (Krauss and Meyer, 1998). In this paper we present subchannel simulations (Mayer et al. 2007) using large eddy simulation (LES) methodology and the lattice Boltzmann method (LBM), without spacers, at a Reynolds number of 21000. The simulation results are compared with the measurements of Trupp and Azad (1975). The mean axial velocity profile shows good agreement with the measurement data. Secondary flow has been observed directly in the simulation results. Reasonable agreement has been achieved for most Reynolds stresses. Nevertheless, the calculated normal stresses show a small but systematic deviation from the measurement data. (author)

  1. Quasi-equilibria in reduced Liouville spaces.

    Science.gov (United States)

    Halse, Meghan E; Dumez, Jean-Nicolas; Emsley, Lyndon

    2012-06-14

    The quasi-equilibrium behaviour of isolated nuclear spin systems in full and reduced Liouville spaces is discussed. We focus in particular on the reduced Liouville spaces used in the low-order correlations in Liouville space (LCL) simulation method, a restricted-spin-space approach to efficiently modelling the dynamics of large networks of strongly coupled spins. General numerical methods for the calculation of quasi-equilibrium expectation values of observables in Liouville space are presented. In particular, we treat the cases of a time-independent Hamiltonian, a time-periodic Hamiltonian (with and without stroboscopic sampling) and powder averaging. These quasi-equilibrium calculation methods are applied to the example case of spin diffusion in solid-state nuclear magnetic resonance. We show that there are marked differences between the quasi-equilibrium behaviour of spin systems in the full and reduced spaces. These differences are particularly interesting in the time-periodic-Hamiltonian case, where simulations carried out in the reduced space demonstrate ergodic behaviour even for small spin systems (as few as five homonuclei). The implications of this ergodic property for the success of the LCL method in modelling the dynamics of spin diffusion in magic-angle spinning experiments on powders are discussed.
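
    For orientation, the quasi-equilibrium idea in the time-independent case can be stated compactly (standard dephasing form, assuming nondegenerate energy gaps; the record's reduced-space construction builds on this): with spectral projectors P_n of the Hamiltonian,

```latex
\bar{\rho} \;=\; \lim_{T\to\infty}\frac{1}{T}\int_0^T \rho(t)\,dt
          \;=\; \sum_n P_n\,\rho(0)\,P_n ,
\qquad
\langle Q \rangle_{qe} \;=\; \mathrm{Tr}\!\left[\bar{\rho}\,Q\right].
```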

  2. Modeling and analysis of large-eddy simulations of particle-laden turbulent boundary layer flows

    KAUST Repository

    Rahman, Mustafa M.

    2017-01-05

    We describe a framework for the large-eddy simulation of solid particles suspended and transported within an incompressible turbulent boundary layer (TBL). For the fluid phase, the large-eddy simulation (LES) of the incompressible turbulent boundary layer employs a stretched spiral vortex subgrid-scale model and a virtual wall model similar to the work of Cheng, Pullin & Samtaney (J. Fluid Mech., 2015). This LES model is virtually parameter free and involves no active filtering of the computed velocity field. Furthermore, a recycling method to generate turbulent inflow is implemented. For the particle phase, the direct quadrature method of moments (DQMOM) is chosen, in which the weights and abscissas of the quadrature approximation are tracked directly rather than the moments themselves. The numerical method in this framework is based on a fractional-step method with an energy-conservative fourth-order finite difference scheme on a staggered mesh. The code is parallelized based on the standard message passing interface (MPI) protocol and is designed for distributed-memory machines. It is proposed to utilize this framework to examine the transport of particles in very large-scale simulations. The solver is validated using the well-known Taylor-Green vortex case. A large-scale sandstorm case is simulated, and the altitude variations of number density, along with its fluctuations, are quantified.

  3. Cosmological observations with a wide field telescope in space: Pixel simulations of EUCLID spectrometer

    International Nuclear Information System (INIS)

    Zoubian, Julien

    2012-01-01

    The observations of supernovae, the cosmic microwave background, and more recently the measurement of baryon acoustic oscillations and weak lensing effects, converge to a Lambda CDM model, with an accelerating expansion of today's Universe. This model needs two dark components to fit the observations, dark matter and dark energy. Two approaches seem particularly promising to measure both the geometry of the Universe and the growth of dark matter structures: the analysis of the weak distortions of distant galaxies by gravitational lensing and the study of baryon acoustic oscillations. Both methods require very large sky surveys of several thousand square degrees. In the context of the spectroscopic survey of the space mission EUCLID, dedicated to the study of the dark side of the universe, I developed a pixel simulation tool for analyzing instrumental performance. The proposed method can be summarized in three steps. The first step is to simulate the observables, i.e. mainly the sources on the sky. I worked out a new method, adapted for spectroscopic simulations, which allows an existing galaxy survey to be mocked while ensuring that the distributions of the spectral properties of the galaxies are representative of current observations, in particular the distribution of the emission lines. The second step is to simulate the instrument and produce images which are equivalent to the expected real images. Based on the pixel simulator of the HST, I developed a new tool to compute the images of the spectroscopic channel of EUCLID. The new simulator has the particularity of being able to simulate PSFs with various energy distributions and detectors with differing pixels. The last step is the estimation of the performance of the instrument. Based on existing tools, I set up a pipeline of image processing and performance measurement. My main results were: 1) to validate the method by simulating an existing survey of galaxies, the WISP survey, 2) to determine the

  4. A regularized vortex-particle mesh method for large eddy simulation

    DEFF Research Database (Denmark)

    Spietz, Henrik Juul; Walther, Jens Honore; Hejlesen, Mads Mølholm

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT based solver for the Poisson equation. Arbitrary high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations, hence we use the method for Large Eddy...

  5. Generation of initial kinetic distributions for simulation of long-pulse charged particle beams with high space-charge intensity

    Directory of Open Access Journals (Sweden)

    Steven M. Lund

    2009-11-01

    Self-consistent Vlasov-Poisson simulations of beams with high space-charge intensity often require specification of initial phase-space distributions that reflect properties of a beam that is well adapted to the transport channel—both in terms of low-order rms (envelope) properties and the higher-order phase-space structure. Here, we first review broad classes of kinetic distributions commonly in use as initial Vlasov distributions in simulations of unbunched or weakly bunched beams with intense space-charge fields, including the Kapchinskij-Vladimirskij (KV) equilibrium, continuous-focusing equilibria with specific detailed examples, and various nonequilibrium distributions, such as the semi-Gaussian distribution and distributions formed from specified functions of linear-field Courant-Snyder invariants. Important practical details necessary to specify these distributions in terms of standard accelerator inputs are presented in a unified format. Building on this presentation, a new class of approximate initial kinetic distributions is constructed using transformations that preserve linear-focusing, single-particle Courant-Snyder invariants to map initial continuous-focusing equilibrium distributions to a form more appropriate for noncontinuous focusing channels. Self-consistent particle-in-cell simulations are employed to show that the approximate initial distributions generated in this manner are better adapted to the focusing channels for beams with high space-charge intensity. This improved capability enables simulations that more precisely probe intrinsic stability properties and machine performance.
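
    For orientation, the KV equilibrium reviewed above can be written (one common form, stated here without the record's normalization details) in terms of the transverse Courant-Snyder invariants C_x, C_y:

```latex
f_{KV}(x,x',y,y') \;\propto\; \delta\!\left(\frac{C_x}{\varepsilon_x} + \frac{C_y}{\varepsilon_y} - 1\right),
\qquad
C_x = \gamma_x x^{2} + 2\alpha_x x x' + \beta_x x'^{2},
```

    with ε the emittances and α, β, γ the Courant-Snyder (Twiss) functions. The projected spatial density of this distribution is a uniform ellipse, which is what makes its self-consistent space-charge fields linear.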

  6. Real-time graphics for the Space Station Freedom cupola, developed in the Systems Engineering Simulator

    Science.gov (United States)

    Red, Michael T.; Hess, Philip W.

    1989-01-01

    Among the Lyndon B. Johnson Space Center's responsibilities for Space Station Freedom is the cupola. Attached to the resource node, the cupola is a windowed structure that will serve as the space station's secondary control center. From the cupola, operations involving the mobile service center and orbital maneuvering vehicle will be conducted. The Systems Engineering Simulator (SES), located in building 16, activated a real-time man-in-the-loop cupola simulator in November 1987. The SES cupola is an engineering tool with the flexibility to evolve in both hardware and software as the final cupola design matures. Two workstations are simulated with closed-circuit television monitors, rotational and translational hand controllers, programmable display pushbuttons, and graphics displays with trackball and keyboard. The displays and controls of the SES cupola are driven by a Silicon Graphics Integrated Raster Imaging System (IRIS) 4D/70 GT computer. Through the use of an interactive display builder program, SES cupola display pages consisting of two-dimensional and three-dimensional graphics are constructed. These display pages interact with the SES via the IRIS real-time graphics interface. The focus is on the real-time graphics interface applications software developed on the IRIS.

  7. An optimum organizational structure for a large earth-orbiting multidisciplinary Space Base

    Science.gov (United States)

    Ragusa, J. M.

    1973-01-01

    The purpose of this exploratory study was to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The essential finding of this research was that a four-level project type 'total matrix' model will optimize the efficiency and effectiveness of Space Base technologists.

  8. Comparative Performance in Single-Port Versus Multiport Minimally Invasive Surgery, and Small Versus Large Operative Working Spaces: A Preclinical Randomized Crossover Trial.

    Science.gov (United States)

    Marcus, Hani J; Seneci, Carlo A; Hughes-Hallett, Archie; Cundy, Thomas P; Nandi, Dipankar; Yang, Guang-Zhong; Darzi, Ara

    2016-04-01

    Surgical approaches such as transanal endoscopic microsurgery, which utilize small operative working spaces and are necessarily single-port, are particularly demanding with standard instruments and have not been widely adopted. The aim of this study was to compare simultaneously surgical performance in single-port versus multiport approaches, and small versus large working spaces. Ten novice, 4 intermediate, and 1 expert surgeons were recruited from a university hospital. A preclinical randomized crossover study design was implemented, comparing performance under the following conditions: (1) multiport approach and large working space, (2) multiport approach and intermediate working space, (3) single-port approach and large working space, (4) single-port approach and intermediate working space, and (5) single-port approach and small working space. In each case, participants performed peg transfer and pattern cutting tasks, and each task repetition was scored. Intermediate and expert surgeons performed significantly better than novices in all conditions. Performance in single-port surgery was significantly worse than in multiport surgery. In multiport approaches, performance tended to be worse in the intermediate than in the large working space. In single-port surgery, there was a converse trend; performances in the intermediate and small working spaces were significantly better than in the large working space. Single-port approaches were significantly more technically challenging than multiport approaches, possibly reflecting loss of instrument triangulation. Surprisingly, in single-port approaches, in which triangulation was no longer a factor, performance in large working spaces was worse than in intermediate and small working spaces. © The Author(s) 2015.

  9. Large eddy simulation of cavitating flows

    Science.gov (United States)

    Gnanaskandan, Aswin; Mahesh, Krishnan

    2014-11-01

    Large eddy simulation on unstructured grids is used to study hydrodynamic cavitation. The multiphase medium is represented using a homogeneous equilibrium model that assumes thermal equilibrium between the liquid and the vapor phase. Surface tension effects are ignored and the governing equations are the compressible Navier Stokes equations for the liquid/vapor mixture along with a transport equation for the vapor mass fraction. A characteristic-based filtering scheme is developed to handle shocks and material discontinuities in non-ideal gases and mixtures. A TVD filter is applied as a corrector step in a predictor-corrector approach with the predictor scheme being non-dissipative and symmetric. The method is validated for canonical one dimensional flows and leading edge cavitation over a hydrofoil, and applied to study sheet to cloud cavitation over a wedge. This work is supported by the Office of Naval Research.
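
    The homogeneous equilibrium closure used here ties the mixture properties to a single vapor mass fraction; a minimal statement of that model (standard form; source-term details vary by formulation) is:

```latex
\frac{1}{\rho} = \frac{Y_v}{\rho_v} + \frac{1 - Y_v}{\rho_l},
\qquad
\frac{\partial (\rho Y_v)}{\partial t} + \nabla\!\cdot\!\left(\rho Y_v \mathbf{u}\right) = \dot{S}_v ,
```

    where Y_v is the vapor mass fraction and S_v the net phase-change source; liquid and vapor share one velocity and temperature, which is the thermal-equilibrium assumption the record states.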

  10. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    Science.gov (United States)

    He, Qing; Li, Hong

    The belt conveyor is one of the most important devices for transporting bulk-solid material over long distances. Dynamic analysis is the key to deciding whether a design is rational in technique, safe and reliable in running, and feasible in economy. It is very important to study dynamic properties in order to improve efficiency and productivity and to guarantee safe, reliable and stable running of the conveyor. The dynamic research and applications of large scale belt conveyors are discussed. The main research topics and the state-of-the-art of dynamic research on belt conveyors are analyzed. The main future work should focus on dynamic analysis, modeling and simulation of the main components and the whole system, and on nonlinear modeling, simulation and vibration analysis of large scale conveyor systems.

  11. On variations of space-heating energy use in office buildings

    International Nuclear Information System (INIS)

    Lin, Hung-Wen; Hong, Tianzhen

    2013-01-01

    Highlights: • Space heating is the largest energy end use in the U.S. building sector. • Key design and operational parameters have the most influence on space heating. • Simulated results were benchmarked against actual results to analyze discrepancies. • Year-to-year weather changes have a significant impact on space-heating energy use. • Findings enable stakeholders to make better decisions on energy efficiency. - Abstract: Space heating is the largest energy end use, consuming more than seven quintillion joules of site energy annually in the U.S. building sector. A few recent studies showed discrepancies in simulated space-heating energy use among different building energy modeling programs, and the simulated results are suspected of under-predicting reality. While various uncertainties are associated with building simulations, especially when simulations are performed by different modelers using different simulation programs for buildings with different configurations, it is crucial to identify and evaluate the key driving factors of space-heating energy use in order to support the design and operation of low-energy buildings. In this study, 10 design and operation parameters for the space-heating systems of two prototypical office buildings in each of three U.S. heating climates are identified and evaluated, using building simulations with EnergyPlus, to determine the most influential parameters and their impacts on variations of space-heating energy use. The influence of annual weather change on space-heating energy is also investigated using 30-year actual weather data. The simulated space-heating energy use is further benchmarked against that of similar actual office buildings in two U.S. commercial-building databases to better understand the discrepancies between simulated and actual energy use. In summary, variations of both the simulated and actual space-heating energy use of office buildings in all three heating climates can be very large. However

  12. Large Eddy Simulation of Unstably Stratified Turbulent Flow over Urban-Like Building Arrays

    Directory of Open Access Journals (Sweden)

    Bobin Wang

    2013-01-01

    Thermal instability induced by solar radiation is the most common condition of the urban atmosphere in daytime. Compared to research under neutral conditions, only a few numerical works have studied the unstable urban boundary layer, and the effect of buoyancy force remains unclear. In this paper, unstably stratified turbulent boundary layer flow over three-dimensional urban-like building arrays with ground heating is simulated. Large eddy simulation is applied to capture the main turbulence structures so that the effect of buoyancy force on turbulence can be investigated. A Lagrangian dynamic subgrid scale model is used for the complex flow, together with a wall function that takes into account the large pressure gradient near buildings. The numerical model and method are verified with results measured in a wind tunnel experiment. The simulated results agree well with the experiment in mean velocity and temperature, as well as turbulent intensities. The mean flow structure inside the canopy layer varies with thermal instability, while no large secondary vortex is observed. Turbulent intensities are enhanced, as buoyancy force contributes to the production of turbulent kinetic energy.

  13. Effects of incentives on psychosocial performances in simulated space-dwelling groups

    Science.gov (United States)

    Hienz, Robert D.; Brady, Joseph V.; Hursh, Steven R.; Gasior, Eric D.; Spence, Kevin R.; Emurian, Henry H.

    Prior research with individually isolated 3-person crews in a distributed, interactive, planetary exploration simulation examined the effects of communication constraints and crew configuration changes on crew performance and psychosocial self-report measures. The present report extends these findings to a model of performance maintenance that operationalizes conditions under which disruptive affective responses by crew participants might be anticipated to emerge. Experiments evaluated the effects of changes in incentive conditions on crew performance and self-report measures in simulated space-dwelling groups. Crews participated in a simulated planetary exploration mission that required identification, collection, and analysis of geologic samples. Results showed that crew performance effectiveness was unaffected by either positive or negative incentive conditions, while self-report measures were differentially affected—negative incentive conditions produced pronounced increases in negative self-report ratings and decreases in positive self-report ratings, while positive incentive conditions produced increased positive self-report ratings only. Thus, incentive conditions associated with simulated spaceflight missions can significantly affect psychosocial adaptation without compromising task performance effectiveness in trained and experienced crews.

  14. Dynamics Modeling and Simulation of Large Transport Airplanes in Upset Conditions

    Science.gov (United States)

    Foster, John V.; Cunningham, Kevin; Fremaux, Charles M.; Shah, Gautam H.; Stewart, Eric C.; Rivers, Robert A.; Wilborn, James E.; Gato, William

    2005-01-01

    As part of NASA's Aviation Safety and Security Program, research has been in progress to develop aerodynamic modeling methods for simulations that accurately predict the flight dynamics characteristics of large transport airplanes in upset conditions. The motivation for this research stems from the recognition that simulation is a vital tool for addressing loss-of-control accidents, including applications to pilot training, accident reconstruction, and advanced control system analysis. The ultimate goal of this effort is to contribute to the reduction of the fatal accident rate due to loss-of-control. Research activities have involved accident analyses, wind tunnel testing, and piloted simulation. Results have shown that significant improvements in simulation fidelity for upset conditions, compared to current training simulations, can be achieved using state-of-the-art wind tunnel testing and aerodynamic modeling methods. This paper provides a summary of research completed to date and includes discussion on key technical results, lessons learned, and future research needs.

  15. Advanced UVOIR Mirror Technology Development (AMTD) for Very Large Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Smith, W. Scott; Mosier, Gary; Abplanalp, Laura; Arnold, William

    2014-01-01

    The ASTRO2010 Decadal survey stated that an advanced large-aperture ultraviolet, optical, near-infrared (UVOIR) telescope is required to enable the next generation of compelling astrophysics and exoplanet science, and that present technology is not mature enough to affordably build and launch any potential UVOIR mission concept. AMTD builds on the state of the art (SOA) defined by over 30 years of monolithic and segmented ground- and space-telescope mirror technology to mature six key technologies. AMTD is deliberately pursuing multiple design paths to provide the science community with options to enable either large-aperture monolithic or segmented mirrors, with clear engineering metrics traceable to science requirements.

  16. Large Eddy Simulation for Incompressible Flows An Introduction

    CERN Document Server

    Sagaut, P

    2005-01-01

    The first and most exhaustive work of its kind devoted entirely to the subject, Large Eddy Simulation presents a comprehensive account and a unified view of this young but very rich discipline. LES is the only efficient technique for approaching high Reynolds numbers when simulating industrial, natural or experimental configurations. The author concentrates on incompressible fluids and chooses his topics so as to treat with care both the mathematical ideas and their applications. The book addresses researchers as well as graduate students and engineers. The second edition was a greatly enriched version motivated both by the increasing theoretical interest in LES and the increasing number of applications. Two entirely new chapters were devoted to the coupling of LES with multiresolution multidomain techniques and to the new hybrid approaches that relate the LES procedures to the classical statistical methods based on the Reynolds-Averaged Navier-Stokes equations. This 3rd edition adds various sections to the text...

  17. Quality and Reliability of Large-Eddy Simulations II

    CERN Document Server

    Salvetti, Maria Vittoria; Meyers, Johan; Sagaut, Pierre

    2011-01-01

    The second Workshop on "Quality and Reliability of Large-Eddy Simulations", QLES2009, was held at the University of Pisa from September 9 to September 11, 2009. Its predecessor, QLES2007, was organized in 2007 in Leuven (Belgium). The focus of QLES2009 was on issues related to predicting, assessing and assuring the quality of LES. The main goal of QLES2009 was to enhance the knowledge on error sources and on their interaction in LES and to devise criteria for the prediction and optimization of simulation quality, by bringing together mathematicians, physicists and engineers and providing a platform specifically addressing these aspects for LES. Contributions were made by leading experts in the field. The present book contains the written contributions to QLES2009 and is divided into three parts, which reflect the main topics addressed at the workshop: (i) SGS modeling and discretization errors; (ii) Assessment and reduction of computational errors; (iii) Mathematical analysis and foundation for SGS modeling.

  18. Image-based Exploration of Iso-surfaces for Large Multi- Variable Datasets using Parameter Space.

    KAUST Repository

    Binyahib, Roba S.

    2013-05-13

    With an increase in processing power, more complex simulations have resulted in larger data sizes, with higher resolution and more variables. Many techniques have been developed to help the user visualize and analyze data from such simulations. However, dealing with a large amount of multivariate data is challenging, time-consuming and often requires high-end clusters. Consequently, novel visualization techniques are needed to explore such data. Many users would like to visually explore their data and change certain visual aspects without the need to use special clusters or having to load a large amount of data. This is the idea behind explorable images (EI). Explorable images are a novel approach that provides limited interactive visualization without the need to re-render from the original data [40]. In this work, the concept of EI has been used to create a workflow that deals with explorable iso-surfaces for scalar fields in a multivariate, time-varying dataset. As a pre-processing step, a set of iso-values for each scalar field is inferred and extracted from a user-assisted sampling technique in time-parameter space. These iso-values are then used to generate iso-surfaces that are pre-rendered (from a fixed viewpoint) along with additional buffers (i.e. normals, depth, values of other fields, etc.) to provide a compressed representation of iso-surfaces in the dataset. We present a tool that at run-time allows the user to interactively browse and calculate a combination of iso-surfaces superimposed on each other. The result is the same as calculating multiple iso-surfaces from the original data but without the memory and processing overhead. Our tool also allows the user to change the (scalar) values superimposed on each of the surfaces, modify their color map, and interactively re-light the surfaces. We demonstrate the effectiveness of our approach over a multi-terabyte combustion dataset. We also illustrate the efficiency and accuracy of our
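
    The buffer-compositing idea behind explorable images can be sketched in a few lines of NumPy; the synthetic depth/normal/value buffers below are hypothetical stand-ins for the pre-rendered per-iso-surface buffers such a tool would store:

```python
import numpy as np

H, W = 256, 256
rng = np.random.default_rng(0)

def fake_layer():
    """Stand-in for one pre-rendered iso-surface: depth, normals, scalar values."""
    depth = rng.uniform(0.2, 1.0, (H, W))
    n = rng.normal(size=(H, W, 3))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    value = rng.uniform(size=(H, W))           # second-field value on the surface
    return depth, n, value

def composite(layers, light=(0.0, 0.0, 1.0),
              cmap=lambda v: np.stack([v, v, 1 - v], axis=-1)):
    """Depth-test the layers per pixel, then re-light and re-color the winner."""
    light = np.asarray(light, float)
    out = np.zeros((H, W, 3))
    zbuf = np.full((H, W), np.inf)
    for depth, normals, value in layers:
        mask = depth < zbuf                    # nearer surface wins the pixel
        lam = np.clip(normals @ light, 0, 1)   # Lambertian shading from normals
        out[mask] = (cmap(value) * lam[..., None])[mask]
        zbuf[mask] = depth[mask]
    return out

# Re-lighting or re-coloring only reruns this composite, never the renderer.
img = composite([fake_layer(), fake_layer()])
print(img.shape, img.min(), img.max())
```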

  19. A simulation model for reliability evaluation of Space Station power systems

    Science.gov (United States)

    Singh, C.; Patton, A. D.; Kumar, Mudit; Wagner, H.

    1988-01-01

    A detailed simulation model for the hybrid Space Station power system is presented which allows photovoltaic and solar dynamic power sources to be mixed in varying proportions. The model considers the dependence of reliability and storage characteristics during the sun and eclipse periods, and makes it possible to model the charging and discharging of the energy storage modules in a relatively accurate manner on a continuous basis.

  20. MAGNETIC NULL POINTS IN KINETIC SIMULATIONS OF SPACE PLASMAS

    International Nuclear Information System (INIS)

    Olshevsky, Vyacheslav; Innocenti, Maria Elena; Cazzola, Emanuele; Lapenta, Giovanni; Deca, Jan; Divin, Andrey; Peng, Ivy Bo; Markidis, Stefano

    2016-01-01

    We present a systematic attempt to study magnetic null points and the associated magnetic energy conversion in kinetic particle-in-cell simulations of various plasma configurations. We address three-dimensional simulations performed with the semi-implicit kinetic electromagnetic code iPic3D in different setups: variations of a Harris current sheet, dipolar and quadrupolar magnetospheres interacting with the solar wind, and a relaxing turbulent configuration with multiple null points. Spiral nulls are more likely to be created in space plasmas: in all our simulations except the lunar magnetic anomaly (LMA) and the quadrupolar mini-magnetosphere, the number of spiral nulls prevails over the number of radial nulls by a factor of 3–9. We show that magnetic nulls often do not indicate the regions of intensive energy dissipation. Energy dissipation events caused by topological bifurcations at radial nulls are rather rare and short-lived. The so-called X-lines formed by the radial nulls in the Harris current sheet and LMA simulations are rather stable and do not exhibit any energy dissipation. Energy dissipation is more powerful in the vicinity of spiral nulls enclosed by magnetic flux ropes with strong currents at their axes (their cross sections resemble 2D magnetic islands). These null lines, reminiscent of Z-pinches, efficiently dissipate magnetic energy due to secondary instabilities, such as the two-stream or kinking instability, accompanied by changes in magnetic topology. Current enhancements accompanied by spiral nulls may signal magnetic energy conversion sites in the observational data.
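
    The radial/spiral distinction used above follows from the eigenvalues of the magnetic field Jacobian at the null; a minimal classifier sketch in the spirit of the standard taxonomy (simplified, with illustrative Jacobians):

```python
import numpy as np

def classify_null(jacobian, tol=1e-12):
    """Classify a 3D magnetic null from dB_i/dx_j evaluated at the null point.

    For a solenoidal field the three eigenvalues sum to zero; a pair of
    complex-conjugate eigenvalues marks a spiral null, all-real eigenvalues
    a radial null.
    """
    lam = np.linalg.eigvals(np.asarray(jacobian, float))
    assert abs(lam.sum().real) < 1e-6          # div B = 0 implies trace ~ 0
    spiral = np.any(np.abs(lam.imag) > tol)
    return "spiral" if spiral else "radial"

# Examples: a plain radial null, and a null threaded by an axial current
# whose rotation produces a complex-conjugate eigenvalue pair (spiral).
J_radial = np.diag([1.0, 1.0, -2.0])
J_spiral = np.array([[0.5, -2.0, 0.0],
                     [2.0,  0.5, 0.0],
                     [0.0,  0.0, -1.0]])
print(classify_null(J_radial), classify_null(J_spiral))
```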

  1. Model Experiments for the Determination of Airflow in Large Spaces

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    Model experiments are one of the methods used for the determination of airflow in large spaces. This paper discusses the formation of the governing dimensionless numbers. It is shown that experiments at a reduced scale will often necessitate a fully developed turbulence level of the flow. Details of the flow from supply openings are very important for the determination of room air distribution. It is in some cases possible to make a simplified supply opening for the model experiment.
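
    The governing dimensionless numbers referred to above are, for nonisothermal room airflow, usually taken to be the supply Reynolds and Archimedes numbers (one common convention):

```latex
Re = \frac{u_0\, h}{\nu},
\qquad
Ar = \frac{\beta\, g\, h\, \Delta T_0}{u_0^{2}},
```

    with u0 and h the supply velocity and a characteristic opening dimension, and ΔT0 the supply-to-room temperature difference. Matching Ar at reduced scale forces Re down, which is why the flow in the model must remain fully turbulent for similarity to hold.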

  2. GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations.

    Science.gov (United States)

    Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji

    2015-07-01

    GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.

  3. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever-larger problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  4. Nuclear EMP simulation for large-scale urban environments. FDTD for electrically large problems.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, William S. [Los Alamos National Laboratory; Bull, Jeffrey S. [Los Alamos National Laboratory; Wilcox, Trevor [Los Alamos National Laboratory; Bos, Randall J. [Los Alamos National Laboratory; Shao, Xuan-Min [Los Alamos National Laboratory; Goorley, John T. [Los Alamos National Laboratory; Costigan, Keeley R. [Los Alamos National Laboratory

    2012-08-13

    In the case of a terrorist nuclear attack in a metropolitan area, EMP measurement could provide: (1) a prompt confirmation of the nature of the explosion (chemical or nuclear) for emergency response; and (2) characterization parameters of the device (reaction history, yield) for technical forensics. However, the urban environment could affect the fidelity of the prompt EMP measurement (as well as all other types of prompt measurement): (1) the nuclear EMP wavefront would no longer be coherent, due to incoherent production, attenuation, and propagation of gammas and electrons; and (2) EMP propagation from the source region outward would undergo complicated transmission, reflection, and diffraction processes. For EMP simulation in an electrically-large urban environment we take a coupled MCNP/FDTD (finite-difference time-domain Maxwell solver) approach. Because of numerical dispersion and anisotropy, FDTD tends to be limited to problems that are not 'too' large compared to the wavelengths of interest, so we use a higher-order low-dispersion, isotropic FDTD algorithm for EMP propagation.

  5. A Novel Simulation Technician Laboratory Design: Results of a Survey-Based Study.

    Science.gov (United States)

    Ahmed, Rami; Hughes, Patrick G; Friedl, Ed; Ortiz Figueroa, Fabiana; Cepeda Brito, Jose R; Frey, Jennifer; Birmingham, Lauren E; Atkinson, Steven Scott

    2016-03-16

    OBJECTIVE: The purpose of this study was to elicit feedback from simulation technicians prior to developing the first simulation technician-specific simulation laboratory in Akron, OH. Simulation technicians serve a vital role in simulation centers within hospitals/health centers around the world. The first simulation technician degree program in the US has been approved in Akron, OH. To satisfy the requirements of this program and to meet the needs of this special audience of learners, a customized simulation lab is essential. A web-based survey was circulated to simulation technicians prior to completion of the lab for the new program. The survey consisted of questions aimed at identifying structural and functional design elements of a novel simulation center for the training of simulation technicians. Quantitative methods were utilized to analyze data. Over 90% of technicians (n=65) think that a lab designed explicitly for the training of technicians is novel and beneficial. Approximately 75% of respondents think that the space provided appropriate audiovisual (AV) infrastructure and space to evaluate the ability of technicians to be independent. The respondents think that the lab needed more storage space, visualization space for a large number of students, and more space in the technical/repair area. CONCLUSIONS: A space designed for the training of simulation technicians was considered to be beneficial. This laboratory requires distinct space for technical repair, adequate bench space for the maintenance and repair of simulators, an appropriate AV infrastructure, and space to evaluate the ability of technicians to be independent.

  6. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. Large-scale simulations here are those that involve such a variety of scales and such physical complexity that corresponding experiments and/or theories do not exist. In the nuclear field, it is indispensable to estimate real phenomena from simulations in order to improve the safety and security of nuclear power plants. Here, analysis of the uncertainty included in simulations is needed to reveal the sensitivity of uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge, and to establish a degree of certainty through verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)', a set of computing processes with deductive and inductive approaches modelled on human reasoning processes. Our idea is to execute deductive and inductive simulations corresponding to the deductive and inductive approaches. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective in reducing the uncertainty and in arriving at a degree of certainty. (author)

  7. Primary loop simulation of the SP-100 space nuclear reactor

    International Nuclear Information System (INIS)

    Borges, Eduardo M.; Braz Filho, Francisco A.; Guimaraes, Lamartine N.F.

    2011-01-01

    Between 1983 and 1992 the SP-100 space nuclear reactor development project, for electric power generation in the range of 100 to 1000 kWe, was conducted in the USA. Several configurations were studied to satisfy different mission objectives and power systems. In this reactor the heat is generated in a compact core and removed by liquid lithium, the primary loop flow is controlled by thermoelectric electromagnetic (EMTE) pumps, and thermoelectric converters produce direct-current energy. To define the system operating point at nominal power, it is necessary to simulate the thermal-hydraulic components of the space nuclear reactor. In this paper the BEMTE-3 computer code is used to evaluate the EMTE pump design performance for a thermal-hydraulic primary loop configuration, and to compare the system operating points of the SP-100 reactor at two thermal powers, with satisfactory results. (author)

  8. Large shear deformation of particle gels studied by Brownian Dynamics simulations

    NARCIS (Netherlands)

    Rzepiela, A.A.; Opheusden, van J.H.J.; Vliet, van T.

    2004-01-01

    Brownian Dynamics (BD) simulations have been performed to study structure and rheology of particle gels under large shear deformation. The model incorporates soft spherical particles, and reversible flexible bond formation. Two different methods of shear deformation are discussed, namely affine and

  9. A large-signal dynamic simulation for the series resonant converter

    Science.gov (United States)

    King, R. J.; Stuart, T. A.

    1983-01-01

    A simple nonlinear discrete-time dynamic model for the series resonant dc-dc converter is derived using approximations appropriate to most power converters. This model is useful for the dynamic simulation of a series resonant converter using only a desktop calculator. The model is compared with a laboratory converter for a large transient event.

  10. A regularized vortex-particle mesh method for large eddy simulation

    Science.gov (United States)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT based solver for the Poisson equation. Arbitrary high order is achieved through regularization of singular Green's function solutions to the Poisson equation and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier Stokes equations, hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.
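
    As an illustration of the regularization mentioned above, a Gaussian-filtered Green's function for the 3D Poisson equation takes the form (second-order member of the family; the cited work derives arbitrary-order versions):

```latex
G(\mathbf{r}) = -\frac{1}{4\pi |\mathbf{r}|}
\;\;\longrightarrow\;\;
G_\sigma(\mathbf{r}) = -\frac{1}{4\pi |\mathbf{r}|}\,
\operatorname{erf}\!\left(\frac{|\mathbf{r}|}{\sqrt{2}\,\sigma}\right),
```

    which is smooth at r = 0 and recovers the singular kernel for |r| ≫ σ, so the regularization length σ plays the role of the LES filter width.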

  11. Large eddy simulation of turbulent and stably-stratified flows

    International Nuclear Information System (INIS)

    Fallon, Benoit

    1994-01-01

    The unsteady turbulent flow over a backward-facing step is studied by means of Large Eddy Simulations with a structure-function subgrid model, in both isothermal and stably-stratified configurations. Without stratification, the flow develops highly-distorted Kelvin-Helmholtz billows, undergoing helical pairing, with Λ-shaped vortices shed downstream. We show that forcing injected by recirculation fluctuations governs the development of these oblique-mode instabilities. The statistical results show good agreement with the experimental measurements. For stably-stratified configurations, the flow remains more two-dimensional. We show, with increasing stratification, how the shear layer growth is frozen by inhibition of the pairing process and then of the Kelvin-Helmholtz instabilities, and how gravity waves or stable density interfaces develop. Eddy structures of the flow present striking analogies with the stratified mixing layer. Additional computations show the development of secondary Kelvin-Helmholtz instabilities on the vorticity layers between two primary structures. This important mechanism, based on baroclinic effects (horizontal density gradients), constitutes an additional part of the turbulent mixing process. Finally, the feasibility of Large Eddy Simulation is demonstrated for industrial flows by studying a complex stratified cavity. Temperature fluctuations are compared to experimental measurements. We also develop three-dimensional unsteady animations in order to understand and visualize turbulent interactions. (author)
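
    For reference, the structure-function subgrid model used above closes the eddy viscosity from the resolved second-order velocity structure function (Métais-Lesieur form, with the constants as usually quoted):

```latex
\nu_{sgs}(\mathbf{x},\Delta) = 0.105\, C_K^{-3/2}\, \Delta\, \sqrt{\bar{F}_2(\mathbf{x},\Delta)},
\qquad
\bar{F}_2 = \left\langle \left\| \mathbf{u}(\mathbf{x}+\mathbf{r}) - \mathbf{u}(\mathbf{x}) \right\|^{2} \right\rangle_{\|\mathbf{r}\| = \Delta},
```

    with C_K the Kolmogorov constant (≈ 1.4) and Δ the grid scale over which the local structure function is averaged.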

  12. Large-eddy simulation of ethanol spray combustion using a finite-rate combustion model

    Energy Technology Data Exchange (ETDEWEB)

    Li, K.; Zhou, L.X. [Tsinghua Univ., Beijing (China). Dept. of Engineering Mechanics; Chan, C.K. [Hong Kong Polytechnic Univ. (China). Dept. of Applied Mathematics

    2013-07-01

    Large-eddy simulation of spray combustion is developing rapidly, but the combustion models are seldom validated against detailed experimental data. In this paper, large-eddy simulation of ethanol-air spray combustion was performed using an Eulerian-Lagrangian approach, a subgrid-scale kinetic energy stress model, and a finite-rate combustion model. The simulation results are validated in detail against experiments. The LES-obtained statistically averaged temperature is in agreement with the experimental results in most regions. The instantaneous LES results show the coherent structures of the shear region near the high-temperature flame zone and the fuel vapor concentration map, indicating that the droplets are concentrated in this shear region. The droplet sizes are found to be in the range of 20-100 μm. The instantaneous temperature map shows the close interaction between the coherent structures and the combustion reaction.

  13. Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow

    OpenAIRE

    Sam Ali Al; Szasz Robert; Revstedt Johan

    2015-01-01

    The performance of subgrid-scale models is studied by simulating a separated flow over a wavy channel. The first and second order statistical moments of the resolved velocities obtained by using Large-Eddy Simulations at different mesh resolutions are compared with Direct Numerical Simulation data. The effectiveness of modeling the wall stresses by using a local log-law is then tested on a relatively coarse grid. The results exhibit a good agreement between highly-resolved Large Eddy Simu...

  14. Study of coherent structures of turbulence with large wall-normal gradients in thermophysical properties using direct numerical simulation

    International Nuclear Information System (INIS)

    Reinink, Shawn K.; Yaras, Metin I.

    2015-01-01

    Forced-convection heat transfer in a heated working fluid at a thermodynamic state near its pseudocritical point is poorly predicted by correlations calibrated with data at subcritical temperatures and pressures. This is suggested to be primarily due to the influence of large wall-normal thermophysical property gradients that develop in proximity of the pseudocritical point on the concentration of coherent turbulence structures near the wall. The physical mechanisms dominating this influence remain poorly understood. In the present study, direct numerical simulation is used to study the development of coherent vortical structures within a turbulent spot under the influence of large wall-normal property gradients. A turbulent spot rather than a fully turbulent boundary layer is used for the study, for the coherent structures of turbulence in a spot tend to be in a more organized state which may allow for more effective identification of cause-and-effect relationships. Large wall-normal gradients in thermophysical properties are created by heating the working fluid which is near the pseudocritical thermodynamic state. It is found that during improved heat transfer, wall-normal gradients in density accelerate the growth of the Kelvin-Helmholtz instability mechanism in the shear layer enveloping low-speed streaks, causing it to roll up into hairpin vortices at a faster rate. It is suggested that this occurs by the baroclinic vorticity generation mechanism which accelerates the streamwise grouping of vorticity during shear layer roll-up. The increased roll-up frequency leads to reduced streamwise spacing between hairpin vortices in wave packets. The density gradients also promote the sinuous instability mode in low-speed streaks. The resulting oscillations in the streaks in the streamwise-spanwise plane lead to locally reduced spanwise spacing between hairpin vortices forming over adjacent low-speed streaks. The reduction in streamwise and spanwise spacing between
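
    The baroclinic mechanism invoked above appears as a source term in the vorticity transport equation (standard form, viscous contribution abbreviated):

```latex
\frac{D\boldsymbol{\omega}}{Dt} =
(\boldsymbol{\omega}\cdot\nabla)\mathbf{u}
- \boldsymbol{\omega}\,(\nabla\cdot\mathbf{u})
+ \frac{\nabla\rho \times \nabla p}{\rho^{2}}
+ (\text{viscous terms}),
```

    where the third term generates vorticity wherever the wall-normal density gradient is misaligned with the pressure gradient across the rolling-up shear layer, which is the route by which the near-pseudocritical property gradients accelerate the Kelvin-Helmholtz roll-up described here.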

  15. Molecular dynamics simulations of sputtering of organic overlayers by slow, large clusters

    International Nuclear Information System (INIS)

    Rzeznik, L.; Czerwinski, B.; Garrison, B.J.; Winograd, N.; Postawa, Z.

    2008-01-01

    The ion-stimulated desorption of organic molecules by impact of large and slow clusters is examined using molecular dynamics (MD) computer simulations. The investigated system, represented by a monolayer of benzene deposited on Ag{111}, is irradiated with projectiles composed of thousands of noble gas atoms having a kinetic energy of 0.1-20 eV/atom. The sputtering yield of molecular species and the kinetic energy distributions are analyzed and compared to the results obtained for a PS4 overlayer. The simulations demonstrate quite clearly that the physics of ejection by large and slow clusters is distinct from the ejection events stimulated by the popular SIMS clusters, like C60, Au3 and SF5, at tens-of-keV energies.

  16. Discontinuous Galerkin methodology for Large-Eddy Simulations of wind turbine airfoils

    DEFF Research Database (Denmark)

    Frére, A.; Sørensen, Niels N.; Hillewaert, K.

    2016-01-01

    This paper aims at evaluating the potential of the Discontinuous Galerkin (DG) methodology for Large-Eddy Simulation (LES) of wind turbine airfoils. The DG method has shown high accuracy, excellent scalability and capacity to handle unstructured meshes. It is, however, not yet used in the wind energy sector. The present study aims at evaluating this methodology on an application which is relevant for that sector and focuses on blade section aerodynamics characterization. To be pertinent for large wind turbines, the simulations would need to be at low Mach numbers (M ≤ 0.3) where compressible ... The study considers flows at low and high Reynolds numbers and compares the results to state-of-the-art models used in industry, namely the panel method (XFOIL with boundary layer modeling) and Reynolds-Averaged Navier-Stokes (RANS). At low Reynolds number (Re = 6 × 10⁴), involving laminar boundary layer separation and transition...

  17. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment

    Science.gov (United States)

    Baurle, Robert A.; Edwards, Jack R.

    2010-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing flow phenomena under conditions that are representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to the choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered on issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure

  18. Effects of simulated space environmental parameters on six commercially available composite materials

    International Nuclear Information System (INIS)

    Funk, J.G.; Sykes, G.F. Jr.

    1989-04-01

    The effects of simulated space environmental parameters on microdamage induced by the environment in a series of commercially available graphite-fiber-reinforced composite materials were determined. Composites with both thermoset and thermoplastic resin systems were studied. Low-Earth-Orbit (LEO) exposures were simulated by thermal cycling; geosynchronous-orbit (GEO) exposures were simulated by electron irradiation plus thermal cycling. The thermal cycling temperature range was -250 F to either 200 F or 150 F. The upper limits of the thermal cycles were different to ensure that an individual composite material was not cycled above its glass transition temperature. Material response was characterized through assessment of the induced microcracking and its influence on mechanical property changes at both room temperature and -250 F. Microdamage was induced in both thermoset and thermoplastic advanced composite materials exposed to the simulated LEO environment. However, a 350 F cure single-phase toughened epoxy composite was not damaged during exposure to the LEO environment. The simulated GEO environment produced microdamage in all materials tested.

  19. Statistical Analysis of Large Simulated Yield Datasets for Studying Climate Effects

    Science.gov (United States)

    Makowski, David; Asseng, Senthold; Ewert, Frank; Bassu, Simona; Durand, Jean-Louis; Martre, Pierre; Adam, Myriam; Aggarwal, Pramod K.; Angulo, Carlos; Baron, Chritian

    2015-01-01

    Many studies have been carried out during the last decade to study the effect of climate change on crop yields and other key crop characteristics. In these studies, one or several crop models were used to simulate crop growth and development for different climate scenarios that correspond to different projections of atmospheric CO2 concentration, temperature, and rainfall changes (Semenov et al., 1996; Tubiello and Ewert, 2002; White et al., 2011). The Agricultural Model Intercomparison and Improvement Project (AgMIP; Rosenzweig et al., 2013) builds on these studies with the goal of using an ensemble of multiple crop models in order to assess effects of climate change scenarios for several crops in contrasting environments. These studies generate large datasets, including thousands of simulated crop yield data. They include series of yield values obtained by combining several crop models with different climate scenarios that are defined by several climatic variables (temperature, CO2, rainfall, etc.). Such datasets potentially provide useful information on the possible effects of different climate change scenarios on crop yields. However, it is sometimes difficult to analyze these datasets and to summarize them in a useful way due to their structural complexity; simulated yield data can differ among contrasting climate scenarios, sites, and crop models. Another issue is that it is not straightforward to extrapolate the results obtained for the scenarios to alternative climate change scenarios not initially included in the simulation protocols. Additional dynamic crop model simulations for new climate change scenarios are an option but this approach is costly, especially when a large number of crop models are used to generate the simulated data, as in AgMIP. Statistical models have been used to analyze responses of measured yield data to climate variables in past studies (Lobell et al., 2011), but the use of a statistical model to analyze yields simulated by complex
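
    As a concrete illustration of the statistical-model alternative discussed above, the sketch below fits a simple regression surrogate to an ensemble of simulated yields so that scenarios absent from the simulation protocol can be interpolated. The data, functional form, and coefficients are invented for illustration, not AgMIP output.

        import numpy as np

        # Surrogate: yield ~ b0 + b1*dT + b2*dT^2 + b3*dCO2, fitted to
        # crop-model output so new scenarios can be assessed without
        # rerunning the dynamic crop models.
        rng = np.random.default_rng(0)
        dT = rng.uniform(0.0, 6.0, 200)        # warming scenarios (K)
        dCO2 = rng.uniform(0.0, 360.0, 200)    # CO2 increase (ppm)
        yield_sim = (8.0 - 0.5*dT - 0.04*dT**2 + 0.004*dCO2
                     + rng.normal(0.0, 0.3, 200))   # stand-in for model output

        X = np.column_stack([np.ones_like(dT), dT, dT**2, dCO2])
        beta, *_ = np.linalg.lstsq(X, yield_sim, rcond=None)

        # interpolate to a scenario not in the original protocol
        x_new = np.array([1.0, 3.5, 3.5**2, 200.0])
        print("predicted yield (t/ha):", x_new @ beta)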

  20. A future large-aperture UVOIR space observatory: reference designs

    Science.gov (United States)

    Rioux, Norman; Thronson, Harley; Feinberg, Lee; Stahl, H. Philip; Redding, Dave; Jones, Andrew; Sturm, James; Collins, Christine; Liu, Alice

    2015-09-01

    Our joint NASA GSFC/JPL/MSFC/STScI study team has used community-provided science goals to derive mission needs, requirements, and candidate mission architectures for a future large-aperture, non-cryogenic UVOIR space observatory. We describe the feasibility assessment of system thermal and dynamic stability for supporting coronagraphy. The observatory is in a Sun-Earth L2 orbit providing a stable thermal environment and excellent field of regard. Reference designs include a 36-segment 9.2 m aperture telescope that stows within a five-meter-diameter launch vehicle fairing. Performance needs developed under the study are traceable to a variety of reference designs including options for a monolithic primary mirror.

  1. Optimal design of a composite space shield based on numerical simulations

    International Nuclear Information System (INIS)

    Son, Byung Jin; Yoo, Jeong Hoon; Lee, Min Hyung

    2015-01-01

    In this study, an optimal design of a stuffed Whipple shield is proposed by using numerical simulations and a new penetration criterion. The target model was selected based on the shield model used in the Columbus module of the International Space Station. Because experimental results can be obtained only in the low-velocity region below 7 km/s, the ballistic limit curve (BLC) in the high-velocity region above 7 km/s must be derived by numerical simulation. AUTODYN-2D, a commercial hydrocode package, was used for the nonlinear transient analysis of the hypervelocity impact. The smoothed particle hydrodynamics (SPH) method was applied to the projectile and bumper modeling to represent the debris cloud generated after the impact. The numerical simulation model and selected material properties were validated through a quantitative comparison between numerical and experimental results. A new criterion to determine whether penetration occurs is proposed based on kinetic energy analysis by numerical simulation in the velocity region over 7 km/s. A parameter optimization was performed to improve the protection ability at a specific condition through the Design of Experiments (DOE) method and the Response Surface Methodology (RSM). The performance of the proposed optimal design was numerically verified.
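
    The DOE/RSM step mentioned above amounts to fitting a low-order polynomial response surface to a handful of simulation runs and optimizing over it. The sketch below shows that workflow on synthetic data; the design variables, sample values, and coefficients are assumptions, not the paper's AUTODYN results.

        import numpy as np
        from scipy.optimize import minimize

        # Response-surface sketch: fit a quadratic surface over bumper
        # thickness t1 (mm) and standoff s (mm), then minimize the fit.
        rng = np.random.default_rng(1)
        t1 = rng.uniform(1.0, 3.0, 40)
        s = rng.uniform(50.0, 150.0, 40)
        resp = (5.0 - 1.2*t1 + 0.3*t1**2 - 0.02*s + 1.0e-4*s**2
                + rng.normal(0.0, 0.05, 40))   # stand-in for simulated response

        X = np.column_stack([np.ones_like(t1), t1, s, t1**2, s**2, t1*s])
        beta, *_ = np.linalg.lstsq(X, resp, rcond=None)

        def surface(v):
            x1, x2 = v
            return beta @ np.array([1.0, x1, x2, x1**2, x2**2, x1*x2])

        opt = minimize(surface, x0=[2.0, 100.0],
                       bounds=[(1.0, 3.0), (50.0, 150.0)])
        print("RSM optimum (t1, s):", opt.x)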

  2. Effects of combined dimension reduction and tabulation on the simulations of a turbulent premixed flame using a large-eddy simulation/probability density function method

    Science.gov (United States)

    Kim, Jeonglae; Pope, Stephen B.

    2014-05-01

    A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.

  3. Measurement and Simulation of the Variation in Proton-Induced Energy Deposition in Large Silicon Diode Arrays

    Science.gov (United States)

    Howe, Christina L.; Weller, Robert A.; Reed, Robert A.; Sierawski, Brian D.; Marshall, Paul W.; Marshall, Cheryl J.; Mendenhall, Marcus H.; Schrimpf, Ronald D.

    2007-01-01

    The proton-induced charge deposition in a well-characterized silicon P-i-N focal plane array is analyzed with Monte Carlo based simulations. These simulations include all physical processes, together with pile-up, to accurately describe the experimental data. Simulation results reveal important high-energy events not easily detected through experiment due to low statistics. The effects of each physical mechanism on the device response are shown for a single proton energy as well as a full proton space flux.

  4. Efficient graph-based dynamic load-balancing for parallel large-scale agent-based traffic simulation

    NARCIS (Netherlands)

    Xu, Y.; Cai, W.; Aydt, H.; Lees, M.; Tolk, A.; Diallo, S.Y.; Ryzhov, I.O.; Yilmaz, L.; Buckley, S.; Miller, J.A.

    2014-01-01

    One of the issues of parallelizing large-scale agent-based traffic simulations is partitioning and load-balancing. Traffic simulations are dynamic applications where the distribution of workload in the spatial domain constantly changes. Dynamic load-balancing at run-time has shown better efficiency

  5. Experimental identification of a comb-shaped chaotic region in multiple parameter spaces simulated by the Hindmarsh-Rose neuron model

    Science.gov (United States)

    Jia, Bing

    2014-03-01

    A comb-shaped chaotic region has been simulated in multiple two-dimensional parameter spaces using the Hindmarsh-Rose (HR) neuron model in many recent studies, and it can interpret almost all of the previously simulated bifurcation processes with chaos in neural firing patterns. In the present paper, a comb-shaped chaotic region in a two-dimensional parameter space was reproduced, presenting different processes of period-adding bifurcations with chaos as one parameter was changed while the other was fixed at different levels. In the biological experiments, different period-adding bifurcation scenarios with chaos obtained by decreasing the extracellular calcium concentration were observed from some neural pacemakers at different levels of extracellular 4-aminopyridine concentration and from other pacemakers at different levels of extracellular caesium concentration. By using the nonlinear time series analysis method, the deterministic dynamics of the experimental chaotic firings were investigated. The period-adding bifurcations with chaos observed in the experiments resembled those simulated in the comb-shaped chaotic region using the HR model. The experimental results show that period-adding bifurcations with chaos are preserved in different two-dimensional parameter spaces, which provides evidence of the existence of the comb-shaped chaotic region and a demonstration of the simulation results in different two-dimensional parameter spaces in the HR neuron model. The results also present relationships between different firing patterns in two-dimensional parameter spaces.
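
    For reference, the sketch below integrates the standard three-variable Hindmarsh-Rose equations and counts spikes while sweeping the injected current, one way period-adding sequences like those described above show up. The parameter values are the commonly used textbook ones, not necessarily those of this study.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Standard 3-variable Hindmarsh-Rose model; sweeping the injected
        # current I changes the number of spikes per burst (period-adding).
        def hr(t, u, I, r=0.006, s=4.0, x_rest=-1.6):
            x, y, z = u
            return [y - x**3 + 3.0*x**2 - z + I,
                    1.0 - 5.0*x**2 - y,
                    r * (s * (x - x_rest) - z)]

        for I in (1.8, 2.4, 3.0):
            sol = solve_ivp(hr, (0.0, 2000.0), [-1.6, 0.0, 0.0],
                            args=(I,), max_step=0.1)
            x = sol.y[0][sol.t > 500.0]            # discard the transient
            peaks = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]) & (x[1:-1] > 0.5)
            print(f"I = {I}: {peaks.sum()} spikes in the sampled window")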

  6. Experimental identification of a comb-shaped chaotic region in multiple parameter spaces simulated by the Hindmarsh-Rose neuron model

    International Nuclear Information System (INIS)

    Jia Bing

    2014-01-01

    A comb-shaped chaotic region has been simulated in multiple two-dimensional parameter spaces using the Hindmarsh-Rose (HR) neuron model in many recent studies, and it can interpret almost all of the previously simulated bifurcation processes with chaos in neural firing patterns. In the present paper, a comb-shaped chaotic region in a two-dimensional parameter space was reproduced, presenting different processes of period-adding bifurcations with chaos as one parameter was changed while the other was fixed at different levels. In the biological experiments, different period-adding bifurcation scenarios with chaos obtained by decreasing the extracellular calcium concentration were observed from some neural pacemakers at different levels of extracellular 4-aminopyridine concentration and from other pacemakers at different levels of extracellular caesium concentration. By using the nonlinear time series analysis method, the deterministic dynamics of the experimental chaotic firings were investigated. The period-adding bifurcations with chaos observed in the experiments resembled those simulated in the comb-shaped chaotic region using the HR model. The experimental results show that period-adding bifurcations with chaos are preserved in different two-dimensional parameter spaces, which provides evidence of the existence of the comb-shaped chaotic region and a demonstration of the simulation results in different two-dimensional parameter spaces in the HR neuron model. The results also present relationships between different firing patterns in two-dimensional parameter spaces.

  7. Major technological innovations introduced in the large antennas of the Deep Space Network

    Science.gov (United States)

    Imbriale, W. A.

    2002-01-01

    The NASA Deep Space Network (DSN) is the largest and most sensitive scientific, telecommunications and radio navigation network in the world. Its principal responsibilities are to provide communications, tracking, and science services to most of the world's spacecraft that travel beyond low Earth orbit. The network consists of three Deep Space Communications Complexes. Each of the three complexes consists of multiple large antennas equipped with ultra sensitive receiving systems. A centralized Signal Processing Center (SPC) remotely controls the antennas, generates and transmits spacecraft commands, and receives and processes the spacecraft telemetry.

  8. Space-Charge Simulation of Integrable Rapid Cycling Synchrotron

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Jeffery [Fermilab]; Valishev, Alexander [Fermilab]

    2017-05-01

    Integrable optics is an innovation in particle accelerator design that enables strong nonlinear focusing without generating parametric resonances. We use a Synergia space-charge simulation to investigate the application of integrable optics to a high-intensity hadron ring that could replace the Fermilab Booster. We find that incorporating integrability into the design suppresses the beam halo generated by a mismatched KV beam. Our integrable rapid cycling synchrotron (iRCS) design includes other features of modern ring design such as low momentum compaction factor and harmonically canceling sextupoles. Experimental tests of high-intensity beams in integrable lattices will take place over the next several years at the Fermilab Integrable Optics Test Accelerator (IOTA) and the University of Maryland Electron Ring (UMER).

  9. Experimental simulations of beam propagation over large distances in a compact linear Paul trap

    International Nuclear Information System (INIS)

    Gilson, Erik P.; Chung, Moses; Davidson, Ronald C.; Dorf, Mikhail; Efthimion, Philip C.; Majeski, Richard

    2006-01-01

    The Paul Trap Simulator Experiment (PTSX) is a compact laboratory experiment that places the physicist in the frame of reference of a long, charged-particle bunch coasting through a kilometers-long magnetic alternating-gradient (AG) transport system. The transverse dynamics of particles in both systems are described by similar equations, including nonlinear space-charge effects. The time-dependent voltages applied to the PTSX quadrupole electrodes are equivalent to the axially oscillating magnetic fields applied in the AG system. Experiments concerning the quiescent propagation of intense beams over large distances can then be performed in a compact and flexible facility. An understanding and characterization of the conditions required for quiescent beam transport, minimum halo particle generation, and precise beam compression and manipulation techniques are essential, as accelerators and transport systems demand that ever-increasing amounts of space charge be transported. Application areas include ion-beam-driven high energy density physics, high energy and nuclear physics accelerator systems, etc. One-component cesium plasmas have been trapped in PTSX that correspond to normalized beam intensities, s = ωp²(0)/(2ωq²), up to 80% of the space-charge limit where self-electric forces balance the applied focusing force. Here, ωp(0) = [nb(0)eb²/(mbε0)]^(1/2) is the on-axis plasma frequency, and ωq is the smooth-focusing frequency associated with the applied focusing field. Plasmas in PTSX with values of s that are 20% of the limit have been trapped for times corresponding to equivalent beam propagation over 10 km. Results are presented for experiments in which the amplitude of the quadrupole focusing lattice is modified as a function of time. It is found that instantaneous changes in lattice amplitude can be detrimental to transverse confinement of the charge bunch

  10. Experimental simulations of beam propagation over large distances in a compact linear Paul trap

    Science.gov (United States)

    Gilson, Erik P.; Chung, Moses; Davidson, Ronald C.; Dorf, Mikhail; Efthimion, Philip C.; Majeski, Richard

    2006-05-01

    The Paul Trap Simulator Experiment (PTSX) is a compact laboratory experiment that places the physicist in the frame of reference of a long, charged-particle bunch coasting through a kilometers-long magnetic alternating-gradient (AG) transport system. The transverse dynamics of particles in both systems are described by similar equations, including nonlinear space-charge effects. The time-dependent voltages applied to the PTSX quadrupole electrodes are equivalent to the axially oscillating magnetic fields applied in the AG system. Experiments concerning the quiescent propagation of intense beams over large distances can then be performed in a compact and flexible facility. An understanding and characterization of the conditions required for quiescent beam transport, minimum halo particle generation, and precise beam compression and manipulation techniques are essential, as accelerators and transport systems demand that ever-increasing amounts of space charge be transported. Application areas include ion-beam-driven high energy density physics, high energy and nuclear physics accelerator systems, etc. One-component cesium plasmas have been trapped in PTSX that correspond to normalized beam intensities, ŝ = ωp²(0)/(2ωq²), up to 80% of the space-charge limit where self-electric forces balance the applied focusing force. Here, ωp(0) = [nb(0)eb²/(mbɛ0)]^(1/2) is the on-axis plasma frequency, and ωq is the smooth-focusing frequency associated with the applied focusing field. Plasmas in PTSX with values of ŝ that are 20% of the limit have been trapped for times corresponding to equivalent beam propagation over 10 km. Results are presented for experiments in which the amplitude of the quadrupole focusing lattice is modified as a function of time. It is found that instantaneous changes in lattice amplitude can be detrimental to transverse confinement of the charge bunch.
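
    The normalized intensity defined above is straightforward to evaluate for laboratory parameters. The sketch below computes ŝ for a cesium plasma; the density and focusing frequency are illustrative assumptions, not PTSX calibration values.

        import numpy as np
        from scipy.constants import e, epsilon_0, atomic_mass

        # Normalized beam intensity s_hat = wp^2(0) / (2 wq^2).
        m_cs = 133.0 * atomic_mass        # cesium ion mass (kg)
        n0 = 1.0e13                       # assumed on-axis density (m^-3)
        f_q = 6.0e4                       # assumed smooth-focusing frequency (Hz)

        omega_p = np.sqrt(n0 * e**2 / (m_cs * epsilon_0))
        omega_q = 2.0 * np.pi * f_q
        s_hat = omega_p**2 / (2.0 * omega_q**2)
        print(f"s_hat = {s_hat:.2f} (1.0 would be the space-charge limit)")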

  11. Implementation of a Large Eddy Simulation Method Applied to Recirculating Flow in a Ventilated Room

    DEFF Research Database (Denmark)

    Davidson, Lars

    In the present work Large Eddy Simulations are presented. The flow in a ventilated enclosure is studied. We use an explicit, two-step time-advancement scheme where the pressure is solved from a Poisson equation.

  12. Human spaceflight and space adaptations: Computational simulation of gravitational unloading on the spine

    Science.gov (United States)

    Townsend, Molly T.; Sarigul-Klijn, Nesrin

    2018-04-01

    Living in a reduced gravitational environment for a prolonged duration, such as a fly-by mission to Mars or an extended stay at the International Space Station, affects the human body - in particular, the spine. As the spine adapts to spaceflight, morphological and physiological changes cause the mechanical integrity of the spinal column to be compromised, potentially endangering internal organs, nervous health, and human body mechanical function. Therefore, a high-fidelity computational model and simulation of the whole human spine was created and validated for the purpose of investigating the mechanical integrity of the spine in crew members during exploratory space missions. A spaceflight-exposed spine has been developed through the adaptation of a three-dimensional nonlinear finite element model, with the updated Lagrangian formulation, of a healthy ground-based human spine in vivo. Simulation of the porohyperelastic response of the intervertebral disc to mechanical unloading resulted in a model capable of accurately predicting spinal swelling/lengthening, spinal motion, and internal stress distribution. The curvature of this space-adaptation-exposed spine model was compared to a control terrestrial-based finite element model, indicating how the shape changed. Finally, potential injury sites for crew members are predicted for a typical 9-day mission.

  13. Design and Optimization of Large Accelerator Systems through High-Fidelity Electromagnetic Simulations

    International Nuclear Information System (INIS)

    Ng, Cho; Akcelik, Volkan; Candel, Arno; Chen, Sheng; Ge, Lixin; Kabel, Andreas; Lee, Lie-Quan; Li, Zenghai; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Xiao, Liling; Ko, Kwok; Austin, T.; Cary, J.R.; Ovtchinnikov, S.; Smith, D.N.; Werner, G.R.; Bellantoni, L.; TechX Corp.; Fermilab

    2008-01-01

    SciDAC-1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' (AST) project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC-2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC CETs/Institutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at the petascale. These tools aim at targeting the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider (ILC) and the Large Hadron Collider (LHC) in High Energy Physics (HEP), the JLab 12-GeV Upgrade in Nuclear Physics (NP), as well as the Spallation Neutron Source (SNS) and the Linac Coherent Light Source (LCLS) in Basic Energy Sciences (BES).

  14. Design and optimization of large accelerator systems through high-fidelity electromagnetic simulations

    International Nuclear Information System (INIS)

    Ng, C; Akcelik, V; Candel, A; Chen, S; Ge, L; Kabel, A; Lee, Lie-Quan; Li, Z; Prudencio, E; Schussman, G; Uplenchwar, R; Xiao, L; Ko, K; Austin, T; Cary, J R; Ovtchinnikov, S; Smith, D N; Werner, G R; Bellantoni, L

    2008-01-01

    SciDAC-1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC-2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC Centers and Institutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at the petascale. These tools aim at targeting the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider and the Large Hadron Collider in high energy physics, the JLab 12-GeV Upgrade in nuclear physics, and the Spallation Neutron Source and the Linac Coherent Light Source in basic energy sciences.

  15. Using the Large Fire Simulator System to map wildland fire potential for the conterminous United States

    Science.gov (United States)

    LaWen Hollingsworth; James Menakis

    2010-01-01

    This project mapped wildland fire potential (WFP) for the conterminous United States by using the large fire simulation system developed for the Fire Program Analysis (FPA) System. The large fire simulation system, referred to here as LFSim, consists of modules for weather generation, fire occurrence, fire suppression, and fire growth modeling. Weather was generated with...

  16. Thermal/vacuum measurements of the Herschel space telescope by close-range photogrammetry

    Science.gov (United States)

    Parian, J. Amiri; Cozzani, A.; Appolloni, M.; Casarosa, G.

    2017-11-01

    In the frame of the development of a videogrammetric system to be used in thermal vacuum chambers at the European Space Research and Technology Centre (ESTEC) and other sites across Europe, the design of a network using micro-cameras was specified by the European Space Agency (ESA)-ESTEC. The selected test set-up is the photogrammetric test of the Herschel Satellite Flight Model in the ESTEC Large Space Simulator. The photogrammetric system will be used to verify the Herschel Telescope alignment and Telescope positioning with respect to the Cryostat Vacuum Vessel (CVV) inside the Large Space Simulator during Thermal-Vacuum/Thermal-Balance test phases. We designed a close-range photogrammetric network by heuristic simulation and a videogrammetric system with an overall accuracy of 1:100,000. A semi-automated image acquisition system, able to work at low temperatures (-170°C) in order to acquire images according to the designed network, has been constructed by ESA-ESTEC. In this paper we present the videogrammetric system and sub-systems, and the results of real measurements with a representative set-up similar to that of the Herschel spacecraft, realized in the ESTEC Test Centre.

  17. Virtual Reality Simulation of the International Space Welding Experiment

    Science.gov (United States)

    Phillips, James A.

    1996-01-01

    Virtual Reality (VR) is a set of breakthrough technologies that allow a human being to enter and fully experience a 3-dimensional, computer simulated environment. A true virtual reality experience meets three criteria: (1) It involves 3-dimensional computer graphics; (2) It includes real-time feedback and response to user actions; and (3) It must provide a sense of immersion. Good examples of a virtual reality simulator are the flight simulators used by all branches of the military to train pilots for combat in high performance jet fighters. The fidelity of such simulators is extremely high -- but so is the price tag, typically millions of dollars. Virtual reality teaching and training methods are manifestly effective, and we have therefore implemented a VR trainer for the International Space Welding Experiment. My role in the development of the ISWE trainer consisted of the following: (1) created texture-mapped models of the ISWE's rotating sample drum, technology block, tool stowage assembly, sliding foot restraint, and control panel; (2) developed C code for control panel button selection and rotation of the sample drum; (3) In collaboration with Tim Clark (Antares Virtual Reality Systems), developed a serial interface box for the PC and the SGI Indigo so that external control devices, similar to ones actually used on the ISWE, could be used to control virtual objects in the ISWE simulation; (4) In collaboration with Peter Wang (SFFP) and Mark Blasingame (Boeing), established the interference characteristics of the VIM 1000 head-mounted display and tested software filters to correct the problem; (5) In collaboration with Peter Wang and Mark Blasingame, established software and procedures for interfacing the VPL DataGlove and the Polhemus 6DOF position sensors to the SGI Indigo serial ports. The majority of the ISWE modeling effort was conducted on a PC-based VR Workstation, described below.

  18. Real-space grids and the Octopus code as tools for the development of new simulation approaches for electronic systems

    Science.gov (United States)

    Andrade, Xavier; Strubbe, David; De Giovannini, Umberto; Larsen, Ask Hjorth; Oliveira, Micael J. T.; Alberdi-Rodriguez, Joseba; Varas, Alejandro; Theophilou, Iris; Helbig, Nicole; Verstraete, Matthieu J.; Stella, Lorenzo; Nogueira, Fernando; Aspuru-Guzik, Alán; Castro, Alberto; Marques, Miguel A. L.; Rubio, Angel

    Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties constitute a great advantage at the time of implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems.

  19. Complete k-space visualization of x-ray photoelectron diffraction

    International Nuclear Information System (INIS)

    Denlinger, J.D.; Lawrence Berkeley Lab., CA; Rotenberg, E.; Lawrence Berkeley Lab., CA; Kevan, S.D.; Tonner, B.P.

    1996-01-01

    A highly detailed x-ray photoelectron diffraction data set has been acquired for crystalline Cu(001). The data set for bulk Cu 3p emission encompasses a large k-space volume (k = 3-10 Å⁻¹) with sufficient energy and angular sampling to monitor the continuous variation of diffraction intensities. The evolution of back-scattered intensity oscillations is visualized by energy and angular slices of this volume data set. Large diffraction data sets such as this will provide rigorous experimental tests of real-space reconstruction algorithms and multiple-scattering simulations.

  20. Large-eddy simulation of swirling pulverized-coal combustion

    Energy Technology Data Exchange (ETDEWEB)

    Hu, L.Y.; Luo, Y.H. [Shanghai Jiaotong Univ. (China). School of Mechanical Engineering]; Zhou, L.X.; Xu, C.S. [Tsinghua Univ., Beijing (China). Dept. of Engineering Mechanics]

    2013-07-01

    An Eulerian-Lagrangian large-eddy simulation (LES) with a Smagorinsky-Lilly sub-grid scale stress model, presumed-PDF fast chemistry and EBU gas combustion models, and particle devolatilization and particle combustion models is used to study the turbulence and flame structures of swirling pulverized-coal combustion. The LES statistical results are validated by the measurement results. The instantaneous LES results show that the coherent structures for pulverized-coal combustion are stronger than those for swirling gas combustion. The particles are concentrated in the periphery of the coherent structures. The flame is located in the high-vorticity and high particle concentration zone.

  1. One-Way Nested Large-Eddy Simulation over the Askervein Hill

    Directory of Open Access Journals (Sweden)

    James D. Doyle

    2009-07-01

    Large-eddy simulation (LES) models have been used extensively to study atmospheric boundary layer turbulence over flat surfaces; however, LES applications over topography are less common. We evaluate the ability of an existing model – COAMPS®-LES – to simulate flow over terrain using data from the Askervein Hill Project. A new approach is suggested for the treatment of the lateral boundaries using one-way grid nesting. LES wind profile and speed-up are compared with observations at various locations around the hill. The COAMPS-LES model performs generally well. This case could serve as a useful benchmark for evaluating LES models for applications over topography.

  2. Analyzing Damping Vibration Methods of Large-Size Space Vehicles in the Earth's Magnetic Field

    Directory of Open Access Journals (Sweden)

    G. A. Shcheglov

    2016-01-01

    It is known that most of today's space vehicles comprise large antennas, which are bracket-attached to the vehicle body. The dimensions of reflector antennas may be 30-50 m, and the weight of such structures can reach approximately 200 kg. Since the antenna dimensions are significantly larger than the size of the vehicle body, and the points attaching the brackets to the space vehicle have low stiffness, conventional dampers may be inefficient. The paper proposes damping the antenna through its interaction with the Earth's magnetic field. A simple dynamic model of a space vehicle equipped with a large-size structure is built: the space vehicle is a parallelepiped to which the antenna is attached through a beam. To solve the model problems, a simplified model of the Earth's magnetic field was used: uniform, with field lines parallel to each other and perpendicular to the plane of the antenna. The paper considers two layouts of coils with respect to the antenna, namely a vertical one, in which the axis of the magnetic dipole is perpendicular to the antenna plane, and a horizontal one, in which the axis of the magnetic dipole lies in the antenna plane. It also explores two ways of magnetically damping the oscillations: through a controlled current supplied from the power supply system of the space vehicle, and through the self-induction current in the coil. Thus, four tasks were formulated. For each task, an oscillation equation was formulated, and the ratio of oscillation amplitudes and the decay time were estimated. It was found that each task requires certain parameters either of the antenna itself (its dimensions and moment of inertia) or of the coil and, respectively, of the current supplied from the space vehicle. For each task, the ranges of these parameters that allow efficient damping of vibrations were found. The conclusion can be drawn based on the analysis of the tasks that a specialized control system

  3. OECD/NEA Main Steam Line Break Benchmark Problem Exercise I Simulation Using the SPACE Code with the Point Kinetics Model

    International Nuclear Information System (INIS)

    Kim, Yohan; Kim, Seyun; Ha, Sangjun

    2014-01-01

    The Safety and Performance Analysis Code for Nuclear Power Plants (SPACE) has been developed in recent years by Korea Hydro & Nuclear Power Co. (KHNP) through collaborative work with other Korean nuclear industries. SPACE is a best-estimate two-phase, three-field thermal-hydraulic analysis code to analyze the safety and performance of pressurized water reactors (PWRs). The SPACE code has sufficient features to replace outdated vendor-supplied codes and to be used for the safety analysis of operating PWRs and the design of advanced reactors. As a result of the second phase of development, version 2.14 of the code was released following successive V and V works. Topical reports on the code and the related safety analysis methodologies have been prepared for licensing. In this study, the OECD/NEA Main Steam Line Break (MSLB) Benchmark Problem Exercise I was simulated using the SPACE code as a V and V work, and the results were compared with those of the participants in the benchmark project. Through the simulation, it was concluded that the SPACE code can effectively simulate PWR MSLB accidents.

  4. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
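
    As a reminder of the baseline method being extended here, the sketch below implements the classic dense form of Broyden's rank-one update for F(x) = 0. The limited-memory variants the report develops avoid storing B explicitly; the test system is an arbitrary toy example, not one of the report's applications.

        import numpy as np

        # Broyden's method for F(x) = 0 with the rank-one update
        #   B <- B + (df - B dx) dx^T / (dx^T dx).
        def broyden(F, x, tol=1e-10, max_iter=200):
            B = np.eye(len(x))            # initial Jacobian approximation
            f = F(x)
            for _ in range(max_iter):
                if np.linalg.norm(f) < tol:
                    break
                dx = np.linalg.solve(B, -f)
                x = x + dx
                f_new = F(x)
                B += np.outer(f_new - f - B @ dx, dx) / (dx @ dx)
                f = f_new
            return x

        # usage: a small, mildly nonlinear test system (arbitrary example)
        F = lambda x: np.array([x[0] + 0.5*np.sin(x[1]) - 1.0,
                                x[1] + 0.5*np.cos(x[0]) - 1.0])
        root = broyden(F, np.zeros(2))
        print(root, "residual:", np.linalg.norm(F(root)))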

  5. A web-based virtual lighting simulator

    Energy Technology Data Exchange (ETDEWEB)

    Papamichael, Konstantinos; Lai, Judy; Fuller, Daniel; Tariq, Tara

    2002-05-06

    This paper is about a web-based 'virtual lighting simulator', which is intended to allow architects and lighting designers to quickly assess the effect of key parameters on the daylighting and lighting performance in various space types. The virtual lighting simulator consists of a web-based interface that allows navigation through a large database of images and data, which were generated through parametric lighting simulations. In its current form, the virtual lighting simulator has two main modules, one for daylighting and one for electric lighting. The daylighting module includes images and data for a small office space, varying most key daylighting parameters, such as window size and orientation, glazing type, surface reflectance, sky conditions, time of the year, etc. The electric lighting module includes images and data for five space types (classroom, small office, large open office, warehouse and small retail), varying key lighting parameters, such as the electric lighting system, surface reflectance, dimming/switching, etc. The computed images include perspectives and plans and are displayed in various formats to support qualitative as well as quantitative assessment. The quantitative information is in the form of iso-contour lines superimposed on the images, as well as false-color images and statistical information on work plane illuminance. The qualitative information includes images that are adjusted to account for the sensitivity and adaptation of the human eye. The paper also includes a section on the major technical issues and their resolution.

  6. Large eddy simulation of Loss of Vacuum Accident in STARDUST facility

    International Nuclear Information System (INIS)

    Benedetti, Miriam; Gaudio, Pasquale; Lupelli, Ivan; Malizia, Andrea; Porfiri, Maria Teresa; Richetta, Maria

    2013-01-01

    Highlights: ► Fusion safety, plasma-material interaction. ► Comparison of numerical and experimental data to analyze the consequences of a Loss of Vacuum Accident that can provoke dust mobilization inside the vacuum vessel of an ITER-like nuclear fusion reactor. -- Abstract: The development of computational fluid dynamic (CFD) models of air ingress into the vacuum vessel (VV) represents an important issue for the safety analysis of nuclear fusion devices, in particular in the field of dust mobilization. The present work deals with large eddy simulations (LES) of fluid dynamic fields during a vessel filling at near-vacuum conditions, to support the safety study of Loss of Vacuum Accident (LOVA) events triggered by air ingress. The model's results are compared to the experimental data provided by the STARDUST facility at different pressurization rates (100 Pa/s, 300 Pa/s and 500 Pa/s). The simulation results compare favorably with the experimental data, demonstrating the possibility of implementing LES in large vacuum systems such as tokamaks.

  7. Unified Simulation and Analysis Framework for Deep Space Navigation Design

    Science.gov (United States)

    Anzalone, Evan; Chuang, Jason; Olsen, Carrie

    2013-01-01

    As the technology that enables advanced deep space autonomous navigation continues to develop and the requirements for such capability continue to grow, there is a clear need for a modular, expandable simulation framework. This tool's purpose is to address multiple measurement and information sources in order to capture system capability. This is needed to analyze the capability of competing navigation systems, as well as to develop system requirements and determine their effect on the sizing of the integrated vehicle. The development of such a framework builds upon Model-Based Systems Engineering techniques to capture the architecture of the navigation system and the possible state measurements and observations that feed into the simulation implementation structure. These models also allow a common environment for the capture of an increasingly complex operational architecture involving multiple spacecraft, ground stations, and communication networks. In order to address these architectural developments, a framework of agent-based modules is implemented to capture the independent operations of individual spacecraft as well as the network interactions among spacecraft. This paper describes the development of this framework and the modeling processes used to capture a deep space navigation system. Additionally, a sample implementation describing a concept of network-based navigation utilizing digitally transmitted data packets is described in detail. This developed package shows the capability of the modeling framework, including its modularity, analysis capabilities, and its unification back to the overall system requirements and definition.

  8. Simulations of an Offshore Wind Farm Using Large-Eddy Simulation and a Torque-Controlled Actuator Disc Model

    Science.gov (United States)

    Creech, Angus; Früh, Wolf-Gerrit; Maguire, A. Eoghan

    2015-05-01

    We present here a computational fluid dynamics (CFD) simulation of Lillgrund offshore wind farm, which is located in the Øresund Strait between Sweden and Denmark. The simulation combines a dynamic representation of wind turbines embedded within a large-eddy simulation CFD solver and uses hr-adaptive meshing to increase or decrease mesh resolution where required. This allows the resolution of both large-scale flow structures around the wind farm, and the local flow conditions at individual turbines; consequently, the response of each turbine to local conditions can be modelled, as well as the resulting evolution of the turbine wakes. This paper provides a detailed description of the turbine model which simulates the interaction between the wind, the turbine rotors, and the turbine generators by calculating the forces on the rotor, the body forces on the air, and instantaneous power output. This model was used to investigate a selection of key wind speeds and directions, investigating cases where a row of turbines would be fully aligned with the wind or at specific angles to the wind. Results shown here include presentations of the spin-up of turbines, the observation of eddies moving through the turbine array, meandering turbine wakes, and an extensive wind farm wake several kilometres in length. The key measurement available for cross-validation with operational wind farm data is the power output from the individual turbines, where the effect of unsteady turbine wakes on the performance of downstream turbines was a main point of interest. The results from the simulations were compared to the performance measurements from the real wind farm to provide a firm quantitative validation of this methodology. Having achieved good agreement between the model results and actual wind farm measurements, the potential of the methodology to provide a tool for further investigations of engineering and atmospheric science problems is outlined.
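
    In its simplest momentum-theory form, the actuator-disc idea described above converts a disc-averaged wind speed into a thrust force on the air and a power output. The sketch below shows that reduction; the rotor size and coefficients are generic assumptions rather than Lillgrund turbine data, and the paper's torque-controlled model additionally resolves rotor and generator dynamics.

        import numpy as np

        # Momentum-theory actuator disc: disc-averaged wind speed -> thrust
        # (applied to the air as a body force) and instantaneous power.
        rho = 1.225                      # air density (kg/m^3)
        D = 92.6                         # rotor diameter (m), assumed
        A = np.pi * (D / 2.0)**2

        def disc_forces(u_disc, ct=0.8, cp=0.45):
            thrust = 0.5 * rho * A * ct * u_disc**2   # N, opposes the flow
            power = 0.5 * rho * A * cp * u_disc**3    # W
            return thrust, power

        for u in (6.0, 9.0, 12.0):
            t, p = disc_forces(u)
            print(f"u = {u:4.1f} m/s: thrust {t/1e3:7.1f} kN, power {p/1e6:5.2f} MW")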

  9. Large-scale simulation of ductile fracture process of microstructured materials

    International Nuclear Information System (INIS)

    Tian Rong; Wang Chaowei

    2011-01-01

    The promise of computational science in the extreme-scale computing era is to reduce and decompose macroscopic complexities into microscopic simplicities at the expense of high spatial and temporal resolution of computing. In materials science and engineering, the direct combination of 3D microstructure data sets and 3D large-scale simulations provides a unique opportunity for developing a comprehensive understanding of nano/microstructure-property relationships in order to systematically design materials with specific desired properties. In this paper, we present a framework for simulating the ductile fracture process zone in microstructural detail. The experimentally reconstructed microstructural data set is directly embedded into a FE mesh model to improve the simulation fidelity of microstructure effects on fracture toughness. To the best of our knowledge, this is the first time that fracture toughness has been linked to multiscale microstructures in a direct manner in a realistic 3D numerical model. (author)

  10. Approximate Bayesian Computation by Subset Simulation using hierarchical state-space models

    Science.gov (United States)

    Vakilzadeh, Majid K.; Huang, Yong; Beck, James L.; Abrahamsson, Thomas

    2017-02-01

    A new multi-level Markov Chain Monte Carlo algorithm for Approximate Bayesian Computation, ABC-SubSim, has recently appeared that exploits the Subset Simulation method for efficient rare-event simulation. ABC-SubSim adaptively creates a nested decreasing sequence of data-approximating regions in the output space that correspond to increasingly closer approximations of the observed output vector in this output space. At each level, multiple samples of the model parameter vector are generated by a component-wise Metropolis algorithm so that the predicted output corresponding to each parameter value falls in the current data-approximating region. Theoretically, if continued to the limit, the sequence of data-approximating regions would converge on to the observed output vector and the approximate posterior distributions, which are conditional on the data-approximation region, would become exact, but this is not practically feasible. In this paper we study the performance of the ABC-SubSim algorithm for Bayesian updating of the parameters of dynamical systems using a general hierarchical state-space model. We note that the ABC methodology gives an approximate posterior distribution that actually corresponds to an exact posterior where a uniformly distributed combined measurement and modeling error is added. We also note that ABC algorithms have a problem with learning the uncertain error variances in a stochastic state-space model and so we treat them as nuisance parameters and analytically integrate them out of the posterior distribution. In addition, the statistical efficiency of the original ABC-SubSim algorithm is improved by developing a novel strategy to regulate the proposal variance for the component-wise Metropolis algorithm at each level. We demonstrate that Self-regulated ABC-SubSim is well suited for Bayesian system identification by first applying it successfully to model updating of a two degree-of-freedom linear structure for three cases: globally
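
    To make the nested data-approximating regions concrete, the sketch below runs a toy ABC scheme in the spirit of ABC-SubSim: each level keeps the best-matching samples, shrinks the tolerance, and replenishes the population with component-wise Metropolis moves constrained to the current region. The forward model, prior, proposal scale, and level schedule are toy assumptions; the actual algorithm also handles the prior ratio in general and the nuisance error variances discussed above.

        import numpy as np

        # Toy ABC with a nested sequence of shrinking tolerance levels.
        rng = np.random.default_rng(2)
        data = 1.3

        def simulate(theta):                  # toy stochastic forward model
            return theta**2 + rng.normal(0.0, 0.05)

        n, p0, levels = 1000, 0.2, 4
        theta = rng.uniform(-2.0, 2.0, n)     # samples from a uniform prior
        dist = np.abs(np.array([simulate(t) for t in theta]) - data)

        for level in range(levels):
            keep = np.argsort(dist)[: int(p0 * n)]    # seeds: closest to data
            eps = dist[keep].max()                    # shrunken tolerance
            theta, dist = theta[keep], dist[keep]
            # Metropolis moves constrained to the eps-region; uniform prior
            # plus symmetric proposal reduces acceptance to the region and
            # prior-support indicator functions.
            while len(theta) < n:
                i = rng.integers(len(theta))
                cand = theta[i] + rng.normal(0.0, 0.2)
                d = np.abs(simulate(cand) - data)
                if -2.0 <= cand <= 2.0 and d <= eps:
                    theta = np.append(theta, cand)
                    dist = np.append(dist, d)
            print(f"level {level}: eps = {eps:.4f}")

        print("posterior mean of |theta|:", np.mean(np.abs(theta)))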

  11. Robust mode space approach for atomistic modeling of realistically large nanowire transistors

    Science.gov (United States)

    Huang, Jun Z.; Ilatikhameneh, Hesameddin; Povolotskyi, Michael; Klimeck, Gerhard

    2018-01-01

    Nanoelectronic transistors have reached 3D length scales at which the number of atoms is countable. Truly atomistic device representations are needed to capture the essential functionalities of the devices. Atomistic quantum transport simulations of realistically extended devices are, however, computationally very demanding. The widely used mode space (MS) approach can significantly reduce the numerical cost, but a good MS basis is usually very hard to obtain for atomistic full-band models. In this work, a robust and parallel algorithm is developed to optimize the MS basis for atomistic nanowires. This enables engineering-level, reliable tight-binding non-equilibrium Green's function simulation of a nanowire metal-oxide-semiconductor field-effect transistor (MOSFET) with a realistic cross section of 10 nm × 10 nm using a small computer cluster. This approach is applied to compare the performance of InGaAs and Si nanowire n-type MOSFETs (nMOSFETs) with various channel lengths and cross sections. Simulation results with full-band accuracy indicate that InGaAs nanowire nMOSFETs have no drive current advantage over their Si counterparts for cross sections up to about 10 nm × 10 nm.
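
    The essence of the mode-space reduction described above is a projection of large real-space matrices onto a small set of transverse modes. The sketch below does this for a toy one-dimensional "cross-section" Hamiltonian; the matrix size, hopping energy, and mode count are arbitrary assumptions, and the hard part the paper addresses (choosing a basis that remains robust for full-band atomistic models) is not captured here.

        import numpy as np

        # Mode-space projection: keep the lowest transverse modes of a
        # cross-section Hamiltonian and shrink every matrix entering the
        # transport problem accordingly.
        n, n_modes, t = 200, 12, 1.0
        H = 2*t*np.eye(n) - t*np.eye(n, k=1) - t*np.eye(n, k=-1)

        evals, evecs = np.linalg.eigh(H)
        U = evecs[:, :n_modes]            # mode-space basis

        H_ms = U.T @ H @ U                # reduced Hamiltonian
        print("full:", H.shape, "-> mode space:", H_ms.shape)
        # In an NEGF solver the same U also transforms the slab-to-slab
        # coupling blocks; keeping that transformation reliable for
        # full-band atomistic bases is what the paper's algorithm targets.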

  12. PIC Simulations of Velocity-space Instabilities in a Decreasing Magnetic Field: Viscosity and Thermal Conduction

    Science.gov (United States)

    Riquelme, Mario; Quataert, Eliot; Verscharen, Daniel

    2018-02-01

    We use particle-in-cell (PIC) simulations of a collisionless, electron-ion plasma with a decreasing background magnetic field, B, to study the effect of velocity-space instabilities on the viscous heating and thermal conduction of the plasma. If |B| decreases, the adiabatic invariance of the magnetic moment gives rise to pressure anisotropies with p∥,j > p⊥,j (p∥,j and p⊥,j represent the pressure of species j (electron or ion) parallel and perpendicular to B). Linear theory indicates that, for sufficiently large anisotropies, different velocity-space instabilities can be triggered. These instabilities in principle have the ability to pitch-angle scatter the particles, limiting the growth of the anisotropies. Our simulations focus on the nonlinear, saturated regime of the instabilities. This is done through the permanent decrease of |B| by an imposed plasma shear. We show that, in the regime 2 ≲ βj ≲ 20 (βj ≡ 8πpj/|B|²), the saturated ion and electron pressure anisotropies are controlled by the combined effect of the oblique ion firehose and the fast magnetosonic/whistler instabilities. These instabilities grow preferentially on the scale of the ion Larmor radius, and make Δpe/p∥,e ≈ Δpi/p∥,i (where Δpj = p⊥,j − p∥,j). We also quantify the thermal conduction of the plasma by directly calculating the mean free path of electrons, λe, along the mean magnetic field, finding that λe depends strongly on whether |B| decreases or increases. Our results can be applied in studies of low-collisionality plasmas such as the solar wind, the intracluster medium, and some accretion disks around black holes.

  13. Large-scale atomistic simulations of nanostructured materials based on divide-and-conquer density functional theory

    Directory of Open Access Journals (Sweden)

    Vashishta P.

    2011-05-01

    Full Text Available A linear-scaling algorithm based on a divide-and-conquer (DC) scheme is designed to perform large-scale molecular-dynamics simulations, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). This scheme is applied to the thermite reaction at an Al/Fe2O3 interface. It is found that mass diffusion and the reaction rate at the interface are enhanced by a concerted metal-oxygen flip mechanism. Preliminary simulations are carried out for an aluminum particle in water based on conventional DFT, as a target system for large-scale DC-DFT simulations. A pair of Lewis acid and base sites on the aluminum surface preferentially catalyzes hydrogen production in a low activation-barrier mechanism found in the simulations.

  14. Advanced Mirror Technology Development for Very Large Space Telescopes

    Science.gov (United States)

    Stahl, H. P.

    2014-01-01

    Advanced Mirror Technology Development (AMTD) is a NASA Strategic Astrophysics Technology project to mature to TRL-6 the critical technologies needed to produce 4-m or larger flight-qualified UVOIR mirrors by 2018 so that a viable mission can be considered by the 2020 Decadal Review. The developed mirror technology must enable missions capable of both general astrophysics & ultra-high contrast observations of exoplanets. Just as JWST’s architecture was driven by launch vehicle, a future UVOIR mission’s architectures (monolithic, segmented or interferometric) will depend on capacities of future launch vehicles (and budget). Since we cannot predict the future, we must prepare for all potential futures. Therefore, to provide the science community with options, we are pursuing multiple technology paths. AMTD uses a science-driven systems engineering approach. We derived engineering specifications for potential future monolithic or segmented space telescopes based on science needs and implementation constraints. And we are maturing six inter-linked critical technologies to enable potential future large-aperture UVOIR space telescopes: 1) Large-Aperture, Low Areal Density, High Stiffness Mirrors, 2) Support Systems, 3) Mid/High Spatial Frequency Figure Error, 4) Segment Edges, 5) Segment-to-Segment Gap Phasing, and 6) Integrated Model Validation, supported by a Science Advisory Team and a Systems Engineering Team. We are maturing all six technologies simultaneously because all are required to make a primary mirror assembly (PMA); and, it is the PMA’s on-orbit performance which determines science return. PMA stiffness depends on substrate and support stiffness. Ability to cost-effectively eliminate mid/high spatial figure errors and polishing edges depends on substrate stiffness. On-orbit thermal and mechanical performance depends on substrate stiffness, the coefficient of thermal expansion (CTE) and thermal mass. And, segment-to-segment phasing depends on substrate & structure stiffness.

  15. Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation

    Energy Technology Data Exchange (ETDEWEB)

    Hoshi, T [Department of Applied Mathematics and Physics, Tottori University, Tottori 680-8550 (Japan); Fujiwara, T [Core Research for Evolutional Science and Technology, Japan Science and Technology Agency (CREST-JST) (Japan)

    2009-02-11

    An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.

  16. Automation and Robotics for Space-Based Systems, 1991

    Science.gov (United States)

    Williams, Robert L., II (Editor)

    1992-01-01

    The purpose of this in-house workshop was to assess the state-of-the-art of automation and robotics for space operations from an LaRC perspective and to identify areas of opportunity for future research. Over half of the presentations came from the Automation Technology Branch, covering telerobotic control, extravehicular activity (EVA) and intra-vehicular activity (IVA) robotics, hand controllers for teleoperation, sensors, neural networks, and automated structural assembly, all applied to space missions. Other talks covered the Remote Manipulator System (RMS) active damping augmentation, space crane work, modeling, simulation, and control of large, flexible space manipulators, and virtual passive controller designs for space robots.

  17. A large-eddy simulation based power estimation capability for wind farms over complex terrain

    Science.gov (United States)

    Senocak, I.; Sandusky, M.; Deleon, R.

    2017-12-01

    There has been an increasing interest in predicting wind fields over complex terrain at the micro-scale for resource assessment, turbine siting, and power forecasting. These capabilities are made possible by advancements in computational speed from a new generation of computing hardware, numerical methods and physics modelling. The micro-scale wind prediction model presented in this work is based on the large-eddy simulation paradigm with surface-stress parameterization. The complex terrain is represented using an immersed-boundary method that takes into account the parameterization of the surface stresses. Governing equations of incompressible fluid flow are solved using a projection method with second-order accurate schemes in space and time. We use actuator disk models with rotation to simulate the influence of turbines on the wind field. Data regarding power production from individual turbines are mostly restricted because of the proprietary nature of the wind energy business. Most studies report the percentage drop of power relative to power from the first row. There have been different approaches to predict power production. Some studies simply report the available wind power upstream, some studies estimate power production using power curves available from turbine manufacturers, and some studies estimate power as torque multiplied by rotational speed. In the present work, we propose a black-box approach that considers a control volume around a turbine and estimates the power extracted from the turbine based on the conservation-of-energy principle. We applied our wind power prediction capability to wind farms over flat terrain, such as the wind farm in Mower County, Minnesota, and the Horns Rev offshore wind farm in Denmark. The results from these simulations are in good agreement with published data. We also estimate power production from a hypothetical wind farm in a complex terrain region and identify potential zones suitable for wind power production.
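
    A minimal sketch of the black-box, conservation-of-energy idea described above: the power attributed to a turbine is the net flux of mean kinetic energy through the faces of a control volume surrounding it. Pressure work and turbulent transport are neglected here for brevity; all names and defaults are illustrative.

    ```python
    import numpy as np

    def extracted_power(u_in, u_out, rho=1.2, dA=1.0):
        """Net mean-kinetic-energy flux through upstream/downstream faces.
        u_in, u_out: streamwise velocities sampled on the two faces (m/s),
        dA: face cell area (m^2). Returns power in watts for SI inputs."""
        flux_in = 0.5 * rho * np.sum(u_in**3) * dA    # (1/2) rho u^2 * (u dA) per cell
        flux_out = 0.5 * rho * np.sum(u_out**3) * dA
        return flux_in - flux_out
    ```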

  18. Program NAJOCSC and space charge effect simulation in C01

    International Nuclear Information System (INIS)

    Tang, J.Y.; Chabert, A.; Baron, E.

    1999-01-01

    During the beam tests of the THI project at GANIL, it was found difficult to increase the beam power above 2 kW at CSS2 extraction. The space charge effect (abbreviated as S.C. effect) in cyclotrons is suspected to play some role in this phenomenon, especially the longitudinal S.C. effect and the coupling between longitudinal and radial motions. The injector cyclotron C01 is studied, and the role played by the S.C. effect in this cyclotron in the THI case is investigated by a simulation method. (K.A.)

  19. Simulations of space charge neutralization in a magnetized electron cooler

    Energy Technology Data Exchange (ETDEWEB)

    Gerity, James [Texas A&M]; McIntyre, Peter M. [Texas A&M]; Bruhwiler, David Leslie [RadiaSoft, Boulder]; Hall, Christopher [RadiaSoft, Boulder]; Moens, Vince Jan [Ecole Polytechnique, Lausanne]; Park, Chong Shik [Fermilab]; Stancari, Giulio [Fermilab]

    2017-02-02

    Magnetized electron cooling at relativistic energies and Ampere scale current is essential to achieve the proposed ion luminosities in a future electron-ion collider (EIC). Neutralization of the space charge in such a cooler can significantly increase the magnetized dynamic friction and, hence, the cooling rate. The Warp framework is being used to simulate magnetized electron beam dynamics during and after the build-up of neutralizing ions, via ionization of residual gas in the cooler. The design follows previous experiments at Fermilab as a verification case. We also discuss the relevance to EIC designs.

  20. Visual Interfaces for Parallel Simulations (VIPS), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Configuring the 3D geometry and physics of large scale parallel physics simulations is increasingly complex. Given the investment in time and effort to run these...

  1. Contamination Control Assessment of the World's Largest Space Environment Simulation Chamber

    Science.gov (United States)

    Snyder, Aaron; Henry, Michael W.; Grisnik, Stanley P.; Sinclair, Stephen M.

    2012-01-01

    The Space Power Facility's thermal vacuum test chamber is the largest chamber in the world capable of providing an environment for space simulation. To improve performance and meet the stringent requirements of a wide customer base, significant modifications were made to the vacuum chamber. These include major changes to the vacuum system and numerous enhancements to the chamber's unique polar crane, with a goal of providing high cleanliness levels. The significance of these changes and modifications is discussed in this paper. In addition, the composition and arrangement of the pumping system and its impact on molecular back-streaming are discussed in detail. Molecular contamination measurements obtained with a TQCM and witness wafers during two recent integrated system tests of the chamber are presented and discussed. Finally, concluding remarks are presented.

  2. Unique Programme of Indian Centre for Space Physics using large rubber Balloons

    Science.gov (United States)

    Chakrabarti, Sandip Kumar; Sarkar, Ritabrata; Bhowmick, Debashis; Chakraborty, Subhankar

    Indian Centre for Space Physics (ICSP) has developed a unique capability to pursue space-based studies at a very low cost. Here, large rubber balloons are sent to near space (~40 km) with payloads of less than 4 kg weight. These payloads can be cosmic ray detectors, X-ray detectors and muon detectors, apart from a communication device, GPS, and nine-degrees-of-freedom measurement capability. With two balloons in an orbiter-launcher configuration, ICSP has been able to conduct long-duration flights of up to 12 hours. ICSP has so far sent 56 Dignity missions to near space and obtained cosmic ray and muon variations on a regular basis, and dynamical spectra of solar flares and gamma-ray bursts, apart from other usual parameters such as wind velocity components, temperature and pressure variations etc. Since all the payloads are retrieved by parachutes, the cost per mission remains very low, typically around USD 1000. The preparation time is low. Furthermore, no special launching area is required. In principle, such experiments can be conducted on a daily basis, if need be. Presently, we are also incorporating studies relating to earth system science such as ozone, aerosols, micro-meteorites etc.

  3. Effects of turbine spacing on the power output of extended wind-farms

    NARCIS (Netherlands)

    Stevens, Richard Johannes Antonius Maria; Gayme, Dennice F.; Meneveau, Charles

    2016-01-01

    We present results from large eddy simulations of extended wind-farms for several turbine configurations with a range of different spanwise and streamwise spacing combinations. The results show that for wind-farms arranged in a staggered configuration with spanwise spacings in the range ≈[3.5,8]D,

  4. Understanding Large-scale Structure in the SSA22 Protocluster Region Using Cosmological Simulations

    Science.gov (United States)

    Topping, Michael W.; Shapley, Alice E.; Steidel, Charles C.; Naoz, Smadar; Primack, Joel R.

    2018-01-01

    We investigate the nature and evolution of large-scale structure within the SSA22 protocluster region at z = 3.09 using cosmological simulations. A redshift histogram constructed from current spectroscopic observations of the SSA22 protocluster reveals two separate peaks at z = 3.065 (blue) and z = 3.095 (red). Based on these data, we report updated overdensity and mass calculations for the SSA22 protocluster. We find $\delta_{b,\mathrm{gal}} = 4.8 \pm 1.8$ and $\delta_{r,\mathrm{gal}} = 9.5 \pm 2.0$ for the blue and red peaks, respectively, and $\delta_{t,\mathrm{gal}} = 7.6 \pm 1.4$ for the entire region. These overdensities correspond to masses of $M_b = (0.76 \pm 0.17) \times 10^{15}\,h^{-1}\,M_\odot$, $M_r = (2.15 \pm 0.32) \times 10^{15}\,h^{-1}\,M_\odot$, and $M_t = (3.19 \pm 0.40) \times 10^{15}\,h^{-1}\,M_\odot$ for the blue, red, and total peaks, respectively. We use the Small MultiDark Planck (SMDPL) simulation to identify comparably massive $z \sim 3$ protoclusters, and uncover the underlying structure and ultimate fate of the SSA22 protocluster. For this analysis, we construct mock redshift histograms for each simulated $z \sim 3$ protocluster, quantitatively comparing them with the observed SSA22 data. We find that the observed double-peaked structure in the SSA22 redshift histogram corresponds not to a single coalescing cluster, but rather to the proximity of a $\sim 10^{15}\,h^{-1}\,M_\odot$ protocluster and at least one $> 10^{14}\,h^{-1}\,M_\odot$ cluster progenitor. Such associations in the SMDPL simulation are easily understood within the framework of hierarchical clustering of dark matter halos. We finally find that the opportunity to observe such a phenomenon is incredibly rare, with an occurrence rate of $7.4\,h^3\,\mathrm{Gpc}^{-3}$. Based on data obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration, and was made possible by the generous financial support of the W.M. Keck Foundation.
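
    The overdensity-to-mass conversion quoted above follows a standard recipe that can be sketched as below; the galaxy bias and redshift-space correction factor are placeholders, not the values used in the paper.

    ```python
    def overdensity(n_obs, n_expected):
        """Galaxy overdensity in a survey volume: delta_gal = N/<N> - 1."""
        return n_obs / n_expected - 1.0

    def protocluster_mass(volume, rho_mean, delta_gal, bias=6.0, c_rsd=1.0):
        """M = rho_mean * V * (1 + delta_m), with the matter overdensity
        approximated as delta_m = delta_gal / (bias * C); bias and the
        redshift-space factor C are illustrative assumptions."""
        delta_m = delta_gal / (bias * c_rsd)
        return rho_mean * volume * (1.0 + delta_m)
    ```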

  5. Experiments and Large-Eddy Simulations of acoustically forced bluff-body flows

    Energy Technology Data Exchange (ETDEWEB)

    Ayache, S.; Dawson, J.R.; Triantafyllidis, A. [Department of Engineering, University of Cambridge (United Kingdom); Balachandran, R. [Department of Mechanical Engineering, University College London (United Kingdom); Mastorakos, E., E-mail: em257@eng.cam.ac.u [Department of Engineering, University of Cambridge (United Kingdom)

    2010-10-15

    The isothermal air flow behind an enclosed axisymmetric bluff body, with the incoming flow being forced by a loudspeaker at a single frequency and with large amplitude, has been explored with high data-rate Laser-Doppler Anemometry measurements and Large-Eddy Simulations. The comparison between experiment and simulations allows a quantification of the accuracy of LES for turbulent flows with periodicity and the results provide insights into the structure of flows relevant to combustors undergoing self-excited oscillations. At low forcing frequencies, the whole flow pulsates with the incoming flow, although at a phase lag that depends on spatial location. At high forcing frequencies, vortices are shed from the bluff body and the recirculation zone, as a whole, pulsates less. Despite the fact that the incoming flow has an oscillation that is virtually monochromatic, the velocity spectra show peaks at various harmonics, whose relative magnitudes vary with location. A sub-harmonic peak is also observed inside the recirculation zone possibly caused by merging of the shed vortices. The phase-averaged turbulent fluctuations show large temporal and spatial variations. The LES reproduces reasonably accurately the experimental findings in terms of phase-averaged mean and r.m.s. velocities, vortex formation, and spectral peaks.

  6. Experiments and Large-Eddy Simulations of acoustically forced bluff-body flows

    International Nuclear Information System (INIS)

    Ayache, S.; Dawson, J.R.; Triantafyllidis, A.; Balachandran, R.; Mastorakos, E.

    2010-01-01

    The isothermal air flow behind an enclosed axisymmetric bluff body, with the incoming flow being forced by a loudspeaker at a single frequency and with large amplitude, has been explored with high data-rate Laser-Doppler Anemometry measurements and Large-Eddy Simulations. The comparison between experiment and simulations allows a quantification of the accuracy of LES for turbulent flows with periodicity and the results provide insights into the structure of flows relevant to combustors undergoing self-excited oscillations. At low forcing frequencies, the whole flow pulsates with the incoming flow, although at a phase lag that depends on spatial location. At high forcing frequencies, vortices are shed from the bluff body and the recirculation zone, as a whole, pulsates less. Despite the fact that the incoming flow has an oscillation that is virtually monochromatic, the velocity spectra show peaks at various harmonics, whose relative magnitudes vary with location. A sub-harmonic peak is also observed inside the recirculation zone possibly caused by merging of the shed vortices. The phase-averaged turbulent fluctuations show large temporal and spatial variations. The LES reproduces reasonably accurately the experimental findings in terms of phase-averaged mean and r.m.s. velocities, vortex formation, and spectral peaks.

  7. Virtual Environment User Interfaces to Support RLV and Space Station Simulations in the ANVIL Virtual Reality Lab

    Science.gov (United States)

    Dumas, Joseph D., II

    1998-01-01

    Several virtual reality I/O peripherals were successfully configured and integrated as part of the author's 1997 Summer Faculty Fellowship work. These devices, which were not supported by the developers of VR software packages, use new software drivers and configuration files developed by the author to allow them to be used with simulations developed using those software packages. The successful integration of these devices has added significant capability to the ANVIL lab at MSFC. In addition, the author was able to complete the integration of a networked virtual reality simulation of the Space Shuttle Remote Manipulator System docking Space Station modules which was begun as part of his 1996 Fellowship. The successful integration of this simulation demonstrates the feasibility of using VR technology for ground-based training as well as on-orbit operations.

  8. Robotic Design Choice Overview using Co-simulation and Design Space Exploration

    DEFF Research Database (Denmark)

    Christiansen, Martin Peter; Larsen, Peter Gorm; Nyholm Jørgensen, Rasmus

    2015-01-01

    Rapid robotic system development has created a demand for multi-disciplinary methods and tools to explore and compare design alternatives. In this paper, we present a collaborative modelling technique that combines discrete-event models of controller software with continuous-time models of physical robot components. The proposed co-modelling method utilises the Vienna Development Method (VDM) and Matlab for discrete-event modelling and 20-sim for continuous-time modelling. The model-based development of a mobile robot mink feeding system is used to illustrate the collaborative modelling method. Simulations are used to evaluate the robot model output response in relation to operational demands. An example of a load-carrying challenge in relation to the feeding robot is presented and a design space is defined with candidate solutions in both the mechanical and software domains. Simulation results...

  9. Deep Space Storm Shelter Simulation Study

    Science.gov (United States)

    Dugan, Kathryn; Phojanamongkolkij, Nipa; Cerro, Jeffrey; Simon, Matthew

    2015-01-01

    Missions outside of Earth's magnetic field are impeded by the presence of radiation from galactic cosmic rays and solar particle events. To overcome this issue, NASA's Advanced Exploration Systems Radiation Works Storm Shelter (RadWorks) has been studying different radiation protective habitats to shield against the onset of solar particle event radiation. These habitats have the capability of protecting occupants by utilizing available materials such as food, water, brine, human waste, trash, and non-consumables to build short-term shelters. Protection comes from building a barrier with the materials that dampens the impact of the radiation on astronauts. The goal of this study is to develop a discrete event simulation, modeling a solar particle event and the building of a protective shelter. The main hallway location within a larger habitat similar to the International Space Station (ISS) is analyzed. The outputs from this model are: 1) the total area covered on the shelter by the different materials, 2) the amount of radiation the crew members receive, and 3) the amount of time for setting up the habitat during specific points in a mission given an event occurs.
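
    A toy version of such a discrete event simulation is sketched below: placement events add shielding area, and crew dose accrues at a rate scaled by the unshielded fraction. The numbers and the linear dose model are illustrative assumptions, not RadWorks values.

    ```python
    def shelter_sim(placements, dose_rate=1.0, t_end=24.0):
        """placements: list of (time_h, area_fraction_added) events.
        Returns (final coverage, accumulated dose in arbitrary units)."""
        t, covered, dose = 0.0, 0.0, 0.0
        for when, area in sorted(placements):
            when = min(when, t_end)
            dose += dose_rate * (1.0 - covered) * (when - t)  # dose before this event
            covered = min(1.0, covered + area)                # barrier grows
            t = when
        dose += dose_rate * (1.0 - covered) * (t_end - t)     # remainder of the event
        return covered, dose
    ```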

  10. Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation

    Directory of Open Access Journals (Sweden)

    Shunli Wang

    2016-01-01

    Full Text Available We introduce an automatic optimization approach for the simulation of large-scale coastal water. To solve the singular problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is estimated by using an automatic coupling algorithm. It can handle arbitrary water depth and different underwater terrain. As a characteristic feature of coastal terrain, the coastline is detected with collision detection technology. Then, unnecessary water grid cells are simplified by the automatic simplification algorithm according to the depth. Finally, the model is calculated on the Central Processing Unit (CPU) and the simulation is implemented on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results which achieve real-time rendering on a consumer-level computer.

  11. Proceedings of joint meeting of the 6th simulation science symposium and the NIFS collaboration research 'large scale computer simulation'

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-03-01

    The joint meeting of the 6th Simulation Science Symposium and the NIFS Collaboration Research 'Large Scale Computer Simulation' was held on December 12-13, 2002 at the National Institute for Fusion Science, with the aim of promoting interdisciplinary collaborations in various fields of computer simulation. The meeting, attended by more than 40 people, comprised 11 invited and 22 contributed papers, whose topics extended not only to fusion science but also to related fields such as astrophysics, earth science, fluid dynamics, molecular dynamics, computer science etc. (author)

  12. Understanding the microscopic moisture migration in pore space using DEM simulation

    Directory of Open Access Journals (Sweden)

    Yuan Guo

    2015-04-01

    Full Text Available The deformation of soil skeleton and migration of pore fluid are the major factors relevant to the triggering of and damage caused by liquefaction. The influence of pore fluid migration during earthquakes has been demonstrated by recent model experiments and field case studies. Most current liquefaction assessment models are based on testing of isotropic liquefiable materials. However, the recent New Zealand earthquake showed much more severe damage than predicted by existing models. A fundamental cause has been attributed to embedded layers of low-permeability silts. The existence of these silt layers inhibits water migration under seismic loads, which accelerates liquefaction and causes a much larger settlement than that predicted by existing theories. This study intends to understand the process of moisture migration in the pore space of sand using discrete element method (DEM) simulation. Simulations were conducted on consolidated undrained triaxial testing of sand, where a cylindrical sample of sand was built and subjected to a constant confining pressure and axial loading. The porosity distribution was monitored during the axial loading process. The spatial distribution of porosity change was determined, which had a direct relationship with the distribution of excess pore water pressure. The non-uniform distribution of excess pore water pressure causes moisture migration. From this, the migration of pore water during the loading process can be estimated. The results of the DEM simulation show a few important observations: (1) external forces are mainly carried and transmitted by the particle chains of the soil sample; (2) porosity distribution during loading is not uniform due to non-homogeneous soil fabric (i.e. the initial particle arrangement and existence of particle chains); (3) excess pore water pressure develops differently at different loading stages. At the early stage of loading, zones with a high initial porosity feature higher

  13. Space Charge Mitigation by Hollow Bunches

    CERN Multimedia

    Oeftiger, AO

    2014-01-01

    To satisfy the requirements of the HL-LHC (High Luminosity Large Hadron Collider), the LHC injector chain will need to supply a higher brightness, i.e. deliver the same transverse beam emittances $\epsilon_{x,y}$ while providing a higher intensity N. However, a larger number of particles per bunch enhances space charge effects. One approach to mitigate the impact of space charge is to change the longitudinal phase space distribution: hollow bunches feature a depleted bunch centre and a densely populated periphery. Thus, the maximum of the spatial line density is depressed, which ultimately decreases the tune spread imposed by space charge. Therefore, a higher intensity can be accepted while keeping the same overall space charge tune shift. Three different methods to create hollow bunches in the PS Booster are simulated.
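
    The mechanism is easy to see numerically: at equal intensity, a hollow longitudinal profile has a lower peak line density than a Gaussian one, and the space charge tune spread scales with that peak. The profile shapes below are illustrative, not the distributions produced in the PS Booster.

    ```python
    import numpy as np

    z = np.linspace(-4, 4, 2001)              # longitudinal position (arbitrary units)
    profiles = {
        "gaussian": np.exp(-z**2 / 2),
        "hollow": z**2 * np.exp(-z**2 / 2),   # depleted centre, populated periphery
    }
    for name, lam in profiles.items():
        lam = lam / np.trapz(lam, z)          # normalize both to the same intensity N
        print(f"{name}: peak line density = {lam.max():.3f}")
        # the hollow profile peaks lower -> smaller space charge tune spread
    ```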

  14. Hybrid Large-Eddy/Reynolds-Averaged Simulation of a Supersonic Cavity Using VULCAN

    Science.gov (United States)

    Quinlan, Jesse; McDaniel, James; Baurle, Robert A.

    2013-01-01

    Simulations of a supersonic recessed-cavity flow are performed using a hybrid large-eddy/Reynolds-averaged simulation approach utilizing an inflow turbulence recycling procedure and hybridized inviscid flux scheme. Calorically perfect air enters a three-dimensional domain at a free stream Mach number of 2.92. Simulations are performed to assess grid sensitivity of the solution, efficacy of the turbulence recycling, and the effect of the shock sensor used with the hybridized inviscid flux scheme. Analysis of the turbulent boundary layer upstream of the rearward-facing step for each case indicates excellent agreement with theoretical predictions. Mean velocity and pressure results are compared to Reynolds-averaged simulations and experimental data for each case and indicate good agreement on the finest grid. Simulations are repeated on a coarsened grid, and results indicate strong grid density sensitivity. Simulations are performed with and without inflow turbulence recycling on the coarse grid to isolate the effect of the recycling procedure, which is demonstrably critical to capturing the relevant shear layer dynamics. Shock sensor formulations of Ducros and Larsson are found to predict mean flow statistics equally well.

  15. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balance

    Science.gov (United States)

    Tang, G.; Bartlein, P. J.

    2012-01-01

    Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the coterminous United States (US). Toward these ends, we first introduced development of the "LPJ-Hydrology" (LH) model by incorporating satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of dynamically simulating them. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variation of monthly actual ET (R2 = 0.61, p < 0.01), soil moisture (R2 > 0.46, p < 0.01) and surface runoff (R2 > 0.52, p < 0.01) with observed values over the years 1982-2006, respectively. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH simulates monthly stream flow in winter and early spring better by incorporating effects of solar radiation on snowmelt. Overall, this study proves the feasibility of incorporating satellite-based land covers into a DGVM for simulating large spatial scale land surface water balance. LH developed in this study should be a useful tool for studying effects of climate and land cover change on land surface hydrology at large spatial scales.
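
    The model class described above can be illustrated with a single-bucket water balance; this sketch is not the LPJ-Hydrology formulation, just the simplest member of the family.

    ```python
    def monthly_water_balance(precip, pet, soil_max=150.0):
        """Single-bucket land-surface water balance, all fluxes in mm/month.
        precip: precipitation series; pet: potential ET series."""
        soil, records = 0.5 * soil_max, []
        for p, e in zip(precip, pet):
            aet = min(e, p + soil)                  # actual ET limited by available water
            soil = soil + p - aet
            runoff = max(0.0, soil - soil_max)      # bucket spills above capacity
            soil = min(soil, soil_max)
            records.append((aet, soil, runoff))
        return records
    ```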

  16. Large eddy simulation of soot evolution in an aircraft combustor

    Science.gov (United States)

    Mueller, Michael E.; Pitsch, Heinz

    2013-11-01

    An integrated kinetics-based Large Eddy Simulation (LES) approach for soot evolution in turbulent reacting flows is applied to the simulation of a Pratt & Whitney aircraft gas turbine combustor, and the results are analyzed to provide insights into the complex interactions of the hydrodynamics, mixing, chemistry, and soot. The integrated approach includes detailed models for soot, combustion, and the unresolved interactions between soot, chemistry, and turbulence. The soot model is based on the Hybrid Method of Moments and detailed descriptions of soot aggregates and the various physical and chemical processes governing their evolution. The detailed kinetics of jet fuel oxidation and soot precursor formation is described with the Radiation Flamelet/Progress Variable model, which has been modified to account for the removal of soot precursors from the gas-phase. The unclosed filtered quantities in the soot and combustion models, such as source terms, are closed with a novel presumed subfilter PDF approach that accounts for the high subfilter spatial intermittency of soot. For the combustor simulation, the integrated approach is combined with a Lagrangian parcel method for the liquid spray and state-of-the-art unstructured LES technology for complex geometries. Two overall fuel-to-air ratios are simulated to evaluate the ability of the model to make not only absolute predictions but also quantitative predictions of trends. The Pratt & Whitney combustor is a Rich-Quench-Lean combustor in which combustion first occurs in a fuel-rich primary zone characterized by a large recirculation zone. Dilution air is then added downstream of the recirculation zone, and combustion continues in a fuel-lean secondary zone. The simulations show that large quantities of soot are formed in the fuel-rich recirculation zone, and, furthermore, the overall fuel-to-air ratio dictates both the dominant soot growth process and the location of maximum soot volume fraction. At the higher fuel

  17. SU-F-T-84: Measurement and Monte-Carlo Simulation of Electron Phase Spaces Using a Wide Angle Magnetic Electron Spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Englbrecht, F; Lindner, F; Bin, J; Wislsperger, A; Reiner, M; Kamp, F; Belka, C; Dedes, G; Schreiber, J; Parodi, K [LMU Munich, Munich, Bavaria (Germany)

    2016-06-15

    Purpose: To measure and simulate well-defined electron spectra using a linear accelerator and a permanent-magnetic wide-angle spectrometer to test the performance of a novel reconstruction algorithm for retrieval of unknown electron-sources, in view of application to diagnostics of laser-driven particle acceleration. Methods: Six electron energies (6, 9, 12, 15, 18 and 21 MeV, 40cm × 40cm field-size) delivered by a Siemens Oncor linear accelerator were recorded using a permanent-magnetic wide-angle electron spectrometer (150mT) with a one dimensional slit (0.2mm × 5cm). Two dimensional maps representing beam-energy and entrance-position along the slit were measured using different scintillating screens, read by an online CMOS detector of high resolution (0.048mm × 0.048mm pixels) and large field of view (5cm × 10cm). Measured energy-slit position maps were compared to forward FLUKA simulations of electron transport through the spectrometer, starting from IAEA phase-spaces of the accelerator. The latter ones were validated against measured depth-dose and lateral profiles in water. Agreement of forward simulation and measurement was quantified in terms of position and shape of the signal distribution on the detector. Results: Measured depth-dose distributions and lateral profiles in the water phantom showed good agreement with forward simulations of IAEA phase-spaces, thus supporting usage of this simulation source in the study. Measured energy-slit position maps and those obtained by forward Monte-Carlo simulations showed satisfactory agreement in shape and position. Conclusion: Well-defined electron beams of known energy and shape will provide an ideal scenario to study the performance of a novel reconstruction algorithm using measured and simulated signal. Future work will increase the stability and convergence of the reconstruction-algorithm for unknown electron sources, towards final application to the electrons which drive the interaction of TW-class laser

  18. Comparative proteomic analysis of rice after seed ground simulated radiation and spaceflight explains the radiation effects of space environment

    Science.gov (United States)

    Wang, Wei; Shi, Jinming; Liang, Shujian; Lei, Huang; Shenyi, Zhang; Sun, Yeqing

    In previous work, we compared the proteomic profiles of rice plants grown after seed spaceflights with ground controls by two-dimensional difference gel electrophoresis (2-D DIGE) and found that the protein expression profiles were changed after seed exposure to the space environment. Spaceflight represents a complex environmental condition in which several interacting factors such as cosmic radiation, microgravity and space magnetic fields are involved. Rice seed is in the dormant stage of plant development, showing high resistance against stresses, so the highly ionizing radiation (HZE) in space is considered the main factor causing biological effects in seeds. To further investigate the radiation effects of the space environment, we performed ground-based simulated HZE particle irradiation and compared the proteomes of seed-irradiated plants and seed-spaceflight (20th recoverable satellite) plants from the same rice variety. Space ionization shows low-dose but high-energy particle effects. In search of the particle effects, ground irradiations with the same low dose (2 mGy) but different linear energy transfer (LET) values (13.3 keV/µm C, 30 keV/µm C, 31 keV/µm Ne, 62.2 keV/µm C, 500 keV/µm Fe) were performed; using 2-D DIGE coupled with clustering and principal component analysis (PCA) for data processing and comparison, we found that the holistic protein expression patterns of plants irradiated by LET-62.2 keV/µm carbon particles were most similar to spaceflight. In addition, although the space environment presents a low-dose radiation (0.177 mGy/day on the satellite), the equivalent simulated radiation dose effects should still be evaluated: irradiations with LET-62.2 keV/µm carbon particles at different cumulative doses (2 mGy, 20 mGy, 200 mGy, 2000 mGy) were further carried out, showing that the 2 mGy radiation still shared the most similar proteomic profiles with spaceflight, confirming the low-dose effects of space radiation. Therefore, in the protein expression level

  19. Simulation of the preliminary General Electric SP-100 space reactor concept using the ATHENA computer code

    International Nuclear Information System (INIS)

    Fletcher, C.D.

    1986-01-01

    The capability to perform thermal-hydraulic analyses of a space reactor using the ATHENA computer code is demonstrated. The fast reactor, liquid-lithium coolant loops, and lithium-filled heat pipes of the preliminary General Electric SP-100 design were modeled with ATHENA. Two demonstration transient calculations were performed simulating accident conditions. Calculated results are available for display using the Nuclear Plant Analyzer color graphics analysis tool in addition to traditional plots. ATHENA-calculated results appear reasonable, both for steady-state full-power conditions and for the two transients. This analysis represents the first known transient thermal-hydraulic simulation using an integral space reactor system model incorporating heat pipes. 6 refs., 17 figs., 1 tab

  20. Large Eddy Simulation Study for Fluid Disintegration and Mixing

    Science.gov (United States)

    Bellan, Josette; Taskinoglu, Ezgi

    2011-01-01

    A new modeling approach is based on the concept of large eddy simulation (LES), within which the large scales are computed and the small scales are modeled. The new approach is expected to retain the fidelity of the physics while also being computationally efficient. Typically, only models for the small-scale fluxes of momentum, species, and enthalpy are used to reintroduce into the simulation the physics lost because the computation only resolves the large scales. These models are called subgrid-scale (SGS) models because they operate at a scale smaller than the LES grid. In a previous study of thermodynamically supercritical fluid disintegration and mixing, additional small-scale terms, one in the momentum and one in the energy conservation equations, were identified as requiring modeling. These additional terms were due to the tight coupling between dynamics and real-gas thermodynamics. It was inferred that without the additional term in the momentum equation, the high density-gradient magnitude regions, experimentally identified as a characteristic feature of these flows, would not be accurately predicted; these regions were experimentally shown to redistribute turbulence in the flow. It was also inferred that without the additional term in the energy equation, the heat flux magnitude could not be accurately predicted; the heat flux to the wall of combustion devices is a crucial quantity that determines necessary wall material properties. The present work involves situations where only the term in the momentum equation is important. Without this additional term in the momentum equation, neither the SGS-flux constant-coefficient Smagorinsky model nor the SGS-flux constant-coefficient Gradient model could reproduce in LES the pressure field or the high density-gradient magnitude regions; the SGS-flux constant-coefficient Scale-Similarity model was the most successful in this endeavor although not
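
    For reference, the constant-coefficient Smagorinsky model named above (the baseline the study found insufficient without the extra momentum term) computes an eddy viscosity from the resolved strain rate; a generic sketch:

    ```python
    import numpy as np

    def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
        """Constant-coefficient Smagorinsky SGS eddy viscosity:
        nu_t = (C_s * Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij).
        grad_u: array of resolved gradients du_i/dx_j with shape (3, 3, ...)."""
        s_ij = 0.5 * (grad_u + np.swapaxes(grad_u, 0, 1))   # resolved strain-rate tensor
        s_mag = np.sqrt(2.0 * np.sum(s_ij * s_ij, axis=(0, 1)))
        return (c_s * delta) ** 2 * s_mag
    ```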

  1. Numerical Simulation and Optimization of Hole Spacing for Cement Grouting in Rocks

    Directory of Open Access Journals (Sweden)

    Ping Fu

    2013-01-01

    Full Text Available The fine fissures of V-diabase were the main stratigraphic feature affecting the effectiveness of the foundation grout curtain at the Dagang Mountain Hydropower Station. Thus, specialized in situ grouting tests were conducted to determine reasonable hole spacing and other parameters. Considering the time variation of the rheological parameters of the grout, the variation of the grouting pressure gradient, and the evolution law of the fracture opening, numerical simulations were performed on the diffusion process of cement grouting in the fissures of the rock mass. The distribution of permeability after grouting was obtained on the basis of the analysis results, and the grouting hole spacing was discussed based on a reliability analysis. An optimization precision of 0.1 m could be adopted, finer than the 0.5 m accuracy that is commonly used. The results could provide a useful reference for choosing reasonable grouting hole spacing in similar projects.

  2. Time-Accurate Unsteady Pressure Loads Simulated for the Space Launch System at Wind Tunnel Conditions

    Science.gov (United States)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.

    2015-01-01

    A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft. x 11 ft. test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement for frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed. These included an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices to date has been established for FUN3D simulations for SLS launch vehicle analysis. To the authors' knowledge, this is the first time DDES has been used in a systematic approach to establish the simulation time needed to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.
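
    The two quantities compared between CFD and wind tunnel above, fluctuating-pressure RMS and its frequency content, can be extracted from a tap's time history as in this sketch (sampling rate and segment length are illustrative):

    ```python
    import numpy as np
    from scipy.signal import welch

    def pressure_rms_and_psd(p, fs, nperseg=1024):
        """p: surface pressure time history at one tap; fs: sample rate (Hz).
        Returns RMS of the fluctuation and its Welch power spectral density."""
        p_fluct = p - np.mean(p)                # remove the mean (steady) component
        rms = np.sqrt(np.mean(p_fluct**2))
        f, psd = welch(p_fluct, fs=fs, nperseg=nperseg)
        return rms, f, psd
    ```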

  3. A change of coordinates on the large phase space of quantum cohomology

    International Nuclear Information System (INIS)

    Kabanov, A.

    2001-01-01

    The Gromov-Witten invariants of a smooth, projective variety V, when twisted by the tautological classes on the moduli space of stable maps, give rise to a family of cohomological field theories and endow the base of the family with coordinates. We prove that the potential functions associated to the tautological ψ classes (the large phase space) and the κ classes are related by a change of coordinates which generalizes a change of basis on the ring of symmetric functions. Our result is a generalization of the work of Manin-Zograf who studied the case where V is a point. We utilize this change of variables to derive the topological recursion relations associated to the κ classes from those associated to the ψ classes. (orig.)

  4. A research on the excavation, support, and environment control of large scale underground space

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Pil Chong; Kwon, Kwang Soo; Jeong, So Keul [Korea Institute of Geology Mining and Materials, Taejon (Korea, Republic of)

    1995-12-01

    With the growing necessity of underground space due to the deficiency of above-ground space, the size and shape of underground structures tend to be complex and diverse. This complexity and variety force the development of new techniques for rock mass classification, excavation and support of underground space, and monitoring and control of the underground environment. All these techniques should be applied together to make the underground space comfortable. To achieve this, efforts have been made in 5 different areas: research on underground space design and stability analysis, research on techniques for rock excavation by controlled blasting, research on the development of a monitoring system to forecast the rock behaviour of underground space, research on an environment inspection system in closed spaces, and research on dynamic analysis of airflow and environmental control in large geo-spaces. The 5 main achievements are improvement of the existing structure analysis program (EXCRACK) to consider the deformation and failure characteristics of rock joints, development of a new blasting design (SK-cut), prediction of ground vibration through the newly proposed wave propagation equation, development and in-situ application of a rock mass deformation monitoring system and data acquisition software, and trial manufacture of the environment inspection system in closed spaces. Should these techniques be applied to the development of underground space, prevention of industrial disasters, reduction of construction costs, domestication of monitoring systems, improvement of tunnel stability, curtailment of royalties, and upgrading of domestic technologies will be brought forth. (Abstract Truncated)

  5. Sensitivity of local air quality to the interplay between small- and large-scale circulations: a large-eddy simulation study

    Science.gov (United States)

    Wolf-Grosse, Tobias; Esau, Igor; Reuder, Joachim

    2017-06-01

    Street-level urban air pollution is a challenging concern for modern urban societies. Pollution dispersion models assume that the concentrations decrease monotonically with increasing wind speed. This convenient assumption breaks down when applied to flows with local recirculations such as those found in topographically complex coastal areas. This study looks at a practically important and sufficiently common case of air pollution in a coastal valley city. Here, the observed concentrations are determined by the interaction between large-scale topographically forced and local-scale breeze-like recirculations. Analysis of a long observational dataset in Bergen, Norway, revealed that the most extreme cases of recurring wintertime air pollution episodes were accompanied by increased large-scale wind speeds above the valley. Contrary to the theoretical assumption and intuitive expectations, the maximum NO2 concentrations were not found for the lowest 10 m ERA-Interim wind speeds but in situations with wind speeds of 3 m s-1. To explain this phenomenon, we investigated empirical relationships between the large-scale forcing and the local wind and air quality parameters. We conducted 16 large-eddy simulation (LES) experiments with the Parallelised Large-Eddy Simulation Model (PALM) for atmospheric and oceanic flows. The LES accounted for the realistic relief and coastal configuration as well as for the large-scale forcing and local surface condition heterogeneity in Bergen. They revealed that emerging local breeze-like circulations strongly enhance the urban ventilation and dispersion of the air pollutants in situations with weak large-scale winds. Slightly stronger large-scale winds, however, can counteract these local recirculations, leading to enhanced surface air stagnation. Furthermore, this study looks at the concrete impact of the relative configuration of warmer water bodies in the city and the major transport corridor. We found that a relatively small local water

  6. Sensitivity of local air quality to the interplay between small- and large-scale circulations: a large-eddy simulation study

    Directory of Open Access Journals (Sweden)

    T. Wolf-Grosse

    2017-06-01

    Full Text Available Street-level urban air pollution is a challenging concern for modern urban societies. Pollution dispersion models assume that the concentrations decrease monotonically with increasing wind speed. This convenient assumption breaks down when applied to flows with local recirculations such as those found in topographically complex coastal areas. This study looks at a practically important and sufficiently common case of air pollution in a coastal valley city. Here, the observed concentrations are determined by the interaction between large-scale topographically forced and local-scale breeze-like recirculations. Analysis of a long observational dataset in Bergen, Norway, revealed that the most extreme cases of recurring wintertime air pollution episodes were accompanied by increased large-scale wind speeds above the valley. Contrary to the theoretical assumption and intuitive expectations, the maximum NO2 concentrations were not found for the lowest 10 m ERA-Interim wind speeds but in situations with wind speeds of 3 m s−1. To explain this phenomenon, we investigated empirical relationships between the large-scale forcing and the local wind and air quality parameters. We conducted 16 large-eddy simulation (LES) experiments with the Parallelised Large-Eddy Simulation Model (PALM) for atmospheric and oceanic flows. The LES accounted for the realistic relief and coastal configuration as well as for the large-scale forcing and local surface condition heterogeneity in Bergen. They revealed that emerging local breeze-like circulations strongly enhance the urban ventilation and dispersion of the air pollutants in situations with weak large-scale winds. Slightly stronger large-scale winds, however, can counteract these local recirculations, leading to enhanced surface air stagnation. Furthermore, this study looks at the concrete impact of the relative configuration of warmer water bodies in the city and the major transport corridor. We found that a

  7. Simulation of the Plasma Meniscus with and without Space Charge using Triode Extraction System

    International Nuclear Information System (INIS)

    Abdel Rahman, M.M.; EI-Khabeary, H.

    2007-01-01

    In this work, simulation of the singly charged argon ion trajectories for a variable plasma meniscus is studied with and without space charge for the triode extraction system by using the SIMION 3D (Simulation of Ion Optics in Three Dimensions) version 7 personal computer program. The influence of the acceleration voltage applied to the acceleration electrode of the triode extraction system on the shape of the plasma meniscus has been determined. The plasma electrode is set at +5000 volt and the acceleration voltage applied to the acceleration electrode is varied from -5000 volt to +5000 volt. In most of the concave and convex plasma shapes, ion beam emittance can be calculated by using separate standard deviations of positions and elevation angles. Ion beam emittance as a function of the curvature of the plasma meniscus for different plasma shapes (flat, concave and convex) without space charge at acceleration voltages varied from -5000 volt to +5000 volt applied to the acceleration electrode of the triode extraction system has been investigated. The influence of the extraction gap on ion beam emittance for a concave plasma shape of 3.75 mm without space charge at an acceleration voltage V_acc = -2000 volt applied to the acceleration electrode of the triode extraction system has been determined. Also, the influence of space charge on ion beam emittance for a variable plasma meniscus at an acceleration voltage V_acc = -2000 volt applied to the acceleration electrode of the triode extraction system has been studied.
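
    The emittance estimate described above, from separate standard deviations of positions and divergence angles, and the standard correlated rms definition can both be sketched as follows (the uncorrelated product is an upper-bound simplification):

    ```python
    import numpy as np

    def rms_emittance(x, xp):
        """x: particle positions; xp: divergence (elevation) angles.
        Returns (correlated rms emittance, uncorrelated sigma_x * sigma_x')."""
        cov = np.cov(x, xp)
        eps_rms = np.sqrt(cov[0, 0] * cov[1, 1] - cov[0, 1] ** 2)
        eps_uncorr = np.std(x) * np.std(xp)
        return eps_rms, eps_uncorr
    ```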

  8. Simulation of the plasma meniscus with and without space charge using triode extraction system

    International Nuclear Information System (INIS)

    Rahman, M.M.Abdel; El-Khabeary, H.

    2009-01-01

    In this work, simulation of the singly charged argon ion trajectories for a variable plasma meniscus is studied with and without space charge for the triode extraction system by using the SIMION 3D (Simulation of Ion Optics in Three Dimensions) version 7 personal computer program. The influence of the acceleration voltage applied to the acceleration electrode of the triode extraction system on the shape of the plasma meniscus has been determined. The plasma electrode is set at +5000 volt and the acceleration voltage applied to the acceleration electrode is varied from -5000 volt to +5000 volt. In most of the concave and convex plasma shapes, ion beam emittance can be calculated by using separate standard deviations of positions and elevation angles. Ion beam emittance as a function of the curvature of the plasma meniscus for different plasma shapes (flat, concave and convex) without space charge at acceleration voltages varied from -5000 volt to +5000 volt applied to the acceleration electrode of the triode extraction system has been investigated. The influence of the extraction gap on ion beam emittance for a concave plasma shape of 3.75 mm without space charge at an acceleration voltage V_acc = -2000 volt applied to the acceleration electrode of the triode extraction system has been determined. Also, the influence of space charge on ion beam emittance for a variable plasma meniscus at an acceleration voltage V_acc = -2000 volt applied to the acceleration electrode of the triode extraction system has been studied. (author)

  9. ATLAS Physicist in Space

    CERN Multimedia

    Bengt Lund-Jensen

    2007-01-01

    On December 9, the former ATLAS physicist Christer Fuglesang was launched into space onboard the STS-116 Space Shuttle flight from Kennedy Space Center in Florida. Christer worked on the development of the accordion-type liquid argon calorimeter and SUSY simulations in what eventually became ATLAS until summer 1992 when he became one out of six astronaut trainees with the European Space Agency (ESA). His selection out of a very large number of applicants from all over the ESA member states involved a number of tests in order to choose the most suitable candidates. As ESA astronaut Christer trained with the Russian Soyuz programme in Star City outside of Moscow from 1993 until 1996, when he moved to Houston to train for space shuttle missions with NASA. Christer belonged to the backup crew for the Euromir95 mission. After additional training in Russia, Christer qualified as ‘Soyuz return commander’ in 1998. Christer rerouting cables during his second space walk. (Photo: courtesy NASA) During...

  10. Reservoir Modeling by Data Integration via Intermediate Spaces and Artificial Intelligence Tools in MPS Simulation Frameworks

    International Nuclear Information System (INIS)

    Ahmadi, Rouhollah; Khamehchi, Ehsan

    2013-01-01

    Conditioning stochastic simulations are very important in many geostatistical applications that call for the introduction of nonlinear and multiple-point data in reservoir modeling. Here, a new methodology is proposed for the incorporation of different data types into multiple-point statistics (MPS) simulation frameworks. Unlike the previous techniques that call for an approximate forward model (filter) for integration of secondary data into geologically constructed models, the proposed approach develops an intermediate space where all the primary and secondary data are easily mapped onto. Definition of the intermediate space, as may be achieved via application of artificial intelligence tools like neural networks and fuzzy inference systems, eliminates the need for using filters as in previous techniques. The applicability of the proposed approach in conditioning MPS simulations to static and geologic data is verified by modeling a real example of discrete fracture networks using conventional well-log data. The training patterns are well reproduced in the realizations, while the model is also consistent with the map of secondary data

  12. Establishment of DNS database in a turbulent channel flow by large-scale simulations

    OpenAIRE

    Abe, Hiroyuki; Kawamura, Hiroshi; 阿部 浩幸; 河村 洋

    2008-01-01

    In the present study, we establish a statistical DNS (direct numerical simulation) database of turbulent channel flow with passive scalar transport at high Reynolds numbers and make the data available at our web site (http://murasun.me.noda.tus.ac.jp/turbulence/). The established database is reported together with the implementation of the large-scale simulations, representative DNS results, and results of turbulence-model testing using the DNS data.

  13. Anatomically detailed and large-scale simulations studying synapse loss and synchrony using NeuroBox

    Directory of Open Access Journals (Sweden)

    Markus Breit

    2016-02-01

    The morphology of neurons and networks plays an important role in processing electrical and biochemical signals. Based on neuronal reconstructions, which are becoming abundantly available through databases such as NeuroMorpho.org, numerical simulations of Hodgkin-Huxley-type equations, coupled to biochemical models, can be performed in order to systematically investigate the influence of cellular morphology and of network connectivity patterns on the underlying function. Development in the area of synthetic neural network generation and morphology reconstruction from microscopy data has brought forth the software tool NeuGen. Coupling this morphology data (from databases, synthetic generation, or reconstruction) to the simulation platform UG 4 (which harbors a neuroscientific portfolio) and to VRL-Studio has brought forth the extendible toolbox NeuroBox. NeuroBox allows users to perform numerical simulations on hybrid-dimensional morphology representations. The code base is designed in a modular way, such that, e.g., new channel or synapse types can be added to the library. Workflows can be specified through scripts or through the VRL-Studio graphical workflow representation, and third-party tools, such as ImageJ, can be added to NeuroBox workflows. In this paper, NeuroBox is used to study the electrical and biochemical effects of synapse loss vs. synchrony in neurons, to investigate large morphology data sets within detailed biophysical simulations, and to demonstrate the capability of utilizing high-performance computing infrastructure for large-scale network simulations. Using new synapse distribution methods and finite-volume-based numerical solvers for compartment-type models, our results demonstrate how an increase in synaptic synchronization can compensate for synapse loss at the electrical and calcium level, and how detailed neuronal morphology can be integrated into large-scale network simulations.
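
    The Hodgkin-Huxley-type equations mentioned above are the workhorse of such electrical simulations. As a hedged illustration (this is not NeuroBox code), the following Python sketch integrates the classic single-compartment Hodgkin-Huxley model with forward Euler, using the standard squid-axon parameter set.

      import numpy as np

      # Classic Hodgkin-Huxley single compartment, standard parameters.
      C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3     # uF/cm^2, mS/cm^2
      E_Na, E_K, E_L = 50.0, -77.0, -54.387           # mV

      def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
      def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
      def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
      def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
      def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
      def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

      dt, T = 0.01, 50.0                              # ms
      V, m, h, n = -65.0, 0.05, 0.6, 0.32             # near-rest initial state
      trace = np.empty(int(T / dt))

      for i in range(trace.size):
          I_ext = 10.0 if i * dt > 5.0 else 0.0       # step current, uA/cm^2
          I_Na = g_Na * m**3 * h * (V - E_Na)         # sodium current
          I_K = g_K * n**4 * (V - E_K)                # potassium current
          I_L = g_L * (V - E_L)                       # leak current
          V += dt * (I_ext - I_Na - I_K - I_L) / C_m  # forward Euler membrane eq.
          m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
          h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
          n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
          trace[i] = V

      print("peak membrane potential: %.1f mV" % trace.max())  # spikes if > 0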

  14. Simulation test of PIUS-type reactor with large scale experimental apparatus

    International Nuclear Information System (INIS)

    Tamaki, M.; Tsuji, Y.; Ito, T.; Tasaka, K.; Kukita, Yutaka

    1995-01-01

    A large-scale experimental apparatus for simulating the PIUS-type reactor has been constructed, preserving the volumetric scaling ratio relative to the realistic reactor model. Fundamental experiments such as steady-state operation and a pump-trip simulation were performed, and the results were compared with those obtained with the small-scale apparatus at JAERI. We have already reported the effectiveness of feedback control of the primary-loop pump speed (PI control) for stable operation. In this paper, the feedback system is modified and PID control is introduced. The new system worked well for the operation of the PIUS-type reactor even under rapid transient conditions. (author)
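
    The step from PI to PID control mentioned above adds a derivative term to the controller output, which helps during rapid transients. The paper's actual gains and plant are not given here; the sketch below is a generic, hypothetical discrete-time PID loop driving a pump speed toward a normalized flow setpoint, with an invented first-order loop model.

      # Generic discrete-time PID controller (gains and plant are assumptions).
      Kp, Ki, Kd = 2.0, 0.5, 0.1
      dt = 0.1                        # s, control time step
      setpoint = 1.0                  # normalized primary-loop flow target

      integral, prev_error = 0.0, 0.0
      flow = 0.0                      # normalized loop flow

      for step in range(200):
          error = setpoint - flow
          integral += error * dt                   # I term: accumulated error
          derivative = (error - prev_error) / dt   # D term: reacts to transients
          prev_error = error
          speed = Kp * error + Ki * integral + Kd * derivative
          # Toy first-order plant: flow relaxes toward pump speed (tau = 2 s).
          flow += dt * (speed - flow) / 2.0

      print(f"final flow = {flow:.3f} (target {setpoint})")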

  15. A Piezoelectric Unimorph Deformable Mirror Concept by Wafer Transfer for Ultra Large Space Telescopes

    Science.gov (United States)

    Yang, Eui-Hyeok; Shcheglov, Kirill

    2002-01-01

    Future concepts for ultra-large space telescopes include segmented silicon mirrors and inflatable polymer mirrors. Primary mirrors for these systems cannot meet optical surface-figure requirements and are likely to generate wavefront errors of several microns. In order to correct such large wavefront errors, high-stroke, optical-quality deformable mirrors are required. JPL has recently developed a new technology for transferring an entire wafer-level mirror membrane from one substrate to another. A thin membrane, 100 mm in diameter, has been successfully transferred without using adhesives or polymers. The measured peak-to-valley surface error of a transferred and patterned membrane (1 mm x 1 mm x 0.016 mm) is only 9 nm. The mirror-element actuation principle is based on a piezoelectric unimorph. A voltage applied to the piezoelectric layer induces stress in the longitudinal direction, causing the film to deform and pull on the mirror connected to it. The advantage of this approach is that the small longitudinal strains obtainable from a piezoelectric material at modest voltages are translated into large vertical displacements. Modeling is performed for a unimorph consisting of a clamped rectangular membrane with a PZT layer of variable dimensions. The membrane transfer technology is combined with the piezoelectric bimorph actuator concept to constitute a compact deformable mirror device with large-stroke actuation of a continuous mirror membrane, resulting in a compact AO system for use in ultra-large space telescopes.
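
    The claim that small longitudinal piezoelectric strains translate into large vertical displacements can be checked with a back-of-the-envelope model. The sketch below is not the paper's model; it applies the classic Timoshenko two-layer-strip curvature formula to an idealized unimorph cantilever with equal layer thicknesses and equal moduli, using typical (assumed) PZT numbers.

      # Back-of-the-envelope unimorph deflection; all values are assumptions.
      d31 = -190e-12       # m/V, typical PZT transverse piezoelectric coefficient
      V = 50.0             # V, drive voltage
      t_piezo = 10e-6      # m, PZT layer thickness
      t_total = 20e-6      # m, PZT + passive layer (equal thickness and modulus)
      L = 1e-3             # m, actuator length

      strain = d31 * V / t_piezo                    # free in-plane strain, ~0.1%
      kappa = 3.0 * abs(strain) / (2.0 * t_total)   # Timoshenko curvature, 1/m
      tip_deflection = kappa * L**2 / 2.0           # small-deflection estimate

      print(f"in-plane strain: {strain:.2e}")
      print(f"tip deflection : {tip_deflection * 1e6:.1f} um")
      # A ~0.1% in-plane strain yields tens of microns of vertical stroke:
      # the bending lever effect described in the abstract.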

  16. Sensitivity of the scale partition for variational multiscale large-eddy simulation of channel flow

    NARCIS (Netherlands)

    Holmen, J.; Hughes, T.J.R.; Oberai, A.A.; Wells, G.N.

    2004-01-01

    The variational multiscale method has been shown to perform well for large-eddy simulation (LES) of turbulent flows. The method relies upon a partition of the resolved velocity field into large- and small-scale components. The subgrid model then acts only on the small scales of motion, unlike conventional LES models, which act on all resolved scales.
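
    The large-/small-scale partition at the heart of the variational multiscale method can be illustrated with a simple spectral cutoff. The sketch below is not the paper's code (the cutoff wavenumber is an arbitrary assumption); it splits a 1-D periodic velocity field into large- and small-scale parts, the latter being where a subgrid model would act.

      import numpy as np

      # Resolved 1-D periodic "velocity" field on N points.
      N = 256
      x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
      rng = np.random.default_rng(1)
      u = np.sin(x) + 0.3 * np.sin(8 * x) + 0.05 * rng.standard_normal(N)

      # Spectral scale partition: modes |k| <= k_cut are the "large scales";
      # the remainder are the "small scales" on which the subgrid model acts.
      k = np.fft.fftfreq(N, d=1.0 / N)     # integer wavenumbers
      k_cut = 4                            # arbitrary partition wavenumber
      u_hat = np.fft.fft(u)
      u_large = np.fft.ifft(np.where(np.abs(k) <= k_cut, u_hat, 0.0)).real
      u_small = u - u_large                # exact complement of the partition

      assert np.allclose(u_large + u_small, u)   # the partition is a direct sum
      print("small-scale energy fraction:", np.sum(u_small**2) / np.sum(u**2))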

  17. Computer graphics testbed to simulate and test vision systems for space applications

    Science.gov (United States)

    Cheatham, John B.

    1991-01-01

    Artificial intelligence concepts are applied to robotics: artificial neural networks, expert systems, and laser imaging techniques for autonomous space robots are being studied. A computer graphics laser range-finder simulator developed by Wu has been used by Weiland and Norwood to study the use of artificial neural networks for path planning and obstacle avoidance. Interest is expressed in applications of CLIPS, NETS, and fuzzy control, applied to robot navigation.

  18. Large Eddy Simulation of Supercritical CO2 Through Bend Pipes

    Science.gov (United States)

    He, Xiaoliang; Apte, Sourabh; Dogan, Omer

    2017-11-01

    Supercritical carbon dioxide (sCO2) is investigated as a working fluid for power generation in thermal solar, fossil energy, and nuclear power plants at high pressures. Severe erosion has been observed in sCO2 test loops, particularly in nozzles, turbine blades, and pipe bends. It is hypothesized that complex flow features such as flow separation and property variations may lead to large oscillations in the wall shear stresses and result in material erosion. In this work, large-eddy simulations are conducted at different Reynolds numbers (5,000, 27,000, and 50,000) to investigate the effect of heat transfer in a 90-degree bend pipe with unit radius of curvature, in order to identify the potential causes of the erosion. The simulation is first performed without heat transfer to validate the flow solver against available experimental and computational studies. Mean flow statistics, turbulent kinetic energy, shear stresses, and wall-force spectra are computed and compared with available experimental data. The formation of counter-rotating vortices, known as Dean vortices, is observed, and secondary flow patterns and swirl-switching flow motions are identified and visualized. The effects of heat transfer on these flow phenomena are then investigated by applying a constant heat flux at the wall. (DOE Fossil Energy Crosscutting Technology Research Program)
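
    A quick way to gauge the strength of the secondary (Dean-vortex) flow in a bend is the Dean number, De = Re * sqrt(D / (2 * Rc)), where D is the pipe diameter and Rc the bend radius of curvature. The sketch below is an illustrative calculation, not from the paper, and assumes "unit radius of curvature" means Rc = D.

      import math

      # Dean number: strength of the centrifugally driven secondary flow.
      def dean_number(re: float, diameter: float, curvature_radius: float) -> float:
          return re * math.sqrt(diameter / (2.0 * curvature_radius))

      D, Rc = 1.0, 1.0                     # assumed curvature ratio Rc/D = 1
      for re in (5_000, 27_000, 50_000):   # Reynolds numbers from the abstract
          print(f"Re = {re:6d}  ->  De = {dean_number(re, D, Rc):8.0f}")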

  19. Surgical Space Suits Increase Particle and Microbiological Emission Rates in a Simulated Surgical Environment.

    Science.gov (United States)

    Vijaysegaran, Praveen; Knibbs, Luke D; Morawska, Lidia; Crawford, Ross W

    2018-05-01

    The role of space suits in the prevention of orthopedic prosthetic joint infection remains unclear. Recent evidence suggests that space suits may in fact contribute to increased infection rates, with bioaerosol emissions from space suits identified as a potential cause. This study aimed to compare the particle and microbiological emission rates (PER and MER) of space suits and standard surgical clothing. A comparison of emission rates between space suits and standard surgical clothing was performed in a simulated surgical environment during 5 separate experiments. Particle counts were analyzed with 2 separate particle counters capable of detecting particles between 0.1 and 20 μm. An Andersen impactor was used to sample bacteria, with culture counts performed at 24 and 48 hours. Four experiments consistently showed statistically significant increases in both PER and MER when space suits were used compared with standard surgical clothing. One experiment showed inconsistent results, with a trend toward increases in both PER and MER when space suits were used. Space suits cause increased PER and MER compared with standard surgical clothing. This finding provides mechanistic evidence to support the increased prosthetic joint infection rates observed in clinical studies. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Development of Multi-Physics Dynamics Models for High-Frequency Large-Amplitude Structural Response Simulation

    Science.gov (United States)

    Derkevorkian, Armen; Peterson, Lee; Kolaini, Ali R.; Hendricks, Terry J.; Nesmith, Bill J.

    2016-01-01

    An analytic approach is demonstrated to reveal potential pyroshock-driven dynamic effects causing power losses in the thermoelectric (TE) module bars of the Mars Science Laboratory (MSL) Multi-Mission Radioisotope Thermoelectric Generator (MMRTG). This study utilizes high-fidelity finite element analysis with the SIERRA/PRESTO codes to estimate wave propagation effects due to large-amplitude, suddenly applied pyroshock loads in the MMRTG. A high-fidelity model of the TE module bar was created with approximately 30 million degrees of freedom (DOF). First, a quasi-static preload was applied on top of the TE module bar; then transient tri-axial acceleration inputs were simultaneously applied to the preloaded module. The applied input acceleration signals were measured during MMRTG shock qualification tests performed at the Jet Propulsion Laboratory. An explicit finite element solver in the SIERRA/PRESTO computational environment, along with a 3000-processor parallel supercomputing framework at NASA Ames, was used for the simulation. The simulation results were investigated both qualitatively and quantitatively. The predicted shock-wave propagation results provide detailed structural responses throughout the TE module bar, and key insights into the dynamic response (i.e., loads, displacements, accelerations) of critical internal spring/piston compression systems, TE materials, and internal component interfaces in the MMRTG TE module bar. They also provide confidence in the viability of this high-fidelity modeling scheme to accurately predict shock-wave propagation patterns within complex structures. This analytic approach is envisioned for modeling shock-sensitive hardware susceptible to intense shock environments positioned near shock-separation devices in modern space vehicles and systems.